Benchmark: forest_fire_mapping #481

@github-actions

Description

Benchmark scenario ID: forest_fire_mapping
Benchmark scenario definition: https://github.com/ESA-APEx/apex_algorithms/blob/e952e851168847c9c4b59c4af7d713fb661870bc/algorithm_catalog/vito/random_forest_firemapping/benchmark_scenarios/random_forest_firemapping.json
openEO backend: openeo.dataspace.copernicus.eu

GitHub Actions workflow run: https://github.com/ESA-APEx/apex_algorithms/actions/runs/24958170316
Workflow artifacts: https://github.com/ESA-APEx/apex_algorithms/actions/runs/24958170316#artifacts

Test start: 2026-04-26 13:47:54.882422+00:00
Test duration: 0:33:19.586550
Test outcome: ❌ failed

Last successful test phase: download-reference
Failure in test phase: compare:derived_from-change

Contact Information

| Name | Organization | Contact |
| --- | --- | --- |
| Pratichhya Sharma | VITO | Contact via VITO (VITO Website, GitHub) |

Process Graph

{
  "randomforestfiremapping1": {
    "arguments": {
      "padding_window_size": 33,
      "spatial_extent": {
        "coordinates": [
          [
            [
              -17.996638457335074,
              28.771993378019005
            ],
            [
              -17.960989271845406,
              28.822652746872745
            ],
            [
              -17.913144312372435,
              28.85454938652139
            ],
            [
              -17.842315009623224,
              28.83015783855478
            ],
            [
              -17.781805207936817,
              28.842353612538087
            ],
            [
              -17.728331429702315,
              28.74103487483061
            ],
            [
              -17.766795024572748,
              28.681932277834584
            ],
            [
              -17.75131577297855,
              28.624236885528937
            ],
            [
              -17.756944591740076,
              28.579206335436727
            ],
            [
              -17.838093395552082,
              28.451150708612
            ],
            [
              -17.871397239891113,
              28.480702007110015
            ],
            [
              -17.88969090086607,
              28.57404658490533
            ],
            [
              -17.957705794234517,
              28.658947934558352
            ],
            [
              -18.003674480786984,
              28.76167387695621
            ],
            [
              -18.003674480786984,
              28.76167387695621
            ],
            [
              -17.996638457335074,
              28.771993378019005
            ]
          ]
        ],
        "type": "Polygon"
      },
      "temporal_extent": [
        "2023-07-15",
        "2023-09-15"
      ]
    },
    "namespace": "https://raw.githubusercontent.com/ESA-APEx/apex_algorithms/refs/heads/main/algorithm_catalog/vito/random_forest_firemapping/openeo_udp/random_forest_firemapping.json",
    "process_id": "random_forest_firemapping",
    "result": true
  }
}
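For quick triage, the scenario's area of interest can be inspected directly from the `spatial_extent` polygon in the process graph above, without contacting any openEO backend. A minimal sketch (pure Python; the exterior ring is copied verbatim from the JSON above, and the bounding box places the polygon over La Palma, Canary Islands, consistent with the 2023-07-15 to 2023-09-15 temporal extent):

```python
# Exterior ring of the scenario's spatial_extent polygon,
# copied from the process graph above (lon, lat pairs).
ring = [
    [-17.996638457335074, 28.771993378019005],
    [-17.960989271845406, 28.822652746872745],
    [-17.913144312372435, 28.85454938652139],
    [-17.842315009623224, 28.83015783855478],
    [-17.781805207936817, 28.842353612538087],
    [-17.728331429702315, 28.74103487483061],
    [-17.766795024572748, 28.681932277834584],
    [-17.75131577297855, 28.624236885528937],
    [-17.756944591740076, 28.579206335436727],
    [-17.838093395552082, 28.451150708612],
    [-17.871397239891113, 28.480702007110015],
    [-17.88969090086607, 28.57404658490533],
    [-17.957705794234517, 28.658947934558352],
    [-18.003674480786984, 28.76167387695621],
    [-18.003674480786984, 28.76167387695621],
    [-17.996638457335074, 28.771993378019005],
]

lons = [p[0] for p in ring]
lats = [p[1] for p in ring]

# Bounding box as (west, south, east, north).
bbox = (min(lons), min(lats), max(lons), max(lats))
print("bbox (west, south, east, north):", bbox)
```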

Error Logs

scenario = BenchmarkScenario(id='forest_fire_mapping', description='Forest Fire Mapping using Random Forest based on Sentinel-2 a.../apex_algorithms/algorithm_catalog/vito/random_forest_firemapping/benchmark_scenarios/random_forest_firemapping.json'))
connection_factory = <function connection_factory.<locals>.get_connection at 0x7f8296b76a20>
tmp_path = PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0')
track_metric = <function track_metric.<locals>.track at 0x7f8296b76b60>
track_phase = <apex_algorithm_qa_tools.pytest.pytest_track_metrics._PhaseTracker object at 0x7f8296b9c9b0>
upload_assets_on_fail = <apex_algorithm_qa_tools.pytest.pytest_upload_assets.upload_assets_on_fail.<locals>._Collector object at 0x7f8296b96420>
request = <FixtureRequest for <Function test_run_benchmark[forest_fire_mapping]>>

    @pytest.mark.parametrize(
        "scenario",
        [
            # Use scenario id as parameterization id to give nicer test names.
            pytest.param(uc, id=uc.id)
            for uc in get_benchmark_scenarios()
        ],
    )
    def test_run_benchmark(
        scenario: BenchmarkScenario,
        connection_factory,
        tmp_path: Path,
        track_metric,
        track_phase,
        upload_assets_on_fail,
        request,
    ):
        track_metric("scenario_id", scenario.id)

        with track_phase(phase="connect"):
            # Check if a backend override has been provided via cli options.
            override_backend = request.config.getoption("--override-backend")
            backend_filter = request.config.getoption("--backend-filter")
            if backend_filter and not re.match(backend_filter, scenario.backend):
                # TODO apply filter during scenario retrieval, but seems to be hard to retrieve cli param
                pytest.skip(
                    f"skipping scenario {scenario.id} because backend {scenario.backend} does not match filter {backend_filter!r}"
                )
            backend = scenario.backend
            if override_backend:
                _log.info(f"Overriding backend URL with {override_backend!r}")
                backend = override_backend

            connection: openeo.Connection = connection_factory(url=backend)

        report_path = None
        if request.config.getoption("--upload-benchmark-report"):
            report_path = tmp_path / "benchmark_report.json"
            report_path.write_text(json.dumps({
                "scenario_id": scenario.id,
                "scenario_description": scenario.description,
                "scenario_backend": scenario.backend,
                "scenario_source": str(scenario.source) if scenario.source else None,
                "reference_data": scenario.reference_data,
                "reference_options": scenario.reference_options,
            }, indent=2))
            upload_assets_on_fail(report_path)

        def _on_phase_exception(phase: str, exc: Exception):
            if report_path is not None:
                report = json.loads(report_path.read_text())
                report["test_failed"] = True
                report["test_failed_phase"] = phase
                report["test_error_message"] = str(exc)
                report_path.write_text(json.dumps(report, indent=2))
                cwd_report_dir = Path("benchmark_reports")
                cwd_report_dir.mkdir(exist_ok=True)
                (cwd_report_dir / f"{scenario.id}_benchmark_report.json").write_text(
                    json.dumps(report, indent=2)
                )
                report_url = upload_assets_on_fail.get_url(report_path)
                if report_url:
                    exc.add_note(f"Benchmark report: {report_url}")

        track_phase.on_exception = _on_phase_exception

        with track_phase(phase="create-job"):
            # TODO #14 scenario option to use synchronous instead of batch job mode?
            job = connection.create_job(
                process_graph=scenario.process_graph,
                title=f"APEx benchmark {scenario.id}",
                additional=scenario.job_options,
            )
            track_metric("job_id", job.job_id)

            if report_path is not None:
                report = json.loads(report_path.read_text())
                report["job_id"] = job.job_id
                report_path.write_text(json.dumps(report, indent=2))

        with track_phase(phase="run-job"):
            # TODO: monitor timing and progress
            # TODO: separate "job started" and run phases?
            max_minutes = request.config.getoption("--maximum-job-time-in-minutes")
            if max_minutes:
                def _timeout_handler(signum, frame):
                    raise TimeoutError(
                        f"Batch job {job.job_id} exceeded maximum allowed time of {max_minutes} minutes"
                    )

                old_handler = signal.signal(signal.SIGALRM, _timeout_handler)
                signal.alarm(max_minutes * 60)
            try:
                job.start_and_wait()
            finally:
                if max_minutes:
                    signal.alarm(0)
                    signal.signal(signal.SIGALRM, old_handler)

        with track_phase(phase="collect-metadata"):
            collect_metrics_from_job_metadata(job, track_metric=track_metric)

            results = job.get_results()
            collect_metrics_from_results_metadata(results, track_metric=track_metric)

        with track_phase(phase="download-actual"):
            # Download actual results
            actual_dir = tmp_path / "actual"
            paths = results.download_files(target=actual_dir, include_stac_metadata=True)

            # Upload assets on failure
            upload_assets_on_fail(*paths)

        # Pre-compute S3 URLs for actual files (used in error messages and benchmark reports)
        actual_s3_urls = {
            str(p.relative_to(actual_dir)): upload_assets_on_fail.get_url(p)
            for p in sorted(actual_dir.rglob("*")) if p.is_file()
        }
        actual_s3_urls = {k: v for k, v in actual_s3_urls.items() if v is not None}

        with track_phase(phase="download-reference"):
            reference_dir = download_reference_data(
                scenario=scenario, reference_dir=tmp_path / "reference"
            )

        if report_path is not None:
            report = json.loads(report_path.read_text())
            report["actual_files"] = {
                str(p.relative_to(actual_dir)): f"{p.stat().st_size / 1024:.1f} kb"
                for p in sorted(actual_dir.rglob("*")) if p.is_file()
            }
            ref_files = {}
            for p in sorted(reference_dir.rglob("*")):
                if not p.is_file():
                    continue
                rel = p.relative_to(reference_dir)
                size_str = f"{p.stat().st_size / 1024:.1f} kb"
                actual_counterpart = actual_dir / rel
                if not actual_counterpart.exists():
                    size_str += " (missing in actual)"
                elif actual_counterpart.stat().st_size != p.stat().st_size:
                    size_str += f" (actual: {actual_counterpart.stat().st_size / 1024:.1f} kb)"
                ref_files[str(rel)] = size_str
            report["reference_files"] = ref_files
            if actual_s3_urls:
                report["actual_data"] = actual_s3_urls
            report_path.write_text(json.dumps(report, indent=2))
            # Also write to CWD so the report is accessible on Jenkins workspace
            cwd_report_dir = Path("benchmark_reports")
            cwd_report_dir.mkdir(exist_ok=True)
            (cwd_report_dir / f"{scenario.id}_benchmark_report.json").write_text(
                json.dumps(report, indent=2)
            )

        with track_phase(
            phase="compare", describe_exception=analyse_results_comparison_exception
        ):
            # Compare actual results with reference data
            try:
                assert_job_results_allclose(
                    actual=actual_dir,
                    expected=reference_dir,
                    tmp_path=tmp_path,
                    rtol=scenario.reference_options.get("rtol", 1e-3),
                    atol=scenario.reference_options.get("atol", 1),
                    pixel_tolerance=scenario.reference_options.get("pixel_tolerance", 1),
                )
            except AssertionError as e:
                msg = str(e)
                if scenario.reference_data:
                    msg += "\n\nReference data URLs:"
                    for name, url in scenario.reference_data.items():
                        msg += f"\n  {name}: {url}"
                if actual_s3_urls:
                    msg += "\n\nActual data S3 URLs (uploaded on failure):"
                    for name, url in actual_s3_urls.items():
                        msg += f"\n  {name}: {url}"
>               raise AssertionError(msg) from None
E               AssertionError: Issues for metadata file 'job-results.json':
E               Differing 'derived_from' links (0 common, 65 only in actual, 70 only in expected):
E                 only in actual: {'S2A_MSIL2A_20230828T115221_N0510_R123_T28RBS_20240912T160127', 'S2A_MSIL2A_20230831T120331_N0510_R023_T27RYM_20240821T112229', 'S2A_MSIL2A_20230831T120331_N0510_R023_T28RBT_20240821T112229', 'S2B_MSIL2A_20230806T120329_N0510_R023_T28RBS_20240822T211304', 'S2B_MSIL2A_20230727T120329_N0510_R023_T28RBS_20240726T091833', 'S2A_MSIL2A_20230719T115221_N0510_R123_T28RBS_20240911T102306', 'S2B_MSIL2A_20230717T120329_N0510_R023_T27RYN_20240817T153903', 'S2A_MSIL2A_20230821T120331_N0510_R023_T27RYM_20240822T132759', 'S2A_MSIL2A_20230910T120331_N0510_R023_T27RYM_20240824T022751', 'S2A_MSIL2A_20230831T120331_N0510_R023_T27RYN_20240821T112229', 'S2B_MSIL2A_20230816T120329_N0510_R023_T27RYM_20240822T003133', 'S2B_MSIL2A_20230724T115229_N0510_R123_T28RBS_20240911T152536', 'S2B_MSIL2A_20230806T120329_N0510_R023_T27RYM_20240822T211304', 'S2A_MSIL2A_20230722T120331_N0510_R023_T28RBT_20240818T130932', 'S1A_IW_GRDH_1SDV_20230904T191401_20230904T191431_050182_060A3B_09C6_COG', 'S1A_IW_GRDH_1SDV_2023081...
E                 only in expected: {'https://services.sentinel-hub.com/api/v1/catalog/1.0.0/collections/sentinel-2-l2a/items/S2B_MSIL2A_20230816T120329_N0509_R023_T27RYN_20230816T173157', 'https://services.sentinel-hub.com/api/v1/catalog/1.0.0/collections/sentinel-2-l2a/items/S2B_MSIL2A_20230806T120329_N0509_R023_T28RBT_20230806T140109', 'https://services.sentinel-hub.com/api/v1/catalog/1.0.0/collections/sentinel-2-l2a/items/S2B_MSIL2A_20230806T120329_N0509_R023_T27RYM_20230806T140109', 'https://services.sentinel-hub.com/api/v1/catalog/1.0.0/collections/sentinel-2-l2a/items/S2B_MSIL2A_20230902T115229_N0509_R123_T28RBS_20230902T135723', 'https://services.sentinel-hub.com/api/v1/catalog/1.0.0/collections/sentinel-2-l2a/items/S2A_MSIL2A_20230811T120331_N0509_R023_T28RBT_20230811T175153', 'https://services.sentinel-hub.com/api/v1/catalog/1.0.0/collections/sentinel-1-grd/items/S1A_EW_GRDM_1SDV_20230824T071919_20230824T072019_050014_060468_9AE8', 'https://services.sentinel-hub.com/api/v1/catalog/1.0.0/collections/sentinel-....
E               Issues for file 'openEO.tif':
E               Left and right DataArray objects are not close
E               Differing values:
E               L
E                   array([[[nan, nan, ...,  0.,  0.],
E                           [nan, nan, ...,  0.,  0.],
E                           ...,
E                           [ 0.,  0., ...,  0.,  0.],
E                           [ 0.,  0., ...,  0.,  0.]]], shape=(1, 4538, 2801), dtype=float32)
E               R
E                   array([[[ 0.,  0., ...,  0.,  0.],
E                           [ 0.,  0., ...,  0.,  0.],
E                           ...,
E                           [ 0.,  0., ..., nan, nan],
E                           [ 0.,  0., ..., nan, nan]]], shape=(1, 4538, 2801), dtype=float32)
E
E               Reference data URLs:
E                 openEO.tif: https://s3.waw3-1.cloudferro.com/apex-benchmarks/gh-18670675142!tests_test_benchmarks.py__test_run_benchmark_forest_fire_mapping_!actual/openEO.tif
E                 job-results.json: https://s3.waw3-1.cloudferro.com/apex-benchmarks/gh-18670675142!tests_test_benchmarks.py__test_run_benchmark_forest_fire_mapping_!actual/job-results.json
E
E               Actual data S3 URLs (uploaded on failure):
E                 job-results.json: https://s3.waw3-1.cloudferro.com/apex-benchmarks/gh-24958170316!tests_test_benchmarks.py__test_run_benchmark_forest_fire_mapping_!actual/job-results.json
E                 openEO.tif: https://s3.waw3-1.cloudferro.com/apex-benchmarks/gh-24958170316!tests_test_benchmarks.py__test_run_benchmark_forest_fire_mapping_!actual/openEO.tif

tests/test_benchmarks.py:201: AssertionError
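Two differences compound in the `derived_from` mismatch above, which is why the comparison finds 0 common links: the actual links are bare product identifiers (some with a `_COG` suffix) while the expected links are full Sentinel Hub catalog item URLs, and the Sentinel-2 products themselves were reprocessed under a newer baseline (`N0510` in actual vs `N0509` in expected, with different generation timestamps). A hedged sketch of how both forms could be normalized to a baseline-independent granule key before set comparison; the `link_to_product_id` and `s2_granule_key` helpers are hypothetical illustrations, not part of the `apex_algorithm_qa_tools` comparison code:

```python
import re

def link_to_product_id(link: str) -> str:
    """Strip a catalog-URL prefix (if any), keeping only the product ID.

    A trailing '_COG' suffix is also removed so COG-converted products
    can match their originals.
    """
    return link.rstrip("/").rsplit("/", 1)[-1].removesuffix("_COG")

def s2_granule_key(product_id: str) -> str:
    """Reduce a Sentinel-2 L2A product ID to its sensing time + orbit + tile,
    dropping the processing baseline (N05xx) and generation timestamp,
    so reprocessed products compare equal to their originals."""
    m = re.match(
        r"(S2[AB]_MSIL2A_\d{8}T\d{6})_N\d{4}_(R\d{3}_T\w{5})_", product_id
    )
    return f"{m.group(1)}_{m.group(2)}" if m else product_id

# Examples taken from the error output above: one "actual" bare ID
# (N0510 reprocessing) and one "expected" full catalog URL (N0509).
actual_link = "S2A_MSIL2A_20230828T115221_N0510_R123_T28RBS_20240912T160127"
expected_link = (
    "https://services.sentinel-hub.com/api/v1/catalog/1.0.0/"
    "collections/sentinel-2-l2a/items/"
    "S2B_MSIL2A_20230816T120329_N0509_R023_T27RYN_20230816T173157"
)
print(s2_granule_key(link_to_product_id(actual_link)))
print(s2_granule_key(link_to_product_id(expected_link)))
```

With both sides mapped through such a key, only genuinely different granules (not reformatted links or reprocessed baselines) would show up as `only in actual` / `only in expected`.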
----------------------------- Captured stdout call -----------------------------
0:00:00 Job 'j-26042613475849f78728cd07f84635f7': send 'start'
0:00:14 Job 'j-26042613475849f78728cd07f84635f7': queued (progress 0%)
0:00:19 Job 'j-26042613475849f78728cd07f84635f7': queued (progress 0%)
0:00:25 Job 'j-26042613475849f78728cd07f84635f7': queued (progress 0%)
0:00:33 Job 'j-26042613475849f78728cd07f84635f7': queued (progress 0%)
0:00:43 Job 'j-26042613475849f78728cd07f84635f7': queued (progress 0%)
0:01:02 Job 'j-26042613475849f78728cd07f84635f7': queued (progress 0%)
0:01:17 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:01:36 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:02:00 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:02:30 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:03:08 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:03:55 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:04:53 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:05:53 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:06:53 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:07:53 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:08:54 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:09:57 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:10:58 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:11:58 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:12:58 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:13:58 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:14:58 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:15:59 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:16:59 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:17:59 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:18:59 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:19:59 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:21:00 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:22:00 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:23:00 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:24:01 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:25:01 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:26:01 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:27:01 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:28:01 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:29:01 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:30:02 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:31:03 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:32:03 Job 'j-26042613475849f78728cd07f84635f7': running (progress N/A)
0:33:09 Job 'j-26042613475849f78728cd07f84635f7': finished (progress 100%)
------------------------------ Captured log call -------------------------------
INFO     conftest:conftest.py:145 Connecting to 'openeo.dataspace.copernicus.eu'
INFO     openeo.config:config.py:193 Loaded openEO client config from sources: []
INFO     conftest:conftest.py:158 Checking for auth_env_var='OPENEO_AUTH_CLIENT_CREDENTIALS_CDSEFED' to drive auth against url='openeo.dataspace.copernicus.eu'.
INFO     conftest:conftest.py:162 Extracted provider_id='CDSE' client_id='openeo-apex-benchmarks-service-account' from auth_env_var='OPENEO_AUTH_CLIENT_CREDENTIALS_CDSEFED'
INFO     openeo.rest.connection:connection.py:302 Found OIDC providers: ['CDSE']
INFO     openeo.rest.auth.oidc:oidc.py:410 Doing 'client_credentials' token request 'https://identity.dataspace.copernicus.eu/auth/realms/CDSE/protocol/openid-connect/token' with post data fields ['grant_type', 'client_id', 'client_secret', 'scope'] (client_id 'openeo-apex-benchmarks-service-account')
INFO     openeo.rest.connection:connection.py:401 Obtained tokens: ['token_type', 'access_token', 'expires_in', 'id_token', 'scope']
INFO     openeo.rest.auth.oidc:oidc.py:410 Doing 'client_credentials' token request 'https://identity.dataspace.copernicus.eu/auth/realms/CDSE/protocol/openid-connect/token' with post data fields ['grant_type', 'client_id', 'client_secret', 'scope'] (client_id 'openeo-apex-benchmarks-service-account')
INFO     openeo.rest.connection:connection.py:401 Obtained tokens: ['token_type', 'access_token', 'expires_in', 'id_token', 'scope']
INFO     openeo.rest.connection:connection.py:747 Obtained new access token (grant 'client_credentials'). Reason: OIDC access token expired (403 TokenInvalid).
INFO     openeo.rest.job:job.py:436 Downloading Job result asset 'openEO.tif' from https://s3.waw3-1.openeo.v1.dataspace.copernicus.eu/openeo-data-prod-waw4-1/batch_jobs/j-26042613475849f78728cd07f84635f7/openEO.tif?X-Proxy-Head-As-Get=true&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=29805b3409a64bbfb06246838a06fbc9%2F20260426%2Fwaw4-1%2Fs3%2Faws4_request&X-Amz-Date=20260426T142108Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Security-Token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlX2FybiI6ImFybjpvcGVuZW93czppYW06Ojpyb2xlL29wZW5lby1kYXRhLXByb2Qtd2F3NC0xLXdvcmtzcGFjZSIsImluaXRpYWxfaXNzdWVyIjoib3BlbmVvLnByb2Qud2F3My0xLm9wZW5lby1pbnQudjEuZGF0YXNwYWNlLmNvcGVybmljdXMuZXUiLCJodHRwczovL2F3cy5hbWF6b24uY29tL3RhZ3MiOnsicHJpbmNpcGFsX3RhZ3MiOnsiam9iX2lkIjpbImotMjYwNDI2MTM0NzU4NDlmNzg3MjhjZDA3Zjg0NjM1ZjciXSwidXNlcl9pZCI6WyI2YTc3ZmNkMS05YzA4LTQ2ZTktYjg3NS01NGZiOTk5YWIyMDAiXX0sInRyYW5zaXRpdmVfdGFnX2tleXMiOlsidXNlcl9pZCIsImpvYl9pZCJdfSwiaXNzIjoic3RzLndhdzMtMS5vcGVuZW8udjEuZGF0YXNwYWNlLmNvcGVybmljdXMuZXUiLCJzdWIiOiJvcGVuZW8tZHJpdmVyIiwiZXhwIjoxNzc3MjU2NDY4LCJuYmYiOjE3NzcyMTMyNjgsImlhdCI6MTc3NzIxMzI2OCwianRpIjoiMDI3YmY3MDctMmMxNy00MDFjLWE5N2YtODYxMmU0ZTRkODBjIiwiYWNjZXNzX2tleV9pZCI6IjI5ODA1YjM0MDlhNjRiYmZiMDYyNDY4MzhhMDZmYmM5In0.ZqS4_rT55rRkdQ2-_IwZvXJpOtgMWACvFx-kbzHiTvjiDXdWy6AcjrUluSxlzc0lhNFq9D2zWNgMxMHCckD1J9fD4eJrbWldjOeYb5u0S8_t7fwjDLpwTAZac3rwX9GnvBXxvodznRYgpbHB5E-MH8gDUvNKm3cy_Yvsoq707xAWc_PLmLBg0jtycKiQ7yDGtq2jdJYLwZI4gnkBbuLQFwJsvmxjYgAid0SLOPLoC4_EcYmIJiACRTqbNEYh5R-i-u0AHet7jXs6r-fSe1xN49FAHWbVuzTZWj4A3p8M1P6cDugbcM-3U1Q1T6sxpJc6Gj0xy7n19anDvA6zCGUFGA&X-Amz-Signature=9ba9a4418efad44a169dde19bfb1ce0c6d4edcf40378a45b7dc6ab78ed199866 to /home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/actual/openEO.tif
INFO     apex_algorithm_qa_tools.scenarios:util.py:345 Downloading reference data for scenario.id='forest_fire_mapping' to reference_dir=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference'): start 2026-04-26 14:21:11.598564
INFO     apex_algorithm_qa_tools.scenarios:util.py:345 Downloading source='https://s3.waw3-1.cloudferro.com/apex-benchmarks/gh-18670675142!tests_test_benchmarks.py__test_run_benchmark_forest_fire_mapping_!actual/openEO.tif' to path=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference/openEO.tif'): start 2026-04-26 14:21:11.598880
INFO     apex_algorithm_qa_tools.scenarios:util.py:351 Downloading source='https://s3.waw3-1.cloudferro.com/apex-benchmarks/gh-18670675142!tests_test_benchmarks.py__test_run_benchmark_forest_fire_mapping_!actual/openEO.tif' to path=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference/openEO.tif'): end 2026-04-26 14:21:13.184767, elapsed 0:00:01.585887
INFO     apex_algorithm_qa_tools.scenarios:util.py:345 Downloading source='https://s3.waw3-1.cloudferro.com/apex-benchmarks/gh-18670675142!tests_test_benchmarks.py__test_run_benchmark_forest_fire_mapping_!actual/job-results.json' to path=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference/job-results.json'): start 2026-04-26 14:21:13.185110
INFO     apex_algorithm_qa_tools.scenarios:util.py:351 Downloading source='https://s3.waw3-1.cloudferro.com/apex-benchmarks/gh-18670675142!tests_test_benchmarks.py__test_run_benchmark_forest_fire_mapping_!actual/job-results.json' to path=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference/job-results.json'): end 2026-04-26 14:21:13.967580, elapsed 0:00:00.782470
INFO     apex_algorithm_qa_tools.scenarios:util.py:351 Downloading reference data for scenario.id='forest_fire_mapping' to reference_dir=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference'): end 2026-04-26 14:21:13.967752, elapsed 0:00:02.369188
INFO     openeo.testing.results:results.py:423 Comparing job results: PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/actual') vs PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference')
