
Conversation

@yngve-sk (Contributor) commented Aug 19, 2025

Issue
Resolves #11477

codspeed-hq bot commented Aug 19, 2025

CodSpeed Performance Report

Merging #11513 will not alter performance

Comparing yngve-sk:25.08.save-runmodel-configs-in-storage (95af896) with main (a74f40e) [1]

Summary

✅ 22 untouched

Footnotes

  1. No successful run was found on main (170d526) during the generation of this report, so a74f40e was used instead as the comparison base. There might be some changes unrelated to this pull request in this report.

@yngve-sk yngve-sk force-pushed the 25.08.save-runmodel-configs-in-storage branch 3 times, most recently from 7310422 to 67e6490, on August 27, 2025 05:06
@yngve-sk (Contributor, Author) commented Sep 1, 2025

Blocked by #11673, #11674

@yngve-sk yngve-sk force-pushed the 25.08.save-runmodel-configs-in-storage branch 7 times, most recently from bdc43ce to eb97bd7, on October 6, 2025 10:43
@yngve-sk yngve-sk force-pushed the 25.08.save-runmodel-configs-in-storage branch from 09e40b8 to d516157, on October 8, 2025 13:08
@yngve-sk yngve-sk self-assigned this Oct 8, 2025
@yngve-sk yngve-sk force-pushed the 25.08.save-runmodel-configs-in-storage branch from d516157 to 85ce4b2, on October 9, 2025 05:40
@yngve-sk yngve-sk force-pushed the 25.08.save-runmodel-configs-in-storage branch 3 times, most recently from 1aea14c to ef97560, on October 9, 2025 10:29
codecov-commenter commented Oct 9, 2025

❌ 14 Tests Failed:

Tests completed | Failed | Passed | Skipped
682 | 14 | 668 | 9
View the top 3 failed test(s) by shortest run time
tests/everest/test_api_snapshots.py::test_api_summary_snapshot[config_minimal.yml]@math_func/config_minimal.yml
Stack Traces | 0.119s run time
config_file = 'config_minimal.yml'
snapshot = <pytest_snapshot.plugin.Snapshot object at 0x7f673deb7b10>
cached_example = <function cached_example.<locals>.run_config at 0x7f673cf8f7e0>

    @pytest.mark.integration_test
    @pytest.mark.parametrize(
        "config_file",
        [
            pytest.param(
                "config_advanced.yml",
                marks=pytest.mark.xdist_group("math_func/config_advanced.yml"),
            ),
            pytest.param(
                "config_minimal.yml",
                marks=pytest.mark.xdist_group("math_func/config_minimal.yml"),
            ),
            pytest.param(
                "config_multiobj.yml",
                marks=pytest.mark.xdist_group("math_func/config_multiobj.yml"),
            ),
        ],
    )
    def test_api_summary_snapshot(config_file, snapshot, cached_example):
        config_path, config_file, _, _ = cached_example(f"math_func/{config_file}")
        config = EverestConfig.load_file(Path(config_path) / config_file)
    
        with open_storage(config.storage_dir, mode="w") as storage:
            # Save some summary data to each ensemble
            experiment = next(storage.experiments)
    
            response_config = experiment.response_configuration
            response_config["summary"] = SummaryConfig(keys=["*"])
    
            experiment._storage._write_transaction(
                experiment._path / experiment._responses_file,
                json.dumps(
                    {c.type: c.model_dump(mode="json") for c in response_config.values()},
                    default=str,
                    indent=2,
                ).encode("utf-8"),
            )
    
            smry_data = pl.DataFrame(
                {
                    "response_key": ["FOPR", "FOPR", "WOPR", "WOPR", "FOPT", "FOPT"],
                    "time": pl.Series(
                        [datetime(2000, 1, 1), datetime(2000, 1, 2)] * 3
                    ).dt.cast_time_unit("ms"),
                    "values": pl.Series([0.2, 0.2, 1.0, 1.1, 3.3, 3.3], dtype=pl.Float32),
                }
            )
            for ens in experiment.ensembles:
                for real in range(ens.ensemble_size):
>                   ens.save_response("summary", smry_data.clone(), real)

.../tests/everest/test_api_snapshots.py:142: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../ert/storage/mode.py:98: in inner
    return func(self_, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../ert/storage/local_ensemble.py:946: in save_response
    if not self.experiment._has_finalized_response_keys(response_type):
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <ert.storage.local_experiment.LocalExperiment object at 0x7f673cf36b10>
response_type = 'summary'

    def _has_finalized_response_keys(self, response_type: str) -> bool:
        responses_configuration = self.response_configuration
        if response_type not in responses_configuration:
>           raise KeyError(
                f"Response type {response_type} does not "
                "exist in current responses.json"
            )
E           KeyError: 'Response type summary does not exist in current responses.json'

.../ert/storage/local_experiment.py:443: KeyError
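
All five snapshot-test failures in this report reduce to the same KeyError raised by the membership guard in _has_finalized_response_keys. The standalone sketch below (plain Python, illustrative only, not ert's actual implementation; the ExperimentSketch class and the has_finalized_keys field are invented for this example) reproduces that guard in isolation: asking about a response type that is absent from the in-memory response configuration raises, which is consistent with the traceback, where the rewritten responses.json apparently is not reflected in the configuration that save_response consults.

    # Standalone sketch of the guard seen in the traceback (illustrative only, not ert code).
    class ExperimentSketch:
        def __init__(self, response_configuration: dict[str, dict]) -> None:
            # Mapping of response type -> configuration, analogous to responses.json.
            self.response_configuration = response_configuration

        def _has_finalized_response_keys(self, response_type: str) -> bool:
            configuration = self.response_configuration
            if response_type not in configuration:
                # Same failure mode as in the stack trace above.
                raise KeyError(
                    f"Response type {response_type} does not "
                    "exist in current responses.json"
                )
            # "has_finalized_keys" is a made-up field for this sketch.
            return bool(configuration[response_type].get("has_finalized_keys", False))

    experiment = ExperimentSketch({"gen_data": {"has_finalized_keys": True}})
    try:
        experiment._has_finalized_response_keys("summary")
    except KeyError as err:
        print(err)  # 'Response type summary does not exist in current responses.json'
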
tests/everest/test_api_snapshots.py::test_api_summary_snapshot_with_differing_columns_per_batch@math_func/config_minimal.yml
Stack Traces | 0.119s run time
snapshot = <pytest_snapshot.plugin.Snapshot object at 0x7f674f7d2520>
cached_example = <function cached_example.<locals>.run_config at 0x7f674f720ae0>

    @pytest.mark.integration_test
    @pytest.mark.xdist_group("math_func/config_minimal.yml")
    def test_api_summary_snapshot_with_differing_columns_per_batch(
        snapshot, cached_example
    ):
        config_path, config_file, _, _ = cached_example("math_func/config_minimal.yml")
        config = EverestConfig.load_file(Path(config_path) / config_file)
    
        with open_storage(config.storage_dir, mode="w") as storage:
            # Save some summary data to each ensemble
            experiment = next(storage.experiments)
    
            response_config = experiment.response_configuration
            response_config["summary"] = SummaryConfig(keys=["*"])
    
            experiment._storage._write_transaction(
                experiment._path / experiment._responses_file,
                json.dumps(
                    {c.type: c.model_dump(mode="json") for c in response_config.values()},
                    default=str,
                    indent=2,
                ).encode("utf-8"),
            )
    
            smry_data = pl.DataFrame(
                {
                    "response_key": ["FOPR", "FOPR", "WOPR", "WOPR", "FOPT", "FOPT"],
                    "time": pl.Series(
                        [datetime(2000, 1, 1), datetime(2000, 1, 2)] * 3
                    ).dt.cast_time_unit("ms"),
                    "values": pl.Series([0.2, 0.2, 1.0, 1.1, 3.3, 3.3], dtype=pl.Float32),
                }
            )
            for ens in experiment.ensembles:
                for real in range(ens.ensemble_size):
>                   ens.save_response("summary", smry_data.clone(), real)

.../tests/everest/test_api_snapshots.py:234: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../ert/storage/mode.py:98: in inner
    return func(self_, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../ert/storage/local_ensemble.py:946: in save_response
    if not self.experiment._has_finalized_response_keys(response_type):
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <ert.storage.local_experiment.LocalExperiment object at 0x7f673cf85ae0>
response_type = 'summary'

    def _has_finalized_response_keys(self, response_type: str) -> bool:
        responses_configuration = self.response_configuration
        if response_type not in responses_configuration:
>           raise KeyError(
                f"Response type {response_type} does not "
                "exist in current responses.json"
            )
E           KeyError: 'Response type summary does not exist in current responses.json'

.../ert/storage/local_experiment.py:443: KeyError
tests/everest/test_api_snapshots.py::test_api_summary_snapshot_missing_batch@math_func/config_minimal.yml
Stack Traces | 0.196s run time
snapshot = <pytest_snapshot.plugin.Snapshot object at 0x7f673cf12d50>
cached_example = <function cached_example.<locals>.run_config at 0x7f673cfb3ba0>

    @pytest.mark.integration_test
    @pytest.mark.xdist_group("math_func/config_minimal.yml")
    def test_api_summary_snapshot_missing_batch(snapshot, cached_example):
        config_path, config_file, _, _ = cached_example("math_func/config_minimal.yml")
        config = EverestConfig.load_file(Path(config_path) / config_file)
    
        with open_storage(config.storage_dir, mode="w") as storage:
            # Save some summary data to each ensemble
            experiment = next(storage.experiments)
    
            response_config = experiment.response_configuration
            response_config["summary"] = SummaryConfig(keys=["*"])
    
            experiment._storage._write_transaction(
                experiment._path / experiment._responses_file,
                json.dumps(
                    {c.type: c.model_dump(mode="json") for c in response_config.values()},
                    default=str,
                    indent=2,
                ).encode("utf-8"),
            )
    
            smry_data = pl.DataFrame(
                {
                    "response_key": ["FOPR", "FOPR", "WOPR", "WOPR", "FOPT", "FOPT"],
                    "time": pl.Series(
                        [datetime(2000, 1, 1), datetime(2000, 1, 2)] * 3
                    ).dt.cast_time_unit("ms"),
                    "values": pl.Series([0.2, 0.2, 1.0, 1.1, 3.3, 3.3], dtype=pl.Float32),
                }
            )
            for ens in experiment.ensembles:
                for real in range(ens.ensemble_size):
>                   ens.save_response("summary", smry_data.clone(), real)

.../tests/everest/test_api_snapshots.py:185: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../ert/storage/mode.py:98: in inner
    return func(self_, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../ert/storage/local_ensemble.py:946: in save_response
    if not self.experiment._has_finalized_response_keys(response_type):
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <ert.storage.local_experiment.LocalExperiment object at 0x7f673cf87460>
response_type = 'summary'

    def _has_finalized_response_keys(self, response_type: str) -> bool:
        responses_configuration = self.response_configuration
        if response_type not in responses_configuration:
>           raise KeyError(
                f"Response type {response_type} does not "
                "exist in current responses.json"
            )
E           KeyError: 'Response type summary does not exist in current responses.json'

.../ert/storage/local_experiment.py:443: KeyError
tests/everest/test_api_snapshots.py::test_api_summary_snapshot[config_multiobj.yml]@math_func/config_multiobj.yml
Stack Traces | 0.209s run time
config_file = 'config_multiobj.yml'
snapshot = <pytest_snapshot.plugin.Snapshot object at 0x7f01adb27890>
cached_example = <function cached_example.<locals>.run_config at 0x7f01adb8a160>

    @pytest.mark.integration_test
    @pytest.mark.parametrize(
        "config_file",
        [
            pytest.param(
                "config_advanced.yml",
                marks=pytest.mark.xdist_group("math_func/config_advanced.yml"),
            ),
            pytest.param(
                "config_minimal.yml",
                marks=pytest.mark.xdist_group("math_func/config_minimal.yml"),
            ),
            pytest.param(
                "config_multiobj.yml",
                marks=pytest.mark.xdist_group("math_func/config_multiobj.yml"),
            ),
        ],
    )
    def test_api_summary_snapshot(config_file, snapshot, cached_example):
        config_path, config_file, _, _ = cached_example(f"math_func/{config_file}")
        config = EverestConfig.load_file(Path(config_path) / config_file)
    
        with open_storage(config.storage_dir, mode="w") as storage:
            # Save some summary data to each ensemble
            experiment = next(storage.experiments)
    
            response_config = experiment.response_configuration
            response_config["summary"] = SummaryConfig(keys=["*"])
    
            experiment._storage._write_transaction(
                experiment._path / experiment._responses_file,
                json.dumps(
                    {c.type: c.model_dump(mode="json") for c in response_config.values()},
                    default=str,
                    indent=2,
                ).encode("utf-8"),
            )
    
            smry_data = pl.DataFrame(
                {
                    "response_key": ["FOPR", "FOPR", "WOPR", "WOPR", "FOPT", "FOPT"],
                    "time": pl.Series(
                        [datetime(2000, 1, 1), datetime(2000, 1, 2)] * 3
                    ).dt.cast_time_unit("ms"),
                    "values": pl.Series([0.2, 0.2, 1.0, 1.1, 3.3, 3.3], dtype=pl.Float32),
                }
            )
            for ens in experiment.ensembles:
                for real in range(ens.ensemble_size):
>                   ens.save_response("summary", smry_data.clone(), real)

.../tests/everest/test_api_snapshots.py:142: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../ert/storage/mode.py:98: in inner
    return func(self_, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../ert/storage/local_ensemble.py:946: in save_response
    if not self.experiment._has_finalized_response_keys(response_type):
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <ert.storage.local_experiment.LocalExperiment object at 0x7f01adb24e10>
response_type = 'summary'

    def _has_finalized_response_keys(self, response_type: str) -> bool:
        responses_configuration = self.response_configuration
        if response_type not in responses_configuration:
>           raise KeyError(
                f"Response type {response_type} does not "
                "exist in current responses.json"
            )
E           KeyError: 'Response type summary does not exist in current responses.json'

.../ert/storage/local_experiment.py:443: KeyError
tests/everest/test_api_snapshots.py::test_api_summary_snapshot[config_advanced.yml]@math_func/config_advanced.yml
Stack Traces | 0.38s run time
config_file = 'config_advanced.yml'
snapshot = <pytest_snapshot.plugin.Snapshot object at 0x7f7986f9da90>
cached_example = <function cached_example.<locals>.run_config at 0x7f7999a0a8e0>

    @pytest.mark.integration_test
    @pytest.mark.parametrize(
        "config_file",
        [
            pytest.param(
                "config_advanced.yml",
                marks=pytest.mark.xdist_group("math_func/config_advanced.yml"),
            ),
            pytest.param(
                "config_minimal.yml",
                marks=pytest.mark.xdist_group("math_func/config_minimal.yml"),
            ),
            pytest.param(
                "config_multiobj.yml",
                marks=pytest.mark.xdist_group("math_func/config_multiobj.yml"),
            ),
        ],
    )
    def test_api_summary_snapshot(config_file, snapshot, cached_example):
        config_path, config_file, _, _ = cached_example(f"math_func/{config_file}")
        config = EverestConfig.load_file(Path(config_path) / config_file)
    
        with open_storage(config.storage_dir, mode="w") as storage:
            # Save some summary data to each ensemble
            experiment = next(storage.experiments)
    
            response_config = experiment.response_configuration
            response_config["summary"] = SummaryConfig(keys=["*"])
    
            experiment._storage._write_transaction(
                experiment._path / experiment._responses_file,
                json.dumps(
                    {c.type: c.model_dump(mode="json") for c in response_config.values()},
                    default=str,
                    indent=2,
                ).encode("utf-8"),
            )
    
            smry_data = pl.DataFrame(
                {
                    "response_key": ["FOPR", "FOPR", "WOPR", "WOPR", "FOPT", "FOPT"],
                    "time": pl.Series(
                        [datetime(2000, 1, 1), datetime(2000, 1, 2)] * 3
                    ).dt.cast_time_unit("ms"),
                    "values": pl.Series([0.2, 0.2, 1.0, 1.1, 3.3, 3.3], dtype=pl.Float32),
                }
            )
            for ens in experiment.ensembles:
                for real in range(ens.ensemble_size):
>                   ens.save_response("summary", smry_data.clone(), real)

.../tests/everest/test_api_snapshots.py:142: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../ert/storage/mode.py:98: in inner
    return func(self_, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../ert/storage/local_ensemble.py:946: in save_response
    if not self.experiment._has_finalized_response_keys(response_type):
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <ert.storage.local_experiment.LocalExperiment object at 0x7f7986f9f890>
response_type = 'summary'

    def _has_finalized_response_keys(self, response_type: str) -> bool:
        responses_configuration = self.response_configuration
        if response_type not in responses_configuration:
>           raise KeyError(
                f"Response type {response_type} does not "
                "exist in current responses.json"
            )
E           KeyError: 'Response type summary does not exist in current responses.json'

.../ert/storage/local_experiment.py:443: KeyError
View the full list of 9 ❄️ flaky test(s)
tests/ert/performance_tests/test_obs_and_responses_performance.py::test_memory_performance_of_doing_enif_update[setup_es_benchmark1]

Flake rate in main: 87.50% (Passed 5 times, Failed 35 times)

Stack Traces | 36.7s run time
setup_es_benchmark = ('medium', <ert.storage.local_ensemble.LocalEnsemble object at 0x7fd0f3d24850>, <ert.storage.local_ensemble.LocalEnsemble object at 0x7fd0f3d24350>, ['param_0'], _ExpectedPerformance(memory_limit_mb=3100, last_measured_memory_mb=2230))
tmp_path = PosixPath('.../pytest-1/popen-gw1/test_memory_performance_of_doi2')

    @pytest.mark.memory_test
    @pytest.mark.skipif(
        sys.platform.startswith("darwin"), reason="Currently failing on mac"
    )
    def test_memory_performance_of_doing_enif_update(setup_es_benchmark, tmp_path):
        _, prior, posterior, gen_kw_names, expected_performance = setup_es_benchmark
        with memray.Tracker(tmp_path / "memray.bin"):
            enif_update(
                prior,
                posterior,
                prior.experiment.observation_keys,
                gen_kw_names,
                12345,
            )
    
        stats = memray._memray.compute_statistics(str(tmp_path / "memray.bin"))
        mem_usage_mb = stats.total_memory_allocated / (1024**2)
>       assert mem_usage_mb < expected_performance.memory_limit_mb
E       assert 3833.42333316803 < 3100
E        +  where 3100 = _ExpectedPerformance(memory_limit_mb=3100, last_measured_memory_mb=2230).memory_limit_mb

.../ert/performance_tests/test_obs_and_responses_performance.py:595: AssertionError
tests/ert/performance_tests/test_obs_and_responses_performance.py::test_memory_performance_of_doing_enif_update[setup_es_benchmark2]

Flake rate in main: 87.50% (Passed 5 times, Failed 35 times)

Stack Traces | 54.3s run time
setup_es_benchmark = ('large', <ert.storage.local_ensemble.LocalEnsemble object at 0x7fd0f3df00d0>, <ert.storage.local_ensemble.LocalEnsemble object at 0x7fd0f3df0c50>, ['param_0'], _ExpectedPerformance(memory_limit_mb=4000, last_measured_memory_mb=3088))
tmp_path = PosixPath('.../pytest-1/popen-gw1/test_memory_performance_of_doi3')

    @pytest.mark.memory_test
    @pytest.mark.skipif(
        sys.platform.startswith("darwin"), reason="Currently failing on mac"
    )
    def test_memory_performance_of_doing_enif_update(setup_es_benchmark, tmp_path):
        _, prior, posterior, gen_kw_names, expected_performance = setup_es_benchmark
        with memray.Tracker(tmp_path / "memray.bin"):
            enif_update(
                prior,
                posterior,
                prior.experiment.observation_keys,
                gen_kw_names,
                12345,
            )
    
        stats = memray._memray.compute_statistics(str(tmp_path / "memray.bin"))
        mem_usage_mb = stats.total_memory_allocated / (1024**2)
>       assert mem_usage_mb < expected_performance.memory_limit_mb
E       assert 6655.4385805130005 < 4000
E        +  where 4000 = _ExpectedPerformance(memory_limit_mb=4000, last_measured_memory_mb=3088).memory_limit_mb

.../ert/performance_tests/test_obs_and_responses_performance.py:595: AssertionError
tests/ert/performance_tests/test_obs_and_responses_performance.py::test_memory_performance_of_doing_enif_update[setup_es_benchmark3]

Flake rate in main: 87.50% (Passed 5 times, Failed 35 times)

Stack Traces | 61.9s run time
setup_es_benchmark = ('large+', <ert.storage.local_ensemble.LocalEnsemble object at 0x7f3b6934e850>, <ert.storage.local_ensemble.LocalEnsemble object at 0x7f3c0c398450>, ['param_0'], _ExpectedPerformance(memory_limit_mb=4500, last_measured_memory_mb=3115))
tmp_path = PosixPath('.../pytest-1/popen-gw0/test_memory_performance_of_doi3')

    @pytest.mark.memory_test
    @pytest.mark.skipif(
        sys.platform.startswith("darwin"), reason="Currently failing on mac"
    )
    def test_memory_performance_of_doing_enif_update(setup_es_benchmark, tmp_path):
        _, prior, posterior, gen_kw_names, expected_performance = setup_es_benchmark
        with memray.Tracker(tmp_path / "memray.bin"):
            enif_update(
                prior,
                posterior,
                prior.experiment.observation_keys,
                gen_kw_names,
                12345,
            )
    
        stats = memray._memray.compute_statistics(str(tmp_path / "memray.bin"))
        mem_usage_mb = stats.total_memory_allocated / (1024**2)
>       assert mem_usage_mb < expected_performance.memory_limit_mb
E       assert 9283.457594871521 < 4500
E        +  where 4500 = _ExpectedPerformance(memory_limit_mb=4500, last_measured_memory_mb=3115).memory_limit_mb

.../ert/performance_tests/test_obs_and_responses_performance.py:595: AssertionError
tests/ert/performance_tests/test_obs_and_responses_performance.py::test_memory_performance_of_doing_es_update[setup_es_benchmark1]

Flake rate in main: 87.50% (Passed 5 times, Failed 35 times)

Stack Traces | 38.5s run time
setup_es_benchmark = ('medium', <ert.storage.local_ensemble.LocalEnsemble object at 0x7fd10807f850>, <ert.storage.local_ensemble.LocalEnsemble object at 0x7fd120f91f50>, ['param_0'], _ExpectedPerformance(memory_limit_mb=3100, last_measured_memory_mb=2230))
tmp_path = PosixPath('.../pytest-1/popen-gw1/test_memory_performance_of_doi0')

    @pytest.mark.memory_test
    @pytest.mark.skipif(
        sys.platform.startswith("darwin"), reason="Currently failing on mac"
    )
    def test_memory_performance_of_doing_es_update(setup_es_benchmark, tmp_path):
        _, prior, posterior, gen_kw_names, expected_performance = setup_es_benchmark
        with memray.Tracker(tmp_path / "memray.bin"):
            smoother_update(
                prior,
                posterior,
                prior.experiment.observation_keys,
                gen_kw_names,
                ObservationSettings(),
                ESSettings(),
            )
    
        stats = memray._memray.compute_statistics(str(tmp_path / "memray.bin"))
        mem_usage_mb = stats.total_memory_allocated / (1024**2)
>       assert mem_usage_mb < expected_performance.memory_limit_mb
E       assert 3606.7290058135986 < 3100
E        +  where 3100 = _ExpectedPerformance(memory_limit_mb=3100, last_measured_memory_mb=2230).memory_limit_mb

.../ert/performance_tests/test_obs_and_responses_performance.py:556: AssertionError
tests/ert/performance_tests/test_obs_and_responses_performance.py::test_memory_performance_of_doing_es_update[setup_es_benchmark2]

Flake rate in main: 87.50% (Passed 5 times, Failed 35 times)

Stack Traces | 53.4s run time
setup_es_benchmark = ('large', <ert.storage.local_ensemble.LocalEnsemble object at 0x7f3c1ebf1f90>, <ert.storage.local_ensemble.LocalEnsemble object at 0x7f3b652c7ec0>, ['param_0'], _ExpectedPerformance(memory_limit_mb=4000, last_measured_memory_mb=3088))
tmp_path = PosixPath('.../pytest-1/popen-gw0/test_memory_performance_of_doi1')

    @pytest.mark.memory_test
    @pytest.mark.skipif(
        sys.platform.startswith("darwin"), reason="Currently failing on mac"
    )
    def test_memory_performance_of_doing_es_update(setup_es_benchmark, tmp_path):
        _, prior, posterior, gen_kw_names, expected_performance = setup_es_benchmark
        with memray.Tracker(tmp_path / "memray.bin"):
            smoother_update(
                prior,
                posterior,
                prior.experiment.observation_keys,
                gen_kw_names,
                ObservationSettings(),
                ESSettings(),
            )
    
        stats = memray._memray.compute_statistics(str(tmp_path / "memray.bin"))
        mem_usage_mb = stats.total_memory_allocated / (1024**2)
>       assert mem_usage_mb < expected_performance.memory_limit_mb
E       assert 6423.977465629578 < 4000
E        +  where 4000 = _ExpectedPerformance(memory_limit_mb=4000, last_measured_memory_mb=3088).memory_limit_mb

.../ert/performance_tests/test_obs_and_responses_performance.py:556: AssertionError
tests/ert/performance_tests/test_obs_and_responses_performance.py::test_memory_performance_of_doing_es_update[setup_es_benchmark3]

Flake rate in main: 87.50% (Passed 5 times, Failed 35 times)

Stack Traces | 64.8s run time
setup_es_benchmark = ('large+', <ert.storage.local_ensemble.LocalEnsemble object at 0x7f3b685edf50>, <ert.storage.local_ensemble.LocalEnsemble object at 0x7f3b6b1e3050>, ['param_0'], _ExpectedPerformance(memory_limit_mb=4500, last_measured_memory_mb=3115))
tmp_path = PosixPath('.../pytest-1/popen-gw0/test_memory_performance_of_doi2')

    @pytest.mark.memory_test
    @pytest.mark.skipif(
        sys.platform.startswith("darwin"), reason="Currently failing on mac"
    )
    def test_memory_performance_of_doing_es_update(setup_es_benchmark, tmp_path):
        _, prior, posterior, gen_kw_names, expected_performance = setup_es_benchmark
        with memray.Tracker(tmp_path / "memray.bin"):
            smoother_update(
                prior,
                posterior,
                prior.experiment.observation_keys,
                gen_kw_names,
                ObservationSettings(),
                ESSettings(),
            )
    
        stats = memray._memray.compute_statistics(str(tmp_path / "memray.bin"))
        mem_usage_mb = stats.total_memory_allocated / (1024**2)
>       assert mem_usage_mb < expected_performance.memory_limit_mb
E       assert 8256.098567008972 < 4500
E        +  where 4500 = _ExpectedPerformance(memory_limit_mb=4500, last_measured_memory_mb=3115).memory_limit_mb

.../ert/performance_tests/test_obs_and_responses_performance.py:556: AssertionError
tests/ert/performance_tests/test_obs_and_responses_performance.py::test_memory_performance_of_joining_observations_and_responses[setup_benchmark1]

Flake rate in main: 87.50% (Passed 5 times, Failed 35 times)

Stack Traces | 30.8s run time
setup_benchmark = ('medium', <ert.storage.local_ensemble.LocalEnsemble object at 0x7fd0688869d0>, ['genobs_0', 'genobs_1', 'genobs_10', ..., 193, 194,
       195, 196, 197, 198, 199]), _ExpectedPerformance(memory_limit_mb=1500, last_measured_memory_mb=1027))
tmp_path = PosixPath('.../pytest-1/popen-gw1/test_memory_performance_of_joi1')

    @pytest.mark.memory_test
    @pytest.mark.skipif(
        sys.platform.startswith("darwin"), reason="Currently failing on mac"
    )
    def test_memory_performance_of_joining_observations_and_responses(
        setup_benchmark, tmp_path
    ):
        _, ens, observation_keys, mask, expected_performance = setup_benchmark
    
        with memray.Tracker(tmp_path / "memray.bin"):
            ens.get_observations_and_responses(observation_keys, mask)
    
        stats = memray._memray.compute_statistics(str(tmp_path / "memray.bin"))
        mem_usage_mb = stats.total_memory_allocated / (1024**2)
>       assert mem_usage_mb < expected_performance.memory_limit_mb
E       assert 3112.147476196289 < 1500
E        +  where 1500 = _ExpectedPerformance(memory_limit_mb=1500, last_measured_memory_mb=1027).memory_limit_mb

.../ert/performance_tests/test_obs_and_responses_performance.py:444: AssertionError
tests/ert/performance_tests/test_obs_and_responses_performance.py::test_memory_performance_of_joining_observations_and_responses[setup_benchmark2]

Flake rate in main: 87.50% (Passed 5 times, Failed 35 times)

Stack Traces | 49.4s run time
setup_benchmark = ('large', <ert.storage.local_ensemble.LocalEnsemble object at 0x7fd0688533d0>, ['genobs_0', 'genobs_1', 'genobs_10', '..., 193, 194,
       195, 196, 197, 198, 199]), _ExpectedPerformance(memory_limit_mb=4500, last_measured_memory_mb=1710))
tmp_path = PosixPath('.../pytest-1/popen-gw1/test_memory_performance_of_joi2')

    @pytest.mark.memory_test
    @pytest.mark.skipif(
        sys.platform.startswith("darwin"), reason="Currently failing on mac"
    )
    def test_memory_performance_of_joining_observations_and_responses(
        setup_benchmark, tmp_path
    ):
        _, ens, observation_keys, mask, expected_performance = setup_benchmark
    
        with memray.Tracker(tmp_path / "memray.bin"):
            ens.get_observations_and_responses(observation_keys, mask)
    
        stats = memray._memray.compute_statistics(str(tmp_path / "memray.bin"))
        mem_usage_mb = stats.total_memory_allocated / (1024**2)
>       assert mem_usage_mb < expected_performance.memory_limit_mb
E       assert 5591.630917549133 < 4500
E        +  where 4500 = _ExpectedPerformance(memory_limit_mb=4500, last_measured_memory_mb=1710).memory_limit_mb

.../ert/performance_tests/test_obs_and_responses_performance.py:444: AssertionError
tests/ert/performance_tests/test_obs_and_responses_performance.py::test_memory_performance_of_joining_observations_and_responses[setup_benchmark3]

Flake rate in main: 87.50% (Passed 5 times, Failed 35 times)

Stack Traces | 58.4s run time
setup_benchmark = ('large+', <ert.storage.local_ensemble.LocalEnsemble object at 0x7f3b696eb960>, ['genobs_0', 'genobs_1', 'genobs_10', ..., 193, 194,
       195, 196, 197, 198, 199]), _ExpectedPerformance(memory_limit_mb=3300, last_measured_memory_mb=1715))
tmp_path = PosixPath('.../pytest-1/popen-gw0/test_memory_performance_of_joi0')

    @pytest.mark.memory_test
    @pytest.mark.skipif(
        sys.platform.startswith("darwin"), reason="Currently failing on mac"
    )
    def test_memory_performance_of_joining_observations_and_responses(
        setup_benchmark, tmp_path
    ):
        _, ens, observation_keys, mask, expected_performance = setup_benchmark
    
        with memray.Tracker(tmp_path / "memray.bin"):
            ens.get_observations_and_responses(observation_keys, mask)
    
        stats = memray._memray.compute_statistics(str(tmp_path / "memray.bin"))
        mem_usage_mb = stats.total_memory_allocated / (1024**2)
>       assert mem_usage_mb < expected_performance.memory_limit_mb
E       assert 7363.449796676636 < 3300
E        +  where 3300 = _ExpectedPerformance(memory_limit_mb=3300, last_measured_memory_mb=1715).memory_limit_mb

.../ert/performance_tests/test_obs_and_responses_performance.py:444: AssertionError

To view more test analytics, go to the Test Analytics Dashboard

@yngve-sk yngve-sk force-pushed the 25.08.save-runmodel-configs-in-storage branch 9 times, most recently from 96274af to fdd890d, on October 10, 2025 07:13
@yngve-sk yngve-sk force-pushed the 25.08.save-runmodel-configs-in-storage branch 7 times, most recently from 8d16f80 to ae30135, on December 19, 2025 07:43
@yngve-sk yngve-sk force-pushed the 25.08.save-runmodel-configs-in-storage branch from ae30135 to d02862f, on December 19, 2025 09:07
@yngve-sk yngve-sk force-pushed the 25.08.save-runmodel-configs-in-storage branch 7 times, most recently from ab1fb1a to e4b0c4b, on December 19, 2025 13:44
@yngve-sk yngve-sk force-pushed the 25.08.save-runmodel-configs-in-storage branch 11 times, most recently from 8805d97 to 4b69ebb, on January 7, 2026 08:16
@yngve-sk yngve-sk force-pushed the 25.08.save-runmodel-configs-in-storage branch from 4b69ebb to 331eaec, on January 7, 2026 08:23

Development

Successfully merging this pull request may close these issues.

Save serialized runmodels in storage

4 participants