
Cli monitor for ert3 #2960

Merged 1 commit into equinor:main on Mar 16, 2022
Conversation

@xjules (Contributor) commented Feb 22, 2022

Issue
Resolves #2944
Resolves #2894

Approach
This combines the following two PRs (which will be closed):
#2944
#2903

Pre review checklist

  • Added appropriate labels

Adding labels helps the maintainers when writing release notes, see sections and the
corresponding labels here: https://github.com/equinor/ert/blob/main/.github/release.yml

@codecov-commenter commented Feb 22, 2022

Codecov Report

Merging #2960 (9aa2f8f) into main (4475cce) will increase coverage by 0.03%.
The diff coverage is 89.07%.

❗ Current head 9aa2f8f differs from pull request most recent head 60ba2ec. Consider uploading reports for the commit 60ba2ec to get more accurate results

Impacted file tree graph

@@            Coverage Diff             @@
##             main    #2960      +/-   ##
==========================================
+ Coverage   65.43%   65.47%   +0.03%     
==========================================
  Files         640      641       +1     
  Lines       50599    50640      +41     
  Branches     4438     4438              
==========================================
+ Hits        33111    33158      +47     
+ Misses      15996    15986      -10     
- Partials     1492     1496       +4     
Impacted Files Coverage Δ
ert_shared/status/utils.py 76.66% <ø> (-5.95%) ⬇️
ert_shared/ensemble_evaluator/monitor.py 90.00% <80.00%> (-1.53%) ⬇️
ert/tracker/evaluator_tracker.py 77.20% <84.78%> (ø)
ert3/evaluator/_evaluator.py 92.30% <91.22%> (-4.67%) ⬇️
ert/tracker/__init__.py 100.00% <100.00%> (ø)
ert_data/loader.py 85.81% <100.00%> (ø)
ert_gui/simulation/tracker_worker.py 54.76% <100.00%> (+1.27%) ⬆️
ert_shared/ensemble_evaluator/config.py 95.18% <100.00%> (+1.18%) ⬆️
ert_shared/ensemble_evaluator/entity/snapshot.py 94.79% <100.00%> (ø)
ert_shared/status/tracker/factory.py 100.00% <100.00%> (ø)
... and 4 more

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4475cce...60ba2ec. Read the comment docs.

@jondequinor (Contributor) left a comment

Sorry if this is still WIP, but I left some comments. Seems some stuff is missing.

Other than that, looks good! I like the concurrent.future use!

Four review threads on ert3/evaluator/_evaluator.py (outdated, resolved)
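The concurrent.futures use praised above presumably refers to running the evaluation off the main thread while a monitor consumes events. A minimal, self-contained sketch of that pattern (all names here are hypothetical stand-ins, not the actual _evaluator.py code):

```python
import concurrent.futures
import queue

def run_evaluation(events: "queue.Queue[str]") -> str:
    # Stand-in for the real ensemble evaluation: emit a few status events.
    for phase in ("started", "running", "done"):
        events.put(phase)
    return "result"

def evaluate_with_monitor() -> str:
    # The evaluation runs in a worker thread while the main thread
    # consumes events, mirroring a CLI monitor loop.
    events: "queue.Queue[str]" = queue.Queue()
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(run_evaluation, events)
        while True:
            event = events.get()
            if event == "done":
                break
        return future.result()
```

In ert the events would come from the ensemble evaluator over a connection rather than a local queue, but the threading shape is the same.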

return result


def evaluate(
-    ensemble: Ensemble,
+    ensemble: Ensemble, cli: bool = True
Contributor:
What is the cli flag? Is it ever used?

Contributor Author (@xjules):

It is used, but maybe it should be set as the default, with a gui flag there instead?

Contributor:

I'm not sure what you mean?

Contributor Author (@xjules):

If you check line 115, EvaluatorTracker is not created by default but only when cli==True. It could be created by default and instead do something special once we go after the gui.
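In other words, the tracker is constructed conditionally on the cli flag. A simplified sketch of that shape (the EvaluatorTracker stub and the signature here are illustrative only, not ert's real API):

```python
from typing import Optional

class EvaluatorTracker:
    """Stand-in for ert's EvaluatorTracker (simplified, hypothetical)."""
    def track(self) -> None:
        pass  # the real tracker consumes evaluator events here

def evaluate(ensemble: list, cli: bool = True) -> Optional[EvaluatorTracker]:
    # A tracker is only constructed on the CLI path; a GUI path would
    # build a different monitor here instead, as discussed above.
    tracker = EvaluatorTracker() if cli else None
    # ... the actual evaluation of `ensemble` would run here ...
    return tracker
```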

Three more review threads on ert3/evaluator/_evaluator.py (resolved)
@Blunde1 (Contributor) commented Mar 1, 2022

  • Remember to delete _calculate_progress() and tracker_progress() from ert_shared/status/utils.py

This is okay since the legacy tracker is nuked.
We should see if there are other relevant functions in utils that should be defined in evaluator_tracker as well

@Blunde1 (Contributor) commented Mar 3, 2022

> This is okay since the legacy tracker is nuked. We should see if there are other relevant functions in utils that should be defined in evaluator_tracker as well

Do this in a separate PR: #3006

@Blunde1 (Contributor) commented Mar 4, 2022

  • Test ERT3RunModel together with EvaluatorTracker

@Blunde1 (Contributor) commented Mar 7, 2022

  • Create a **config_args argument for EvaluatorTracker(), replacing multiple individual arguments and arguments created in __init__
  • Assert that required arguments are actually passed
  • Construct arguments like url on the fly
  • Create issue for refactoring according to pydantic model when the PR is merged

Edit Did #3045 instead

@xjules force-pushed the cli-monitor-type-hints_jp branch from 1eed498 to 1d77338 on March 7, 2022 13:29
@Blunde1 (Contributor) commented Mar 7, 2022

Commit dd18455 introduces **config_args, which should really be a named argument of type EvaluatorServerConfig - we know exactly what is required (host and port) and what is optional (token and cert). There is no need for wildcard notation, which introduces unnecessary flexibility.

Can we write an issue now that states "EvaluatorTracker should utilize EvaluatorServerConfig" (no mention of **config_args), resolve it, and rebase onto it when it is merged? Then we avoid introducing **config_args to main.

It could of course be introduced directly in this PR, but in the interest of keeping the PR small, the above suggestion might be preferable.
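The objection above can be made concrete: a small typed config object states which fields are required (host, port) and which are optional (token, cert), and can build derived values such as the url on the fly. A hedged sketch, with field names taken from the comment; the real EvaluatorServerConfig and EvaluatorConnectionInfo classes in ert differ:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvaluatorConnectionInfo:
    # host and port are required; token and cert are optional,
    # matching the requirements listed in the comment above.
    host: str
    port: int
    token: Optional[str] = None
    cert: Optional[str] = None

    @property
    def url(self) -> str:
        # Built on the fly rather than passed in as another argument.
        scheme = "wss" if self.cert else "ws"
        return f"{scheme}://{self.host}:{self.port}"

class EvaluatorTracker:
    def __init__(self, conn_info: EvaluatorConnectionInfo) -> None:
        # One typed argument replaces the **config_args wildcard.
        self._conn_info = conn_info
```

Passing a single object also means constructors no longer need to assert that required arguments were actually supplied; the dataclass enforces it.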

@Blunde1 (Contributor) commented Mar 8, 2022

> Can we write an issue now that states "EvaluatorTracker should utilize EvaluatorServerConfig" (no mention of **config_args), resolve it, and rebase onto it when it is merged? Then we avoid introducing **config_args to main.

Refactoring EvaluatorTracker to use EvaluatorServerConfig in this PR: #3045

@xjules force-pushed the cli-monitor-type-hints_jp branch 2 times, most recently from 9cf822c to 2b28f67 on March 9, 2022 12:37
@xjules force-pushed the cli-monitor-type-hints_jp branch 2 times, most recently from 07eacc4 to 21b6631 on March 11, 2022 14:23
@@ -28,52 +27,3 @@ def test_format_running_time(self):

for t in tests:
self.assertEqual(t["expected"], format_running_time(t["seconds"]))

def test_calculate_progress(self):
Contributor:

Are these not possible to test?

Contributor Author (@xjules) commented Mar 13, 2022:

Since the _calculate_progress function was removed, I guess there is no need for this test anymore 🤷

Contributor:

Will rewrite this for the _progress() method of EvaluatorTracker, where all the functionality now lies.

Contributor:

Found a bug for calculation of _progress() #3092

I suggest we resolve this outside of this PR.
This was found because a test now needs to ensure that total realizations are also fetched correctly.
This was not the case before, as only the arithmetic, given correct numbers, was tested.
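The point above is that the correctness of _progress() depends on fetching the right total realization count, not just on the arithmetic. A minimal sketch of the kind of calculation under test (hypothetical; not ert's actual _progress()):

```python
def progress(done_realizations: int, total_realizations: int) -> float:
    # A correct total matters as much as the division itself: a zero
    # total (no realizations fetched) must not divide by zero.
    if total_realizations == 0:
        return 1.0
    return done_realizations / total_realizations
```

A test that always feeds in a hand-picked total exercises only the division, which is exactly the gap the comment describes.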

@jondequinor (Contributor) left a comment

Some minor comments, but this is really great 👏

Review threads on ert_shared/ensemble_evaluator/entity/snapshot.py, ert/tracker/evaluator_tracker.py, and ert3/evaluator/_evaluator.py (outdated, resolved)
@sondreso (Collaborator) commented:

Looks very good, only had some minor comments!

@Blunde1 (Contributor) commented Mar 14, 2022

@sondreso I separated the EvaluatorConnectionInfo into its own file; however, it is still inside the tracker module under ert/tracker. This feels better, but it still seems slightly misplaced. My feeling is that we should have something along the lines of

ert
└── ensemble_evaluator
    ├── tracker
    │   └── evaluator_tracker.py
    └── evaluator_connection_info.py  (eventually included in config.py if this is moved from ert_shared)

what do you think?

@sondreso (Collaborator) replied:

> My feeling is that we should have something along the lines of
>
> ert
> └── ensemble_evaluator
>     ├── tracker
>     │   └── evaluator_tracker.py
>     └── evaluator_connection_info.py  (eventually included in config.py if this is moved from ert_shared)
>
> what do you think?

I think that makes the most sense yes!

@jondequinor (Contributor) left a comment

LGTM! Thanks!

@@ -48,9 +49,10 @@ def _track():

@pytest.mark.timeout(60)
@pytest.mark.parametrize(
-    "monitor_events,brm_mutations,expected_progress",
+    "run_model, monitor_events,brm_mutations,expected_progress",
Contributor:

Perhaps "blacken" this string by adding a space after each comma manually.

@berland (Contributor) commented Mar 16, 2022

Tested on a couple of ert3 examples; works fine, also with --local-test-run. Improves the user experience!

It includes a new EvaluatorTracker with mypy annotations, located in ert.tracker.evaluator_tracker. ert3 run evaluation now supports
monitoring, which is enabled by default. A new ERT3RunModel is introduced, a simplified version of BaseRunModel
required by the legacy monitoring classes. A new EvaluatorConnectionInfo class is
introduced, used to exchange connection info between EvaluatorTracker and monitors.

Co-authored-by: kvashchuka <kvashchuk.anna@gmail.com>
Co-authored-by: blunde1 <BERL@equinor.com>
@xjules merged commit f7d4a5b into equinor:main on Mar 16, 2022
@xjules deleted the cli-monitor-type-hints_jp branch on March 16, 2022 10:54
@berland added the ert3 label on May 3, 2022
Successfully merging this pull request may close these issues.

Move cli monitoring from ert/ert_shared into ert/ert
7 participants