
Report maps #38

Merged: 51 commits from report_maps into main on May 31, 2023

Conversation

mjgleason (Collaborator)

This PR adds functionality for creating report/presentation-quality maps from supply curves. It includes two new CLI commands, make-maps and map-column, which enable simple map generation from a supply curve. The underlying reView.utils.plots.map_geodataframe_column function can also be used to produce more bespoke or customized maps. USAGE.md has been updated with details on all of this.

In addition to the new functionality, I also had to make some changes to the GitHub workflows to support adding cartopy (via geoplot) as a dependency. Unfortunately, cartopy depends on GEOS but, unlike other libraries such as shapely, does not ship wheels with the GEOS binaries included. This means that for the test workflows to run, GEOS must be installed separately ahead of the Python installation, and the way to do this differs for each OS. I also changed the linting and codecov workflows so they only run on PRs, since running them on every push seems like overkill and potentially quite resource-intensive.
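For reference, installing GEOS ahead of the Python setup in a CI matrix typically looks like the commands below. These are the common package-manager invocations for each runner OS, not the exact steps from this repo's workflow files, which may differ.

```shell
# Hypothetical per-OS GEOS setup for a CI matrix; the actual workflow
# steps in this PR may differ.

# Ubuntu runners: GEOS headers and library via apt
sudo apt-get update && sudo apt-get install -y libgeos-dev

# macOS runners: GEOS via Homebrew
brew install geos

# Windows runners: GEOS via conda-forge
conda install -y -c conda-forge geos
```

Whichever route the workflow takes, the key constraint from the comment above is ordering: GEOS must be present before pip tries to build cartopy.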

The tests folder also includes a lot of changes, since I added several "expected" output images for the various tests of the new mapping functionality.

@mjgleason self-assigned this May 17, 2023
@WilliamsTravis (Collaborator) left a comment:

Approve


@codecov-commenter commented May 17, 2023

Codecov Report

Patch coverage: 97.30% and project coverage change: +5.28% 🎉

Comparison is base (2681c13) 39.76% compared to head (97fd906) 45.04%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main      #38      +/-   ##
==========================================
+ Coverage   39.76%   45.04%   +5.28%     
==========================================
  Files          45       47       +2     
  Lines        3913     4333     +420     
  Branches      662      697      +35     
==========================================
+ Hits         1556     1952     +396     
- Misses       2331     2348      +17     
- Partials       26       33       +7     
Flag Coverage Δ
unittests 45.04% <97.30%> (+5.28%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
setup.py 0.00% <0.00%> (ø)
reView/utils/plots.py 88.88% <88.88%> (ø)
reView/cli.py 93.66% <96.93%> (+5.42%) ⬆️
tests/test_cli/test_cli.py 99.49% <99.24%> (-0.51%) ⬇️
tests/conftest.py 100.00% <100.00%> (ø)
...s/test_scenario_page/test_model/test_difference.py 100.00% <100.00%> (ø)
tests/test_utils/test_characterizations.py 100.00% <100.00%> (ø)
tests/test_utils/test_functions.py 100.00% <100.00%> (ø)
tests/test_utils/test_plots.py 100.00% <100.00%> (ø)

... and 4 files with indirect coverage changes


@ppinchuk (Collaborator) left a comment:

Excellent work as always Mike, especially with the documentation piece. Huge kudos to you for being thorough with that.

The image comparison functions are fascinating... did you design those on your own? Never thought to do a hash comparison of the images. I guess you'd have to re-generate those test plots if you ever make an update to the plotting functions though... hopefully that won't happen too often.

Some minor comments below, feel free to ignore

Review threads:
- reView/cli.py (2 threads, plus 2 outdated)
- reView/utils/plots.py (outdated)
- tests/conftest.py (outdated)
- tests/test_cli/test_cli.py
- tests/test_utils/test_plots.py (outdated)
@mjgleason (Collaborator, Author) replied:

> Excellent work as always Mike, especially with the documentation piece. Huge kudos to you for being thorough with that.
>
> The image comparison functions are fascinating... did you design those on your own? Never thought to do a hash comparison of the images. I guess you'd have to re-generate those test plots if you ever make an update to the plotting functions though... hopefully that won't happen too often.
>
> Some minor comments below, feel free to ignore

Thanks Paul, I appreciate the thorough review!

RE: The image comparison - I pieced those together with some trial and error and examples from Stack Overflow. I did try to research the "proper" way to test plotting functions. Interestingly, the prevailing advice seems to be NOT to test their outputs at all, due to the imprecision introduced by different OSes, etc. Instead, most people advise just assuming the plotting is working correctly and testing that the right functions are being called. I wasn't thrilled about skipping these tests, though, and I'd already worked out the logic, so I decided to leave it in.

For what it's worth, it does require regenerating the images every now and then, which is annoying but not that tedious. Also, the image hash comparisons do take some tweaking to get them to work across the OS matrix in the GitHub workflows.
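The hash-based comparison described in this exchange can be sketched as follows. This is an illustrative reconstruction, not the actual test code from this PR; `image_hash` and `assert_images_match` are hypothetical names, and an exact byte-for-byte hash is the simplest variant (the cross-OS tweaking mentioned above is typically handled instead with perceptual hashes compared under a tolerance).

```python
import hashlib
from pathlib import Path


def image_hash(path):
    """Return the SHA-256 digest of an image file's raw bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def assert_images_match(actual_path, expected_path):
    """Fail if a generated plot differs from the stored 'expected' image.

    Because the hash is exact, the expected images must be regenerated
    whenever the plotting code changes its output, as noted above.
    """
    actual, expected = image_hash(actual_path), image_hash(expected_path)
    assert actual == expected, f"image mismatch: {actual} != {expected}"
```

In a test, the plotting function would write its figure to a temporary file and `assert_images_match` would compare it against a checked-in baseline image.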

@ppinchuk (Collaborator) replied:


I see. Good to know about the general opinions and why you decided to keep it anyway. I think it's perfectly fine the way it is for now. If there are any annoyances with it in the future, we can deal with them then.

Good stuff!

@mjgleason merged commit 8b5be57 into main on May 31, 2023
@mjgleason deleted the report_maps branch on May 31, 2023 at 00:21