Using ratio by default for plotBiomassObservedVsModel... #288

Merged — 23 commits merged into sizespectrum:master on Apr 10, 2024

Conversation

SamikDatta
Contributor

For the function plotBiomassObservedVsModel, using the ratio of the biomasses is much more visually useful than the log-log scale, where differences are hard to see because of the logarithmic compression. Hence, the ratio should be the default.
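
For illustration, a minimal sketch of the intended behaviour after this change, using mizer's bundled NS_params example. The made-up observed values and the assumption that the existing ratio argument remains available (now defaulting to TRUE) are hypothetical here:

```r
library(mizer)

# Sketch only: give the bundled North Sea model some observed biomasses
# to plot against (made-up values, 20% above the model's own biomass).
# Assumes getBiomass() accepts a MizerParams object, as in recent mizer.
params <- NS_params
species_params(params)$biomass_observed <- getBiomass(params) * 1.2

# With this change the ratio view becomes the default, so these two
# calls are assumed to be equivalent:
plotBiomassObservedVsModel(params)
plotBiomassObservedVsModel(params, ratio = TRUE)
```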

@gustavdelius
Member

Hi Samik. Given that this function is explicitly marked as experimental, I am OK with changing its default. Just add an entry about this change to NEWS.md. Do the tests in tests/testthat/test-plotBiomassObservedVsModel.R not need updating?

…ed a couple of failures in other tests (tests/testthat/test-project_methods.R and tests/testthat/test-newMultispeciesParams.R).
@SamikDatta
Contributor Author

Hi Gustav, I have now done this, as well as fixing the two test failures. I have started experimenting with removing the warnings in the tests; let me know if you want me to continue.

@gustavdelius
Member

Hi Samik. Thanks for doing this. I think that upgrading to edition 3 of the testthat package is a good idea, so please keep going. I'll hold off on merging your changes relating to the plotBiomassObservedVsModel() function until you are finished. In future, you may want to make pull requests not from your master branch but from dedicated feature branches, with one branch and one pull request per feature. And while working on a feature on its dedicated branch, make sure you commit only changes related to that feature. That makes pull requests faster to review and merge.

@SamikDatta
Contributor Author

Hi Gustav,

Thanks for the suggestion, I'll do my best to keep future pull requests on separate branches and focused so that they're easier for you to review.

In the meantime, I'll update as many of the tests as I can to remove the warnings. The one exception to this is the following warning, which occurs multiple times:

expect_known_value() was deprecated in the 3rd edition. Please use expect_snapshot_value() instead.

This is quite a sizeable task, as the stored expected-value files (such as the one for getBiomass) will need to be converted into the snapshot format. I will do this in a separate pull request.
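
For illustration, a hedged before/after sketch of that migration; the fl object and file name are borrowed from a test discussed later in this thread:

```r
# Edition 2 style, now deprecated: compares fl against a stored .rds file.
expect_known_value(fl, "values/getFeedingLevel")

# Edition 3 replacement: records a snapshot under tests/testthat/_snaps/.
# style = "serialize" keeps full fidelity for complex R objects.
expect_snapshot_value(fl, style = "serialize")
```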

…to do with `expect_known_value` being superseded by `expect_snapshot_value`.
@SamikDatta
Contributor Author

I'm done for now, Gustav - I'll handle the rest in specific branches as requested. You can merge these changes if you're happy.

@gustavdelius
Member

Hi Samik. I'll merge this as soon as the other warnings are removed as well. I want to keep the ability to run tests and quickly see the relevant warnings without being swamped by irrelevant warnings.
When you are done you can also remove the calls to local_edition(3) from the test files that were already using edition 3.
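
For reference, a sketch of that cleanup, assuming the package declares the edition globally in DESCRIPTION:

```r
# In DESCRIPTION (one line, applies to the whole package):
#   Config/testthat/edition: 3

# With that in place, per-file calls like the following are redundant
# and can be deleted from the test files:
local_edition(3)
```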

@SamikDatta
Contributor Author

Hi Gustav - I think this is done now: snapshots have replaced the files that were being used as expected values. No more failures or warnings.

I went through each instance of expect_known_value and checked that the values currently being generated were equal to the stored ones.

For example, from test-project_methods.R, line 91, I checked the command expect_known_value(fl, "values/getFeedingLevel") by running expect_equal(fl, readRDS("values/getFeedingLevel")), and it was indeed equal.

This is important as these new snapshots will now be the baseline comparison whenever these tests are run. They all passed fine.
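
A minimal sketch of that check, run from tests/testthat/; params stands in for the MizerParams object the real test constructs:

```r
library(testthat)
library(mizer)

# Recompute the value under test (hypothetical; the real test builds
# params first, as in test-project_methods.R).
fl <- getFeedingLevel(params)

# Compare against the old edition-2 baseline before trusting the new snapshot:
expect_equal(fl, readRDS("values/getFeedingLevel"))
```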

One thing it would be good for you to check is that the .svg files created by vdiffr::expect_doppelganger work fine - I can't upload these to the GitHub repo; I guess that file type has been excluded from uploading somewhere.

@gustavdelius
Member

> Hi Gustav - I think this is done now: snapshots have replaced the files that were being used as expected values. No more failures or warnings.

Thanks. I have now also removed the uncaught messages.

> I went through each instance of expect_known_value and checked that the values currently being generated were equal to the stored ones.
>
> For example, from test-project_methods.R, line 91, I checked the command expect_known_value(fl, "values/getFeedingLevel") by running expect_equal(fl, readRDS("values/getFeedingLevel")), and it was indeed equal.
>
> This is important as these new snapshots will now be the baseline comparison whenever these tests are run. They all passed fine.

Thanks for doing this so carefully. But did you perhaps somehow mess up the creation of the snapshot for project_methods? I had to replace it; see b36b213. Please run the tests on your machine again after pulling my corrected project_methods.md (a sketch of those commands follows below).

> One thing it would be good for you to check is that the .svg files created by vdiffr::expect_doppelganger work fine - I can't upload these to the GitHub repo; I guess that file type has been excluded from uploading somewhere.

I am not sure what you mean. The .svg snapshots all seemed to be there on GitHub. What is it you think I should check?
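
A hedged sketch of the re-run requested above, from the package root:

```r
# Run the full test suite locally:
devtools::test()

# If a snapshot disagrees, review the diff interactively:
testthat::snapshot_review("project_methods")
```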

@SamikDatta
Contributor Author

Hi Gustav,

  • the tests are running fine with your updated project_methods.md;
  • I don't know how I missed them before, but the .svg files did upload fine - please ignore that comment.

I'm happy for you to merge these changes now. At some stage, once we're happy with the snapshot method, we can delete the values folder, but let's keep it there for now. Many thanks for all your guidance.

@gustavdelius
Member

Unfortunately the tests don't seem to pass on all platforms. See https://github.com/sizespectrum/mizer/actions/runs/8559843355/job/23461814263#step:6:1075 for example. I guess that edition 3 applies the tests with a tighter tolerance, and for the results of longer calculations the rounding errors accrue differently on different platforms.

@SamikDatta
Contributor Author

Hi Gustav, that's a shame. As it only seems to be the "getFeedingLevel for MizerParams" test in test-project_methods.R that fails, I've tried simply rounding the data frame to 5 decimal places. If this works, all good; if the rounding turns out to be an issue in multiple tests, we'll have to think of a more comprehensive fix.

For example, we could use expect_snapshot_value (see the testthat documentation) with a higher tolerance, but the problem is that the printout in the .md file is much less readable than with expect_snapshot. Let's see if the tests pass on all platforms and take it from there.
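
A hedged sketch of the two options under discussion, reusing the fl value from the failing test:

```r
# Option (a): round before snapshotting, as tried above; the recorded
# snapshot stays human-readable in the .md file.
expect_snapshot(round(fl, 5))

# Option (b): snapshot the full value with an explicit tolerance;
# less readable in the .md file, but robust to platform-dependent rounding.
expect_snapshot_value(fl, style = "serialize", tolerance = 1e-5)
```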

@gustavdelius
Member

This still does not pass all the tests. When you view this pull request, are you able to see the test results or are they only shown to me as the owner of the repository? Here is an example: https://github.com/sizespectrum/mizer/actions/runs/8562120352/job/23474964900?pr=288

I think using expect_snapshot_value() is a good way forward. You must by now be regretting having taken on the project of switching to testthat edition 3.

…estthat/test-project_methods.R so that tests pass on all platforms.
@SamikDatta
Contributor Author

Hi Gustav. Unfortunately I can't see the test results (across the different platforms: macOS, Windows and Ubuntu) when I view the pull request.

I have updated the tests in test-project_methods.R to use expect_snapshot_value with a tolerance of 1e-5. Let's see if all the tests now pass and take it from there.

@gustavdelius merged commit 18860c5 into sizespectrum:master on Apr 10, 2024
5 checks passed
@gustavdelius
Member

Thanks Samik. The tests now passed on all platforms.
