
Conversation

yaelbh (Collaborator) commented Aug 11, 2021

Summary

In #190 it was suggested to remove bounds everywhere in the code. We haven't decided whether this is a good idea, and so far it has been done only for T1. To move the discussion forward, I've created this PR, which aggressively removes bounds from all the experiments, without checking the consequences.
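For context, what removing bounds means mechanically: the fitter is no longer constrained to a box in parameter space. Below is a minimal sketch in plain SciPy (not the qiskit-experiments internals), using an exponential-decay model loosely resembling the T1 fit; the model and values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(x, a, tau, b):
    """Exponential decay model, similar in shape to a T1 fit."""
    return a * np.exp(-x / tau) + b

rng = np.random.default_rng(seed=42)
x = np.linspace(0, 300e-6, 30)
y = decay(x, 1.0, 80e-6, 0.0) + rng.normal(0, 0.02, x.size)

# With bounds: SciPy clips the search space (and switches to the 'trf' method).
popt_bounded, _ = curve_fit(
    decay, x, y, p0=[1.0, 50e-6, 0.0],
    bounds=([0, 0, -1], [2, 1e-3, 1]),
)

# Without bounds (what this PR does everywhere): an unconstrained fit.
popt_free, _ = curve_fit(decay, x, y, p0=[1.0, 50e-6, 0.0])
```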

Details and comments

Things that remain to be done (I'll need help from experiment owners for some of them):

  • Handle the 8 failing tests (details below).
  • Update the tutorials: The only tutorial that directly uses the bounds parameter is RB (see the sketch after this list). The other tutorials need to be rerun as well, because there can be small changes in the results, but I don't expect issues.
  • Run Sphinx; this is expected to work without issues.
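Roughly, the kind of RB tutorial code that has to change looks like this. The import path, the set_analysis_options call, and the shape of the bounds option are my assumptions about the 0.1-era API, not the tutorial's exact code:

```python
# Hypothetical sketch of the tutorial update; all names are assumptions.
from qiskit_experiments.library import StandardRB

exp = StandardRB(qubits=[0], lengths=[10, 50, 100, 200], num_samples=3)

# Before: the tutorial passed explicit fit bounds as an analysis option,
# e.g. a dict mapping fit parameters to (low, high) ranges.
# exp.set_analysis_options(bounds={"alpha": (0.0, 1.0)})

# After this PR: there is no bounds option, and the fit runs unconstrained.
```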

More details about the failing tests:

  • RB tests fail because of the caching; the cached reference data needs to be regenerated.
  • test_run_single_curve_fail (in test_curve_fit.py) verifies the correct reaction when a fit fails, by fitting with infeasible bounds. Another way of making the fit fail is now required; one option is sketched after this list.
  • Some calibration tests fail, and I don't know why.
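For the test_run_single_curve_fail point, one possible replacement trigger (an assumption about how the test could be rewritten, not the actual fix) is to starve the optimizer of function evaluations, which makes an unconstrained SciPy fit raise RuntimeError:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(b * x)

x = np.linspace(0, 1, 20)
y = model(x, 1.0, -2.0)

try:
    # maxfev=1 permits a single function evaluation, so the fit cannot
    # converge and curve_fit raises RuntimeError instead of returning.
    curve_fit(model, x, y, p0=[0.5, 0.5], maxfev=1)
except RuntimeError:
    print("fit failed as intended")
```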

coruscating added this to the Release 0.2 milestone Aug 18, 2021
yaelbh removed this from the Release 0.2 milestone Sep 29, 2021
yaelbh mentioned this pull request Oct 17, 2021
yaelbh linked an issue Oct 20, 2021 that may be closed by this pull request
yaelbh (Collaborator, Author) commented May 8, 2022

Stale, closing

yaelbh closed this May 8, 2022

Development

Successfully merging this pull request may close these issues.

Maybe remove bounds all over
