Adjustments to inference interface #358
Merged (9 commits, Jul 24, 2020)

Conversation

@barkls (Contributor) commented Jul 14, 2020

Prompted by #354 (fixes #354). The docs imposed an arbitrary restriction: the fit and sample high-level functions could only take strategy names as strings, not actual Strategy objects, whereas the model.fit and model.sample methods could handle either names or objects. I found this confusing and arbitrary, so I taught the high-level functions to accept objects as well as names. However, that left no difference in functionality between the high-level functions and the model methods, so I decided to deprecate the model methods so there's only one way to do things, and it's the way that is most flexible and best documented. Note that these methods were introduced during the 3.0 development cycle and aren't described very well in the docs, so we should be able to remove them along with the rest of the old fitting infrastructure.
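To make the two calling conventions concrete, here is a minimal sketch of how the high-level interface looks after this change; the strategy class, the string name, and the keyword are assumptions about holopy's inference API rather than lines from the diff.

```python
import holopy as hp
from holopy.inference import NmpfitStrategy  # assumed class name

def fit_both_ways(data, model):
    # pass the strategy by name, as the docs already described
    by_name = hp.fit(data, model, strategy="nmpfit")
    # pass a Strategy object, which the high-level function now also accepts,
    # so it can be configured before fitting (previously only model.fit took objects)
    by_object = hp.fit(data, model, strategy=NmpfitStrategy())
    return by_name, by_object
```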

@pep8speaks commented Jul 14, 2020

Hello @barkls! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

Line 263:80: E501 line too long (98 > 79 characters)

Comment last updated at 2020-07-16 21:24:30 UTC

@barkls mentioned this pull request Jul 14, 2020
Diff excerpt under review:

operation, type(strategy).__name__))
try:
    strategy = strategy()
except TypeError:
Review comment (Contributor):

I'm a little worried by this, because there could conceivably be a TypeError raised inside the strategy itself. I think an if isinstance() check might be better here.

Review comment (Contributor):

Just an FYI that I found -- inspect.isclass is another choice. However, I think that what you have in the updated code is fine because it ensures that the strategy is a holopy object. So don't change this 😄
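For illustration only, here is a sketch contrasting the two checks discussed in these comments; the helper names are hypothetical and this is not the PR's actual code.

```python
import inspect

def ensure_strategy_instance(strategy):
    # explicit class check, as suggested in the review: only instantiate
    # when a class (not an instance) was passed
    if inspect.isclass(strategy):
        strategy = strategy()
    return strategy

def ensure_strategy_instance_try(strategy):
    # try/except approach: calling an instance raises TypeError because it
    # isn't callable, but a TypeError raised inside strategy.__init__ would
    # be caught by the same handler -- the concern raised above
    try:
        strategy = strategy()
    except TypeError:
        pass
    return strategy
```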

@barkls (Contributor, Author) commented Jul 24, 2020

Travis failed on a test that had passed on my own machine and shouldn't be related to any code I touched. I restarted the Travis build and it passed, so something strange is going on that I expect was a pre-existing problem with reproducibility in one of the tests. Unfortunately I think there's now no record of the failing build or which test was problematic.

Ready to merge @briandleahy

@briandleahy (Contributor) commented:

Yes, I saw. I believe it was a test on a BoundedGaussian, checking that all samples returned were within the bounds. IIRC there was no seed set in the test, and it was only drawing 10 samples. (I was planning on discussing it with you at the holopy meeting yesterday, but I forgot that we are not meeting until next week.)

To me this seems like an edge case that happens once in a while (samples are generated outside the bounds).

I would say "someone" should do the following:

  • Find the exact failing test.
  • Raise an issue noting that this test is finicky.
  • Set a seed and increase the number of samples to a few thousand to see whether the test fails and, if so, what the bug is (a sketch of such a test follows this list).
  • Fix the bug.
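A minimal sketch of that seeded, larger-sample check; the import path, constructor signature, and sample call are assumptions about holopy's prior API, not lines from the failing test.

```python
import numpy as np
from holopy.inference.prior import BoundedGaussian  # assumed import path

def test_bounded_gaussian_stays_in_bounds():
    np.random.seed(42)  # fixed seed so any failure is reproducible
    # assumed signature: mean, standard deviation, and the two bounds
    prior = BoundedGaussian(1.0, 0.5, lower_bound=0.0, upper_bound=2.0)
    # draw a few thousand samples instead of 10 so out-of-bounds draws
    # show up reliably if the bug is real
    samples = np.array([prior.sample() for _ in range(5000)])
    assert samples.min() >= 0.0
    assert samples.max() <= 2.0
```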

I can take a stab at that when I get a chance.

In the meantime, I'll approve this PR.

@briandleahy merged commit ed4bbdb into manoharan-lab:develop on Jul 24, 2020
Successfully merging this pull request may close these issues.

fit and sample functions don't take strategy arguments