
Conversation

@cgranade (Collaborator) commented Aug 6, 2018

This PR adds two new models: one that learns Rabi frequencies with unknown T₂ processes, and one that decorates two-outcome models with Gaussian hyperparameters based on a latent two-outcome variable. This allows QInfer to learn T₂ in cases where single-shot measurements are not directly observed, but are only seen through a secondary process that is Gaussian distributed.
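The core idea of the Gaussian-hyperparameterized decorator can be sketched as a marginal likelihood: weight a Gaussian density for each value of the latent two-outcome variable by the underlying model's outcome probability, then sum. This is a simplified illustration of the concept, not the PR's actual implementation; all names here are hypothetical.

```python
import numpy as np

def gauss_pdf(y, mu, sigma):
    """Density of N(mu, sigma**2) evaluated at y."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def marginal_likelihood(y, p_excited, mu, sigma):
    """Likelihood of a continuous observation y when the latent
    two-outcome variable d in {0, 1} has Pr(d = 1) = p_excited and
    y | d ~ N(mu[d], sigma[d]**2):

        L(y) = sum_d Pr(d) * N(y; mu[d], sigma[d])
    """
    weights = (1 - p_excited, p_excited)
    return sum(
        w * gauss_pdf(y, m, s)
        for w, m, s in zip(weights, mu, sigma)
    )

# An observation near 1.0 strongly favors the excited latent outcome
# when the two Gaussians are centered at 0 and 1 with small variance.
L = marginal_likelihood(0.95, p_excited=0.5, mu=(0.0, 1.0), sigma=(0.1, 0.1))
```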

@coveralls commented Aug 6, 2018

Coverage Status

Coverage decreased (-0.6%) to 74.549% when pulling acbd0c9 on cgranade/unknown-T2 into 64d9b2f on master.

@ihincks (Collaborator) left a comment

Good addition.

## METHODS ##

def domain(self, expparams):
return [RealDomain()] * len(expparams)
@ihincks:
domain() should support the requirement that "in the case where n_outcomes_constant is True, None should be a valid input".

This requirement is a side-effect of my poor code design IMO, however, we should stick to it until something smoother comes along.

@cgranade (author):

Ah, good catch, thank you. I'll go on and fix that, then.
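The requested convention might look like the following standalone sketch (hypothetical class and stub `RealDomain`, for illustration only; QInfer's actual `Domain` classes differ):

```python
class RealDomain:
    """Stand-in for qinfer.RealDomain, for illustration only."""

class GaussianOutcomeModel:
    """Hypothetical sketch of the convention discussed above: when
    n_outcomes_constant is True, domain(None) must also be valid."""

    n_outcomes_constant = True

    def domain(self, expparams):
        if expparams is None:
            # The outcome domain is experiment-independent, so None is
            # accepted and a single Domain object is returned.
            return RealDomain()
        return [RealDomain()] * len(expparams)
```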

# (idx_underlying_outcome, idx_outcome, idx_modelparam, idx_experiment).
# Thus, we need shape
# (idx_underlying_outcome, 1, idx_modelparam, 1).
mu = (modelparams[:, -4:-2].T)[:, None, :, None]
@ihincks:

np.newaxis is used a bit more consistently elsewhere, I think?

@cgranade (author):

Good point, I was being a bit sloppy here, but np.newaxis is much easier to read. I'll fix that, then.
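For reference, the two spellings are interchangeable, since `np.newaxis` is simply an alias for `None`; `newaxis` just states the intent explicitly:

```python
import numpy as np

# np.newaxis is literally an alias for None, so indexing with either
# produces identical results.
a = np.arange(6).reshape(2, 3)

assert np.newaxis is None
assert a[:, None, :, None].shape == (2, 1, 3, 1)
assert a[:, np.newaxis, :, np.newaxis].shape == (2, 1, 3, 1)
```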

return True

def are_models_valid(self, modelparams):
orig_mps = modelparams[:, :-4]
@ihincks:

FYI, negative indices (rather than absolute indices based on underlying_model.n_modelparams) have occasionally caused me trouble when another decorating model is stacked on top. However, I don't think anything in the code base actually cares, and other models may use negative indices too, so don't change it if you don't want to. It mostly comes up when I'm manually jiggering things in a notebook and have many models and many updaters on the go.

@cgranade (author):

It's easy enough to split things out in a more robust way, which would really help for more complicated model chains. Thanks for pointing that out!
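One robust way to do the split is to count forward from the underlying model's parameter count rather than backward from the end. A hypothetical helper (in QInfer, `n_underlying` would be `underlying_model.n_modelparams`):

```python
import numpy as np

def split_modelparams(modelparams, n_underlying):
    """Split a (n_models, n_underlying + 4)-shaped array into the
    underlying model's parameters and this decorator's four Gaussian
    hyperparameters, using absolute offsets so that stacking further
    decorating models on top cannot shift the slice boundaries.
    """
    underlying = modelparams[:, :n_underlying]
    hyper = modelparams[:, n_underlying:n_underlying + 4]
    return underlying, hyper

# Two particles with 3 underlying parameters and 4 hyperparameters each.
mps = np.arange(14, dtype=float).reshape(2, 7)
underlying, hyper = split_modelparams(mps, n_underlying=3)
```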

(modelparams[:, -2:].T)[:, None, :, None]
)

assert np.all(sigma > 0)
@ihincks:

Is this left over from prototyping?

@cgranade (author):

Very likely; I should either remove it or make it an actual exception.
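Replacing the bare assert with an exception might look like this (a hypothetical helper, not the PR's actual code; bare asserts are stripped when Python runs with `-O`):

```python
import numpy as np

def check_sigma(sigma):
    """Validate standard deviations with an informative exception
    rather than a bare assert, so the check survives python -O."""
    sigma = np.asarray(sigma)
    if not np.all(sigma > 0):
        raise ValueError(
            "All sigma hyperparameters must be positive; "
            "got min(sigma) = {}.".format(sigma.min())
        )
    return sigma
```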

:modelparam T2_inv: The decoherence strength :math:`T_2^{-1}`.
:scalar-expparam float: The evolution time :math:`t`.
"""
# TODO: add duecredit.cite.
@ihincks:
TODO


# Now we marginalize and return.
L = (underlying_L * conditional_L).sum(axis=0)
assert not np.any(np.isnan(L))
@ihincks:

Is this left over from prototyping?

@cgranade (author):

Same answer here, sorry for missing them both.

@ihincks commented Aug 7, 2018

The 4 CI failures look like they have nothing to do with this PR. Are they due to a new numpy thing?

@cgranade commented Aug 7, 2018

Thanks for the review, @ihincks! I'll go on and fix those ASAP. As far as the CI failures, I think that's leftover from #135, and is due to NumPy finally making something an error that was a FutureWarning for a long time. I'll investigate in #135, then, so that I can start clearing out the backlog of PRs contingent on CI failures.

@ihincks commented Aug 7, 2018

Consider throwing a test or two into test_concrete_models.py. Untested:

class TestGaussianHyperparameterizedModel(ConcreteModelTest, DerandomizedTestCase):
    """
    Tests GaussianHyperparameterizedModel with CoinModel as the underlying model
    (underlying model has no expparams).
    """

    def instantiate_model(self):
        return GaussianHyperparameterizedModel(CoinModel())

    def instantiate_prior(self):
        return ProductDistribution(
            BetaDistribution(mean=0.5, var=0.1),
            ...
        )

    def instantiate_expparams(self):
        return np.arange(100, 120).astype(self.model.expparams_dtype)
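The last method above relies on casting a plain array to `self.model.expparams_dtype`, since QInfer experiment parameters are structured NumPy arrays. An equivalent explicit construction, assuming a single evolution-time field `t` (the actual field names are whatever the model declares):

```python
import numpy as np

# Build 20 experiments with evolution times t = 100, ..., 119 as a
# structured array, the shape QInfer expects for expparams.
expparams = np.empty(20, dtype=[('t', float)])
expparams['t'] = np.arange(100, 120)
```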

@taalexander (Contributor) commented:
This looks great; I agree with @ihincks that it is a good addition. It seems that, as we move forward experimentally, there is an increasing focus on continuous outcome distributions. Once I find more time, I want to focus on explicitly supporting continuous outcomes (right now, IIRC, it is at best hinted at in the documentation and left to the user to figure out how to support them). This would also include supporting the branch Ian and I have been working on, which supports continuous outcomes natively, including experiment design in smc.py. I will try to clean this up and get tests passing later this week so that I can submit a PR. There will be a decent amount of documentation and testing to get it into working condition. I hope that after @ihincks defends, he might be able to help me review this and come up with a plan.

@cgranade commented Aug 8, 2018

Sounds great, @taalexander! Thanks as well for the typo catch, @ihincks. I think this might be good to go once I can get the CI builds working. Would anyone mind if I merge in #135, since CI seems to work there again? That would mean deprecating 3.3 in favor of Python 2.7 or ≥ 3.4. Since 3.3 is now approximately six years old, I don't think that should be a problem...

# Next, we sample a bunch of underlying outcomes to figure out
# how to rescale everything.
underlying_outcomes = self.underlying_model.simulate_experiment(
    modelparams[:, :-4], expparams
)
@ihincks:
Passing repeat=repeat, I think, will fix the other shape-mismatch bug.

assert np.all(sigma > 0)

# Now we can rescale the outcomes to be random variates z drawn from N(0, 1).
scaled_outcomes = (outcomes - mu) / sigma
@ihincks (Aug 8, 2018):
outcomes[np.newaxis, :, np.newaxis, np.newaxis] will fix one of the shape bugs?
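The suggested fix relies on NumPy broadcasting: reshaping `outcomes` to `(1, n_outcomes, 1, 1)` lets it broadcast against `mu` and `sigma` of shape `(n_underlying, 1, n_models, 1)`, matching the index convention `(idx_underlying_outcome, idx_outcome, idx_modelparam, idx_experiment)` in the comments above. A toy shape check (sizes are illustrative only):

```python
import numpy as np

n_underlying, n_outcomes, n_models = 2, 5, 3
outcomes = np.linspace(-1, 1, n_outcomes)
mu = np.zeros((n_underlying, 1, n_models, 1))
sigma = np.ones((n_underlying, 1, n_models, 1))

# (1, n_outcomes, 1, 1) broadcasts against (n_underlying, 1, n_models, 1),
# yielding (n_underlying, n_outcomes, n_models, 1).
scaled = (outcomes[np.newaxis, :, np.newaxis, np.newaxis] - mu) / sigma
```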

@ihincks (Collaborator) left a comment

Go ahead when ready.

I removed `__init__` because it was causing a failure; I was unaware of its involvement in dcite mechanics. Thanks for fixing.

@cgranade commented Aug 9, 2018

Awesome, thanks for everything! 💕

@cgranade cgranade merged commit 3c9cc7e into master Aug 9, 2018
@cgranade cgranade deleted the cgranade/unknown-T2 branch August 9, 2018 01:34