Target & True Models Should Be Static #383

Open
pcwysoc opened this issue Dec 6, 2023 · 5 comments

@pcwysoc

pcwysoc commented Dec 6, 2023

Describe the bug
The target model (and any underlying true model used to generate simulated data) should be static, i.e. have N_p = 0 parameters. Because it is not, there is a discrepancy: the reported model violation is too high for the true model.

To Reproduce

  1. Simulate data using a noisy model
  2. Run gst_protocol = pygsti.protocols.StandardGST(['CPTPLND', 'true', 'full TP', 'Target'], models_to_test={'true': noisy_model})
  3. Observe that N_p is not 0 for the target and true models on the model overview tab (a reproduction sketch follows this list)
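
A minimal reproduction sketch along these lines (the smq1Q_XYI model pack, the depolarizing noise levels, the circuit depth, and the report call are stand-ins I'm assuming here, not the contents of the original notebook):

```python
import pygsti
from pygsti.modelpacks import smq1Q_XYI

# Target model and GST experiment design from a standard model pack
# (a stand-in for the original setup, which isn't shown in the issue).
target_model = smq1Q_XYI.target_model()
edesign = smq1Q_XYI.create_gst_experiment_design(max_max_length=8)

# A "true" noisy model used to simulate data.  Note that it is NOT cast to a
# static parameterization, which is what produces the nonzero N_p in the report.
noisy_model = target_model.depolarize(op_noise=0.01, spam_noise=0.001)
dataset = pygsti.data.simulate_data(noisy_model, edesign.all_circuits_needing_data,
                                    num_samples=1000, seed=2023)

data = pygsti.protocols.ProtocolData(edesign, dataset)
gst_protocol = pygsti.protocols.StandardGST(['CPTPLND', 'true', 'full TP', 'Target'],
                                            models_to_test={'true': noisy_model})
results = gst_protocol.run(data)

# The model overview tab of the report then shows N_p > 0 for 'Target' and 'true'.
pygsti.report.construct_standard_report(results, title="Np repro").write_html("np_repro_report")
```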

Expected behavior
N_p should be 0 for the target and true models on the model overview tab. This could be achieved by copying the models to test and setting them to static parameterizations, and the same should be done for the target model. The problem is not blocking and can be worked around by creating noisy_model with a static parameterization (see the sketch below).
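
A sketch of that workaround, assuming an explicit-op noisy_model like the one in the reproduction sketch above:

```python
# Cast a copy of the true model to a static parameterization so that it reports
# N_p = 0; the original noisy_model used for data generation is left untouched.
static_true = noisy_model.copy()
static_true.set_all_parameterizations('static')

gst_protocol = pygsti.protocols.StandardGST(['CPTPLND', 'true', 'full TP', 'Target'],
                                            models_to_test={'true': static_true})
```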

Environment (please complete the following information):

  • pyGSTi version: qiskit-feature-mcm branch
  • Python version: 3.10
  • OS: macOS Ventura 13.6.2
@pcwysoc added the bug (A bug or regression) label on Dec 6, 2023
@coreyostrove
Contributor

Hi @pcwysoc, thanks for reporting this. I think I've found why this is happening. Since you already have code on your side that produces this error (and I don't have a minimal (non)working example handy), can you try applying the following change and report back on whether it fixes the issue?

On line 1852 of pygsti/protocols/gst.py change initial_model = target_model to initial_model = target_model.copy().
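
For clarity, the proposed one-line change (surrounding code omitted; the line number is per the comment above):

```python
# pygsti/protocols/gst.py, around line 1852
# current:
initial_model = target_model
# proposed: copy the target so downstream modifications don't alter it in place
initial_model = target_model.copy()
```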

@pcwysoc
Author

pcwysoc commented Dec 7, 2023

@coreyostrove that didn't seem to work. I've generated a working example:
NumGaugeParamsBug.ipynb.zip.

@kmrudin mentioned to me that there were/are reasons that the target wasn't static: in the creation of the edesign/protocol objects, if no parameterizations for the fit(s) are specified, then it defaults to fitting the target's parameterization. Additionally, there can be instances where you want to test another model from, say, another GST fit; in that instance N_p = 0 wouldn't be appropriate. This is all largely verbatim.
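
For concreteness, a sketch of that scenario, where the tested model comes from an earlier GST fit and so legitimately has N_p > 0 (earlier_results is a hypothetical results object from a previous run; the dictionary keys shown are the usual pyGSTi ones):

```python
# A model taken from an earlier GST fit keeps its (non-static) parameterization,
# so forcing N_p = 0 would not be appropriate for it.
# 'earlier_results' is a hypothetical ModelEstimateResults object from a previous run.
previous_fit = earlier_results.estimates['CPTPLND'].models['final iteration estimate']

gst_protocol = pygsti.protocols.StandardGST(['CPTPLND', 'Target'],
                                            models_to_test={'previous fit': previous_fit})
```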

@sserita added this to the 0.9.12.1 milestone on Dec 7, 2023
@sserita
Contributor

sserita commented Dec 12, 2023

I'm not sure this is an actual bug... Maybe we should have a discussion, @kmrudin. There are definitely cases where you want to test models that are not static, and it seems troublesome to automatically cast those as static.

I think setting the parameterization to static for the true model is the most pyGSTi-y way to do this currently, and there is likely a similar way to set the Target parameterization of StandardGST to generate a static model as well. However, I take your point @pcwysoc that this is probably something that you may want to do on a regular basis without having to cast the models manually.

Maybe there could be a flag that does this casting for you? I'm not sure what the best solution looks like here.
Maybe something like cast_models_to_static=None as a default, and then you could provide cast_models_to_static=["Target", "true"]?
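
A sketch of what that might look like at the call site; to be clear, cast_models_to_static is only the hypothetical flag proposed above and does not exist in pyGSTi:

```python
# Hypothetical API: ask StandardGST to cast the named models to static
# parameterizations before testing, so they report N_p = 0 without the user
# copying/casting them manually.  (cast_models_to_static is not a real kwarg.)
gst_protocol = pygsti.protocols.StandardGST(['CPTPLND', 'true', 'full TP', 'Target'],
                                            models_to_test={'true': noisy_model},
                                            cast_models_to_static=["Target", "true"])
```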

@sserita moved this from the 0.9.12.1 milestone to the 0.9.13 milestone on Jan 16, 2024
@coreyostrove
Contributor

Chiming in here with a couple of thoughts to piggyback off of @sserita.

But first:

> in the creation of the edesign/protocol objects, if no parameterizations for the fit(s) are specified, then it defaults to fitting the target's parameterization

StandardGST does not inherit any parameterization from the target model for fitting; it uses whichever parameterizations have been specified as mode strings. In order to fit nonstandard parameterizations you need to use the GateSetTomography protocol, which does inherit the parameterization for its fit from the initial_model passed in.
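
To illustrate the distinction, a sketch (the model pack and the 'H+S' parameterization are just example choices, not part of the discussion above):

```python
import pygsti
from pygsti.modelpacks import smq1Q_XYI  # example model pack

# StandardGST: the parameterization of each fit comes from the mode strings
# themselves, not from a target model.
std = pygsti.protocols.StandardGST(['CPTPLND', 'full TP', 'Target'])

# GateSetTomography: the fit inherits its parameterization from the initial
# model passed in, so a custom parameterization is set on that model first.
initial_model = smq1Q_XYI.target_model()
initial_model.set_all_parameterizations('H+S')  # example custom parameterization
gst = pygsti.protocols.GateSetTomography(initial_model)
```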

On the conceptual side of model testing: when would we want to use a static model for testing purposes? As in, what does that mean statistically? Even setting that aside, a lot of the statistical machinery we rely on for evaluating the quality of our fits likely breaks down already in the context of model testing. For example, @pcwysoc, in your original post you mentioned getting larger-than-expected values for the model violation. Statements about model violation in pyGSTi are most often based on Wilks' theorem, which is a statement about the distribution of log-likelihood ratio statistics (LLRS) for maximum likelihood estimators. When we're doing model testing we are not necessarily at, or even close to, the MLE, so the reported N_sigmas may or may not make sense taken literally (my noncommittal phrasing reflects genuine uncertainty). LLRSs in general still make sense and are useful, but the statements about the expected probability distribution they follow wouldn't automatically hold. All of this is to say, we should think about this carefully.
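
For reference, the statement of Wilks' theorem being leaned on here is roughly the following (the standard statistics result, nothing pyGSTi-specific): for nested models fit by maximum likelihood, under the null hypothesis that the smaller model is correct,

$$2\big(\log\mathcal{L}_{\mathrm{big}}(\hat\theta_{\mathrm{big}}) - \log\mathcal{L}_{\mathrm{small}}(\hat\theta_{\mathrm{small}})\big) \;\to\; \chi^2_k, \qquad k = N_{p,\mathrm{big}} - N_{p,\mathrm{small}},$$

asymptotically, with both log-likelihoods evaluated at their respective MLEs. As I understand the reporting, N_sigma essentially re-expresses the observed $2\Delta\log\mathcal{L}$ relative to the mean $k$ and standard deviation $\sqrt{2k}$ of that $\chi^2_k$ distribution, which is why the tested model's N_p enters directly and why the guarantee is only expected to hold at (or near) the MLE.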

@kmrudin, @robinbk, @enielse: You were almost certainly either responsible for, or present during, the crafting of the model-testing framework in pyGSTi. Do you have any thoughts or historical insights? I haven't done a deep dive into this part of the codebase, so I don't have much more to add before doing so.

@robinbk
Contributor

robinbk commented Feb 20, 2024 via email

@pcwysoc added the discussion (Request for further conversation to figure out action steps) label and removed the bug (A bug or regression) label on Jul 10, 2024