
ModelTesting-functions tutorial fails with error #247

Closed
johnkgamble opened this issue May 26, 2022 · 4 comments
johnkgamble (Contributor) commented May 26, 2022

Describe the bug
The fifth code cell in the ModelTesting-functions notebook fails when run.

To Reproduce
Launch the ModelTesting-functions notebook and execute its cells in order. Cell 5 gives:

results.add_estimates(results2)
results.add_estimates(results3)

pygsti.report.construct_standard_report(
    results, title="Model Test Example Report", verbosity=1
).write_html("../tutorial_files/modeltest_report", auto_open=True, verbosity=1)
Running idle tomography
Computing switchable properties
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Input In [5], in <cell line: 4>()
      1 results.add_estimates(results2)
      2 results.add_estimates(results3)
----> 4 pygsti.report.construct_standard_report(
      5     results, title="Model Test Example Report", verbosity=1
      6 ).write_html("../tutorial_files/modeltest_report", auto_open=True, verbosity=1)

File ~/repos/pyGSTi/pygsti/report/factory.py:1263, in construct_standard_report(results, title, confidence_level, comm, ws, advanced_options, verbosity)
   1259         _warnings.warn("Not all data sets are comparable - no comparisions will be made.")
   1261 printer.log("Computing switchable properties")
   1262 switchBd, dataset_labels, est_labels, gauge_opt_labels, Ls, swLs = \
-> 1263     _create_master_switchboard(ws, results, confidence_level,
   1264                                nmthreshold, printer, None,
   1265                                combine_robust, idt_results, embed_figures)
   1267 if len(Ls) > 0 and Ls[0] == 0:
   1268     _warnings.warn(("Setting the first 'max-length' to zero, e.g. using"
   1269                     " [0,1,2,4] instead of [1,2,4], is deprecated and"
   1270                     " may cause 'no data to plot' errors when creating"
   1271                     " this report.  Please remove this leading zero."))

File ~/repos/pyGSTi/pygsti/report/factory.py:330, in _create_master_switchboard(ws, results_dict, confidence_level, nmthreshold, printer, fmt, combine_robust, idt_results_dict, embed_figures)
    327 switchBd.clifford_compilation[d, i] = est.parameters.get("clifford compilation", 'auto')
    328 if switchBd.clifford_compilation[d, i] == 'auto':
    329     switchBd.clifford_compilation[d, i] = find_std_clifford_compilation(
--> 330         est.models['target'], printer)
    332 switchBd.profiler[d, i] = est_modvi.parameters.get('profiler', None)
    333 switchBd.meta_stdout[d, i] = est_modvi.meta.get('stdout', [('LOG', 1, "No standard output recorded")])

KeyError: 'target'

Expected behavior
I would expect a report to be generated and opened in the browser.

Environment:

  • pyGSTi version: 0.9.10.1
  • Python version: 3.10.4
  • OS: macOS 12.3.1
@johnkgamble johnkgamble added the bug label May 26, 2022
coreyostrove (Contributor) commented Jul 12, 2022

Thanks for the bug report, @johnkgamble. I can reproduce this on the latest version of the master branch, and I also confirmed that the notebook works as of pygsti 0.9.10.post4. I dug into this a bit more and I'm pretty sure it was broken by commit 78a357a on March 15th. That commit removed create_target_model from HasProcessorSpec and also removed the creation/inclusion of a target model in ModelTestEstimateResults by default. ModelTest can still take a target model at init, in which case I think this would all work as-is, but run_model_test from the drivers package, which is what this notebook uses, doesn't pass a target model when it initializes its ModelTest object.
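(For reference, a minimal sketch of what supplying a target model at the protocol level might look like, assuming the 0.9.10-era pygsti.protocols.ModelTest signature and the smq1Q_XYI modelpack; the target_model argument name and the surrounding setup are assumptions, not the notebook's code.)

import pygsti
from pygsti.modelpacks import smq1Q_XYI

# Build a target model and a noisy model to test against the data.
target_model = smq1Q_XYI.target_model()
model_to_test = target_model.depolarize(op_noise=0.01, spam_noise=0.01)

# Passing the target model explicitly should mean est.models['target'] exists
# later, so the report factory's Clifford-compilation lookup has something to use.
proto = pygsti.protocols.ModelTest(model_to_test, target_model=target_model)
# results = proto.run(data)  # 'data' would be a pygsti.protocols.ProtocolData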

I can see us fixing this a couple of ways. (1) We could add some plumbing in run_model_test to pass in a target model; however, a target model isn't always a well-defined concept in the model-testing setting. (2) We could wrap the call to find_std_clifford_compilation in the report factory in a try/except block and have it simply skip that step when a target model is not present in the estimates object (and print something to the log indicating as much). Maybe both? A rough sketch of option (2) follows.
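(The sketch below is based only on the factory.py lines quoted in the traceback above; it is illustrative, not the actual patch, and the logged message is made up.)

# In pygsti/report/factory.py, _create_master_switchboard (per the traceback):
switchBd.clifford_compilation[d, i] = est.parameters.get("clifford compilation", 'auto')
if switchBd.clifford_compilation[d, i] == 'auto':
    try:
        switchBd.clifford_compilation[d, i] = find_std_clifford_compilation(
            est.models['target'], printer)
    except KeyError:
        # No 'target' model in this estimate (e.g. ModelTest results):
        # skip the lookup and note it in the log instead of crashing.
        switchBd.clifford_compilation[d, i] = None
        printer.log("No target model found for this estimate; skipping "
                    "Clifford compilation lookup.")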

Any other places in the code that assume the existence of a target model? Maybe we should do a scan for these down the road.

johnkgamble (author) commented:

@coreyostrove, thanks for the follow-up. This isn't critical for me right now, but I think this example should be removed at the very least. Have the pygsti devs considered running the notebooks in CI? I can't think of another way to keep regressions like this from happening in the future.

I've set up things like this before (a sketch of one approach is below), so I'm happy to have a discussion about it if you're interested.
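(For illustration only: a minimal sketch of executing a tutorial notebook as a CI check using nbformat/nbclient. The notebook path is an assumption, and the surrounding CI wiring is omitted.)

import nbformat
from nbclient import NotebookClient

# Path is an assumption about where the tutorial lives in the repo.
nb_path = "jupyter_notebooks/Tutorials/algorithms/ModelTesting-functions.ipynb"

nb = nbformat.read(nb_path, as_version=4)
client = NotebookClient(nb, timeout=600, kernel_name="python3")
client.execute()  # raises nbclient.exceptions.CellExecutionError if any cell errors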

sserita (Contributor) commented Jul 13, 2022

Yes, we have considered running the Jupyter notebooks in CI; it just hasn't been a high priority. I'll revisit it the next time I have a few extra cycles. :)

coreyostrove (Contributor) commented:

A patch for the underlying issue (ModelTest not being passed a target model) exists on the feature-circuitlistsdesign-reporting branch, included in commit 7c1e01b. Once that branch gets merged in we should be good to go.

P.S. We have automated notebook regression testing up and running on the latest version of develop, so we should be better equipped to catch these problems going forward.

@sserita sserita closed this as completed May 17, 2023
@sandialabs sandialabs deleted a comment from sserita May 19, 2023
@coreyostrove coreyostrove reopened this May 19, 2023
@coreyostrove coreyostrove added the fixed-but-not-in-release-yet and planned-for-next-release labels May 19, 2023
@coreyostrove coreyostrove self-assigned this May 19, 2023
@sserita sserita added this to the 0.9.12.1 milestone Nov 29, 2023
@sserita sserita closed this as completed Feb 2, 2024
@sserita sserita removed the fixed-but-not-in-release-yet label Feb 7, 2024