Feature globally germ aware fpr #350

Merged — 71 commits (Oct 18, 2023)
Changes from 69 commits
76900b1
Initial commit for new per-germ global FPR
Mar 28, 2023
d18c2c6
Add option for approximate fisher info calculation
Apr 6, 2023
e298611
Update logging
Apr 10, 2023
6e6ffc0
Add additional logging
Apr 10, 2023
13ab0c3
New Heuristics and Stability fixes
Apr 13, 2023
f35ee90
Add functionality for precomputing Jacobians
Apr 21, 2023
c6771c4
Add additional logging to jacobian precomputation
Apr 21, 2023
96f589f
Minor logging fix
Apr 21, 2023
b7be3c5
Add MPI support for fisher information functions
Apr 28, 2023
b95c3b6
Reimplement fisher information by L to address bug
Apr 28, 2023
3104b8a
Add additional argument to by L fisher info calc
Apr 28, 2023
195548e
Additional logging for memory errors
Apr 28, 2023
7d498b9
Add a memory efficient mode for the fisher information calculation
Apr 28, 2023
c451265
Minor typo fix
Apr 28, 2023
5d42350
Another minor typo fix
Apr 28, 2023
0ee9bd4
Broken kwarg plumbing
Apr 28, 2023
11061e3
Additional progress logging for fisher information calcs
Apr 28, 2023
59dd8d2
Fix error in splitting of circuits for memory efficiency
Apr 28, 2023
3950f8e
Patch MPI message size problem
Apr 28, 2023
c757541
Typo fix
Apr 28, 2023
9517a99
MPI support and memory optimations for fisher information calculations
May 1, 2023
1ea905e
Clean up empty fiducial label handling
May 10, 2023
a0667bc
Add additional support for memory efficient version of the MPI calcul…
May 16, 2023
f31d44d
Merge branch 'feature-globally-germ-aware-fpr' of https://github.com/…
May 16, 2023
eeed127
Germ selection static gates
May 16, 2023
0076375
Merge branch 'develop' into feature-globally-germ-aware-fpr
May 16, 2023
9b8072c
Front load COPA layout construction for memory checks
May 17, 2023
cdafdc2
Clean up logging messages and profiling print statements.
May 19, 2023
9995155
Remove unneeded variable initialization.
May 19, 2023
0773e77
Add CPTP model support for choi eigenvalue function
May 31, 2023
b842afa
Add more general for seed model selection to StandardGST
May 31, 2023
a30878b
Add truncation tolerance for conversion to CPTP models
May 31, 2023
f7b2ed1
Patch initial model init call
May 31, 2023
ac37dfd
Additional patch for initial model support.
May 31, 2023
64a99db
Revert "Add CPTP model support for choi eigenvalue function"
May 31, 2023
8da8a1b
Temporary patch for DataSet pickling problem
May 31, 2023
dbe64c6
Minor Typo Fix
Jun 2, 2023
87a0759
Redo initial model seeding reimplementation to play better with proto…
Jun 2, 2023
5591c40
Better ModelTest support for CircuitListsDesigns and lack of gauge op…
Feb 1, 2023
0573832
Bugfix for buffer init issue in dist memory customsolve
Jul 23, 2023
79a53e0
Add implicit model support to _remove_spam_vectors
Jul 23, 2023
8be2367
Additional logging
Jul 23, 2023
549987a
Revert changes StandardGST seeding
Jul 23, 2023
88a13cf
Merge branch 'feature-globally-germ-aware-fpr' of https://github.com/…
Jul 23, 2023
8a746c1
More reversions for StandardGST seeding
Jul 23, 2023
d8dcc3f
Logging and dtype spec additions for MPI FIM calcs
Jul 23, 2023
e8d0340
Errgen projection plot bug for more than 2 qubits
Jul 23, 2023
1864084
Merge branch 'feature-globally-germ-aware-fpr' of https://github.com/…
Jul 23, 2023
c1336ed
Improve memory efficiency of germ list completeness check
Jul 23, 2023
15afc2e
Typo fix
Jul 23, 2023
217b330
Another typo fix
Jul 23, 2023
160d719
Add plumbing for gauge params in completeness check
Jul 23, 2023
ea9d737
typo fix
Jul 23, 2023
599899a
Bugfix for mem efficient pretest
Jul 25, 2023
56cfc12
Split of fiducial candidate construction
Jul 25, 2023
e179e7d
Add new heuristics for faster deduping
Jul 25, 2023
48d8733
Add a product method to MapForwardSimulator
Jul 26, 2023
260a341
Merge branch 'develop' into feature-globally-germ-aware-fpr
Sep 8, 2023
ae7117a
Fixes for linting and unit test errors
Sep 14, 2023
46dc295
Docstring updates
Sep 14, 2023
6a01133
Clean up unused variables and kwargs
Sep 14, 2023
d158976
Add unit test for new FPR scheme
Sep 14, 2023
3b16b1c
Add more unit tests + bugfixes
Sep 15, 2023
ec092da
Clean up some documentation and print statements
Sep 16, 2023
910cfdb
Bugfixes for edge case in fiducial selection
Sep 16, 2023
6382d1a
Update experiment design tutorial notebooks
Sep 16, 2023
ac51832
Fisher info example notebook
Sep 17, 2023
9a3233a
Fix minor linting error
Sep 17, 2023
a3c2b5e
Merge branch 'develop' into feature-globally-germ-aware-fpr
sserita Sep 20, 2023
bf5d953
Delete pygsti_new_fpr_requirements_04_24_2023.txt
coreyostrove Oct 18, 2023
4816788
Typo fixes
Oct 18, 2023
255 changes: 255 additions & 0 deletions jupyter_notebooks/Examples/FisherInformation.ipynb

Large diffs are not rendered by default.

@@ -14,7 +14,10 @@
"- **Global fiducial pair reduction (GFPR)** removes the same intelligently-selected set of fiducial pairs for all germ-powers. This is a conceptually simple method of reducing the operation sequences, but it is the most computationally intensive, since it repeatedly evaluates the number of amplified parameters for an *entire germ set*. In practice, while it can give very large sequence reductions, its long run time can make it prohibitive, and the \"per-germ\" reduction discussed next is used instead. \n",
"<span style=\"color:red\">Note: this form of FPR is deprecated on the latest versions of pygsti's develop branch. We now recommend using per-germ FPR instead. Also note that the current implementation of per-germ FPR will in most cases return smaller experiment designs than the legacy global FPR does.</span>\n",
"\n",
"- **Per-germ fiducial pair reduction (PFPR)** removes the same intelligently-selected set of fiducial pairs for all powers of a given germ, but different sets are removed for different germs. Since different germs amplify different directions in model space, it makes intuitive sense to specify different fiducial pair sets for different germs. Because this method only considers one germ at a time, it is less computationally intensive than GFPR, and thus more practical. Note, however, that PFPR usually results in less of a reduction of the operation sequences, since it does not (currently) take advantage overlaps in the amplified directions of different germs (i.e. if $g_1$ and $g_3$ both amplify two of the same directions, then GST doesn't need to know about these from both germs).\n",
"- **Per-germ fiducial pair reduction (PFPR)** removes the same intelligently-selected set of fiducial pairs for all powers of a given germ, but different sets are removed for different germs. Since different germs amplify different directions in model space, it makes intuitive sense to specify different fiducial pair sets for different germs. Because this method only considers one germ at a time, it is less computationally intensive than GFPR, and thus more practical.\n",
"\n",
"- **Per-germ global fiducial pair reduction (PGGFPR)** removes the same intelligently-selected set of fiducial pairs for all powers of a given germ, with different sets removed for different germs, while also taking into account the amplificational properties of the germ set as a whole. This is a two-step process: first we identify redundancy within the germ set itself, arising from overlapping amplified directions in parameter space, and select a subset of amplified parameters for each germ such that collectively we retain sensitivity to every direction. In the second stage we select a subset of fiducial pairs for each germ, requiring sensitivity only to that germ's subset of amplified parameters from the first stage. This is currently our most effective form of fiducial pair reduction in terms of potential experimental savings, capable with the right settings of producing experiment designs approaching information-theoretic lower bounds in size. However, with fewer fiducial pairs comes the potential for reduced sensitivity to non-Markovian effects, and potentially less robustness to those effects (the extent to which this is true, or whether it is true at all, is an active area of research), so caveat emptor. \n",
"\n",
"- **Random per-germ power fiducial pair reduction (RFPR)** randomly chooses a different set of fiducial pairs to remove for each germ-power. It is extremely fast to perform, as pairs are just randomly selected for removal, and in practice works well (i.e. does not impair Heisenberg-scaling) up until some critical fraction of the pairs are removed. This reflects the fact that the direction detected by a fiducial pair usually has some non-negligible overlap with each of the directions amplified by a germ, and it is the exceptional case that an amplified direction escapes undetected. As such, the \"critical fraction\" which can usually be safely removed equals the ratio of amplified parameters to germ-process-matrix elements (typically $\\approx 1/d^2$ where $d$ is the Hilbert space dimension, so $1/4 = 25\\%$ for 1 qubit and $1/16 = 6.25\\%$ for 2 qubits). RFPR can be combined with GFPR or PFPR so that some number of randomly chosen pairs is added on top of the \"intelligently-chosen\" pairs of GFPR or PFPR. In this way, one can vary the amount of sequence reduction (to trade off speed vs. robustness to non-Markovian noise) without inadvertently selecting too few pairs or an especially bad set of random fiducial pairs.\n",
"\n",
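The back-of-envelope arithmetic for the RFPR critical fraction quoted above can be checked in a few lines (the helper name is illustrative and not part of pyGSTi):

```python
def rfpr_critical_fraction(num_qubits):
    """Critical fraction quoted in the text above: the ratio of amplified
    parameters to germ-process-matrix elements, roughly 1/d**2 for
    Hilbert-space dimension d = 2**num_qubits."""
    d = 2 ** num_qubits
    return 1.0 / d ** 2

print(rfpr_critical_fraction(1))  # 0.25  (25% for one qubit)
print(rfpr_critical_fraction(2))  # 0.0625 (6.25% for two qubits)
```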
"## Preliminaries\n",
@@ -24,21 +27,30 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 12,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gate operation labels = [Label(('Gxpi2', 0)), Label(('Gypi2', 0))]\n"
]
}
],
"source": [
"#Import pyGSTi and the \"stardard 1-qubit quantities for a model with X(pi/2), Y(pi/2), and idle gates\"\n",
"#Import pyGSTi and the \"standard 1-qubit quantities for a model with X(pi/2), Y(pi/2)\"\n",
"import pygsti\n",
"import pygsti.circuits as pc\n",
"from pygsti.modelpacks import smq1Q_XYI\n",
"from pygsti.modelpacks import smq1Q_XY\n",
"import numpy as np\n",
"\n",
"#Collect a target model, germ and fiducial strings, and set \n",
"# a list of maximum lengths.\n",
"target_model = smq1Q_XYI.target_model()\n",
"prep_fiducials = smq1Q_XYI.prep_fiducials()\n",
"meas_fiducials = smq1Q_XYI.meas_fiducials()\n",
"germs = smq1Q_XYI.germs()\n",
"target_model = smq1Q_XY.target_model()\n",
"prep_fiducials = smq1Q_XY.prep_fiducials()\n",
"meas_fiducials = smq1Q_XY.meas_fiducials()\n",
"germs = smq1Q_XY.germs()\n",
"maxLengths = [1,2,4,8,16,32]\n",
"\n",
"opLabels = list(target_model.operations.keys())\n",
@@ -58,9 +70,25 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"** Without any reduction ** \n",
"L=1: 92 operation sequences\n",
"L=2: 168 operation sequences\n",
"L=4: 285 operation sequences\n",
"L=8: 448 operation sequences\n",
"L=16: 616 operation sequences\n",
"L=32: 784 operation sequences\n",
"\n",
"784 experiments to run GST.\n"
]
}
],
"source": [
"#Make list-of-lists of GST operation sequences\n",
"fullStructs = pc.create_lsgst_circuit_lists(\n",
@@ -155,6 +183,117 @@
"print(\"\\n%d experiments to run GST.\" % len(pfprExperiments))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Per-germ Fiducial Pair Reduction (PFPR) with Greedy Search Heuristics\n",
"\n",
"In addition to the implementation of per-germ fiducial pair reduction above, which supports either a brute-force sequential or random search heuristic, there is also an implementation using a greedy search heuristic combined with fast low-rank-update-based techniques for significantly faster execution, particularly when generating experiment designs for two or more qubits. "
]
},
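As a rough illustration of the greedy strategy (a schematic sketch with made-up names, not pyGSTi's actual low-rank-update implementation), the heuristic can be pictured as repeatedly keeping whichever candidate fiducial pair most increases the rank of the accumulated sensitivity matrix:

```python
import numpy as np

def _rank(m):
    # rank of a possibly-empty stacked Jacobian
    return 0 if m.size == 0 else int(np.linalg.matrix_rank(m))

def greedy_pair_selection(pair_jacobians, num_params):
    """Greedy sketch: keep adding whichever candidate fiducial pair most
    increases the rank of the stacked sensitivity matrix, until every
    model direction is covered or no candidate helps.

    pair_jacobians: dict mapping a pair label -> (k x num_params) array
    of the sensitivity rows that pair contributes."""
    kept = []
    stacked = np.empty((0, num_params))
    remaining = dict(pair_jacobians)
    while _rank(stacked) < num_params and remaining:
        gains = {label: _rank(np.vstack([stacked, rows]))
                 for label, rows in remaining.items()}
        best = max(gains, key=gains.get)
        if gains[best] <= _rank(stacked):
            break  # no candidate adds a new amplified direction
        kept.append(best)
        stacked = np.vstack([stacked, remaining.pop(best)])
    return kept
```

pyGSTi's real implementation scores candidates with fast low-rank updates rather than recomputing ranks from scratch, but the selection logic follows this shape.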
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"------ Per Germ (L=1) Fiducial Pair Reduction --------\n",
"Progress: [##################################################] 100.0% -- Circuit(Gxpi2:0Gxpi2:0Gypi2:0@(0)) germ (5 params)\n",
"\n",
"Per-germ FPR to keep the pairs:\n",
"Qubit 0 ---|Gxpi2|---\n",
": [(0, 1), (3, 1), (3, 3), (5, 5)]\n",
"Qubit 0 ---|Gypi2|---\n",
": [(0, 3), (2, 3), (5, 2), (4, 4)]\n",
"Qubit 0 ---|Gxpi2|-|Gypi2|---\n",
": [(3, 4), (5, 2), (5, 5), (5, 4)]\n",
"Qubit 0 ---|Gxpi2|-|Gxpi2|-|Gypi2|---\n",
": [(0, 2), (1, 2), (1, 4), (3, 0), (4, 4), (0, 4)]\n",
"\n",
"Per-germ FPR reduction (greedy heuristic)\n",
"L=1: 56 operation sequences\n",
"L=2: 61 operation sequences\n",
"L=4: 71 operation sequences\n",
"L=8: 89 operation sequences\n",
"L=16: 107 operation sequences\n",
"L=32: 125 operation sequences\n",
"\n",
"125 experiments to run GST.\n"
]
}
],
"source": [
"fid_pairsDict = pygsti.alg.find_sufficient_fiducial_pairs_per_germ_greedy(target_model, prep_fiducials, meas_fiducials,\n",
" germs, verbosity=1)\n",
"print(\"\\nPer-germ FPR to keep the pairs:\")\n",
"for germ,pairsToKeep in fid_pairsDict.items():\n",
" print(\"%s: %s\" % (str(germ),pairsToKeep))\n",
"\n",
"pfprStructs_greedy = pc.create_lsgst_circuit_lists(\n",
" opLabels, prep_fiducials, meas_fiducials, germs, maxLengths,\n",
" fid_pairs=fid_pairsDict) #note: fid_pairs arg can be a dict too!\n",
"\n",
"print(\"\\nPer-germ FPR reduction (greedy heuristic)\")\n",
"for L,strct in zip(maxLengths,pfprStructs_greedy):\n",
" print(\"L=%d: %d operation sequences\" % (L,len(strct)))\n",
"\n",
"pfprExperiments_greedy = pc.create_lsgst_circuits(\n",
" opLabels, prep_fiducials, meas_fiducials, germs, maxLengths,\n",
" fid_pairs=fid_pairsDict)\n",
"print(\"\\n%d experiments to run GST.\" % len(pfprExperiments_greedy))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Per-germ Global Fiducial Pair Reduction (PGGFPR)\n",
"\n",
"As mentioned above, the per-germ global FPR scheme is a two-step process. First we identify a reduced set of amplified parameters for each germ to require sensitivity to, and then we identify reduced sets of fiducial pairs with sensitivity to those particular parameters."
]
},
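The first stage can be pictured as a rank-revealing column selection: pool every germ's amplified directions as columns of one matrix and greedily keep a well-conditioned spanning subset. The hand-rolled pivoting below is only a sketch of the idea behind the 'RRQR' mode mentioned in the next cell, not `pygsti.alg.germ_set_spanning_vectors` itself:

```python
import numpy as np

def spanning_directions(directions, tol=1e-10):
    """Pick a well-conditioned spanning subset of the columns of
    `directions` (num_params x num_candidates) by greedy column pivoting:
    repeatedly take the column with the largest norm after projecting out
    the columns already chosen. Returns indices of selected columns."""
    D = directions.astype(float).copy()
    chosen = []
    for _ in range(min(D.shape)):
        norms = np.linalg.norm(D, axis=0)
        j = int(np.argmax(norms))
        if norms[j] <= tol:
            break  # remaining columns are (numerically) dependent
        chosen.append(j)
        q = D[:, j] / norms[j]
        D -= np.outer(q, q @ D)  # remove q's component from every column
    return sorted(chosen)
```

With directions pooled from several germs, columns that overlap (the redundancy between germs) are projected away, so each germ ends up responsible for only a subset of the amplified parameters.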
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
    "#Note that we are setting the assume_real flag to True below because we know that we are working in the Pauli basis, and as such the \n",
    "#process matrices for the germs will be real-valued, allowing for memory savings and somewhat faster performance. \n",
    "#If you're working with a non-hermitian basis, or aren't sure, keep this set to its default value of False.\n",
    "#Likewise, float_type specifies the numpy data type to use, and is primarily useful either in conjunction with\n",
    "#assume_real, or when needing to fine-tune the memory requirements of the algorithm (running this algorithm for\n",
    "#more than 2 qubits can be very memory intensive). When running this function for more than two qubits, consider\n",
    "#setting the mode kwarg to 'RRQR', which is typically significantly faster for larger qubit counts, but slightly\n",
    "#less performant in terms of the cost function of the returned solutions.\n",
"germ_set_spanning_vectors, _ = pygsti.alg.germ_set_spanning_vectors(target_model, germs, assume_real=True, float_type= np.double)\n",
"\n",
"#Next use this set of vectors to find a sufficient reduced set of fiducial pairs.\n",
"#Alternatively this function can also take as input a list of germs\n",
"fid_pairsDict = pygsti.alg.find_sufficient_fiducial_pairs_per_germ_global(target_model, prep_fiducials, meas_fiducials,\n",
" germ_vector_spanning_set=germ_set_spanning_vectors, verbosity=1)\n",
"print(\"\\nPer-germ Global FPR to keep the pairs:\")\n",
"for germ,pairsToKeep in fid_pairsDict.items():\n",
" print(\"%s: %s\" % (str(germ),pairsToKeep))\n",
"\n",
"pggfprStructs = pc.create_lsgst_circuit_lists(\n",
" opLabels, prep_fiducials, meas_fiducials, germs, maxLengths,\n",
" fid_pairs=fid_pairsDict) #note: fid_pairs arg can be a dict too!\n",
"\n",
"print(\"\\nPer-germ Global FPR reduction\")\n",
"for L,strct in zip(maxLengths,pggfprStructs):\n",
" print(\"L=%d: %d operation sequences\" % (L,len(strct)))\n",
"\n",
"pggfprExperiments = pc.create_lsgst_circuits(\n",
" opLabels, prep_fiducials, meas_fiducials, germs, maxLengths,\n",
" fid_pairs=fid_pairsDict)\n",
"print(\"\\n%d experiments to run GST.\" % len(pggfprExperiments))"
]
},
{
"cell_type": "markdown",
"metadata": {},
Expand Down Expand Up @@ -218,6 +357,12 @@
"print(\"\\n------ GST with PFPR sequences ------\")\n",
"pfpr_results = runGST(pfprStructs, pfprExperiments)\n",
"\n",
"print(\"\\n------ GST with PFPR sequences (greedy heuristic) ------\")\n",
"pfpr_results_greedy = runGST(pfprStructs_greedy, pfprExperiments_greedy)\n",
"\n",
"print(\"\\n------ GST with PGGFPR sequences ------\")\n",
"pggfpr_results = runGST(pggfprStructs, pggfprExperiments)\n",
"\n",
"print(\"\\n------ GST with RFPR sequences ------\")\n",
"rfpr_results = runGST(rfprStructs, rfprExperiments)"
]
@@ -241,6 +386,10 @@
" ).write_html(\"tutorial_files/example_gfpr_report\")\n",
"pygsti.report.construct_standard_report(pfpr_results, title=\"Per-germ FPR Report Example\"\n",
" ).write_html(\"tutorial_files/example_pfpr_report\")\n",
"pygsti.report.construct_standard_report(pfpr_results_greedy, title=\"Per-germ FPR (Greedy Heuristic) Report Example\"\n",
" ).write_html(\"tutorial_files/example_pfpr_greedy_report\")\n",
"pygsti.report.construct_standard_report(pggfpr_results, title=\"Per-germ Global FPR Report Example\"\n",
" ).write_html(\"tutorial_files/example_pggfpr_report\")\n",
"pygsti.report.construct_standard_report(rfpr_results, title=\"Random FPR Report Example\"\n",
" ).write_html(\"tutorial_files/example_rfpr_report\")"
]
@@ -251,18 +400,28 @@
"source": [
"If all has gone well, the [Standard GST](tutorial_files/example_stdstrs_report/main.html),\n",
"[GFPR](tutorial_files/example_gfpr_report/main.html),\n",
"[PFPR](tutorial_files/example_pfpr_report/main.html), and\n",
"[PFPR](tutorial_files/example_pfpr_report/main.html),\n",
"[PFPR (Greedy)](tutorial_files/example_pfpr_greedy_report/main.html),\n",
"[PGGFPR](tutorial_files/example_pggfpr_report/main.html),\n",
"and\n",
"[RFPR](tutorial_files/example_rfpr_report/main.html),\n",
"reports may now be viewed.\n",
"The only notable difference in the output is the \"gaps\" in the color box plots, which plot quantities such as the log-likelihood across all operation sequences, organized by germ and fiducials. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "New_FPR",
"language": "python",
"name": "python3"
"name": "new_fpr"
},
"language_info": {
"codemirror_mode": {
Expand All @@ -274,9 +433,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
"version": "3.9.13"
}
},
"nbformat": 4,
"nbformat_minor": 1
"nbformat_minor": 4
}
25 changes: 19 additions & 6 deletions pygsti/algorithms/core.py
@@ -856,7 +856,23 @@ def _max_array_types(artypes_list): # get the maximum number of each array type
ret = ()
for artype, cnt in max_cnts.items(): ret += (artype,) * cnt
return ret


#These lines were previously inside the loop below, but they can be moved out
#so that the results can also be used when precomputing layouts:
method_names = optimizer.called_objective_methods
array_types = optimizer.array_types + \
_max_array_types([builder.compute_array_types(method_names, mdl.sim)
for builder in iteration_objfn_builders + final_objfn_builders])

#Precompute the COPA layouts. Layout construction performs memory availability checks,
#so doing this up front reduces the chance of running out of memory partway through the iterations.
#The precomputed layouts are passed to each iteration's ModelDatasetCircuitsStore below.
printer.log('Precomputing CircuitOutcomeProbabilityArray layouts for each iteration.', 2)
precomp_layouts = []
for i, circuit_list in enumerate(circuit_lists):
printer.log(f'Layout for iteration {i}', 2)
precomp_layouts.append(mdl.sim.create_layout(circuit_list, dataset, resource_alloc, array_types, verbosity= printer - 1))

with printer.progress_logging(1):
for i in range(starting_index, len(circuit_lists)):
circuitsToEstimate = circuit_lists[i]
@@ -871,12 +887,9 @@ def _max_array_types(artypes_list): # get the maximum number of each array type
if circuitsToEstimate is None or len(circuitsToEstimate) == 0: continue

mdl.basis = start_model.basis # set basis in case of CPTP constraints (needed?)
method_names = optimizer.called_objective_methods
array_types = optimizer.array_types + \
_max_array_types([builder.compute_array_types(method_names, mdl.sim)
for builder in iteration_objfn_builders + final_objfn_builders])
initial_mdc_store = _objfns.ModelDatasetCircuitsStore(mdl, dataset, circuitsToEstimate, resource_alloc,
array_types=array_types, verbosity=printer - 1)
array_types=array_types, verbosity=printer - 1,
precomp_layout = precomp_layouts[i])
mdc_store = initial_mdc_store

for j, obj_fn_builder in enumerate(iteration_objfn_builders):
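The shape of this refactor — hoisting per-iteration-invariant work (the array-type computation and layout construction) out of the loop so that memory checks fail early — can be sketched generically (names here are illustrative, not pyGSTi's API):

```python
def run_iterations(circuit_lists, build_layout, estimate):
    """Build every layout up front, so that any memory-availability check
    inside build_layout fails before expensive estimation starts, then
    reuse the precomputed layouts inside the per-iteration loop."""
    precomp_layouts = [build_layout(cl) for cl in circuit_lists]
    results = []
    for circuits, layout in zip(circuit_lists, precomp_layouts):
        results.append(estimate(circuits, layout))
    return results
```

The trade-off is that all layouts are held in memory simultaneously, which the hunk above accepts in exchange for surfacing out-of-memory failures at the start of the run rather than at iteration N.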