Description
Information
- Qiskit Experiments version: commit 112f4ce
- Python version: 3
- Operating system: osx
What is the current behavior?
The T1 fit is constrained by default to give amplitude and offset values in [0, 1]. This poses a few problems: if the actual data is not so constrained (for example, because readout is poorly calibrated), the resulting fit is poor, and if a bound becomes active, the assumptions of linear error propagation are violated and the reported error bars are invalid.
In extreme cases, if the initial guess is outside [0, 1] for the amplitude or offset, or outside some other range for T1, the fitter raises an exception, does not generate a plot, and ultimately will not put a fit result (even a poor one) in the results db. (This is hard to reproduce with the T1 fake backend since it uses discretized experimental data.)
All of these experiments need to be able to give some sort of result even for non-ideal data, and should not require manual intervention to fit data if it is at all possible to fit it. (Such a case was even included as an example in the T1 fitter.)
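The exception behavior can be reproduced outside Qiskit with plain SciPy (assuming, as was the case at the time, that the fitter delegates to `scipy.optimize.curve_fit`): when an initial guess lies outside the supplied bounds, SciPy rejects it outright rather than clamping it, so no fit result is produced at all. A minimal sketch with an illustrative T1 decay model:

```python
import numpy as np
from scipy.optimize import curve_fit

def t1_model(t, amp, tau, offset):
    # Standard exponential-decay model: amp * exp(-t / tau) + offset
    return amp * np.exp(-t / tau) + offset

# Data whose amplitude and offset fall outside [0, 1],
# e.g. due to miscalibrated readout
t = np.linspace(0, 5, 20)
y = 1.3 * np.exp(-t / 1.0) - 0.1

try:
    curve_fit(t1_model, t, y,
              p0=[1.3, 1.0, -0.1],                 # initial guess outside the bounds
              bounds=([0, 0, 0], [1, np.inf, 1]))  # the [0, 1] constraints
except ValueError as err:
    # SciPy raises instead of fitting: "`x0` is infeasible."
    print(err)
```

With the bounds removed, the same call fits this data without complaint, which is the behavior the issue argues for.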
Steps to reproduce the problem
What is the expected behavior?
For an admittedly extreme example, try:

```python
import numpy as np

from qiskit_experiments.characterization import T1
from qiskit_experiments.composite import ParallelExperiment
# A T1 simulator
from qiskit_experiments.test.t1_backend import T1Backend

# Simulate a T1 of 1 microsecond, with an unphysical initial |1> population
t1 = 1
backend = T1Backend(t1=[t1 * 1e-6], initial_prob1=[-2])

# Time intervals to wait before measurement
delays = np.linspace(2, 4, 51)

# Create an experiment for qubit 0, setting the unit to microseconds,
# with the specified time intervals
exp = T1(qubit=0, delays=delays, unit="us")

# Run the experiment circuits with 1000 shots each, and analyze the result
exp_data = exp.run(backend=backend, shots=1000)

# Print the result
res = exp_data.analysis_result(0)
print(res)
```
Suggested solutions
- The default parameters on experiments should be "correct" for common use cases.
- Never use a bounded optimizer on experimental data. Bounds are simply too good at hiding defects in the data (e.g., hiding that tomography data is intrinsically non-physical).
- Instead, use an unbounded optimizer and mark the result as bad if the maximum-likelihood estimate is outside the physical bounds by more than a few error bars.
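The last suggestion can be sketched with plain SciPy: fit without bounds, then grade the result by whether the estimate is within a few error bars of the physical range. The model, data, and `quality` helper below are illustrative, not the Qiskit Experiments API:

```python
import numpy as np
from scipy.optimize import curve_fit

def t1_model(t, amp, tau, offset):
    return amp * np.exp(-t / tau) + offset

# Synthetic data whose amplitude slightly exceeds the physical bound of 1
rng = np.random.default_rng(42)
t = np.linspace(0, 5, 51)
y = 1.15 * np.exp(-t / 1.0) - 0.05 + rng.normal(0, 0.005, t.size)

# Unbounded fit: always returns a result, even for unphysical data
popt, pcov = curve_fit(t1_model, t, y, p0=[1.0, 1.0, 0.0])
perr = np.sqrt(np.diag(pcov))

def quality(value, err, lo, hi, n_sigma=3):
    # "good" if the estimate is within n_sigma error bars of [lo, hi]
    in_range = (value + n_sigma * err >= lo) and (value - n_sigma * err <= hi)
    return "good" if in_range else "bad"

amp_quality = quality(popt[0], perr[0], 0.0, 1.0)  # "bad": amp ~ 1.15 >> 1
```

The fit itself still succeeds and can be plotted and stored; only the quality flag records that the amplitude is unphysical, so no manual intervention is needed to get a result into the results db.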