---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
    format_version: 0.13
    jupytext_version: 1.11.1
kernelspec:
  display_name: Python 3
  language: python
  name: python3
---

# When should I use CDR?

## Advantages

The main advantage of CDR is that it can be applied without knowing the specific details of the noise model. In CDR, the effects of noise are indirectly learned through the execution of an appropriate set of test circuits, so the final error-mitigation inference tends to be tuned to the backend in use.
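The learning step described above can be sketched with plain NumPy. The training values below are made up for illustration, and the linear ansatz `exact ≈ a * noisy + b` is just one common choice of fit function:

```python
import numpy as np

# Hypothetical training data: pairs of (noisy, exact) expectation values,
# obtained by running near-Clifford training circuits on the noisy backend
# and on a classical simulator, respectively.
noisy_train = np.array([0.82, 0.61, 0.45, 0.73, 0.55])
exact_train = np.array([0.95, 0.72, 0.50, 0.86, 0.63])

# Fit the linear ansatz exact ≈ a * noisy + b by least squares.
A = np.vstack([noisy_train, np.ones_like(noisy_train)]).T
(a, b), *_ = np.linalg.lstsq(A, exact_train, rcond=None)

# Apply the learned map to the noisy expectation value of the
# circuit of interest to obtain the mitigated estimate.
noisy_value = 0.70
mitigated_value = a * noisy_value + b
```

Because the coefficients `a` and `b` are learned from circuits executed on the same backend, the resulting map implicitly encodes the backend's noise characteristics.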

This self-tuning property is even stronger in the case of variable-noise CDR, i.e., when using the `scale_factors` option of {func}`.execute_with_cdr`. In this case, the final error-mitigated expectation value is obtained as a linear combination of noise-scaled expectation values. This is similar to Zero-Noise Extrapolation, but in CDR the coefficients of the linear combination are learned instead of being fixed by the extrapolation model.
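The contrast with Zero-Noise Extrapolation can be sketched as follows. All numerical values are made up for illustration; the Richardson coefficients (1.5, -0.5) are the standard ones for scale factors (1, 3):

```python
import numpy as np

# Hypothetical noisy expectation values of training circuits measured at
# two noise scale factors (columns: scale 1 and scale 3), together with
# the corresponding exact values from a classical simulator.
E_scaled = np.array([
    [0.82, 0.60],
    [0.61, 0.40],
    [0.45, 0.28],
    [0.73, 0.52],
])
exact = np.array([0.95, 0.72, 0.50, 0.86])

# Variable-noise CDR learns the combination coefficients from the data...
coeffs, *_ = np.linalg.lstsq(E_scaled, exact, rcond=None)

# ...whereas linear Richardson extrapolation with scale factors (1, 3)
# fixes them a priori to (1.5, -0.5), independently of the backend.
richardson = np.array([1.5, -0.5])

# Both methods combine the noise-scaled values of the circuit of interest.
value_scaled = np.array([0.70, 0.49])
cdr_value = coeffs @ value_scaled
zne_value = richardson @ value_scaled
```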

## Disadvantages

The main disadvantage of CDR is that the learning process is performed on a suite of test circuits which only resemble the original circuit of interest: the test circuits are near-Clifford approximations of the original one. Only when this approximation is justified can CDR produce meaningful results. Increasing the `fraction_non_clifford` option of {func}`.execute_with_cdr` can alleviate this problem to some extent; note, however, that the larger `fraction_non_clifford` is, the larger the classical computation overhead.

Another relevant aspect to consider is that, to apply CDR in a scalable way, a valid near-Clifford simulator is necessary. The computational cost of such a simulator should scale with the number of non-Clifford gates, independently of the circuit depth. Only in this case can the learning phase of CDR be applied efficiently.
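The scaling requirement can be made concrete with a toy cost model (an assumption for illustration only, not the cost of any specific simulator): stabilizer-based methods simulate the Clifford part of a circuit at polynomial cost and pay an exponential factor only in the number of non-Clifford gates.

```python
def simulation_cost(num_qubits, depth, num_non_clifford, alpha=0.5):
    """Relative cost under a toy model: poly(n, depth) * 2^(alpha * t),
    where t is the number of non-Clifford gates and alpha is a
    hypothetical constant of the simulation method."""
    polynomial_part = num_qubits**2 * depth             # cheap Clifford part
    exponential_part = 2 ** (alpha * num_non_clifford)  # non-Clifford part
    return polynomial_part * exponential_part

# Doubling the depth only doubles the cost...
shallow = simulation_cost(num_qubits=10, depth=100, num_non_clifford=10)
deep = simulation_cost(num_qubits=10, depth=200, num_non_clifford=10)

# ...while doubling the number of non-Clifford gates multiplies the
# cost by 2^(alpha * 10), i.e., by 32 for alpha = 0.5.
more_t = simulation_cost(num_qubits=10, depth=100, num_non_clifford=20)
```

This is why CDR's training circuits are chosen to be near-Clifford: keeping the non-Clifford count small keeps the exponential factor, and hence the classical learning phase, tractable even for deep circuits.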