Running Simulations

Pavithran S Iyer edited this page Apr 17, 2018 · 32 revisions

Parameters that specify a submission record

There are a number of parameters that specify a simulation of quantum error correction. These parameters are collectively provided in a text file placed in the inputs/ folder. As usual, lines starting with # in the input file are ignored by chflow. Every parameter is specified on a new line in the format: keyword value. Basic keywords and their corresponding descriptions are provided in the table below.

| keyword | description | Mandatory | Default |
| --- | --- | --- | --- |
| ecc | Names of quantum error correcting codes at each concatenation level. | YES | |
| channel | Physical noise model. | YES | |
| noiserange | Range of values to be scanned for noise rates. | YES | |
| scale | Scale on which the noise rates are scanned. The actual noise rate is (scale)^noiserange. | NO | 1 |
| samples | Number of samples of the physical noise model. | NO | 1 |
| stats | Number of syndromes to be sampled at the top level. | NO | 1000 |
| metrics | Metrics to be computed on the logical channels. | NO | fidelity |
| outdir | Directory containing the simulation outputs. | YES | |
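Putting these keywords together, a submission record is just a plain text file of keyword value lines. The sketch below uses hypothetical values (in particular, the noiserange value shown is a guess at the syntax, which follows the format of the chcalib command); see inputs/sample_local.txt for a working example.

```text
# Hypothetical submission record (values for illustration only)
ecc FiveQubit,FiveQubit,FiveQubit
channel dp
noiserange 1,10,10
scale 0.66
samples 1
stats 1000
metrics fidelity
outdir qec_results
```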

Example

For a sample submission record file, see inputs/sample_local.txt. To load a simulation record in chflow, we must use the sbload command along with the name of the file containing the submission record. The details of the submission record can be viewed by invoking the sbprint command.

Pavithrans-MacBook-Pro:chflow pavithran$ ./chflow.sh 
>> sbload sample_local
Simulation data is available for 0% of the channels.
Preparing physical channels... 10 (100%) done. 
>> sbprint
Time stamp: input
 Physical channel
Parameters                     Values                        
Channel                        dp                            
Noise range                    [ 1.0  2.0  3.0  4.0  5.0  6.0  7.0  8.0  9.0  10.0].
Scale of noise rates           0.66                          
Number of Samples              1                             
Metrics
Logical metrics                fidelity                      
Error correction
QECC                           FiveQubit X FiveQubit X FiveQubit
[[N, K, D]]                    [[125, 1, 27]]                
Levels of concatenation        3                             
ECC frame                      P                             
Decoder                        0                             
Syndrome samples at level 3    1000                          
Type of syndrome sampling      N                             

>> 

To create a new copy of the loaded submission record, we must use the submit command.

Identifying a simulation record

A simulation record can be loaded and recalled using its time stamp. The time stamp is specified with the keyword timestamp in the submission text file. The value given to timestamp can technically be any string; by convention, however, the date and time are provided in the format DDMMYYYYHHMMSS. This information is also used to recall the simulation record data.
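For illustration, the DDMMYYYYHHMMSS format can be produced in a couple of lines of Python. This is only a sketch of the documented format, not chflow's own time-stamping code:

```python
# Produce a time stamp in the DDMMYYYYHHMMSS format used by chflow
# submission records. (Sketch only; chflow may generate its stamps
# differently internally.)
from datetime import datetime

def make_timestamp(now=None):
    """Return a date and time as a 14-character DDMMYYYYHHMMSS string."""
    if now is None:
        now = datetime.now()
    return now.strftime("%d%m%Y%H%M%S")

# 17 April 2018, 15:30:00 becomes "17042018153000"
print(make_timestamp(datetime(2018, 4, 17, 15, 30, 0)))
```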

Physical noise model

A submission can involve the simulation of many physical noise models, all of which are defined in the same form but with different values for the parameters. The type of channel is specified by the keyword channel and its value must be one of the names in the list of channels. The values for the parameters of the noise model are specified as a range, along with the keyword noiserange. If there are many parameters, a range for each parameter must be specified. The format for the noise range is the same as that required by the chcalib command. For several reasons, it is sometimes desirable to scan the noise parameter range on a log scale. This is made possible by specifying a value b, along with the keyword scale. When b != 1, the physical noise rate is calculated as b^x, where x lies in the noise range. Moreover, the value, N, accompanying the samples keyword ensures that a set of N physical channels is generated for every distinct value of the noise parameters.
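The scaling rule can be illustrated in a few lines of Python. The values below mirror the sbprint output shown earlier (scale b = 0.66, noise range 1 to 10); this only illustrates the b^x rule and is not chflow's internal code:

```python
# Convert exponents in the noise range to physical noise rates, p = b^x.
# With b = 0.66 (as in the sbprint output above), the rates are scanned
# on a log scale: each unit step in x multiplies the rate by 0.66.
b = 0.66  # value of the "scale" keyword
noiserange = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]

rates = [b ** x for x in noiserange]
print(rates[0])   # the rate at x = 1 is b itself
print(rates[-1])  # the smallest rate in the scan, b^10
```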

When a simulation is loaded using sbload, the physical noise models are generated and stored in a particular representation, which is specified by the value accompanying the repr keyword. By default, channels are stored as their Pauli Liouville matrix. Every distinct value of the noise parameters results in a new quantum channel that is saved to a file in physical/ whose name is formatted as: <channel name>_<parameter values separated by "_">.npy.
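The file-naming convention can be sketched as follows. The helper name and the example channel parameters are hypothetical; only the <channel name>_<parameter values>.npy pattern comes from the text above:

```python
import os

def physical_channel_filename(channel, params, folder="physical"):
    """Build the file name under which a physical channel is stored:
    physical/<channel name>_<parameter values separated by "_">.npy"""
    tail = "_".join(str(p) for p in params)
    return os.path.join(folder, "%s_%s.npy" % (channel, tail))

# A depolarizing ("dp") channel with one hypothetical noise rate value:
print(physical_channel_filename("dp", [0.4356]))
```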

Quantum error correction

Describing an error correcting code

Besides individual stabilizer codes, it is also possible to error correct with concatenated codes. A concatenated encoding can be thought of as a recursive encoding where a logical qubit is encoded in some physical qubits, each of which is in turn encoded in a further set of physical qubits. To specify a concatenated structure, it suffices to specify two quantities. First, the number of concatenation layers (or levels), accompanied by the keyword levels. Second, the type of error correcting code used at each level, in order from the physical to the logical layer; this is specified as a list of names of error correcting codes separated by , and accompanied by the keyword ecc. For example, the following lines specify a concatenated code with a Steane encoding at the physical layer and a 5-qubit encoding at the next two layers: ecc Steane,FiveQubit,FiveQubit and levels 3.
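The parameters of the resulting concatenated code follow the standard composition rule: concatenating codes with parameters [[n_i, 1, d_i]] yields a code whose length and distance are the products of the individual lengths and distances. A short sketch (the [[n, 1, d]] entries below are the well-known parameters of the 5-qubit and Steane codes; the helper itself is hypothetical):

```python
# [[n, k, d]] parameters of some standard codes, each with k = 1.
CODES = {"FiveQubit": (5, 1, 3), "Steane": (7, 1, 3)}

def concatenated_params(ecc_names):
    """[[N, K, D]] of a concatenation of k = 1 codes: N and D multiply."""
    N = D = 1
    for name in ecc_names:
        n, k, d = CODES[name]
        N *= n
        D *= d
    return (N, 1, D)

# Three levels of the 5-qubit code, as in the sbprint output above,
# give the [[125, 1, 27]] code reported there.
print(concatenated_params(["FiveQubit"] * 3))
```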

Decoding strategy

As mentioned in the section on decoding stabilizer codes, one can choose to employ either the minimum weight (MW) or the maximum likelihood (ML) decoding strategy in chflow. The keyword used to specify the type of decoder is decoder; it must take one of the values in {0, 1}, where 0 denotes an MW decoder and 1 an ML decoder.

In the case of the traditional maximum likelihood decoder, the recovery operation is a Pauli operator. However, recovery operations that are more general than Pauli operators have also been considered.

Measures of logical error

The metrics to be evaluated on the logical channels, and averaged over all syndromes, can be specified along with the metrics keyword. Multiple metric names must be separated by commas. For example, metrics infid,dnorm,entropy refers to computing the infidelity, diamond norm and Von Neumann entropy of all the channels at the logical level, and consequently their averages.
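The syndrome-averaged value of a metric is simply its expectation over the syndrome distribution. A minimal sketch, where the syndrome probabilities and per-syndrome metric values are made up for illustration:

```python
# Average a logical metric over syndromes, weighting the metric value of
# each syndrome's logical channel by that syndrome's probability.
def average_metric(syndrome_probs, metric_values):
    assert abs(sum(syndrome_probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * m for p, m in zip(syndrome_probs, metric_values))

probs = [0.9, 0.08, 0.02]    # hypothetical syndrome probabilities
infid = [1e-6, 1e-4, 1e-2]   # hypothetical infidelity for each syndrome
print(average_metric(probs, infid))
```

Note how the rare syndrome (probability 0.02) dominates the average here, which is exactly the situation that motivates the syndrome sampling techniques described below.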

Syndrome sampling

While simulating concatenated codes, the number of different syndromes for the code grows rapidly with the number of concatenated layers, soon making it infeasible to compute the logical channel (or the logical error rate) for every syndrome. Hence we must resort to sampling the syndrome distribution. It turns out that outlier syndromes contribute significantly to the average logical error rate. To this end, we have applied techniques from importance sampling to this problem. In order to use importance sampling methods, we must specify one of the two types of methods along with the keyword importance. The first, recalled by the string A, is based on sampling a higher root of the syndrome distribution. The second, recalled by B, is based on drawing syndrome samples from the simulation of a noise model of the same type but with a higher physical noise rate. The default value for the keyword importance is N, which refers to direct sampling of the syndrome distribution.
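Method A can be sketched as follows: draw syndromes from a biased distribution proportional to a higher root of the true syndrome distribution, so that outlier syndromes are drawn more often, and reweight each sample by the likelihood ratio so that averages remain unbiased. The root k and the toy distribution below are hypothetical; chflow's internal implementation may differ:

```python
# Importance sampling from the k-th root of a syndrome distribution.
# Low-probability (outlier) syndromes are drawn more often under the
# biased distribution q, and the weights w = p/q keep averages unbiased.
def root_biased(p, k=2):
    """Return q with q(s) proportional to p(s)^(1/k), and weights p/q."""
    q = [pi ** (1.0 / k) for pi in p]
    norm = sum(q)
    q = [qi / norm for qi in q]
    w = [pi / qi for pi, qi in zip(p, q)]
    return q, w

p = [0.9, 0.09, 0.01]        # toy syndrome distribution
q, w = root_biased(p, k=2)

# Unbiasedness check: E_q[w] = sum_s q(s) * p(s)/q(s) = sum_s p(s) = 1
print(sum(qi * wi for qi, wi in zip(q, w)))
```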

The number of syndrome samples collected to estimate the average logical error rate can be specified along with the keyword stats in the submission record file.

Execution of the algorithm

The syndrome samples for estimating an average logical error rate can be collected independently; this step can be parallelized by specifying the number of cores, c1, to be used for the syndrome sampling procedure. On the whole, every simulation (of a physical channel with distinct physical noise parameters) can in principle be done in parallel, since the simulations do not require any interaction with one another. The number of core-batches, c2, for this process can also be specified. Hence, the total number of cores used by chflow will be c1 x c2. In order to specify c1 and c2, we use the keyword cores as: cores c2,c1.
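For instance, a hypothetical submission record line requesting c2 = 4 batches of c1 = 8 cores each, for a total of 32 cores, would read:

```text
cores 4,8
```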

Running simulations

Simulations on a local computer

The simulations of quantum error correction can be run in two ways: on the interactive front end of chflow, or remotely on a computer. In either case, the submission record file must have the host information set to local and be loaded into chflow using the sbload command.

On the interactive front end of chflow

Once the simulation is successfully loaded, it can be run using the ecc command, as in the example below.

If logical error rate data from earlier runs is already available, invoking the ecc command does not re-run the simulations.

Remotely run simulations

Simulations on Compute Canada servers

The error correction simulations can often be intensive, since they are tantamount to full density matrix evolution of an n-qubit encoded system. This is especially true when many channels, corresponding to different values of the noise parameters (or different samples of the same noise parameters), need to be simulated. In such cases, one can run these simulations on the Mammouth Parallel and Mammouth Serial computing servers, using the bqtools software. The relevant specifications for bqtools that must be provided in the submission record file are as follows.

| keyword | value type | corresponding keyword in bqtools configuration file |
| --- | --- | --- |
| job | string | "batchName" |
| queue | string | See queue information in "submitOptions" |
| wall | int | See walltime information in "submitOptions" |
| email | string | "emailAddress" |

In addition to the above parameters required by bqtools, we must also specify the name of the host cluster, accompanied by the keyword host, which can be one of mp2 or ms.
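Together, the bqtools portion of a submission record might look like the following sketch. The job name, queue, wall time and email address are placeholders; consult the cluster's queue documentation for the actual queue names and wall time limits:

```text
host mp2
job qec_scan
queue qwork
wall 12
email user@example.com
```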
