This repository has been archived by the owner on Mar 23, 2021. It is now read-only.
[Power Up] Telling quantum DoQs and quantum Qats apart #19
Labels
Power Up
This is an entry for QHack Power Ups
Team Name:
Quant'ronauts
Project Description:
Idea: we classify the quantum states of n qubits, i.e. regions of Hilbert space. There are two categories, "Qat" and "DoQ". As an example, for n=1, one hemisphere of the Bloch sphere could be labelled "Qat" and the other "DoQ". The state vectors to classify are produced as the output of a sensor and then fed into a classifier circuit of M layers. Note that we are NOT classifying the sensor's classical parameter vector: any other sensor with a different parameterization would do, as long as it is capable of producing Qat and DoQ states. Also, we take the sensor as is; we don't try to "optimize" it.
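To make the n=1 example concrete, here is a minimal pure-Python sketch (our own illustration, not the project's actual circuit): states in the x-z plane of the Bloch sphere are parameterized by a polar angle theta, hemispheres give the labels, and a toy one-layer "classifier" is just an RY rotation followed by reading the sign of the Z expectation. The function names `label` and `classify` are hypothetical.

```python
import math

# Toy n=1 illustration. A state |psi> = cos(theta/2)|0> + sin(theta/2)|1>
# (x-z plane of the Bloch sphere) has Bloch z-coordinate z = cos(theta).

def label(theta):
    """Ground truth: northern hemisphere -> "Qat", southern -> "DoQ"."""
    return "Qat" if math.cos(theta) >= 0 else "DoQ"

def classify(theta, w):
    """Toy 1-layer classifier: rotate by RY(w), then read the sign of <Z>.
    For a state in the x-z plane, RY(w) shifts the polar angle, so
    <Z> = cos(theta + w) under this sign convention."""
    return "Qat" if math.cos(theta + w) >= 0 else "DoQ"

# With w = 0 the toy classifier reproduces the hemisphere labels exactly;
# training would tune w (and more layers) for other decision boundaries.
assert all(label(t) == classify(t, 0.0) for t in (0.1, 1.0, 2.0, 3.0))
```

A real M-layer classifier on n qubits would of course use trainable multi-qubit layers (e.g. in PennyLane), but the decision rule, the sign of a measured expectation, is the same idea.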
Catch: during operation, the sensor can produce its output only once. Thus, when we calculate the accuracy on the test set, we are not allowed to use expectation values resulting from many shots; there is only 1 shot. (In the training phase we can optimize using expectation values, since training is done in our laboratory, where we can recreate the sensor outputs of the training set at will.) We'd like to investigate how much the accuracy drops due to this 1-shot limitation, whether the drop differs between a simulator and real quantum hardware, and what kind of cost function would reduce this impact.
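The accuracy drop from the 1-shot limitation can already be seen in the n=1 toy setting: a single Z-basis measurement is a Bernoulli sample, so states near the decision boundary (the equator) are misclassified almost half the time, while the infinite-shot decision is deterministic. A small simulation sketch, under the same hemisphere-labelling assumption as above:

```python
import math
import random

random.seed(42)

def true_label(theta):
    """Hemisphere convention: +1 for "Qat" (north), -1 for "DoQ" (south)."""
    return 1 if math.cos(theta) >= 0 else -1

def one_shot_predict(theta):
    """Single Z-basis measurement: outcome +1 with probability cos^2(theta/2)."""
    return 1 if random.random() < math.cos(theta / 2) ** 2 else -1

def expval_predict(theta):
    """Idealised many-shot decision: sign of <Z> = cos(theta)."""
    return 1 if math.cos(theta) >= 0 else -1

thetas = [random.uniform(0, math.pi) for _ in range(2000)]
acc_1shot = sum(one_shot_predict(t) == true_label(t) for t in thetas) / len(thetas)
acc_exp = sum(expval_predict(t) == true_label(t) for t in thetas) / len(thetas)

# The expectation-value rule is exact here; the single-shot rule errs
# most often near the equator, where the outcome is nearly 50/50.
assert acc_exp == 1.0
assert acc_1shot < acc_exp
```

This is also why the choice of cost function matters: a cost that pushes training states away from the decision boundary makes the single shot less ambiguous.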
Extra: if multiple shots are allowed, how much would a data re-uploading scheme improve the accuracy? E.g. imagine M identical sensors located very close to each other. When a certain physical event happens, it sets the parameters of all M sensors at once, identically for each sensor; the parameters then stay fixed until the next event. Furthermore, the sensor may have exponentially many parameters that are inaccessible to us. So again, we are classifying quantum states, not parameter vectors.
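As a crude baseline for what M identical sensor copies buy us (our own illustration; a genuine data re-uploading circuit would instead interleave the M sensor states with trainable layers), one can simply measure each copy once and take a majority vote, which approximates the expectation-value decision as M grows:

```python
import math
import random

random.seed(1)

def true_label(theta):
    """Hemisphere convention from the n=1 example: +1 north, -1 south."""
    return 1 if math.cos(theta) >= 0 else -1

def shot(theta):
    """One Z-basis measurement of the sensor state (Born rule)."""
    return 1 if random.random() < math.cos(theta / 2) ** 2 else -1

def majority_vote(theta, m):
    """Crude use of m identical sensor copies: measure each once, vote.
    (m is kept odd so the vote cannot tie.)"""
    return 1 if sum(shot(theta) for _ in range(m)) >= 0 else -1

thetas = [random.uniform(0, math.pi) for _ in range(2000)]

def acc(m):
    return sum(majority_vote(t, m) == true_label(t) for t in thetas) / len(thetas)

acc_1, acc_9 = acc(1), acc(9)
```

Re-uploading should beat this majority-vote baseline, since the trainable layers between uploads can exploit correlations that independent votes throw away; measuring that gap is exactly the experiment proposed here.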
Source code:
https://github.com/mickahell/qhack21
Resource Estimate:
We need the extra credit to see in more detail how the use of real quantum hardware influences the accuracy of the classifiers, as well as the accuracy gap between the different options mentioned above.
After the simulation phase, we plan to try 4 candidate circuits of 1, 2, 5, and 10 wires, respectively, on a Rigetti device. We'll train with gradient descent for 50 steps, in batches of 10, calculating expectation values from 30 shots. Assuming an average of 60 trainable parameters per circuit, one training session requires 50x10x30x60x2 = 1'800'000 shots (the factor of 2 comes from the parameter-shift rule). There are 4 different systems to train, so a total of 4x1'800'000 = 7'200'000 shots for training.
Our test set has 200 items. For each of the 4 circuits, we'll compare 2 options: one using expectation values from 30 shots, and one using only a single shot. So the total number of shots required for testing is 4x200x30 + 4x200 = 24'800.
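The budget arithmetic above can be double-checked in a few lines (figures taken directly from the estimate):

```python
# Training budget: steps x batch size x shots x parameters x 2
# (the factor of 2 is the two evaluations per parameter-shift gradient term).
steps, batch, shots, params, shift_terms = 50, 10, 30, 60, 2
train_per_circuit = steps * batch * shots * params * shift_terms
assert train_per_circuit == 1_800_000

circuits = 4
assert circuits * train_per_circuit == 7_200_000

# Testing budget: 30-shot expectation-value runs plus single-shot runs.
test_items = 200
test_total = circuits * test_items * shots + circuits * test_items
assert test_total == 24_800
```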
This estimate still leaves a buffer in case the simulation phase makes us change some of these figures, and/or in case we also want to try the IonQ device.
Alternatively, we might train the circuits locally and run ONLY the testing phase on real quantum hardware; that would let us try many more than 4 already-trained circuits.