Unable to run SVM on Real Hardware #1718

Closed
masternerdguy opened this issue May 31, 2024 · 6 comments
Labels
bug Something isn't working

Comments

@masternerdguy

Describe the bug
Attempting to use QSVC on a real backend fails, even though it works perfectly in the simulator. This appears to be caused by ZZFeatureMap in combination with ComputeUncompute (which appears to be the only option), together with recent changes to the automatic transpilation process.

qiskit_ibm_runtime.exceptions.IBMInputValueError: 'The instruction ZZFeatureMap on qubits (0, 1, 2, 3) 
is not supported by the target system. Circuits that do not match the target hardware definition are no 
longer supported after March 4, 2024. See the transpilation documentation (https://docs.quantum.ibm.com/transpile) 
for instructions to transform circuits and the primitive examples (https://docs.quantum.ibm.com/run/primitives-examples) 
to see this coupled with operator transformations.'

Attempts to work around the problem by following the documentation produced a different, more "fundamental" error rather than resolving the issue.

qiskit_ibm_runtime.exceptions.IBMInputValueError: 'The instruction sxdg on qubits (3,) is not supported by the target system. 
Circuits that do not match the target hardware definition are no longer supported after March 4, 2024. See the transpilation 
documentation (https://docs.quantum.ibm.com/transpile) for instructions to transform circuits and the primitive examples 
(https://docs.quantum.ibm.com/run/primitives-examples) to see this coupled with operator transformations.'

Steps to reproduce
Attached are four Python files (a stripped-down sketch of the setup is also included after the attachment):

  • qsvm.py - demonstrates the expected behaviour of QSVC running an SVM on the local simulator.
  • qsvm-0-real.py - a direct attempt to run the same program on a real backend, which revealed the need to locally transpile ZZFeatureMap for real hardware.
  • qsvm-1-real.py - an attempt to resolve the issue by transpiling ZZFeatureMap, which failed again due to an unsupported sxdg instruction on real hardware.
  • qsvm-2-real.py - an attempt to resolve the issue by simplifying the problem as a desperate measure, leading to a similar sxdg error on a different qubit.

python code.zip
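
For context, here is a stripped-down sketch of the kind of setup involved (not the exact attached code; the backend selection, Sampler construction, and training data are placeholders and may differ slightly by qiskit-ibm-runtime version):

```python
from qiskit.circuit.library import ZZFeatureMap
from qiskit_algorithms.state_fidelities import ComputeUncompute
from qiskit_ibm_runtime import QiskitRuntimeService, Sampler
from qiskit_machine_learning.algorithms import QSVC
from qiskit_machine_learning.kernels import FidelityQuantumKernel

# Select a real device (placeholder: whichever backend the account can access).
service = QiskitRuntimeService()
backend = service.least_busy(operational=True, simulator=False)

# 4-qubit ZZFeatureMap for the 4 classical features of the Iris data set.
feature_map = ZZFeatureMap(feature_dimension=4, reps=2)

# ComputeUncompute builds the overlap circuits and submits them through the
# sampler; on a real backend they are no longer transpiled automatically,
# which is what raises the IBMInputValueError above.
fidelity = ComputeUncompute(sampler=Sampler(backend=backend))
kernel = FidelityQuantumKernel(feature_map=feature_map, fidelity=fidelity)

qsvc = QSVC(quantum_kernel=kernel)
qsvc.fit(X_train, y_train)  # X_train, y_train: placeholder training data
```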

Screenshots from running the four Python programs are attached:

  • qsvm - works on the simulator
  • qsvm-0-real - fails on real hardware
  • qsvm-1-real - fails on real hardware
  • qsvm-2-real - fails on real hardware

Expected behavior
The functionality available in the simulator should work on a real backend with minimal modification, or there should be an alternative method of similar complexity to perform the same calculation.

Suggested solutions
Is there an alternative way of transpiling ZZFeatureMap/ComputeUncompute to avoid the illegal sxdg operation on real hardware? It would be ideal if ZZFeatureMap were still automatically transpiled (the errors suggest this was the case in the past). Alternatively, is there another way of implementing this circuit that avoids this issue that could be included in the SDK?

If functionality is known to work only in the simulator, would it be possible to warn about that while running the code there, before the user attempts it on a real backend? That would reduce frustration if the operation is known to be impossible outside the simulator at this time.

Additional Information
I am very excited that I can run code on a real quantum computer at all to be honest - what a time to be alive! I am also quite new to quantum computing, so my understanding of this issue could be very wrong. I have been able to run simpler programs on real backends successfully on the free plan.

  • qiskit-ibm-runtime version:
    qiskit 1.1.0
    qiskit-algorithms 0.3.0
    qiskit-ibm-runtime 0.23.0
    qiskit-machine-learning 0.7.2

  • Python version:
    Python 3.9.13

  • Operating system:
    Windows 10 Version 22H2 (OS Build 19045.4412) x86_64

@masternerdguy added the bug label May 31, 2024
@ElePT (Collaborator) commented May 31, 2024

Hi @masternerdguy, thanks for the detailed issue description. This is a known issue in some classes from qiskit-algorithms; it has been reported and is currently being discussed in qiskit-community/qiskit-algorithms#164. Until the issue is addressed in that repo, in this particular case you may try installing qiskit-algorithms from source and manually running a transpilation step inside ComputeUncompute.run(), converting the circuit to the ISA before the sampler is called. This will translate the unsupported instructions into the chosen backend's basis set.
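
A rough sketch of that idea, assuming the fidelity circuits can be intercepted just before the sampler call (`backend`, `circuits`, and `sampler` are placeholders here, not the actual qiskit-algorithms internals):

```python
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager

# Build a preset pass manager targeting the chosen backend's ISA
# (basis gates, coupling map, etc.).
pm = generate_preset_pass_manager(optimization_level=1, backend=backend)

# Transpiling the circuits before Sampler.run() rewrites unsupported
# instructions (such as sxdg) into the backend's basis gates.
isa_circuits = pm.run(circuits)
job = sampler.run(isa_circuits)
```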

@masternerdguy (Author)

Thank you very much @ElePT for the explanation! Using that information, I think I have a workaround. For anyone who finds this, here is a patch file of the quick-and-dirty changes I made that allowed my job to be submitted:

patch.txt

Note that submitting the entire Iris data set turned out to be rather ambitious at 11175 (!) circuits, which would blow out my quota by orders of magnitude. So I discarded ~88% of the data set to bring it down to an acceptable number.
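
As an aside, 11175 is exactly 150·149/2, i.e. one fidelity circuit per pair of the 150 Iris samples, so the circuit count grows quadratically with the training set size. Roughly the kind of subsampling I mean, as a sketch (not my exact code; the split fraction and seed are arbitrary):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Keep roughly 12% of the 150 samples (about 18 points) so that the number
# of pairwise fidelity circuits, n*(n-1)/2, stays within the job quota.
X_small, _, y_small, _ = train_test_split(
    X, y, train_size=0.12, stratify=y, random_state=42
)
```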

[screenshot: job submitted]

As of writing, the job is still running. Once it completes, I would expect results like these, based on running the reduced data set in the simulator:

[screenshot: expected results from the simulator]

Obviously, discarding most of the data set significantly affects the predictions; however, if the job completes on the real hardware with similar output, that will still validate the workaround. Now I just need to wait and see.

@masternerdguy (Author) commented May 31, 2024

Well, this is probably my fault somehow, but it didn't quite work:

[screenshot: results on real hardware]

The original job did complete:

[screenshot: learning job]

As did a second one that ran immediately after for the fit step (which I didn't expect, but makes sense in hindsight):

[screenshot: prediction job]

Any ideas? Might this just be due to the randomness of the training and the significantly reduced data set?

@masternerdguy (Author)

I think this was indeed my fault: I reran it with all of the circuits transpiled together in one pass, and the results look better:

[screenshot: results after transpiling all circuits together]

It isn't quite what was expected based on the simulator; however, this feels a lot better: setosa is zero, and getting both a zero and a one prediction (instead of both being zero) suggests the computation is working better. I am going to assume the difference is essentially a stochastic effect combined with the reduced training data.

Thank you @ElePT for the workaround! I would be curious to know whether there really is a difference between transpiling the circuits one by one as opposed to all together. I don't have enough quota to experiment with that.

@ElePT (Collaborator) commented Jun 4, 2024

I'm glad it helped @masternerdguy :) Regarding one-by-one vs. batch transpilation, there should be no significant difference, but there is some stochasticity in certain transpiler passes. If you need to do further tests, you can set the seed_transpiler argument of transpile to ensure that both runs are equivalent.
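
For example, something along these lines (the seed value is arbitrary):

```python
from qiskit import transpile

# Fixing seed_transpiler makes the stochastic passes (e.g. layout and routing)
# reproducible, so batch and one-by-one transpilation can be compared fairly.
isa_batch = transpile(circuits, backend=backend, seed_transpiler=12345)
isa_one_by_one = [
    transpile(circ, backend=backend, seed_transpiler=12345) for circ in circuits
]
```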

@1ucian0 (Member) commented Jun 5, 2024

Closing this one here and referring to it from qiskit-algorithms, as it is not strictly a Qiskit Runtime issue (although it is a consequence of a change in Qiskit Runtime).
