Qiskit wrapper #2
Conversation
Co-authored-by: Quentin Barthélemy <q.barthelemy@gmail.com>
# Time complexity of the quantum algorithm depends on the number of trials and
# on the number of elements inside the covariance matrices.
# Thus we reduce the number of elements by using restrictive spatial filtering.
sf = XdawnCovariances(nfilter=1)
Once again, it should be at least `nfilter=2`, otherwise covariance matrices are reduced to scalars.
In the case of Xdawn, n_channels = nfilter x nclasses, if I am not mistaken.
Please, see my previous comment below:
#2 (comment)
Yes, I think you are right: output matrices are 2x2. However, it remains a rough dimension reduction.
Yes, there is this limitation on the feature dimension (24 when running on the local simulator).
I will try to do something similar to my last answer in the previous comment for `nfilter=2` and `3`.
Here is the result for `nfilter=2`, with NaiveDimRed vs PCA (top = classical, bottom = quantum).
The balanced accuracy is very low in the quantum case as compared to `nfilter=1`. This is likely because the feature dimension increases (from 10 to 18) while the number of measurements stays the same (1024).
Just to give an order of magnitude: the example displayed in the figure above took a day and a half to run on my local machine. It would be necessary to use a remote backend to achieve heavier computation, but I do not see how we could integrate this in a GitHub action.
That said, the question remains interesting; maybe it would be relevant to add an example of optimization with GridSearchCV on a smaller amount of data?
if __cvxpy__:
    classifiers.append(QuanticVQC())
classifiers.append(QuanticSVM(quantum=False))
What are the hyper-parameters of QSVM? Is it a linear or an RBF QSVM?
There are two cases:
- When `quantum=False`, it uses `SVC` with an RBF kernel.
- When `quantum=True`, it uses a `QuantumKernel`, which is parameterized by a feature map (which entangles the data into a quantum state).
Ok, but what are the hyper-parameters for `QuanticSVM(quantum=False)`?
You should use the same as `SVC`.
And when `quantum=True`, what are the hyper-parameters of the `QuantumKernel`?
For both there is something like this:
- transform the labels 0/1 into -1/1
- get the kernel matrix:
  - for `quantum=False`, that is a call to `sklearn.rbf_kernel` with `gamma=None` (hence the default is 1/n_features)
  - for `quantum=True`, that is a call to `QSVM.get_kernel_matrix` with `feature_map=ZZFeatureMap` and `enforce_spd=True` (set negative eigenvalues to zero if the kernel matrix is not positive semi-definite) as parameters
- optimize the kernel matrix, with similar parameters:
  - number of iterations = 500
  - normalization of the output = L2 norm of y
  - L2 norm regularization factor = 0.001 (C?)
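The `enforce_spd` step mentioned above can be sketched in a few lines of numpy. This is my own illustrative reconstruction of "set negative eigenvalues to zero", not the actual qiskit implementation:

```python
import numpy as np

def enforce_spd(K):
    """Clip negative eigenvalues to zero (symmetric kernel matrix assumed)."""
    w, V = np.linalg.eigh(K)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

K = np.array([[1.0, 2.0],
              [2.0, 1.0]])  # eigenvalues 3 and -1, so not positive semi-definite
K_spd = enforce_spd(K)
print(np.linalg.eigvalsh(K_spd))  # all eigenvalues now >= 0
```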
I changed `gamma='auto'` (i.e. 1/n_features according to the doc) and `C=0.001`. The figure below shows how `QuanticSVM(quantum=False)` compares to `SVC(gamma='auto', C=0.001)`:
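As a sanity check of the gamma convention, here is a small sketch verifying that `sklearn.metrics.pairwise.rbf_kernel` with its default `gamma=None` behaves like an explicit `gamma = 1/n_features` (the synthetic data is mine, chosen just for illustration):

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))  # 5 samples, 3 features

K_default = rbf_kernel(X)                 # gamma=None -> 1 / n_features
K_explicit = rbf_kernel(X, gamma=1 / 3)   # n_features = 3

print(np.allclose(K_default, K_explicit))  # True
```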
Ok. Like `SVC`, `QuanticSVM` should allow setting the SVM hyper-parameters in its `init`.
Yes, you are right. Do you also want me to take on this work in a follow-up issue?
Is it a huge modification of code?
At first sight, I would say that gamma (for quantum=False), feature_map and the L2 regularization (for quantum=True) are easy to access. The same goes for the number of shots, the optimizer and the variational form (for VQC), which might be relevant to expose.
Other parameters (enforce_spd, number of iterations, output normalization and L2 regularization for quantum=False) might be harder to access.
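Exposing these hyper-parameters could follow the usual scikit-learn pattern: store constructor arguments verbatim so `get_params`/`set_params` (and hence GridSearchCV) work. The class below is a hypothetical sketch of that pattern with a classical fallback, not the actual `QuanticSVM` implementation:

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.svm import SVC

class QuanticSVMSketch(BaseEstimator, ClassifierMixin):
    """Hypothetical sketch: expose SVM hyper-parameters in __init__."""

    def __init__(self, quantum=False, gamma="scale", C=1.0):
        # Store constructor args verbatim to stay GridSearchCV-compatible.
        self.quantum = quantum
        self.gamma = gamma
        self.C = C

    def fit(self, X, y):
        # Classical fallback; a quantum kernel would be plugged in when
        # quantum=True (omitted in this sketch).
        self._clf = SVC(gamma=self.gamma, C=self.C)
        self._clf.fit(X, y)
        return self

    def predict(self, X):
        return self._clf.predict(X)

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 4))
y = (X[:, 0] > 0).astype(int)
clf = QuanticSVMSketch(gamma="auto", C=0.001).fit(X, y)
print(clf.get_params())
```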
Ok for me, because code has been deeply refactored.
The review of this PR has been very time-consuming for me: in the future, I am not sure I will have enough time to keep reviewing such PRs.
@sylvchev , I let you review, approve and squash+merge.
And can you deploy the documentation on RTD?
@qbarthelemy
Thanks @gcattan, we could merge this PR. Thanks a lot @qbarthelemy for your careful reading!
Issue:
pyRiemann/pyRiemann#141
Blocker:
#1
This PR is a "fresh one" but a similar PR with commit history was raised in #3