quantum mixer implementation #645
Conversation
This GitHub Action seems to be failing. I ran the command locally, but it doesn't seem to resolve the issue. Sorry, I'm new to this whole GitHub Actions thing.
There is already a PR implementing a quantum mixer here: #639. Can you please explain how it's different? I don't mind merging both mixers and/or doing the work to merge them into one and awarding both of you 10 points for the laptop, but I'd like to understand how this one is different given that it came after the original.
Yeah, there's not much difference in terms of the quantum neural network structure.
The main difference is that this PR is built on the existing neural mixer, to take advantage of the batching and hyperparameter tuning already in the neural mixer.
On inspection of both PRs, I would say the other PR spends more effort implementing the pipeline to load and train the network, whereas this one is geared towards a more generic Qiskit-supported neural network, with more Qiskit features such as batched job runs.
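(For readers unfamiliar with the batching point above: submitting circuits to a Qiskit backend one sample at a time is slow, so a batched job evaluates all samples in one submission. A minimal NumPy sketch of the idea, not code from either PR: for the single-qubit case, `RY(theta)|0>` has Z expectation `cos(theta)`, so a batch of circuit evaluations reduces to one vectorized call. The function name is hypothetical.)

```python
import numpy as np

def qubit_z_expectation(thetas):
    """Z expectation of RY(theta)|0> for a whole batch at once.

    Hypothetical stand-in for one batched job on a Qiskit backend
    (instead of one circuit submission per sample): analytically,
    RY(theta)|0> gives <Z> = cos(theta), so the batch is a single
    vectorized cosine over the parameter array.
    """
    thetas = np.asarray(thetas, dtype=float)
    return np.cos(thetas)

# One call covers the whole batch, mirroring a single batched job.
batch = np.array([0.0, np.pi / 2, np.pi])
expectations = qubit_z_expectation(batch)  # approximately [1, 0, -1]
```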
(Author of the other PR here.) I have to say that the two PRs came in at around the same time. This one is more production-ready, since it extends the pre-existing classes rather than creating new ones, hence taking advantage of much of the existing infrastructure. If I had to choose, I would merge this one. I have a few comments, stemming from TODOs I have in my PR:
At any rate, looking at both PRs more, I do like this one best; ultimately I'd want to get the mixers in and actually benchmark them. @ongspxm, any changes you'd like to make on this one, or are you happy to merge as is? (Again, softmax as the last activation looks wrong to me, but maybe I'm mistaken.) The unit tests look good, so in principle this works. @mrandri19, would you want to finish your original PR? If so, I can merge that too and then benchmark both head-to-head, and when we have the time I'll merge them into a joint implementation. Alternatively, if you have some ideas that'd build on this PR, I can merge this one when @ongspxm is happy with it and then you can PR your changes into the new staging.
Yeah, I think it should be good to go.
I think it's best to merge this one, since in their current state they do the same thing and this one does it better.
I'll do this, thanks!
I believe this is correct. The final softmax will be added if needed by
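(Background on the softmax concern raised above: if the network outputs logits and the loss already applies softmax internally, as PyTorch's `nn.CrossEntropyLoss` does, an extra softmax layer is redundant for prediction and can squash gradients during training. A small NumPy sketch, not the PR's code, showing that softmax is order-preserving, so the predicted class is identical with or without it:)

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])
probs = softmax(logits)

# Softmax is monotone: argmax over logits equals argmax over probs,
# so a trailing softmax never changes the predicted class.
assert probs.argmax(axis=-1)[0] == logits.argmax(axis=-1)[0]
```

This is why frameworks typically keep the final layer as raw logits and let the loss (or an explicit inference step) apply the softmax.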
Closes #641.
Implements a hybrid quantum-classical neural network, following https://qiskit.org/textbook/ch-machine-learning/machine-learning-qiskit-pytorch.html
Implementation notes
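(The linked Qiskit tutorial trains a hybrid network by backpropagating through the quantum layer with the parameter-shift rule: the gradient of the circuit's expectation value is obtained from two extra circuit evaluations at shifted parameters. A self-contained NumPy sketch of that rule, using the analytic single-qubit expectation `<Z> = cos(theta)` as a stand-in for a real circuit execution; the function names are illustrative, not the PR's API:)

```python
import numpy as np

def expectation(theta):
    # <Z> after RY(theta)|0>; stands in for running a real circuit.
    return np.cos(theta)

def parameter_shift_grad(theta, shift=np.pi / 2):
    """Gradient of the expectation via the parameter-shift rule.

    d<Z>/dtheta = (E(theta + s) - E(theta - s)) / 2 with s = pi/2,
    i.e. two extra circuit evaluations instead of symbolic autograd.
    """
    return (expectation(theta + shift) - expectation(theta - shift)) / 2.0

theta = 0.3
grad = parameter_shift_grad(theta)  # equals -sin(0.3) analytically
```

For this rotation the rule is exact, which is what lets the tutorial plug the quantum layer into PyTorch's autograd as a custom backward pass.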