Private Inference of MNIST Model #60
Replies: 4 comments 6 replies
-
Hi @kanav99, thanks for contributing this project - it looks awesome! PS: as for compiler performance, we're cooking up some nice improvements here - stay tuned!
-
Congratulations @kanav99!!! You won our tinybin bounty. Can you confirm that the ETH address you provided is correct so we can send your prize: 0x91CD3caDe0BFd95Ad1d37BD058E05B04B4181A44. tinydemo winners announcement tweet: https://x.com/nillionnetwork/status/1820606184895947087
-
Thank you so much @oceans404! It's an honour! Yes, I confirm that this is my ETH address.
-
Hey @oceans404, was the transfer made? Thanks!
-
Show and tell project type
Builder Bounty Submission
Github Repo Link
https://github.com/kanav99/private-mnist
Video Walkthrough Link
https://vimeo.com/963472041?share=copy
Project Description
In this project, I perform private inference of a machine learning model trained on the MNIST dataset to carry out the task of digit recognition. In the process, neither the model owner needs to reveal the model weights, nor does the client need to reveal their input in the clear.
What problems does your project solve? How does it preserve privacy for users?
Machine learning models are usually proprietary assets of companies. On the other hand, inputs to these models might contain very private data such as addresses, signatures, photographs, or proprietary source code. Sharing either of the two is a severe breach of privacy. This project solves this using the multi-party computation (MPC) provided by Nillion.
While this example performs inference for the very simple task of digit recognition, the task comes up in many applications. Every time you convert hand-drawn characters to text on an iPad, a (possibly remote) inference is performed. It also comes up in optical character/mark recognition (OCR/OMR), which is used in grading exams, reading cheques, and so on. The inputs in all of these applications are sensitive, so private inference becomes a necessity in this age.
How does the project use Nillion? Describe and link to any Nada programs
We use Nada language to describe our model and perform inference privately. You can find the nada program here: https://github.com/kanav99/private-mnist/blob/main/nada/mnist/src/main.py
Is there anything else you want to share?
The model used in this demo is a simple linear model, which computes Wx + b for trained weights W and bias b, followed by an argmax. It seems that when you move to bigger models, like LeNet or MNIST-B, the compilation time grows very large: on my laptop, I was unable to compile even after 2 hours. I would love to see optimisations on the compiler side. Natively provided vector operations would also be really helpful for such ML tasks.
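The Wx + b followed by argmax computation described above can be sketched as a plaintext reference in plain Python. This is not the submitted Nada program — the function names, shapes, and fixed-point scale below are my assumptions — but it shows what the MPC circuit evaluates: the backend works over integers, so floats are encoded as scaled fixed-point values, and only the argmax index is revealed at the end.

```python
import numpy as np

SCALE = 1 << 16  # assumed fixed-point scale; MPC backends operate on integers


def to_fixed(x):
    """Encode floats as scaled integers, as a fixed-point MPC backend would."""
    return np.round(np.asarray(x, dtype=np.float64) * SCALE).astype(np.int64)


def private_linear_inference(W, b, x):
    """Plaintext reference for the secure computation: argmax(Wx + b).

    W: (10, 784) weights and b: (10,) bias come from the model owner;
    x: (784,) flattened image in [0, 1] comes from the client. In the
    real protocol both sides stay secret and only the class index is
    revealed.
    """
    Wf, bf, xf = to_fixed(W), to_fixed(b), to_fixed(x)
    # Wf @ xf carries scale SCALE**2, so rescale the bias to match.
    logits = Wf @ xf + bf * SCALE
    return int(np.argmax(logits))
```

A toy check: a weight matrix whose only nonzero entry links pixel 0 to class 3 should classify an image with pixel 0 lit as class 3.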
Optional - Link your project and team members' social handles
https://x.com/kanavgupta99
Optional - Team ETH Address(es)