Plaintext-speed SMPC Linear Regression #2069

Closed
iamtrask opened this issue Apr 15, 2019 · 12 comments
Labels
- Good first issue 🎓 Perfect for beginners, welcome to OpenMined!
- Status: Stale 🍞 Been open for a while with no activity
- Type: New Feature ➕ Introduction of a completely new addition to the codebase

Comments

@iamtrask
Member

This project is to port the following Paper/Code into PyTorch. It makes claims of plaintext-speed linear regression which could be very useful to users of PySyft.

Paper: https://arxiv.org/pdf/1901.09531.pdf
Code: https://github.com/jbloom22/DASH/

@iamtrask iamtrask added the Good first issue 🎓 Perfect for beginners, welcome to OpenMined! label Apr 15, 2019
@LaRiffle LaRiffle changed the title Plantext-speed SMPC Linear Regression Plaintext-speed SMPC Linear Regression Apr 15, 2019
@jbloom22

jbloom22 commented Apr 15, 2019

Well, that'd be sweet! I'll do my best to answer any questions. I also just joined the Slack as jbloom.

Roughly speaking, efficient distributed algorithms are privacy-preserving algorithms because efficiency requires minimizing the number of bits serialized over the network.

In the case of linear regression, we can "quotient out" rotational invariance, factoring the problem into plaintext compression followed by network usage that is independent of the number of samples n.

In the simplest case of y regressed on X (an n x k design matrix), this just amounts to staring at the normal-equations formula:
beta = (X^T X)^-1 (X^T y)

So:

  1. Compute X^T X, reducing k n-vectors to k(k+1)/2 numbers (the distinct entries of the symmetric k x k matrix).
  2. Compute X^T y, reducing the n-vector to a k-vector.
  3. Compute beta.

Steps 1 and 2 are the big-data steps, but they trivially parallelize over p groups of samples, e.g. X^T X = X1^T X1 + ... + Xp^T Xp. So compute these summands in plaintext, and for Step 3, just SMPC the sums and the tiny linear solve to obtain beta. The only extra quantity needed for the standard error is the scalar y^T y.
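
If it helps, here's a minimal numpy sketch of the three steps (my illustration, not the paper's code; in a real deployment Step 3 would run under an actual SMPC protocol, e.g. additive secret sharing, rather than in the clear):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p = 10_000, 5, 4                    # samples, features, parties
X = rng.standard_normal((n, k))
y = X @ rng.standard_normal(k) + 0.1 * rng.standard_normal(n)

# Steps 1-2: each party compresses its own plaintext partition of rows.
# Per party, only k*k + k numbers ever need to cross the network --
# independent of n.
partitions = np.array_split(np.arange(n), p)
XtX_parts = [X[idx].T @ X[idx] for idx in partitions]  # local X_i^T X_i
Xty_parts = [X[idx].T @ y[idx] for idx in partitions]  # local X_i^T y_i

# Step 3: sum the tiny summands and solve. Shown here in the clear; under
# SMPC the parties would secret-share these k x k matrices and k-vectors
# and run the sum + solve inside the protocol.
XtX = sum(XtX_parts)
Xty = sum(Xty_parts)
beta = np.linalg.solve(XtX, Xty)

# Sanity check against the single-machine least-squares solution.
assert np.allclose(beta, np.linalg.lstsq(X, y, rcond=None)[0])
```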

This gets more fun when you have a ton of features that you'd like to test one-by-one, while including a common set of confounders. We implemented the "singly-distributed" version in Hail (https://hail.is) and used it to compute the largest genetic association analysis to date in a couple of hours (http://www.nealelab.is/uk-biobank/), testing 4K traits at each of ~13M locations in the genome across ~360K Brits with ~20 additional covariates, with the data partitioned into intervals of rows in the variant-by-individual matrix.*

To accommodate the growing number of individuals in these studies, we're moving to a more general block (i.e. checkerboard) partitioning strategy. The DASH note is just the natural generalization to "doubly-distributed" (on features and also samples) linear regression. Once we have block partitioning, we'll implement the non-SMPC version in a few lines of Python. We're also discussing with Hoon Cho (https://hhcho.com/) and others how to best provide a provably-secure SMPC version to the statistical genetics community, building on the great work that he and Bonnie Berger have done in the area.
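
To make the block-partitioning idea concrete, here's a tiny numpy sketch (illustrative only -- the names and shapes are mine, not Hail's or DASH's) of how X^T X assembles from per-block products when X is sharded over both samples and features:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 12, 6
X = rng.standard_normal((n, k))

row_blocks = np.array_split(np.arange(n), 3)  # partitions over samples
col_blocks = np.array_split(np.arange(k), 2)  # partitions over features

# Each (I, J) output block of X^T X is a sum over the row partitions:
#   (X^T X)[I, J] = sum_r X[r, I]^T @ X[r, J]
# so every worker only ever touches its own (r, I) and (r, J) shards.
XtX = np.zeros((k, k))
for I in col_blocks:
    for J in col_blocks:
        XtX[np.ix_(I, J)] = sum(
            X[np.ix_(r, I)].T @ X[np.ix_(r, J)] for r in row_blocks
        )

assert np.allclose(XtX, X.T @ X)
```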

To be clear, none of this is specific to genomics, and I'd love for the mathematical ideas (useful in theory) to find their way into whatever (esp. open-source!) frameworks make them useful in practice. :)

*For BLAS3 vectorization, Hail also lifts the math to process regressions in 2d groups along both the trait and variant axes. The team is now building a Kubernetes/C++ backend as an alternative to Spark/Scala, which will facilitate compiling the parts of the computational graph that are big linear algebra to GPU / TPU.

@jbloom22

And in case you're not satisfied by my cheeky reference to the Pythagorean Theorem (https://en.wikipedia.org/wiki/Plimpton_322) as the proof of the Lemma, here's a home-brewed geometric proof (there must be plenty of short algebraic proofs in the lit):

Let P be the orthogonal projection from R^n onto C^perp (the orthogonal complement of the column space of C), so PC = 0. Let b and g be the fit parameters for:

y = xb + Cg + e

Then the residual e is orthogonal to both x and col(C), hence to Px which is in their span. Since e is orthogonal to col(C), we also know Pe = e. So applying P to both sides gives

Py = (Px)b + e

and this is an orthogonal decomposition (!), so by uniqueness the regression of Py on Px has the same estimate b and the same residual vector e as the original regression of y on x and C. All that's changed is the number of degrees of freedom.

[Explicitly, P = I - QQ^T, where Q is an orthonormal basis for col(C), e.g. from the reduced QR decomposition of C; this leads to the formulas.]
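
If you'd rather check the lemma numerically, here's a quick numpy verification (my own sketch, not from the note): regressing Py on Px recovers the same coefficient b and residual vector e as the full regression of y on x and C.

```python
import numpy as np

rng = np.random.default_rng(2)
n, c = 200, 3
x = rng.standard_normal(n)
C = np.column_stack([np.ones(n), rng.standard_normal((n, c))])  # confounders
y = 2.5 * x + C @ rng.standard_normal(c + 1) + rng.standard_normal(n)

# Full regression of y on [x, C].
A = np.column_stack([x, C])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b_full, e_full = coef[0], y - A @ coef

# Projected regression: P = I - Q Q^T with Q an orthonormal basis of col(C).
Q, _ = np.linalg.qr(C)
Px, Py = x - Q @ (Q.T @ x), y - Q @ (Q.T @ y)
b_proj = (Px @ Py) / (Px @ Px)        # OLS of Py on Px

# Same coefficient on x, and the same residual vector e.
assert np.isclose(b_full, b_proj)
assert np.allclose(e_full, Py - b_proj * Px)
```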

@akontaxis
Contributor

akontaxis commented Apr 15, 2019

I'd definitely be interested in taking up this project if it's still unassigned!

@kamathhrishi
Contributor

@akontaxis Go for it :)

@iamtrask
Member Author

Hey @akontaxis, how's it going? Also, are you in the Slack? (If so, what's your username?)

@akontaxis
Contributor

Hi @iamtrask! Yep, I'm on Slack (username is also akontaxis).

It's going alright -- I put together a basic notebook that walks through an example of the linear regression case. Securely performing the linear solve at the end is still a to-do, so I was planning to tackle that next, but I'm definitely open to any guidance on scope/next steps!

@jbloom22

jbloom22 commented May 1, 2019

I've updated the demo in the repo to clarify that the matrix R need not be shared publicly:
https://github.com/jbloom22/DASH/

@hereismari hereismari added the Type: New Feature ➕ Introduction of a completely new addition to the codebase label May 5, 2019
@akontaxis
Contributor

Hi guys! Apologies for the radio silence -- I've been busy the past few months but have been working on this intermittently. I just opened a PR: #2350

I kept this stuff in a notebook because I wasn't sure where it should sit in the codebase -- any feedback there would be super helpful!

@kamathhrishi
Contributor

I guess this issue should be closed?

@jbloom22

Hi Hrishikesh! Slack Andre Farias, winner of the OpenMined grant, for the latest if you're interested in contributing. Andrew Kontaxis is now coordinating with him; they should decide what to do with this issue. @akontaxis @andrelmfarias

@TanishB

TanishB commented Mar 1, 2020

Hello everyone! Is this issue resolved? If not, can I try it out? @iamtrask

@github-actions

This issue has been marked stale because it has been open 30 days with no activity. Leave a comment or remove the stale label to unmark it. Otherwise, this will be closed in 7 days.

@github-actions github-actions bot added the Status: Stale 🍞 Been open for a while with no activity label May 25, 2020