
Replicated Sharing Tensor: Multiplication, Matrix multiplication, and Convolution #4015

Merged: 50 commits, Aug 26, 2020
Conversation

@abogaziah (Member) commented Aug 14, 2020

Description

This implements the remaining operations of the replicated sharing tensor that are necessary to implement the Falcon protocol.
1- Multiplication: to multiply a secret x = [x1, x2, x3] by a secret y = [y1, y2, y3]:

  • Parties locally compute z1 = x1y1 + x2y1 + x1y2, z2 = x2y2 + x3y2 + x2y3, and z3 = x3y3 + x1y3 + x3y1.
    At the end of this, z1, z2, and z3 form a 3-out-of-3 secret sharing of z.

  • Each party i adds a 3-out-of-3 correlated random value alpha_i (with alpha_1 + alpha_2 + alpha_3 = 0) to z_i.

  • Parties perform resharing: party i sends a copy of z_i to party i+1.
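The three steps above can be sketched as a single-process simulation (a toy sketch: all three parties run in one process, plain Python ints stand in for ring elements, and `random` stands in for PRZS-style correlated randomness; `share`, `mul`, and `reconstruct` are illustrative names, not the PySyft API):

```python
import random

RING = 2 ** 32  # ring size used by the tensor in this PR

def share(secret):
    # 3-out-of-3 additive sharing, replicated so party i holds (s_i, s_{i+1})
    s = [random.randrange(RING), random.randrange(RING)]
    s.append((secret - s[0] - s[1]) % RING)
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def mul(xs, ys):
    # step 1: each party locally computes z_i = x_i*y_i + x_{i+1}*y_i + x_i*y_{i+1};
    # together the z_i form a 3-out-of-3 sharing of x*y
    z = [(xi * yi + xj * yi + xi * yj) % RING
         for (xi, xj), (yi, yj) in zip(xs, ys)]
    # step 2: mask each z_i with correlated randomness alpha_i summing to zero,
    # so the reshared values leak nothing
    a = [random.randrange(RING), random.randrange(RING)]
    a.append((-a[0] - a[1]) % RING)
    z = [(zi + ai) % RING for zi, ai in zip(z, a)]
    # step 3: resharing -- each party forwards one z share to its neighbour,
    # restoring the replicated (z_i, z_{i+1}) layout
    return [(z[i], z[(i + 1) % 3]) for i in range(3)]

def reconstruct(shares):
    # sum the three distinct shares (the first element of each party's pair)
    return sum(pair[0] for pair in shares) % RING
```

For example, `reconstruct(mul(share(6), share(7)))` returns 42: the alpha masks cancel because they sum to zero mod the ring size.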

2- Matrix multiplication: the multiplication algorithm carries over directly to matrix multiplication; simply replace the multiplication operator with the matrix-multiplication operator.
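For instance (a hypothetical NumPy sketch with a toy 2**16 modulus so np.int64 products cannot overflow; the masking and resharing steps are identical to the scalar case and omitted here):

```python
import numpy as np

RING = 2 ** 16  # toy modulus so np.int64 products cannot overflow
rng = np.random.default_rng(0)

def share(m):
    # replicated sharing of a matrix: party i holds (s_i, s_{i+1})
    s = [rng.integers(0, RING, size=m.shape, dtype=np.int64) for _ in range(2)]
    s.append((m - s[0] - s[1]) % RING)
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def matmul(xs, ys):
    # identical to the scalar rule, with * replaced by @ (operand order is
    # preserved everywhere, since matrix multiplication does not commute)
    return [(xi @ yi + xj @ yi + xi @ yj) % RING
            for (xi, xj), (yi, yj) in zip(xs, ys)]

def reconstruct(z):
    # sum the three 3-out-of-3 output shares
    return (z[0] + z[1] + z[2]) % RING

A = np.array([[1, 2], [3, 4]], dtype=np.int64)
B = np.array([[5, 6], [7, 8]], dtype=np.int64)
C = reconstruct(matmul(share(A), share(B)))  # equals (A @ B) % RING
```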

3- Convolution: convolution is performed by reducing it to a single matrix multiplication, as explained here, using torch unfold.
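The reduction can be sketched in plain NumPy (a hypothetical stand-in for torch unfold: `im2col` below builds the same patch matrix that unfold would produce, so a valid, stride-1 convolution becomes one matrix product):

```python
import numpy as np

def im2col(image, kh, kw):
    # unfold: each kh x kw patch of the image becomes one column
    h, w = image.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((kh * kw, oh * ow), dtype=image.dtype)
    for idx, (i, j) in enumerate((i, j) for i in range(oh) for j in range(ow)):
        cols[:, idx] = image[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_as_matmul(image, kernel):
    # valid, stride-1 cross-correlation expressed as a single matmul:
    # (1 x kh*kw) @ (kh*kw x oh*ow), then reshape to the output grid
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = kernel.ravel() @ im2col(image, kh, kw)
    return out.reshape(oh, ow)
```

Because the reduction uses only reshaping (non-arithmetic) operations plus one matrix multiplication, the secure matrix multiplication above is the only cryptographic step the convolution needs.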

4- Applied the black-box principle, i.e. the class should take in a tensor and a set of workers and output a tensor, with all other details abstracted inside the class. I did that by removing shares_map from the parameters and changing the other details around it.


@abogaziah abogaziah added Status: Review Needed 🙋 This needs someone to approve, deny, comment, or request changes and removed Status: In Progress 🌟 This is actively being worked on labels Aug 19, 2020
@abogaziah abogaziah marked this pull request as ready for review August 19, 2020 16:08
@abogaziah abogaziah requested a review from a team as a code owner August 19, 2020 16:08
@abogaziah abogaziah requested a review from gmuraru August 20, 2020 14:18
self.child = shares_map
return self
PRZS.setup(workers)
return shares_map
Member: Q: Shouldn't we return self here?

Member Author: No, not anymore. This function was used to more or less initialize the object, but that no longer happens.

):
super().__init__(id=id, owner=owner, tags=tags, description=description)
self.ring_size = 2 ** 32
shares_map = self.__generate_shares_map(plain_text, players)
Member: Q: Shouldn't we do this only if we have plain_text != None?
(Maybe we can have two ways to construct an RSS: one where we send the plaintext to the constructor and one using the share method.)

Member Author: This is checked in the __generate_shares_map function; if plain_text is None it returns None.

Member: Hmm, seems OK. When refactoring we should stick with the same method name (we have share for the AST, while we do the sharing in the constructor for the RST).


def share_secret(self, secret, workers):
def __generate_shares_map(self, plain_text, players):
if plain_text is None or players is None:
Member: Q: Shouldn't we rather raise an exception here (maybe something like BadValue)?

Member Author: Sometimes you need to make an empty tensor, like an empty torch tensor, so I think this should be acceptable.

return workers

def generate_shares(self, plain_text, number_of_shares=3):
shares = []
plain_text.long()
Member: Q: In case plain_text is a float, this will simply do a "cast" to an int type, right?

Member Author: It will cast it from any torch.dtype to torch.long.

shares_map[workers[i]] = (shares[i], pointer)
return shares_map

def conv2d(self, image, padding=0):
Member: Q: Should we have this in the RSS or in the Falcon class?

Member Author: Hmm, it could be implemented both ways, but the paper describes it as an operation of the sharing scheme, so let's stick to the paper for now. It can easily be changed later if it proves inconvenient.

plain_text_mod = self.__sum_shares(shares)
plain_text = self.__map_modular_to_real(plain_text_mod)
return plain_text

def __retrieve_shares(self, shares_map):
pointers = self.__retrieve_pointers(shares_map)
def retrieve_shares(self):
Member: Q: Can we make this __retrieve_shares?

Member Author: I meant to make them public, since they are very useful for debugging: you can see the shares before adding them. It's basically the reconstruction algorithm, except that you have to add the shares manually.

shares = []
for pointer in pointers:
shares.append(pointer.get())
return shares

@staticmethod
def __retrieve_pointers(shares_map):
def retrieve_pointers(self):
Member: Q: Also this one, __retrieve_pointers?

self.child = shares_map
return self

def apply_to_shares(self, function, *args, **kwargs):
Member: Q: This is used only internally. Should it be public?

Member Author: Any reshaping operation like view, unfold, or reshape (or any other non-arithmetic operation) will use this function, so it will be used externally.

@gmuraru gmuraru mentioned this pull request Aug 25, 2020
4 tasks
@abogaziah abogaziah requested a review from gmuraru August 25, 2020 18:39
@gmuraru (Member) commented Aug 26, 2020

LGTM! After you merge it, could you give me a ping to refactor this?

@abogaziah abogaziah merged commit 9dc09f3 into OpenMined:master Aug 26, 2020
rav1kantsingh pushed a commit to rav1kantsingh/PySyft that referenced this pull request Oct 29, 2020
… Convolution (OpenMined#4015)

* initialized replicated sharing tensor

* removed crypto provider for now

* added reconstruction implementation and sharing from native tensor

* added more tests

* added private sharing

* added public addition and general addition

* added subtraction

* added negative results and refactors

* fix mod_to_real and switch private public

* cleaning

* communication rounds optimization

* more communication rounds optimization

* cleaning

* added edge cases to tests

* cleaning

* more cleaning

* even more cleaning

* even more cleaning

* arrange workers fix

* add and sub with operator

* added public multiplication

* cleaning

* more cleaning

* more cleaning

* added private multiplication without correlated randomness

* formatting

* added matrix multiplication

* added public matrix multiplication

* cleaning

* cleaning

* added shape, apply to shares

* added correlated randomness

* przs edits

* cleaning

* support for bigger ring size

* convolution support wootwoot!!🎉🎉

* cleaning

* more cleaning

* black box principle

* negative tests

* cleaning

* added support for 1- xor, 2- consecutive arithmetic, 3- arbitrary ring_size

* cleaning

* cleaning

* more cleaning, information hiding

Co-authored-by: Muhammed Abogazia <abogaziah@users.noreply.github.com>