Replicated Sharing Tensor: Multiplication, Matrix multiplication, and Convolution #4015
Conversation
Conflicts:
syft/frameworks/torch/tensors/interpreters/replicated_shared.py
test/torch/tensors/test_replicated_shared.py
```python
self.child = shares_map
return self
PRZS.setup(workers)
return shares_map
```
Q: Shouldn't we return `self` here?
No, not anymore. This function used to more or less initialize the object, but that no longer happens.
```python
):
    super().__init__(id=id, owner=owner, tags=tags, description=description)
    self.ring_size = 2 ** 32
    shares_map = self.__generate_shares_map(plain_text, players)
```
Q: Shouldn't we do this only if we have `plain_text != None`? (Maybe we can have two ways to construct an RSS: one where we send the plaintext to the constructor, and one using the `share` method.)
This is checked in the `__generate_shares_map` function: if `plain_text` is None, it returns None.
Hmm... seems OK. When refactoring we should stick with the same method name (we have `share` for AST, and we do the sharing in the constructor for the RST).
```python
def share_secret(self, secret, workers):
def __generate_shares_map(self, plain_text, players):
    if plain_text is None or players is None:
```
Q: Shouldn't we rather raise an exception here? (Maybe something like `BadValue`.)
Sometimes you need to make an empty tensor, like an empty torch tensor, so it should be acceptable, I think.
```python
    return workers

def generate_shares(self, plain_text, number_of_shares=3):
    shares = []
    plain_text.long()
```
Q: In case `plain_text` is a `float`, this will simply do a "cast" to an `int` type, right?
It will cast it from any `torch.dtype` to `torch.long`.
```python
    shares_map[workers[i]] = (shares[i], pointer)
    return shares_map

def conv2d(self, image, padding=0):
```
Q: Should we have this in the `RSS` class or in the `Falcon` class?
Hmm, it could be implemented either way, but the paper describes it as an operation of the sharing scheme, so let's stick to the paper for now; it can easily be changed later if that turns out to be inconvenient.
```python
    plain_text_mod = self.__sum_shares(shares)
    plain_text = self.__map_modular_to_real(plain_text_mod)
    return plain_text

def __retrieve_shares(self, shares_map):
    pointers = self.__retrieve_pointers(shares_map)
def retrieve_shares(self):
```
Q: Can we make this `__retrieve_shares`?
I meant to make them public, since they are very useful for debugging: you can see the shares before adding them. It's basically the reconstruction algorithm, except that you have to add the shares manually.
```python
    shares = []
    for pointer in pointers:
        shares.append(pointer.get())
    return shares

@staticmethod
def __retrieve_pointers(shares_map):
def retrieve_pointers(self):
```
Q: Also this one, `__retrieve_pointers`?
```python
    self.child = shares_map
    return self

def apply_to_shares(self, function, *args, **kwargs):
```
Q: This is used only internally. Should it be public?
Any reshaping operation like `view`, `unfold`, `reshape` (or any other non-arithmetic operation) will use this function, so it will be used externally.
LGTM! After you merge it, could you give me a ping to refactor this?
… Convolution (OpenMined#4015)

* initialized replicated sharing tensor
* removed crypto provider for now
* added reconstruction implementation and sharing from native tensor
* added more tests
* added private sharing
* added public addition and general addition
* added subtraction
* added negative results and refactors
* fix mod_to_real and switch private public
* cleaning
* communication rounds optimization
* more communication rounds optimization
* cleaning
* added edge cases to tests
* cleaning
* more cleaning
* even more cleaning
* even more cleaning
* arrange workers fix
* add and sub with operator
* added public multiplication
* cleaning
* more cleaning
* more cleaning
* added private multiplication without correlated randomness
* formatting
* added matrix multiplication
* added public matrix multiplication
* cleaning
* cleaning
* added shape, apply to shares
* added correlated randomness
* przs edits
* cleaning
* support for bigger ring size
* convolution support wootwoot!!🎉🎉
* cleaning
* more cleaning
* black box principle
* negative tests
* cleaning
* added support for 1- xor 2- consecutive arithmetic 3- arbitrary ring_size
* cleaning
* cleaning
* more cleaning, information hiding

Co-authored-by: Muhammed Abogazia <abogaziah@users.noreply.github.com>
Description
The implementation of the remaining operations of the replicated shared tensor that are necessary to implement the Falcon protocol.
1- Multiplication: to multiply a secret x = [x1, x2, x3] by a secret y = [y1, y2, y3]:
Parties locally compute z1 = x1y1 + x2y1 + x1y2, z2 = x2y2 + x3y2 + x2y3, and z3 = x3y3 + x1y3 + x3y1.
At the end of this, z1, z2, and z3 form a 3-out-of-3 secret sharing of z.
Each party adds a 3-out-of-3 correlated random number (alpha_i) to z_i, where i is the index of the worker.
Parties perform resharing by sending a copy of z_i to party i+1.
2- Matrix multiplication: the multiplication algorithm works for matrix multiplication as well, just by replacing the multiplication operator with matrix multiplication.
3- Convolution: convolution is done by converting it to a single matrix multiplication, as explained here, using torch unfold.
4- Applied the black-box principle, i.e. the class should take in a tensor and a set of workers and output a tensor; all other details should be abstracted inside the class. I did that by removing shares_map from the parameters and changing other details around it.
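The local multiplication step and the zero-sum masking described in point 1 can be sketched in plain Python over a 2**32 ring (scalar integers instead of torch tensors; `share`, `mul`, and `zero_shares` are illustrative helper names, not the PR's actual API):

```python
import random

RING = 2 ** 32  # same ring size the tensor uses (self.ring_size = 2 ** 32)

def share(secret, ring=RING):
    """Split a secret into three additive shares over the ring."""
    s1 = random.randrange(ring)
    s2 = random.randrange(ring)
    s3 = (secret - s1 - s2) % ring
    return [s1, s2, s3]

def reconstruct(shares, ring=RING):
    """Add all shares modulo the ring size to recover the secret."""
    return sum(shares) % ring

def mul(x, y, ring=RING):
    """Local step of replicated-sharing multiplication.

    Each z_i is computed from shares one party holds; z1 + z2 + z3
    expands to all nine cross terms, i.e. a 3-out-of-3 sharing of x*y.
    """
    x1, x2, x3 = x
    y1, y2, y3 = y
    z1 = (x1 * y1 + x2 * y1 + x1 * y2) % ring
    z2 = (x2 * y2 + x3 * y2 + x2 * y3) % ring
    z3 = (x3 * y3 + x1 * y3 + x3 * y1) % ring
    return [z1, z2, z3]

def zero_shares(ring=RING):
    """Correlated randomness (alpha_i) that sums to zero, so masking
    each z_i before resharing leaves the reconstructed value unchanged."""
    a1 = random.randrange(ring)
    a2 = random.randrange(ring)
    a3 = (-a1 - a2) % ring
    return [a1, a2, a3]

z = mul(share(7), share(6))
masked = [(zi + ai) % RING for zi, ai in zip(z, zero_shares())]
assert reconstruct(masked) == 42
```

Replacing the scalar products in `mul` with matrix products gives the matrix-multiplication case from point 2, since only the operator changes.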
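The unfold/im2col idea behind point 3 can be illustrated without torch: extract every kernel-sized patch as a row, then convolution becomes one matrix-vector product. This is a minimal sketch with stride 1 and no padding, using lists of lists; the function names are hypothetical, not the PR's API:

```python
def unfold(image, k):
    """im2col: extract all k x k patches (stride 1, no padding) as flat rows."""
    h, w = len(image), len(image[0])
    patches = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            patches.append([image[i + di][j + dj]
                            for di in range(k) for dj in range(k)])
    return patches

def conv2d(image, kernel):
    """Convolution as a single matrix multiplication over the unfolded image."""
    k = len(kernel)
    flat_kernel = [v for row in kernel for v in row]
    return [sum(a * b for a, b in zip(patch, flat_kernel))
            for patch in unfold(image, k)]

out = conv2d([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]],
             [[1, 0],
              [0, 1]])  # → [6, 8, 12, 14]
```

Because this reduces convolution to multiplications and additions, it composes directly with the secret-shared matrix multiplication above.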
Checklist