Add a new tutorial #4539
Conversation
The first review comment is anchored on this hunk:

```python
results = {}

def inverse(var):
```
MAYBE: We have this in `precision.py`:

```python
def reciprocal(self, method="NR", nr_iters=10):
    """
    Args:
        method:
            'NR': `Newton-Raphson`_ method computes the reciprocal using
            iterations of :math:`x_{i+1} = (2x_i - self * x_i^2)` and uses
            :math:`3*exp(-(x-.5)) + 0.003` as an initial guess by default
        nr_iters:
            Number of iterations for `Newton-Raphson`
    Returns:
        Reciprocal of `self`
    """
    if method.lower() == "nr":
        new_self = self.modulus()
        result = 3 * (0.5 - new_self).exp() + 0.003
        for i in range(nr_iters):
            result = 2 * result - result * result * new_self
        return result * self.signum()
```
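As a sanity check, the Newton-Raphson scheme above can be exercised on plain floats. This standalone helper is a hypothetical illustration, not the PySyft implementation (which operates on fixed-precision tensors); it mirrors the same initial guess and update rule:

```python
import math

def reciprocal_nr(x, nr_iters=10):
    """Plain-float sketch of the Newton-Raphson reciprocal above.

    Hypothetical helper for illustration only.
    """
    sign = 1.0 if x >= 0 else -1.0
    y = abs(x)
    # Initial guess: 3 * exp(-(y - 0.5)) + 0.003
    r = 3 * math.exp(0.5 - y) + 0.003
    for _ in range(nr_iters):
        # x_{i+1} = 2 * x_i - y * x_i^2
        r = 2 * r - r * r * y
    return r * sign
```

With the default 10 iterations this converges to several decimal places for moderate inputs.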
Are the results better/computed faster if you use this?
Interesting, I should compare!
But the key to the fast computation here is caching the result; for batchnorm at inference, it's always the same inverse you're trying to compute :)
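The caching idea can be sketched in plain Python. The class and names below are hypothetical (the actual PR caches the encrypted inverse inside the batchnorm module); the point is that at inference the running statistics are frozen, so `1 / sqrt(var + eps)` can be computed once and reused:

```python
class CachedBatchNorm:
    """Hypothetical sketch of inference-time batchnorm with a cached inverse."""

    def __init__(self, running_mean, running_var, eps=1e-5):
        self.mean = running_mean
        self.var = running_var
        self.eps = eps
        self._inv_std = None  # filled on the first forward pass

    def forward(self, xs):
        if self._inv_std is None:
            # The expensive reciprocal/sqrt happens only once.
            self._inv_std = [(v + self.eps) ** -0.5 for v in self.var]
        return [(x - m) * s
                for x, m, s in zip(xs, self.mean, self._inv_std)]
```

Every forward pass after the first reuses `_inv_std`, which matters a lot when the reciprocal is an iterative encrypted computation.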
Might be worth checking the following:
- if changing the initial guess gives better results (e.g. whether the difference between the logits would be smaller)
- if removing the `if...else` changes the speed -- sometimes computers do this "branch prediction" thing.

CPUs use a pipeline for fetching data/fetching instructions/etc., and they might run code before it needs to be run (predicting that you will take the "if" or the "else" branch). If the prediction is correct you get "full speed" ahead, but the problem comes when the prediction is bad, in which case they need to evict all the results from the pipeline, and that can introduce some latency.
I do not know the "internals" of Python well enough to know what it does with the `if...else`, but it might be worth a shot removing the `if...else` and checking if it speeds things up.
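For reference, one branch-free way to select the method in Python is a dispatch table. This is a hypothetical sketch (the function bodies are placeholders, not the real implementations), shown only to illustrate the pattern:

```python
def _reciprocal_nr(x):
    # Placeholder for the Newton-Raphson path.
    return 1.0 / x

def _reciprocal_log(x):
    # Placeholder for an alternative method.
    return 1.0 / x

# A dict lookup replaces the if...else chain entirely.
_METHODS = {"nr": _reciprocal_nr, "log": _reciprocal_log}

def reciprocal(x, method="NR"):
    return _METHODS[method.lower()](x)
```

That said, in CPython the interpreter overhead usually dwarfs any hardware branch-prediction effect, so this is more a readability choice than a speed win.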
PS: Sorry for the long post
Integrated :)
I'll keep the if/else statement for the moment because this function isn't called that many times.
I could indeed improve the initial guess for my method; I'll investigate this separately, I think.
- "Where a shared input [[x]] is applied on a public function" --> I think it should be the other way around
- alpha is called a random mask (when I hear "mask" I tend to think of an "index-value" selector) -- maybe call it a random number?
- x = y + 1: shouldn't this continue on the previous line?
- [[f_alpha]]_j: maybe specify that j is in {0, 1}, or explicitly write [[f_alpha]]_0 and [[f_alpha]]_1 (also, thinking of it, those function shares can be owned by some other workers, right?)
- Maybe: move "the function shares [[f_alpha]] are also shared between Alice and Bob" to where you give the example
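For readers following the [[x]] notation in the bullets above, a two-party additive sharing of a value can be sketched as follows. The modulus and function names are hypothetical (the tutorial's actual scheme lives in PySyft); the sketch only shows why a single share reveals nothing on its own:

```python
import random

Q = 2**31 - 1  # hypothetical modulus for the sharing ring Z_Q

def share(x):
    """Split x into two additive shares [[x]]_0 and [[x]]_1 over Z_Q."""
    s0 = random.randrange(Q)       # uniformly random, hides x
    s1 = (x - s0) % Q              # the other share completes the sum
    return s0, s1

def reconstruct(s0, s1):
    """Recover x from both shares; either share alone looks uniform."""
    return (s0 + s1) % Q
```

The random mask alpha in function secret sharing plays a similar role: it hides the true input while still letting the parties evaluate the shared function.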
OK, I've tried to improve!
- "Where a shared input [[x]] is applied on a public function" --> I think it should be the other way around. --> I would keep this for SecureNN, because both parties run the same "public" protocol to evaluate the function and have a secret-shared input.
Lol yeah that's a good point! Not sure if I can remove the warning though
Hmm... you can delete the message (manually), or what I did for the CrypTen tutorials was to simply edit the path and add "intentionally_changed_path" (or something like this :D)
OK I'll do it!
TYPO: I might be wrong. one by tab -> one per tab
There is also a ModelCentricFLClient (if you replace the model_owner with that... it might still work?) <-- I think it gives a better representation of which client has what data
Isn't the ModelCentricFLClient really different from the DataCentric one? It used to be the StaticFL worker, right?
Hmmm..I am not sure @cereallarceny might know more :D
Yes, ModelCentricFLClient is related to static FL
Thanks! :)
Codecov Report
```diff
@@            Coverage Diff             @@
##           master    #4539      +/-   ##
==========================================
- Coverage   94.81%   94.75%   -0.06%
==========================================
  Files         209      209
  Lines       21422    21464      +42
==========================================
+ Hits        20312    20339      +27
- Misses       1110     1125      +15
```
LGTM! I did not run it yet :( I can try to run it tomorrow and provide more input if something goes haywire.
Oh yeah, that would be awesome :D
@LaRiffle I am getting this error:
Commits:
- Improve native.encrypt()
- Fix typo
- Improve batchnorm by caching inverse results
- Add support for encrypt / decrypt on nn.Module
- Add early version of tutorial
- Fix AST serialization to send the protocol attribute
- Silence __del__ error at program termination
- Add serialiation of the data_centric_fl_client
- Update the resnet tutorial and rename folder
- Add demo for George and Alex
- Add working version of the tutorial with websockets
- Respond to comments on the tutorial
- Clean def encrypt
- Fix tests for AST serde
- Exlcude the notebook from the CI
- Blend functional.py inverse in FPT.reciprocal
- Update the tutorial
- Update tutorial
- Make N_Process dynamic depending on the machine
- UPdate tuto
- up
Description
This is a new tutorial, together with the changes needed to make it run.
Affected Dependencies
List any dependencies that are required for this change.
How has this been tested?
Checklist