
Add a new tutorial #4539

Merged: 22 commits merged into master, Sep 11, 2020

Conversation

@LaRiffle (Contributor) commented Sep 7, 2020

Description

This adds a new tutorial, along with the changes required to make it run.

Affected Dependencies

List any dependencies that are required for this change.

How has this been tested?

  • Describe the tests that you ran to verify your changes.
  • Provide instructions so we can reproduce.
  • List any relevant details for your test configuration.

Checklist

@review-notebook-app commented:

Check out this pull request on ReviewNB to review Jupyter notebook visual diffs and provide feedback on notebooks.

results = {}


def inverse(var):
Member commented:

MAYBE: We have this in precision.py:

    def reciprocal(self, method="NR", nr_iters=10):
        """
        Args:
            method:
                'NR': `Newton-Raphson`_ method computes the reciprocal using
                iterations of :math:`x_{i+1} = (2x_i - self * x_i^2)` and uses
                :math:`3*exp(-(x-.5)) + 0.003` as an initial guess by default
            nr_iters:
                Number of iterations for `Newton-Raphson`
        Returns:
            Reciprocal of `self`
        """
        if method.lower() == "nr":
            new_self = self.modulus()
            result = 3 * (0.5 - new_self).exp() + 0.003
            for i in range(nr_iters):
                result = 2 * result - result * result * new_self
            return result * self.signum()

Are the results better/computed faster if you use this?
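For intuition, the Newton-Raphson iteration quoted above can be checked on plain floats. This sketch assumes x > 0, so the `modulus`/`signum` steps of the fixed-precision tensor version are not needed; it is an illustration of the iteration, not the PySyft implementation:

```python
import math

def nr_reciprocal(x, nr_iters=10):
    """Approximate 1/x for x > 0 via the Newton-Raphson iteration above."""
    result = 3 * math.exp(0.5 - x) + 0.003  # same initial guess as precision.py
    for _ in range(nr_iters):
        # x_{i+1} = 2*x_i - x * x_i^2 converges quadratically to 1/x
        result = 2 * result - x * result * result
    return result

print(nr_reciprocal(4.0))  # converges to ~0.25
```

With 10 iterations and this initial guess, inputs in a moderate range converge to double precision; very large or very small x would need a better guess or more iterations.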

@LaRiffle (Contributor, Author) replied:

Interesting, I should compare!
But the key to the fast computation here is caching the result: for batchnorm at inference, it's always the same inverse you're computing :)
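The caching idea can be sketched as a simple memo table, mirroring the `results = {}` / `inverse` shape visible in the diff. The names and the plain-float stand-in for the encrypted reciprocal are illustrative, not the actual PySyft code:

```python
calls = {"n": 0}   # counts how many times the expensive path actually runs
results = {}       # cache keyed by the value being inverted

def expensive_inverse(var):
    calls["n"] += 1
    return 1.0 / var  # stand-in for the costly encrypted reciprocal

def inverse(var):
    if var not in results:
        results[var] = expensive_inverse(var)
    return results[var]

inverse(4.0)
inverse(4.0)  # second call is served from the cache
```

At inference, batchnorm repeatedly inverts the same running-variance values, so after the first forward pass every call is a cache hit.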

@gmuraru (Member) replied Sep 9, 2020:

Might be worth checking the following:

  • whether changing the initial guess gives better results (e.g. a smaller difference between the logits)?
  • whether removing the if...else changes the speed; sometimes computers do this branch-prediction thing.
    CPUs use a pipeline for fetching data/instructions and may run code before it is needed (predicting that you will take the "if" or the "else" branch). If the prediction is correct you get full speed ahead, but if it is wrong they need to evict all the results from the pipeline, which can introduce some latency.

I do not know enough of Python's internals to say what it does with if...else, but it might be worth removing it and checking whether that speeds things up.

PS: Sorry for the long post
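One way to settle the branch question empirically is a quick `timeit` micro-benchmark. This is a sketch with stand-in function bodies; in CPython the if/else cost is usually dwarfed by interpreter overhead, so measuring beats guessing:

```python
import timeit

def reciprocal_branchy(method="nr"):
    # the branchy dispatch, as in the current code
    if method.lower() == "nr":
        return "newton-raphson"
    else:
        return "other"

DISPATCH = {"nr": "newton-raphson", "other": "other"}

def reciprocal_branchless(method="nr"):
    # dict lookup instead of if...else
    return DISPATCH[method.lower()]

t_branchy = timeit.timeit(reciprocal_branchy, number=100_000)
t_branchless = timeit.timeit(reciprocal_branchless, number=100_000)
print(f"if/else: {t_branchy:.4f}s  dict dispatch: {t_branchless:.4f}s")
```

Whichever variant wins, the difference per call is on the order of tens of nanoseconds, which supports keeping the more readable form unless the function is truly hot.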

@LaRiffle (Contributor, Author) replied:

Integrated :)
I'll keep the if...else statement for the moment, because this function isn't called that many times.
I could indeed improve the initial guess for my method; I'll investigate that separately, I think.

@@ -0,0 +1,456 @@
{
Member commented:

  • "Where a shared input [[x]] is applied on a public function" --> I think it should be the other way around
  • alpha is called a random mask (when I hear "mask" I always tend to think of an "index-value" selector); maybe "a random number"?
  • x = y + 1: shouldn't this continue on the previous line?
  • [[f_alpha]]_j: maybe specify that j is in {0, 1}, or explicitly write [[f_alpha]]_0 and [[f_alpha]]_1 (also, thinking about it, those function shares can be owned by some other workers, right?)
  • Maybe: move "the function shares [[f_alpha]] are also shared between Alice and Bob" to where you give the example


@LaRiffle (Contributor, Author) replied:

OK, I've tried to improve!

  • "Where a shared input [[x]] is applied on a public function" --> I think it should be the other way around. --> I would keep this phrasing for SecureNN, because both workers run the same "public" protocol to evaluate the function, on a secret-shared input.
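For readers of the thread, the masking step under discussion can be sketched with toy additive secret sharing (single machine, integers mod 2**32; `share`/`reconstruct` are illustrative names, not the PySyft API):

```python
import random

Q = 2 ** 32  # ring size

def share(x):
    """Split x into two additive shares [[x]]_0, [[x]]_1 mod Q."""
    s0 = random.randrange(Q)
    s1 = (x - s0) % Q
    return s0, s1

def reconstruct(s0, s1):
    return (s0 + s1) % Q

# In function secret sharing, the shared input [[x]] is masked with a
# random alpha; only x + alpha is ever opened, which by itself reveals
# nothing about x because alpha is uniformly random.
x, alpha = 42, random.randrange(Q)
m0, m1 = share((x + alpha) % Q)
opened = reconstruct(m0, m1)
assert opened == (x + alpha) % Q
```

Each evaluator then runs its function share on the opened masked value, which is the "same public protocol on a secret-shared input" point made above.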

@@ -0,0 +1,456 @@
{
Member commented:

Q: Do you want to leave the path from your computer?



@LaRiffle (Contributor, Author) replied:

Lol yeah that's a good point! Not sure if I can remove the warning though

Member replied:

Hmm... you can delete the message manually, or do what I did for the CrypTen tutorials: simply edit the path to "intentionally_changed_path" (or something like that :D)

@LaRiffle (Contributor, Author) replied:

OK I'll do it!

@@ -0,0 +1,456 @@
{
Member commented:

TYPO (I might be wrong): "one by tab" -> "one per tab"

There is also a ModelCentricFLClient; if you replace model_owner with that, it might still work? I think it gives a better representation of which client has which data.



@LaRiffle (Contributor, Author) replied:

Isn't the ModelCentricFLClient really different from the DataCentric one? It used to be the static FL worker, right?

Member replied:

Hmmm... I am not sure; @cereallarceny might know more :D

Contributor replied:

Yes, ModelCentricFLClient is related to static FL

@LaRiffle (Contributor, Author) replied:

Thanks! :)

examples/tutorials/tutorial-websocket/WS Server Bob.ipynb (outdated; resolved)
@codecov bot commented Sep 9, 2020

Codecov Report

Merging #4539 into master will decrease coverage by 0.05%.
The diff coverage is 77.77%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #4539      +/-   ##
==========================================
- Coverage   94.81%   94.75%   -0.06%     
==========================================
  Files         209      209              
  Lines       21422    21464      +42     
==========================================
+ Hits        20312    20339      +27     
- Misses       1110     1125      +15     
Impacted Files Coverage Δ
syft/frameworks/torch/mpc/primitives.py 95.48% <ø> (ø)
syft/frameworks/torch/hook/hook.py 89.86% <42.85%> (-2.86%) ⬇️
...ft/frameworks/torch/tensors/interpreters/native.py 90.85% <80.95%> (-0.43%) ⬇️
syft/frameworks/torch/mpc/fss.py 84.14% <100.00%> (ø)
syft/frameworks/torch/nn/functional.py 89.51% <100.00%> (-0.32%) ⬇️
...orks/torch/tensors/interpreters/additive_shared.py 91.94% <100.00%> (ø)
...frameworks/torch/tensors/interpreters/precision.py 96.93% <100.00%> (+0.08%) ⬆️
syft/generic/pointers/object_pointer.py 79.41% <100.00%> (+0.62%) ⬆️
syft/generic/utils.py 100.00% <100.00%> (ø)
test/serde/serde_helpers.py 100.00% <100.00%> (ø)
... and 1 more

@gmuraru (Member) left a review:

LGTM! I did not run it yet :( I can try to run it tomorrow and provide more input - if something goes haywire

@LaRiffle (Contributor, Author) replied:

LGTM! I did not run it yet :( I can try to run it tomorrow and provide more input - if something goes haywire

Oh yeah that would be awesome :D

@gmuraru (Member) commented Sep 11, 2020:

@LaRiffle I am getting this error:

f"You tried to run a crypto protocol on worker {crypto_store._owner.id} "
AttributeError: 'str' object has no attribute '_owner'
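The traceback suggests a worker id string reached code that expects a worker object carrying `_owner`. A hypothetical defensive check (class and function names here are illustrative stand-ins, not PySyft's actual API) would fail earlier with a clearer message:

```python
class Worker:
    """Minimal stand-in for a worker object."""
    def __init__(self, id):
        self.id = id

class CryptoStore:
    """Minimal stand-in for a crypto store owned by a worker."""
    def __init__(self, owner):
        self._owner = owner

def owner_id(crypto_store):
    # Guard against the bug in the traceback: a string where an object belongs.
    if isinstance(crypto_store, str):
        raise TypeError(
            f"Expected a crypto store object, got the string {crypto_store!r}: "
            "pass the worker's store, not its id."
        )
    return crypto_store._owner.id

print(owner_id(CryptoStore(Worker("alice"))))  # prints alice
```

An explicit type check like this turns the confusing `AttributeError` deep in the protocol into an immediate, self-explanatory error at the call site.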

@LaRiffle LaRiffle merged commit 88c2606 into master Sep 11, 2020
@delete-merged-branch delete-merged-branch bot deleted the ryffel/resnet branch September 11, 2020 16:37
rav1kantsingh pushed a commit to rav1kantsingh/PySyft that referenced this pull request Oct 29, 2020
* Improve native.encrypt()

* Fix typo

* Improve batchnorm by caching inverse results

* Add support for encrypt / decrypt on nn.Module

* Add early version of tutorial

* Fix AST serialization to send the protocol attribute

* Silence __del__ error at program termination

* Add serialiation of the data_centric_fl_client

* Update the resnet tutorial and rename folder

* Add demo for George and Alex

* Add working version of the tutorial with websockets

* Respond to comments on the tutorial

* Clean def encrypt

* Fix tests for AST serde

* Exlcude the notebook from the CI

* Blend functional.py inverse in FPT.reciprocal

* Update the tutorial

* Update tutorial

* Make N_Process dynamic depending on the machine

* UPdate tuto

* up
3 participants