
FL MNIST example with autograd tracing #3567

Merged: 4 commits merged into master on May 24, 2020
Conversation

vvmnnnkv (Member)

Description

Attempt to use autograd tracing (in its existing form) to trace the MNIST example for static FL.
Initially, there's a unit test that should pass.
Next, the notebooks should be updated.
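For orientation, here is a minimal sketch of the intended usage with a toy one-layer model. The `trace_autograd` build flag and the exact `func2plan`/`build` signatures shown are illustrative assumptions based on this PR's approach, not confirmed API:

```python
import torch
import syft as sy

hook = sy.TorchHook(torch)

# Toy stand-in for the MNIST training step (the real example uses the full model).
@sy.func2plan()
def training_plan(X, y, W):
    loss = ((X @ W.t() - y) ** 2).mean()
    loss.backward()
    return loss, W.grad

X = torch.randn(8, 28 * 28)
y = torch.randn(8, 10)
W = torch.randn(10, 28 * 28, requires_grad=True)

# Build the plan while tracing backward ops through AutogradTensor,
# so the recorded Plan is fully static and needs no autograd at runtime.
training_plan.build(X, y, W, trace_autograd=True)
```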

Checklist:

  • My changes are covered by tests.
  • I have run the pre-commit hooks to format and check my code for style issues.
  • I have commented my code following Google style.

(See the contribution guidelines for additional tips.)

@vvmnnnkv requested a review from a team as a code owner May 20, 2020 21:46
@vvmnnnkv requested review from karlhigley and Prtfw May 20, 2020 21:46
@review-notebook-app bot
Check out this pull request on ReviewNB to review the Jupyter notebook visual diffs.
codecov bot commented May 20, 2020

Codecov Report

Merging #3567 into master will decrease coverage by 0.11%.
The diff coverage is 96.07%.


```diff
@@            Coverage Diff             @@
##           master    #3567      +/-   ##
==========================================
- Coverage   94.76%   94.65%   -0.12%     
==========================================
  Files         159      160       +1     
  Lines       17269    17294      +25     
==========================================
+ Hits        16365    16369       +4     
- Misses        904      925      +21     
```
| Impacted Files | Coverage Δ |
| --- | --- |
| syft/execution/placeholder.py | 95.16% <ø> (ø) |
| .../frameworks/torch/tensors/interpreters/autograd.py | 87.91% <66.66%> (-0.53%) ⬇️ |
| syft/execution/plan.py | 94.73% <95.45%> (+0.98%) ⬆️ |
| syft/execution/role.py | 98.13% <100.00%> (+0.01%) ⬆️ |
| syft/execution/translation/torchscript.py | 100.00% <100.00%> (ø) |
| ...frameworks/torch/tensors/interpreters/gradients.py | 100.00% <100.00%> (ø) |
| ...works/torch/tensors/interpreters/gradients_core.py | 95.34% <100.00%> (+3.48%) ⬆️ |
| test/execution/test_translation.py | 100.00% <100.00%> (ø) |
| syft/frameworks/torch/fl/utils.py | 72.22% <0.00%> (-14.82%) ⬇️ |
| syft/workers/websocket_client.py | 54.10% <0.00%> (-4.11%) ⬇️ |
| ... and 31 more | |

```diff
@@ -181,3 +181,103 @@ def autograd_test(X):

     # Test all results are equal
     assert torch_grads.eq(ts_plan_grads).all()
+
+
+def test_fl_mnist_example_training_can_be_translated(hook, workers):
```
Member:
Outstanding!!!
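For intuition about what the new test asserts, here is a standalone sketch in plain PyTorch (the PR's actual test body is elided above; `step_static` and its contents are illustrative). Autograd tracing records the backward pass as explicit forward ops, so the TorchScript translation of the plan can reproduce autograd's gradients without calling autograd at runtime:

```python
import torch

# Eager reference: gradients via autograd.
def step_autograd(X, y, W):
    loss = ((X @ W.t() - y) ** 2).mean()
    return torch.autograd.grad(loss, W)[0]

# Static equivalent: the backward ops written out explicitly, the way
# autograd tracing would record them. No autograd is needed, so this
# version can be traced to TorchScript directly.
def step_static(X, y, W):
    pred = X @ W.t()
    grad_pred = 2.0 * (pred - y) / pred.numel()  # d(mse)/d(pred)
    return grad_pred.t() @ X                     # d(mse)/d(W) via the matmul rule

X = torch.randn(8, 4)
y = torch.randn(8, 2)
W = torch.randn(2, 4, requires_grad=True)

torch_grads = step_autograd(X, y, W)
ts_plan = torch.jit.trace(step_static, (X, y, W.detach()))
ts_plan_grads = ts_plan(X, y, W.detach())

# Same gradients from the eager path and the TorchScript-translated path.
assert torch.allclose(torch_grads, ts_plan_grads)
```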

Resolved review threads:

  • syft/execution/role.py (outdated)
  • syft/execution/role.py (outdated)
  • syft/execution/plan.py
```diff
@@ -100,7 +120,7 @@ def __init__(self, self_, other):
         self.other = other

     def gradient(self, grad):
-        assert isinstance(self.other, int)
+        # assert isinstance(self.other, int)
```
Contributor:
No longer needed or worth keeping?

vvmnnnkv (Member, Author):
This div code was made for ints, but it seems to work fine for tensors (with scalar values, at least).
@LaRiffle, do you see any problem with removing this assert?
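A quick standalone check of that claim in plain PyTorch (illustrative, not part of this diff): dividing by a scalar-valued tensor produces the same gradient as dividing by an int:

```python
import torch

x = torch.tensor([4.0, 6.0], requires_grad=True)

# Divisor as a Python int.
(x / 2).sum().backward()
grad_int = x.grad.clone()

# Divisor as a scalar-valued tensor.
x.grad = None
(x / torch.tensor(2.0)).sum().backward()

# Both cases give d(x / 2)/dx = 0.5 per element.
assert torch.equal(grad_int, x.grad)
```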

@vvmnnnkv requested a review from LaRiffle May 24, 2020 05:58
@karlhigley merged commit 682bfd2 into master May 24, 2020
@delete-merged-branch bot deleted the vvm/fl-mnist-backward branch May 24, 2020 20:20