
Improve public mul and div for AST #3835

Merged: 2 commits from ryffel/ast_mul_div into master on Jul 16, 2020
Conversation

@LaRiffle (Contributor) commented on Jul 8, 2020

Description

This is a small cleanup PR that improves public (plaintext-operand) multiplication and division for the AdditiveSharingTensor (AST).
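For context only (this is not the PR's diff), here is a minimal sketch of why a "public" multiplication on an additively secret-shared tensor is cheap: each party can scale its own share by the public scalar locally, and the shares still reconstruct to the scaled secret. The ring size Q and the helper names below are assumptions for illustration, not PySyft code.

```python
import random

Q = 2 ** 62  # ring size, an assumption for this sketch


def share(x, n_parties=3):
    """Split x into additive shares that sum to x modulo Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares


def public_mul(shares, k):
    """Multiply a shared value by a public scalar k.

    Each party scales its own share locally; no communication is needed,
    because sum(k * s_i) mod Q == k * sum(s_i) mod Q.
    """
    return [(s * k) % Q for s in shares]


def reconstruct(shares):
    return sum(shares) % Q


assert reconstruct(public_mul(share(5), 7)) == 35
```

Public division is subtler: dividing each share by a public divisor does not commute with the modular reconstruction in general, so implementations typically rely on a modular inverse or a truncation step; the files touched by this PR (additive_shared.py, precision.py) are where that logic lives.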

Affected Dependencies

None

How has this been tested?

  • Describe the tests that you ran to verify your changes.
  • Provide instructions so we can reproduce.
  • List any relevant details for your test configuration.

Checklist

@LaRiffle added the labels Type: Improvement 📈, Priority: 3 - Medium 😒, and Severity: 3 - Medium 😒 on Jul 8, 2020
@LaRiffle requested review from gmuraru and Yugandhartripathi and removed the review request for gmuraru on Jul 8, 2020 at 13:29
The codecov bot commented on Jul 15, 2020

Codecov Report

Merging #3835 into master will increase coverage by 0.06%.
The diff coverage is 95.31%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #3835      +/-   ##
==========================================
+ Coverage   95.07%   95.14%   +0.06%     
==========================================
  Files         186      186              
  Lines       18857    18911      +54     
==========================================
+ Hits        17928    17992      +64     
+ Misses        929      919      -10     
Impacted Files Coverage Δ
...orks/torch/tensors/interpreters/additive_shared.py 93.02% <88.00%> (-0.42%) ⬇️
...frameworks/torch/tensors/interpreters/precision.py 97.53% <100.00%> (+0.27%) ⬆️
test/torch/tensors/test_additive_shared.py 100.00% <100.00%> (ø)
syft/messaging/message.py 92.69% <0.00%> (-0.20%) ⬇️
syft/generic/pointers/object_pointer.py 82.08% <0.00%> (-0.14%) ⬇️
...frameworks/torch/tensors/interpreters/gradients.py 94.19% <0.00%> (-0.04%) ⬇️
test/generic/test_gc.py 100.00% <0.00%> (ø)
test/execution/test_translation.py 100.00% <0.00%> (ø)
syft/execution/plan.py 94.85% <0.00%> (+0.01%) ⬆️
syft/workers/message_handler.py 91.71% <0.00%> (+0.04%) ⬆️
... and 5 more

@gmuraru (Member) left a comment

I looked quickly, but LGTM!

@youben11 (Member) left a comment

LGTM! But I'm wondering: do we always have to get the remote tensor to compare it with zero? Can't we do that remotely without fetching it?

@LaRiffle (Contributor, Author) replied, quoting the question above about comparing with zero remotely:

Yeah, indeed, that could be a security issue. We could have a remote is_zero, like we have an is_none, I think.
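A hypothetical sketch of that idea (remote_is_zero and the exact pointer behaviour are assumptions for illustration, not existing PySyft API): run the comparison on the worker that holds the data and only fetch the resulting boolean, instead of calling .get() on the whole tensor.

```python
def remote_is_zero(tensor_ptr):
    """Hypothetical helper: check a remote tensor for zeros without fetching it.

    Assumes pointer semantics where an operation on a pointer is executed on
    the remote worker and returns a pointer to the result, so only a small
    0-dim boolean ever travels back, never the plaintext tensor.
    """
    result_ptr = (tensor_ptr == 0).any()  # comparison assumed to run remotely
    return bool(result_ptr.get())         # retrieve only the tiny result
```

That is roughly the trade-off being suggested with a remote is_zero, analogous to the existing is_none: the requester learns whether a zero is present, but never sees the plaintext values.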

@LaRiffle merged commit aaf3739 into master on Jul 16, 2020
The delete-merged-branch bot deleted the ryffel/ast_mul_div branch on Jul 16, 2020 at 08:24
3 participants