
fix private tensor disclosure via execute_command #2434

Merged
merged 2 commits into OpenMined:dev from youben11:fix-execute-command on Aug 2, 2019

Conversation

youben11 (Member) commented Aug 2, 2019

Private tensors aren't meant to be accessible from a remote client. However, execute_command was fetching any object by its id; this fix retrieves the object using the get_obj method, which doesn't return private tensors.
#2432
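The flaw and the fix can be sketched roughly as follows; the class, method, and attribute names here (FakeTensor, execute_command_unsafe, the private flag) are illustrative stand-ins, not PySyft's actual API:

```python
class FakeTensor:
    """Stand-in for a tensor held by a worker."""

    def __init__(self, id, private=False):
        self.id = id
        self.private = private


class ObjectStorage:
    def __init__(self):
        self._objects = {}

    def set_obj(self, obj):
        self._objects[obj.id] = obj

    def get_obj(self, obj_id):
        """Safe accessor: refuses to hand out private tensors."""
        obj = self._objects[obj_id]
        if getattr(obj, "private", False):
            raise PermissionError(f"Object {obj_id} is private")
        return obj

    def execute_command_unsafe(self, obj_id):
        # Vulnerable pattern: arguments resolved straight from the
        # store by id, bypassing the privacy check.
        return self._objects[obj_id]

    def execute_command_safe(self, obj_id):
        # Fixed pattern: every lookup goes through get_obj.
        return self.get_obj(obj_id)
```

With a private object in the store, the unsafe path leaks it while the safe path raises, which is the behavior the patch is after.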

youben11 added 2 commits Aug 2, 2019
private tensors aren't meant to be accessible from a remote client;
however, execute_command was fetching any object by its id. This fix
retrieves the object using the get_obj method, which doesn't return
private tensors
LaRiffle (Collaborator) left a comment

Well spotted!
But get_obj doesn't check that the tensor is private, does it?

youben11 (Member, author) commented Aug 2, 2019

@LaRiffle

LaRiffle (Collaborator) commented Aug 2, 2019

I have an old version of dev... -> like a noob

Ok this is all good, thanks for spotting it!

@LaRiffle LaRiffle merged commit 4c4c8a6 into OpenMined:dev Aug 2, 2019
1 check passed: continuous-integration/travis-ci/pr (The Travis CI build passed)
midokura-silvia (Collaborator) commented Aug 2, 2019

The multiple inheritance scheme might cause problems.
BaseWorker defines the wanted get_obj behaviour.
However, which method gets invoked in WebsocketServerWorker?
It inherits from

  1. FederatedClient <- ObjectStorage and
  2. VirtualWorker <- BaseWorker <- ObjectStorage

So which get_obj() function does it have?
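Assuming the base order listed above, Python's C3 method resolution order answers this: FederatedClient is searched first but defines no get_obj of its own, so the lookup falls through to BaseWorker. A minimal stand-in hierarchy (not PySyft's real classes) shows the resolution:

```python
class ObjectStorage:
    def get_obj(self):
        return "ObjectStorage.get_obj"


class FederatedClient(ObjectStorage):
    pass  # no get_obj of its own


class BaseWorker(ObjectStorage):
    def get_obj(self):
        return "BaseWorker.get_obj"  # the privacy-checking variant


class VirtualWorker(BaseWorker):
    pass


class WebsocketServerWorker(FederatedClient, VirtualWorker):
    pass


# MRO: WebsocketServerWorker -> FederatedClient -> VirtualWorker
#      -> BaseWorker -> ObjectStorage -> object
mro = [c.__name__ for c in WebsocketServerWorker.__mro__]

# FederatedClient comes first but defines no get_obj, so lookup falls
# through to BaseWorker:
which = WebsocketServerWorker().get_obj()  # "BaseWorker.get_obj"
```

So under this assumed base order, BaseWorker.get_obj is the one actually invoked; a different base order in the real class would change the MRO, which is exactly why the question matters.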

@youben11 youben11 deleted the youben11:fix-execute-command branch Aug 2, 2019
@youben11 youben11 restored the youben11:fix-execute-command branch Aug 2, 2019
youben11 (Member, author) commented Aug 2, 2019

@midokura-silvia do you think we should specify the exact method it should call? Keep in mind that the private feature for tensors should still be available to remote workers. I think that introducing the private feature implies using get_obj() everywhere we want to access an object, because doing otherwise may leave an exploitable flaw.

midokura-silvia (Collaborator) commented Aug 2, 2019

Yes, specifying that it should call the BaseWorker.get_obj() method would make it explicit and easier to understand.
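The suggestion can be made concrete with an explicit, MRO-independent call; the class bodies below are illustrative stand-ins, not PySyft's real implementations:

```python
class ObjectStorage:
    def get_obj(self, obj_id):
        return ("ObjectStorage", obj_id)


class BaseWorker(ObjectStorage):
    def get_obj(self, obj_id):
        # The variant that enforces the private-tensor check.
        return ("BaseWorker", obj_id)


class FederatedClient(ObjectStorage):
    pass


class VirtualWorker(BaseWorker):
    pass


class WebsocketServerWorker(FederatedClient, VirtualWorker):
    def execute_command(self, obj_id):
        # Explicit call: unambiguous regardless of inheritance order.
        return BaseWorker.get_obj(self, obj_id)
```

Naming the class directly trades a little flexibility for the readability midokura-silvia asks for: a reader no longer has to work out the MRO to know which check runs.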

youben11 (Member, author) commented Aug 2, 2019

Could you please reopen the PR, or should I make a new one?

midokura-silvia (Collaborator) commented Aug 2, 2019

I don't see a way to reopen the pull request. The easiest option is to create a new one and reference the initial pull request.

robert-wagner added a commit that referenced this pull request Aug 2, 2019
* grad is None for CRT tensors

* remarks from review

* typo

* Add possibility to overwrite functions on native tensors

* Fix error in handle_func_command for AST

* Add support for torch.roll(<AST>, <MPT>)

* Rm the (spring) roll prints

* removed duplicated tests for roll

* added kwargs in native roll

* removed .get() in AST roll

* share and get for CRT tensors

* basic test share and get CRT tensors

* choice of CRT representation when using fix_prec()

* typo

* removed wrap test

* added operations with scalars

* minor changes

* Update version number

* added messages for __init__ assertions

* Add explicit support of fix_prec on pointers

* Disable gc-ing shared when simplying an AdditiveSharedTensor

* Add test on ops for remote AST

* Update tutorial 10

* assert fields are equal when sharing FPT

* more assert messages

* Add div by constant integer for autograd

* Fix autograd div with AST

* Add refresh option for AST and tests

* Split a test into 2

* choice in field size for CRT tensors

* Update README.md

* Typo Fixes in Tutorial/Part 1

* Typo Fixes in Tutorials Part 2

* Typo Fixes in Tutorial Part 4

* Add files via upload

* no more overflow for big fields and can represent neg values

* Update README.md

* Added fix_prec for Linear Object

* Python call inside start_websocket_servers.py same as the python worker used to invoke it

* minor modif in AST sub

* change sign of neg when doing FPT mul

* modif fix_prec and float_prec for CRT

* more interesting tests

* black

* Fix typos

* patch tf-encrypted version

* wrap keras constructor hooks to fix decorator signature

* Small changes to remove useless code

* Fix typo bug in torch.roll for AST

* Optimize _compress in serde and rm buggy test in test_serde

* Make create_pointer a static method

* Remove wrappers from AST shares and MPT children
- Move functionalities from native to pointer objects
- Make wrapper more like a real wrapper
- Update functionalities accordingly

* Add a no_wrap option for send() and share() to skip wrapping

* Generalize use of no_wrap in additive_shared

* Generalize use of no_wrap in crypto protocols

* Small fix for AST mul / matmul

* Add the data_size attribute to the BaseWorker Class

* Add a get_packet_size static method to the WebsocketClientWorker class

* Modify get_packet_size static method's interface arg in websocket_client.py

* Modify get_packet_info (renamed) to sniff on packets transmitted

* Modify docstring of get_packet_info static method

* Add pyshark to requirements_dev.txt

* Split original get_packet_info method into get_packets and read_packet functions

* Add arguments to be passed to get_packets method to control sniffing better

* Move network traffic monitoring utility to syft.generic.metrics.py

* Add tutorial example of new metrics utility to monitor network traffic

* To expand on the drafted metrics tutorial to give examples of the NetworkMonitor class

* Edit metrics tutorial as per reviewer's suggestion

* simple support for AST torch.dot

* tests for dot

* Typo

* Syft Doc

* change port in CI test

* Revert "change port in CI test"

This reverts commit 77f39d2.

* Minor Typo Fixes

* longer sleep

* longer sleep test_objects_count_remote

* Update README.md

Add instructions on how to run docker image on a Mac

* Remove the wrapper between FPT and AST

* Fix tests accordingly

* Fix circular import error

* Fix how workers are provided to nn.module.send

* Add docstring for MultiPointerTensor

* Small fix and improvements in native.py

* Update README.md

* longer sleep for last tests

* helper function to try websocket connection

* Remove redundant time.sleep() calls.

* Due to a pickling error while creating a separate process for the websocket server on Windows, modified the code to create the websocket server within the current process context itself

* ran black on websockets-example-MNIST

* use 'operates' instead of 'operate'

* Modify example to store test dataset on separate worker

Models are sent and evaluated at the (remote) worker.

* Address comments in pull request.

* Modify evaluate function to return dictionary

* Move train_config check into separate function

* Make model execution deterministic

* reduced the timeout interval on Windows due to a C timeval OverflowError

* removed pre-computation of reconstruction coefficients

* black

* added field to LPT

* reduced precision frac in tests

* Few fix post review

* Add default get and mid_get in abstract.py

* reduced timeout for websocket connection to 999999 seconds

* add field to get_class_attributes

* rename to crt_precision.py and CRTPrecisionTensor

* factorize assert residues are FPT

* rename variable to base_residue

* __imul__ for FPT

* black

* fixed multiplication FPT

* notebook with more instruction and some changes in scripts

start_websocket_servers.py had some unnecessary lines providing Python versions, which caused errors on Windows 10; they are not needed since subprocess.Popen is given the name of the executable that will run a given file with given arguments. In run_websocket_client, logging.debug was replaced by print because logging was not showing losses and errors. The notebook now has more instructions and possible solutions for common errors, and some extra code was removed.

* Update Federated learning with websockets and federated averaging.ipynb

* removed some wording

* Update start_websocket_servers.py

* minor fixes

fixed formatting in the notebook, replaced print with logs and added a condition to start servers for other platforms other than windows 10

* condition over python variable instead of list

* Add v0 of encrypted training on MNIST

* Add files via upload

* Add files via upload

* Closes #2412, remove simplification and detailing for plans that already have this done

* Reverting example

* Corrected some typos in the tutorials.

* Black

* Left padded some file names with a zero so tutorials are displayed in order

* black checked

* manual checked

* black rechecked

* black

* docstring where CRT comes from

* Adding a simplifier and a detailer for shape

* Delete Part 8 - Introduction to Plans.ipynb

* added notebook back

* black

* Initial

* replace all lists with tuples in serde

* removed print statements and fixed a couple bugs

* updated notebook

* update

* Delete Part 8 - Introduction to Plans.ipynb

* Added first pen testing challenge

* black

* Update tutorial + add illustration

* Update illustration link

* Switch to Adam, because of custom learning_rate for Adadelta (#2362)

* Add "Installing PySyft after Cloning Repository" to CONTRIBUTING.md (#2403)

* Fix broken Openmined.org demo (#2387)

* Copy and edit WebsocketClientWorker and WebsocketServerWorker notebooks from Google Colab

* Make the WebsocketServerWorker tutorial work, WebsocketClientWorker WIP

* Make changes to WebsocketClientWorker and WebsocketServerWorker notebooks so they work in Colab

* Update WebsocketServerWorker tutorial notebook for use in Colab

* Add "import logging" statement to "WebsocketServerWorker" notebook

* Change "print" statement into a "logging" one in websocket_server.py file

* Update websocket_server.py as per reviewer request

* One worker bug (#2407)

* one iteration doesn't need to change worker

* One worker testcase

* Plaintext speed regression notebook (#2350)

Adds notebook containing
- torch implementations of a few linear algebra routines
- initial implementations of linear regression and DASH.

* Make the local worker aware of itself on TorchHook creation. (#2431)

* Make the local worker aware of itself on TorchHook creation.

* Create test to ensure local worker is inside the _known_workers dict.

* Move test to test_local_worker

* bumpversion 0.1.21a1 -> 0.1.22a1 (#2427)

* Fix private tensor disclosure (#2434)

private tensors aren't meant to be accessible from a remote client,
however, execute_command was getting any object using his id, this fix
get the object using the get_obj method that doesn't return private
tensors

* Improve Build new tensor tutorial (#2435)

* Renamed func in hook_args for clarity (#2408)

* 1903 : renamed functions to remove ambiguity

* 1903 : renamed functions to remove ambiguity[reformatted]

* 1903 : renamed functions after suggestions

* Clear objects for ObjectStorage with websocket connection (#2410)

* Make clear_objects() callable on remote ObjectStorage.

Note that the signature of clear_objects() changed.
It does no longer return self.

* Remove comment

* Modify function clear_objects to return self by default and undo changes to test_udacity.py

* Add missing argument to remote clear_objects function

* Created an actual Message type and moved Plan out of federated (#2436)

* moved plan out of federated folder into message folder; added a stub for Message type

* black

* changed all

* put imports on separate lines

* removed promise code

* relative -> absolute imports

* removed promise stuff

* unit test for Message serde

* imported Message

* updated import

* removed extraneous comment

* removed extraneous imports

* unified how we refer to Message

* newline

* fixed a few inconsistent imports

* name conflict

* msg -> messaging

* black
iamtrask added a commit that referenced this pull request Dec 3, 2019
* Update Master (#2438)


* added new custom message types, but no change in functionality yet

* Added unit tests for CommandMessage, ObjectMessage, and ObjectRequestMessage

* added tests for GetShapeMessage and ForceObjectDeleteMessage

* black

* formatting

* __getitem__ should return None if the item doesn't exist

* bugfix with todo for pointer.is_none

* test isnone message

* black

* formatting

* added searchmessage with test

* black

* todos

* black

* added faster detailer and simplifier for CommandMessage

* black

* fixed comments

* fixed init

* black

* removed unnecessary comment

* cleanup request from bobby

* added _get_msg utility function

* added compress and decompress to serde init

* Add type checking for messages

In the PR, bobby made a recommendation to make sure the right messages are sent.
I actually found a bug when doing this which got sorted as well.

* run black

* Fix a typo

* Add documentation to message.py

* Fix test

This test was broken because it relied on the remote worker having logging engaged.
However, somehow I forgot to turn on logging before. It could be that since the
worker was global, if tests are run in a certain order they would pass. At any rate,
I figured out the issue. I also had to remove one assertion from
test/message/test_message.py::test_is_none_message because our serializer raises
an error on it (which is a valid error). Basically, it was checking to see if the
msg type was correct after the tensor had been deleted, but we cannot deserialize
the msg when inside the message it needs to deserialize a tensor which no longer
exists. Not a big deal, it wasn't an important assertion.

The bug regarding the worker (bob) not having log_msgs == True seems to be affecting
several tests. I'll be fixing those next

* Fix test_force_object_delete_message

This test was broken in the same way as the previous commit. bob did not have
logging engaged, so we couldn't use the message log in our assertions

* Fix test_get_shape_message

This test was broken in the same way as the previous commit. bob did not have
logging engaged, so we couldn't use the message log in our assertions

* Fix test_obj_req_message

This test was broken in the same way as the previous commit. bob did not have
logging engaged, so we couldn't use the message log in our assertions

* Fix test_obj_message

This test was broken in the same way as the previous commit. bob did not have
logging engaged, so we couldn't use the message log in our assertions

* Fix test_cmd_message

This test was broken in the same way as the previous commit. bob did not have
logging engaged, so we couldn't use the message log in our assertions

* Run black

* Run black

I forgot to run black on syft as well in the previous commit

* Add documentation to Message class

I added some inline documentation/comments to the Message class. I left one TODO
which I could have done myself, but it wouldn't get used in the current state
of the codebase, and it's a nice self-contained project which could be done
as a first project for someone learning the codebase.

* Add TODO

As an additional option (see last commit), I added an alternative for the project. It's unclear
to me at the moment whether it's a good idea for the Message type to have a default detailer.
It's not harming anything at the moment, but I also don't think it's getting used.

* Add inline documentation for CommandMessage

Details for this commit can be viewed in the documentation that I wrote.

* Fix error in docs

While working on the last commit I noticed an error in Message's inline documentation so I fixed it

* Typo in CommandMessage docs

* Add documentation for ObjectMessage

* Add documentation for ObjectRequestMessage

* Add documentation for IsNoneMessage

Details for this commit can be viewed by reading the documentation added

* Change messaging import strategy in serde.py

In the PR for this work, @midokura-silvia requested that imports be done differently.
#2450 (comment)
This follows inline with her recommendation.

* Access Message.contents as property

@midokura-silvia had a reasonable request to always save contents in Message instantiations. However, this is less efficient and breaks when extensions of the message have their own ways for storing their contents (other than a generic tuple). Explanation for why these are important has been added to the documentation in this commit.

I chose to alter the .contents attribute of Messages to be a property which references a private attribute ._contents. This satisfies
@midokura-silvia's request to keep the number of attributes fixed while also allowing for the efficiency gains
of messages with more expressive content storage.
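The property-backed contents pattern described here can be sketched as follows; the Message and CommandMessage bodies are illustrative stand-ins, not PySyft's actual classes:

```python
class Message:
    def __init__(self, contents):
        # Generic messages store their contents as a plain tuple.
        self._contents = contents

    @property
    def contents(self):
        # Public read-only view over the private attribute, so the set
        # of public attributes stays fixed across message types.
        return self._contents


class CommandMessage(Message):
    def __init__(self, name, args):
        # More expressive storage: no generic tuple kept around.
        self.name = name
        self.args = args

    @property
    def contents(self):
        # Built on demand from the expressive attributes instead.
        return (self.name, self.args)
```

This keeps the external interface uniform (every message exposes .contents) while letting subclasses store their data however is most efficient for them.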

* Run 'black syft'

* Fix typo in several todos 'detalier' -> 'detailer'

* Finish IsNoneMessage documentation

I committed a bit early on that one

* Change tensor_tuple to msg_tuple

* Document GetShapeMessage type

same as the previous commit but for this message type

* Document ForceObjectDeleteMessage type

same as the previous commit but for this message type

* Document SearchMessage type

same as the previous commit but for this message type

* Add Github issues for each TODO

As requested from #2450 (comment)
I have added a github issue for each TODO inside of syft/messaging/message.py

* Add 'syft/messaging/promise.py'

This will be the file in which the new Promise type will be created. This will allow
a worker to tell another worker that in the future it will have a tensor with a certain
id. Other workers can then add operations to that promise such that when the promise is
kept the other operations will be automatically triggered

* Init generic Promise object with schema and docs

I created the initial generic promise object with what I believe to be
the correct schema for the object. I added documentation explaining
how each piece will be used. The next step is to add serialization
for the promise object.

* Run 'black syft'

* Create TODO for misplaced files/classes

While working on this issue, I discovered that there are 3 files which are in the torch
framework folder but which are generic to PySyft. I added a TODO and link to a newly
created Github Issue for someone to make this modification.

* Add simplify and detail to Promise

I added a basic simplifier and detailer to Promise. I haven't added any tests yet.

I also ran 'black syft' which apparently I didn't do before the last commit, so that's
why it also edited a couple files in torch/pointers

* Add Promise to messaging/__init__.py

* Add __str__ and __repr__ to Promise

I added a to-string which I feel is informative to an end user without being too verbose.

I also ran 'black syft' which updated __init__.py from the changes created in the last commit

* Add attribute to Promise to store its future object

* Move default plans argument to initializer body

There's a very strange garbage collection bug when you put set() in the
default value part of an argument in __init__. When you re-initialize
a variable with the same name it gives it the same object as the last
plan had. Python should fix this bug.

At any rate, the fix for us is simple. Set plans=None by default and
just reset it to set() if 'plans is None'
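The behaviour described here is Python's documented default-argument semantics rather than a garbage collection bug: the default expression is evaluated once, at function definition time, so a mutable default is shared across calls. A minimal sketch of the pitfall and the plans=None fix (function and argument names are illustrative, not PySyft's API):

```python
def add_plan_bad(plan, plans=set()):
    # The default set() is created once, when the function is defined,
    # so every call without an explicit argument mutates the same object.
    plans.add(plan)
    return plans


def add_plan_good(plan, plans=None):
    if plans is None:
        plans = set()  # fresh object per call
    plans.add(plan)
    return plans


first = add_plan_bad("p1")
second = add_plan_bad("p2")
# first and second are the very same set, now containing both plans:
assert first is second and first == {"p1", "p2"}

third = add_plan_good("p1")
fourth = add_plan_good("p2")
# Each call gets its own set:
assert third is not fourth and fourth == {"p2"}
```

This is exactly why the plans=None idiom is the standard fix: the sentinel forces a fresh mutable object on every call.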

* Init PromiseTensor

I created the initial class for PromiseTensor based on LoggingTensor.
Note that since PromiseTensor needs to implement both AbstractTensor
and Promise, I needed to add *args and **kwargs to the __init__ of
both AbstractTensor and Promise as written in:
https://www.programiz.com/python-programming/methods/built-in/super

* Run 'black syft'

* Fix strange merge

I accidentally called git pull origin master when I meant to call
git pull origin dev and it led to some strange merge stuff.
Hopefully, this doesn't cause long term troubles but I think everything
is back to normal. (tests are passing)

* Run  and

* Fix multi-inheritance for PromiseTensor

It was really tricky to get PromiseTensor to properly call in the initializers
for AbstractTensor and Promise but I did get it to work. I'm not 100% sure it's
the conventional way. https://stackoverflow.com/questions/9575409/calling-parent-class-init-with-multiple-inheritance-whats-the-right-way
was helpful and a few other resources, but I ultimately had to try a combination
of multiple strategies together. I *think* it's ok but I'm not totally sure

* Cleanup extra reference code

I had some extra code in the file just as a reference for the last commit.
I removed it

* Run 'black syft'

* Init PromiseTensor example using __add__

As of this checkpoint, I've got an example working where you can add two PromiseTensors
together creating a promise chain, which you can fulfill by calling .keep on all of the
input variables.

* Remove Promise when fulfilled

I made it so that when you call .keep() on a promise on a tensor chain,
whichever parts of the chain get kept (as it proceeds through the
graph of promises) also have the PromiseTensor part of the chain removed

* Fix typing issue in wrappers

This commit was a little bit painful. Basically, I wanted PromiseTensors to
automatically transform into the tensors they're supposed to be, even if they're
wrapped. At the last commit this kind of worked, but it ended up with tensors
like Wrapper>tensor([1,2,3,4,5]) so I needed to instead move the data
into the wrapper. But this presented a problem because our wrapper types don't
match the data they wrap. So, I had to create a new .torch_tensor() method which
would try to sort out what type wrappers should be. I'll be surprised if this
functionality is perfect at this point but it works for this part of the codebase.

* Fix .torch_type() when self.child is dict

When I wrote the torch_type() method in the last commit, I forgot that sometimes
.child is a dict (MultiPointerTensor) which caused tests to error.

* Add TODO and link to Github Issue

* Run 'black syft'

* Add ways to create promises more conveniently

* Reorder methods in promise.py

* Clean up

* Remove print statement

* Add type information

* Remove commented out code

* Store input/output shape info on Plans

A Promise of a tensor must have shape information stored from initialization in order for
promises to be chained together, because each promise initializes (internally) a Plan, and
all plans require shape information upon their creation. Thus, I needed to add input
and output shape information to the schema of Plan objects.
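
The shape bookkeeping can be sketched in isolation. This is not PySyft's real Plan class — the names and structure here are illustrative assumptions:

```python
# Illustrative sketch: recording input and output shapes at
# construction lets promises be chained before any concrete
# tensor exists, since shape questions can be answered eagerly.
class ShapedPlan:
    def __init__(self, input_shape, output_shape):
        self.input_shape = input_shape
        self.output_shape = output_shape

    def promised_output_shape(self, arg_shape):
        # A downstream promise can ask for the output shape
        # without ever running the plan.
        if arg_shape != self.input_shape:
            raise ValueError(f"expected {self.input_shape}, got {arg_shape}")
        return self.output_shape

plan = ShapedPlan(input_shape=(2, 3), output_shape=(2, 3))
print(plan.promised_output_shape((2, 3)))  # (2, 3)
```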

* Fix bad merge

* Generate __add__ dynamically

Prior to this commit I had hard-coded the __add__ function for TensorPlan,
but since I do not want to do this for all functions, I modified the __add__
code to generate it automatically. In the next commit, I'll move this code to
TorchHook and try to do it for all methods.
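
The generate-instead-of-hard-code pattern looks roughly like this. The real TorchHook machinery is more involved; the class and helper names below are made up for illustration:

```python
import operator

# Sketch: generate arithmetic dunders in a loop instead of
# hard-coding each one by hand.
def make_method(op_name):
    op = getattr(operator, op_name)
    def method(self, other):
        return Box(op(self.value, other.value))
    method.__name__ = f"__{op_name}__"
    return method

class Box:
    """Stand-in for a tensor wrapper type."""
    def __init__(self, value):
        self.value = value

# Attach __add__, __mul__, __sub__ without writing them out.
for name in ("add", "mul", "sub"):
    setattr(Box, f"__{name}__", make_method(name))

print((Box(6) * Box(7)).value)  # 42
```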

* Make PointerTensor.method logic work with variable # of args

* Hook Promises for addition, multiplication, and division automagically

* Hook all methods in PromiseTensor

* Add automatic support for .grad on promise tensors

* Add simplify and detail to PromiseTensor

* checkpoint

* Fix imports from previous merge

* Fix typo

* Remove print statements

* Change Plan.readable_plan type from tuple to list

We need to be able to extend the readable_plan object, but it was getting
converted to a tuple when the ids were replaced via _replace_message_ids.
Invocations of _replace_message_ids now properly return a list.
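
The bug class here is easy to reproduce in isolation (the data below is made up; it is not the real readable_plan format):

```python
# A tuple silently breaks any code that expects to extend the
# sequence in place; keeping readable_plan a list avoids this.
steps = [("cmd", 1), ("cmd", 2)]
steps = tuple(steps)           # accidental conversion (the bug)
try:
    steps.extend([("cmd", 3)])  # AttributeError: tuples have no .extend
except AttributeError:
    steps = list(steps)        # the fix: keep it a list
    steps.extend([("cmd", 3)])
assert len(steps) == 3
```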

* Remove print statements

* Improve __str__ for PromiseTensor to be more informative

* Create PromiseTensor.send()

* Add owner to PointerTensor

* Enable plans across multiple workers

This was a larger commit than I wanted it to be, mostly
because PySyft was fighting back a bit and I got
a little impatient. I got things working, except
there are some ID issues which I will try to fix in the
next commit.

Basically, now if you call .send() on a promise, the
chain of promises will naturally stretch across workers,
such that if you .keep() all the input variables
then the execution will trigger across all the workers
involved in the promise chain.

However, remote chains are giving their kept tensors
the wrong ids. Will fix tomorrow.
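
The keep-triggers-the-chain behavior can be modeled with a small observer pattern. This is a single-process toy, not the actual cross-worker protocol, and all names are invented:

```python
# Toy sketch of keep() propagation: fulfilling the input triggers
# every dependent promise registered on it, cascading down the chain
# the way kept values would propagate across workers.
class ChainedPromise:
    def __init__(self):
        self.value = None
        self.waiting = []          # downstream (fn, promise) pairs to notify

    def then(self, fn):
        out = ChainedPromise()
        self.waiting.append((fn, out))
        return out

    def keep(self, value):
        self.value = value
        for fn, out in self.waiting:
            out.keep(fn(value))    # cascade through the chain

x = ChainedPromise()
y = x.then(lambda v: v * 2).then(lambda v: v + 1)
x.keep(10)
print(y.value)   # 21
```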

* Run 'black syft'

* Simplify wrapper management and fix keep() id bug

First we had a problem where .keep() wasn't setting the id as specified
in the promise. We fixed this by adding a one-liner which sets the id correctly.

Then we moved wrapper management from PromiseTensor.keep() to native.py, which just made things simpler.

* Remove print statement

* Add a comment

* Checkpoint to share code with Theo.

See following commits for explanation.

* added has_args_fulfilled method to plan

* fixed imports after merge

* remove def from loop

* operations can be done between promise and non-promise

* pointer tensors can forward keep method

* started to add first tests for PromiseTensors

* can send promises

* black

* remove doubled line

* use value to get the result of a promise

* can do remote operations on promises

This does not work anymore on local promises because .value() cannot be
used at the local worker.

* set local_worker.is_client_worker to False in tests needing it

* set is_kept to true when promise kept

* can do operations with scalar

* fix cast scalars to tensors when building plans for operations

* prepare to have buffer for promise objects

* output of promises is bufferized

* black

* .wrap() hack to fix test

* test for result buffer

* Remove code related to the create_send_plan method

I didn't understand this part of the code, and everything seems to work
without it, so I removed it.

* Removed code related to the obj_id2promise_id dict

I don't think it was needed: if we use .keep() on promises, the
promises can directly notify all the plans waiting for them.

* tests for plans with promises

* always use ids instead of objects

* Fixed operations on PromiseTensors

Only works when both arguments are promises for now

* Remove obj_id attribute from Promise

I don't think setting an id for the object to be built was necessary,
especially for promises that are kept several times, for which obj_id
would have changed each time.

* Plan attribute promised_args is dict

This is to be able to have promise and non-promise arguments for plans.

* remove commented line

* fix problem of ids in plans

* collection of arguments a bit clearer

* added function to setup plans with promises

* tests for promises on plans

* black

* added some doc

* removed attribute promised_args for plans

* a bit of comments

* setup_plan_with_promises closer to normal __call__ now

* removed comment

* Check if plan waiting for promise on same worker

This might not be the case if the promise has been moved

* call setup_plan_with_promises under the hood

* usage of plans with promises similar to classic plans

* black

* return result when using keep on wrapper

* some cleaning according to Theo's review

* remove tests mixing promises and remote computations

* rm code dealing with promises on remote workers

* use shape instead of _shape

* checkout websocket client example from upstream

* black

* remove id attribute for abstract Promise

* remove unnecessary cast to list

* rm change to tuto from PR

* set is_client_worker back to True at the end of promises tests

* direct re-alignment with plans

* added attributes to serde

* fix because of id problem

* method to simply send __call__ to plan when applied to PointerPlan

* black

* rm comments saying move to generic folder

* added implementation of remote_send

* start making protocols work with promises

* test for protocol with promises

* black

* doc for build protocol with promises

* cleaning

* fix output_shape property for plan

* clarify some comments for review

* clean call pointed plan on pointed promises

* can run protocol with 2 consecutive plans on same worker

* some cleaning according to review

* put promise_out_id in plan's procedure

* distribute plans on workers for protocols where ids match exactly

* detail and simplify for promise abstract method

* fix for serde refactor

* black

* clean according to review

* black

* can change location of pointer when remote_send

* tests for remote_send

* clean comment

* docstring for PromiseTensor arguments

* remove unused argument from PromiseTensor constructor

* black