This repository has been archived by the owner on Feb 16, 2023. It is now read-only.

Jupyter Notebook examples and tutorials do not work with docker-compose #543

Closed
KadKla opened this issue Apr 18, 2020 · 15 comments

@KadKla

KadKla commented Apr 18, 2020

Describe the bug
I launched the gateway and the nodes with the provided docker-compose file and tried to connect both with a locally running JupyterLab and with a JupyterLab running inside the docker-compose setup. In both cases I cannot successfully execute the notebooks from two different example sets: neither the PyGrid tutorials in PySyft (here, Part 1 worked and the nodes seem to know each other) nor the PyGrid examples.

I optionally changed localhost to gateway, alice, bob, etc.; however, this did not change anything. Before launching the docker-compose setup I added all the hosts to the /etc/hosts file:

127.0.0.1       gateway
127.0.0.1       bob
127.0.0.1       alice
127.0.0.1       bill
127.0.0.1       james

In the local JupyterLab as well as in the JupyterLab running in the Docker container, I get the following error:

Websocket connection closed (worker: Bob)
Created new websocket connection
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-5-12b0ebb6e5f3> in <module>
      7 and storing a pointer plan that manages all remote references.
      8 '''
----> 9 cloud_grid_service.serve_model(model,id=model.id,allow_remote_inference=True, mpc=True) # If mpc flag is False, It will host a unencrypted model.

/opt/conda/lib/python3.7/site-packages/syft/grid/public_grid.py in serve_model(self, model, id, mpc, allow_remote_inference, allow_download, n_replica)
     64             self._serve_unencrypted_model(model, id, allow_remote_inference, allow_download)
     65         else:
---> 66             self._serve_encrypted_model(model)
     67 
     68     def query_model_hosts(

/opt/conda/lib/python3.7/site-packages/syft/grid/public_grid.py in _serve_encrypted_model(self, model)
    162 
    163                     # SMPC Share
--> 164                     model.fix_precision().share(*smpc_workers, crypto_provider=crypto_provider)
    165 
    166                     # Host model

/opt/conda/lib/python3.7/site-packages/syft/execution/plan.py in share_(self, *args, **kwargs)
    526 
    527     def share_(self, *args, **kwargs):
--> 528         self.state.share_(*args, **kwargs)
    529         return self
    530 

/opt/conda/lib/python3.7/site-packages/syft/execution/state.py in share_(self, *args, **kwargs)
    100         for tensor in self.tensors():
    101             self.create_grad_if_missing(tensor)
--> 102             tensor.share_(*args, **kwargs)
    103 
    104     def get_(self):

/opt/conda/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/native.py in share_(self, *args, **kwargs)
    896                 kwargs["requires_grad"] = False
    897 
--> 898             shared_tensor = self.child.share_(*args, **kwargs)
    899 
    900             if requires_grad and not isinstance(shared_tensor, syft.PointerTensor):

/opt/conda/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/precision.py in share_(self, *args, **kwargs)
    929         contrary to the classic share version version
    930         """
--> 931         self.child = self.child.share_(*args, no_wrap=True, **kwargs)
    932         return self
    933 

/opt/conda/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/native.py in share_(self, *args, **kwargs)
    904             return self
    905         else:
--> 906             return self.share(*args, **kwargs)  # TODO change to inplace
    907 
    908     def combine(self, *pointers):

/opt/conda/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/native.py in share(self, field, crypto_provider, requires_grad, no_wrap, *owners)
    875                 )
    876                 .on(self.copy(), wrap=False)
--> 877                 .init_shares(*owners)
    878             )
    879 

/opt/conda/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/additive_shared.py in init_shares(self, *owners)
    198         shares_dict = {}
    199         for share, owner in zip(shares, owners):
--> 200             share_ptr = share.send(owner, **no_wrap)
    201             shares_dict[share_ptr.location.id] = share_ptr
    202 

/opt/conda/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/native.py in send(self, inplace, user, local_autograd, preinitialize_grad, no_wrap, garbage_collect_data, *location)
    417                 local_autograd=local_autograd,
    418                 preinitialize_grad=preinitialize_grad,
--> 419                 garbage_collect_data=garbage_collect_data,
    420             )
    421 

/opt/conda/lib/python3.7/site-packages/syft/workers/base.py in send(self, obj, workers, ptr_id, garbage_collect_data, create_pointer, **kwargs)
    385 
    386         # Send the object
--> 387         self.send_obj(obj, worker)
    388 
    389         # If we don't need to create the pointer

/opt/conda/lib/python3.7/site-packages/syft/workers/base.py in send_obj(self, obj, location)
    678                 receive the object.
    679         """
--> 680         return self.send_msg(ObjectMessage(obj), location)
    681 
    682     def request_obj(

/opt/conda/lib/python3.7/site-packages/syft/workers/base.py in send_msg(self, message, location)
    285 
    286         # Step 2: send the message and wait for a response
--> 287         bin_response = self._send_msg(bin_message, location)
    288 
    289         # Step 3: deserialize the response

/opt/conda/lib/python3.7/site-packages/syft/workers/virtual.py in _send_msg(self, message, location)
     13             sleep(self.message_pending_time)
     14 
---> 15         return location._recv_msg(message)
     16 
     17     def _recv_msg(self, message: bin) -> bin:

/opt/conda/lib/python3.7/site-packages/syft/workers/websocket_client.py in _recv_msg(self, message)
    103             if not self.ws.connected:
    104                 raise RuntimeError(
--> 105                     "Websocket connection closed and creation of new connection failed."
    106                 )
    107         return response

RuntimeError: Websocket connection closed and creation of new connection failed.

In the docker-compose log I get the following; however, I am not sure whether this error occurs at the same time:

bob_1      | Traceback (most recent call last):
bob_1      |   File "/usr/local/lib/python3.7/site-packages/gevent/pywsgi.py", line 976, in handle_one_response
bob_1      |     self.run_application()
bob_1      |   File "/usr/local/lib/python3.7/site-packages/geventwebsocket/handler.py", line 75, in run_application
bob_1      |     self.run_websocket()
bob_1      |   File "/usr/local/lib/python3.7/site-packages/geventwebsocket/handler.py", line 52, in run_websocket
bob_1      |     list(self.application(self.environ, lambda s, h, e=None: []))
bob_1      |   File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2463, in __call__
bob_1      |     return self.wsgi_app(environ, start_response)
bob_1      |   File "/usr/local/lib/python3.7/site-packages/flask_sockets.py", line 45, in __call__
bob_1      |     handler(environment, **values)
bob_1      |   File "/app/app/main/events/__init__.py", line 57, in socket_api
bob_1      |     response = route_requests(message)
bob_1      |   File "/app/app/main/events/__init__.py", line 37, in route_requests
bob_1      |     return forward_binary_message(message)
bob_1      |   File "/app/app/main/auth/__init__.py", line 60, in wrapped
bob_1      |     return f(*args, **kwargs)
bob_1      |   File "/app/app/main/events/syft_events.py", line 27, in forward_binary_message
bob_1      |     decoded_response = current_user.worker._recv_msg(message)
bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/workers/virtual.py", line 19, in _recv_msg
bob_1      |     return self.recv_msg(message)
bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/workers/base.py", line 314, in recv_msg
bob_1      |     msg = sy.serde.deserialize(bin_message, worker=self)
bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/serde/serde.py", line 69, in deserialize
bob_1      |     return strategy(binary, worker)
bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/serde/msgpack/serde.py", line 381, in deserialize
bob_1      |     return _deserialize_msgpack_simple(simple_objects, worker)
bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/serde/msgpack/serde.py", line 372, in _deserialize_msgpack_simple
bob_1      |     return _detail(worker, simple_objects)
bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/serde/msgpack/serde.py", line 472, in _detail
bob_1      |     return detailers[obj[0]](worker, obj[1], **kwargs)
bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/messaging/message.py", line 252, in detail
bob_1      |     return ObjectMessage(sy.serde.msgpack.serde._detail(worker, msg_tuple[0]))
bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/serde/msgpack/serde.py", line 472, in _detail
bob_1      |     return detailers[obj[0]](worker, obj[1], **kwargs)
bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/serde/msgpack/torch_serde.py", line 192, in _detail_torch_tensor
bob_1      |     ) = tensor_tuple
bob_1      | ValueError: not enough values to unpack (expected 9, got 7)
bob_1      | 2020-04-18T16:19:18Z {'REMOTE_ADDR': '172.22.0.8', 'REMOTE_PORT': '60776', 'HTTP_HOST': 'bob:3000', (hidden keys: 26)} failed with ValueError
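The `ValueError` in the node log is the classic symptom of a msgpack serde version mismatch: the node's detailer unpacks a fixed-width tuple, while a client running a different PySyft version serializes tensors with a different number of fields. A minimal sketch of the failure mode (the field names below are hypothetical, not PySyft's actual wire format):

```python
def detail_torch_tensor_v2(tensor_tuple):
    # Newer detailer: expects a nine-field tensor tuple.
    (tensor_id, tensor_bin, chain, grad_chain,
     tags, description, serializer, requires_grad, grad_fn) = tensor_tuple
    return tensor_id

# An older client serializes only seven fields, so the unpack fails
# exactly as in the node log above.
old_wire_format = tuple(range(7))
try:
    detail_torch_tensor_v2(old_wire_format)
except ValueError as err:
    print(err)  # not enough values to unpack (expected 9, got 7)
```

This is why matching PySyft versions on both sides of the websocket resolves the error.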

Here is the docker-compose file I used:

version: '3'
services:
    gateway:
        image: openmined/grid-gateway:latest
        build: .
        environment:
            - PORT=5000
            - SECRET_KEY=ineedtoputasecrethere
            - DATABASE_URL=sqlite:///databasegateway.db
        ports:
        - 5000:5000
    redis:
        image: redis:latest
        volumes:
            - ./redis-data:/data
        expose:
        - 6379
        ports:
        - 6379:6379
    jupyter:
        image: openmined/pysyft-notebook
        environment:
            - WORKSPACE_DIR=/root
        volumes:
            - .:/root
        depends_on:
            - "gateway"
            - "redis"
            - "bob"
            - "alice"
            - "bill"
            - "james"
        entrypoint: ["jupyter", "notebook", "--allow-root", "--ip=0.0.0.0", "--port=8888", "--notebook-dir=/root"]
        expose:
        - 8888
        ports:
        - 8888:8888
    bob:
        image: openmined/grid-node:latest
        environment:
            - GRID_NETWORK_URL=http://gateway:5000
            - ID=Bob
            - ADDRESS=http://bob:3000/
            - REDISCLOUD_URL=redis://redis:6379
            - PORT=3000
        depends_on:
            - "gateway"
            - "redis"
        expose:
            - 3000
        ports:
        - 3000:3000
    alice:
        image: openmined/grid-node:latest
        environment:
            - GRID_NETWORK_URL=http://gateway:5000
            - ID=Alice
            - ADDRESS=http://alice:3001/
            - REDISCLOUD_URL=redis://redis:6379
            - PORT=3001
        depends_on:
            - "gateway"
            - "redis"
        expose:
            - 3001
        ports:
        - 3001:3001
    bill:
        image: openmined/grid-node:latest
        environment:
            - GRID_NETWORK_URL=http://gateway:5000
            - ID=Bill
            - ADDRESS=http://bill:3002/
            - REDISCLOUD_URL=redis://redis:6379
            - PORT=3002
        depends_on:
            - "gateway"
            - "redis"
        expose:
            - 3002
        ports:
        - 3002:3002
    james:
        image: openmined/grid-node:latest
        environment:
            - GRID_NETWORK_URL=http://gateway:5000
            - ID=James
            - ADDRESS=http://james:3003/
            - REDISCLOUD_URL=redis://redis:6379
            - PORT=3003
        depends_on:
            - "gateway"
            - "redis"
        expose:
            - 3003
        ports:
        - 3003:3003

To Reproduce
Steps to reproduce the behavior:

  1. Run the docker-compose file
  2. Launch a JupyterLab-notebook locally
  3. Test the different tutorials

Expected behavior
Successful execution of the jupyter notebooks

Desktop (please complete the following information):

  • OS: Ubuntu 18.04

Additional context
In the future we want to migrate the whole docker-compose setup to Kubernetes.

@IonesioJunior
Member

Thanks for reporting this @KadKla! I'll check it.

@IonesioJunior IonesioJunior self-assigned this Apr 18, 2020
@IonesioJunior
Member

Ohh, wait! Could you check whether your PySyft lib is on the latest version? Most of our errors in the serde package are incompatibilities between Syft library versions on the PyGrid instance and in the user environment.
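One quick way to run such a check is to compare the pins in the client environment against what each grid-node image ships. A rough sketch (the package list is illustrative; `importlib.metadata` needs Python 3.8+, so on the Python 3.7 images `pkg_resources` would be the equivalent):

```python
from importlib import metadata

def installed_version(package):
    """Return the installed version string, or None if the package is absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# Packages whose versions must match between client and nodes
# (illustrative pins; compare against your own requirements.txt):
expected = {"syft": "0.2.4", "syft-proto": "0.2.5a1", "msgpack": "1.0.0"}
for name, want in expected.items():
    have = installed_version(name)
    flag = "OK" if have == want else f"MISMATCH (installed: {have})"
    print(f"{name}=={want}: {flag}")
```

Running the same snippet inside a node container (`docker-compose exec bob python ...`) and locally makes any drift obvious.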

@KadKla
Author

KadKla commented Apr 18, 2020

Nice! Sure, here is my requirements.txt for the environment, from where I start my JupyterLab:

attrs==19.3.0
backcall==0.1.0
bleach==3.1.4
certifi==2020.4.5.1
chardet==3.0.4
click==7.1.1
decorator==4.4.2
defusedxml==0.6.0
entrypoints==0.3
Flask==1.1.2
Flask-SocketIO==4.2.1
idna==2.8
importlib-metadata==1.6.0
ipykernel==5.2.1
ipython==7.13.0
ipython-genutils==0.2.0
ipywidgets==7.5.1
itsdangerous==1.1.0
jedi==0.17.0
Jinja2==2.11.2
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client==6.1.3
jupyter-console==6.1.0
jupyter-core==4.6.3
lz4==3.0.2
MarkupSafe==1.1.1
mistune==0.8.4
msgpack==1.0.0
nbconvert==5.6.1
nbformat==5.0.5
notebook==6.0.3
numpy==1.18.2
pandocfilters==1.4.2
parso==0.7.0
pexpect==4.8.0
phe==1.4.0
pickleshare==0.7.5
Pillow==6.2.2
prometheus-client==0.7.1
prompt-toolkit==3.0.5
protobuf==3.11.3
ptyprocess==0.6.0
Pygments==2.6.1
pyrsistent==0.16.0
python-dateutil==2.8.1
python-engineio==3.12.1
python-socketio==4.5.1
pyzmq==19.0.0
qtconsole==4.7.2
QtPy==1.9.0
requests==2.22.0
scipy==1.4.1
Send2Trash==1.5.0
six==1.14.0
syft==0.2.4
syft-proto==0.2.5a1
tblib==1.6.0
terminado==0.8.3
testpath==0.4.4
torch==1.4.0
torchvision==0.5.0
tornado==4.5.3
traitlets==4.3.3
urllib3==1.25.8
wcwidth==0.1.9
webencodings==0.5.1
websocket-client==0.57.0
websockets==8.1
Werkzeug==1.0.1
widgetsnbextension==3.5.1
zipp==3.1.0

@santteegt

@IonesioJunior do you mean the latest release of PySyft (v0.2.4) or the latest changes so we should install it from source?

@IonesioJunior
Member

I mean that the grid-node Docker images are out of date (they were built two weeks ago). This causes an incompatibility between the library versions used by those images and your Python environment.

@santteegt

I'm trying to build the Docker images locally, but it seems the README instructions are a bit outdated. I'm not sure how to build the grid-node, as there's only one Dockerfile at the root, which seems to correspond to the node gateway.

@thiessl

thiessl commented Apr 22, 2020

Hi all,

I have the same problem. Recently, I tested the example from https://github.com/OpenMined/PySyft/tree/master/examples/tutorials/grid/federated_learning/mnist with the latest grid-node and gateway images from Docker Hub (4/21/2020). It still resulted in: "RuntimeError: Websocket connection closed and creation of new connection failed."

Thanks in advance for any support!

@IonesioJunior
Member

Yeah @santteegt! Actually, to build the grid nodes you should use the Dockerfile in the GridNode repository.

I'm working on it right now. Thank you, folks, for reporting this issue!

An update should be announced soon. :)

@IonesioJunior
Member

IonesioJunior commented Apr 24, 2020

@KadKla , @santteegt , @thiessl
Hello everyone, I have been investigating the error you reported and have some results to share:

  • Errors were found during the inference process on encrypted models (Plans in PySyft have been updated); I will open a new issue to update them in PyGrid as well.

  • Errors in the Redis persistence module have also been detected (I temporarily removed the use of Redis from our images, but the platform remains functional in a non-persistent mode). I will fix this problem in upcoming PRs.

  • Reducing the size of the images: much of the time spent investigating this problem went into building the images (which were unnecessarily large), so I took the opportunity to create smaller images, cutting them from 3GB to 1GB.

    • Now we have a lightweight PySyft base image (openmined/pysyft-lite)
    • Once the openmined/pysyft-lite image is built, we can use it as a base for building the grid-node and grid-gateway Docker images.
  • Update on the sample Grid notebooks: the notebooks were out of date, so I took the opportunity to update them while testing. They all seem to work.

  • Update on the sample PySyft notebooks: these were also out of date and have been updated; they all seem to work (except Encrypted MLaaS, as mentioned before). I'll submit a PR fixing those notebooks ASAP.

Related PR's: #549
If possible, could you review/test our new images/notebooks? I would love to receive your feedback.

Thank you in advance.

@santteegt

Hi @IonesioJunior,

Awesome! Looking forward to the next PRs. After reviewing and testing the changes you've introduced so far, here are some comments from my side:

  • Docker Images are now more lightweight (~1.5GB) and faster to build. Kudos for this optimization
  • I was able to execute most of the notebooks that implement PyGrid use cases by installing PySyft from source. If you use the latest release (0.2.4), it will still raise the RuntimeError: Websocket connection closed and creation of new connection failed. exception when trying to execute either the grid.serve_model or the tensor.send method
  • When trying to run encrypted inference (you've already identified this issue) on the Part 02 - Grid as a Secure MLaaS to Cloud Providers notebook, I got the following error:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-4-e72cafd212c8> in <module>
      9 receive the mpc results and aggregate it, returning the inference's result.
     10 '''
---> 11 result = cloud_grid_service.run_remote_inference("convnet", user_input_data, mpc=True)# If mpc flag is False, It will send your real data to the platform.
     12 print("Inference's result: ", result) # ( [2.0, 4.0] * [5.0, 3.0] ) + [1000] = [1022]

~/openmined/PySyft/syft/grid/public_grid.py in run_remote_inference(self, id, data, mpc)
    100             return self._run_unencrypted_inference(id, data)
    101         else:
--> 102             return self._run_encrypted_inference(id, data)
    103 
    104     def _serve_unencrypted_model(

~/openmined/PySyft/syft/grid/public_grid.py in _run_encrypted_inference(self, id, data, copy)
    271         """
    272         # Get model's host / mpc shares
--> 273         host_node, smpc_workers, crypto_provider = self._query_encrypted_models(id)
    274 
    275         # Share your dataset to same SMPC Workers

~/openmined/PySyft/syft/grid/public_grid.py in _query_encrypted_models(self, id)
    212             # Host of encrypted plan
    213             node_id = list(match_nodes.keys())[0]  # Get the first one
--> 214             node_address = match_nodes[node_id]["address"]
    215 
    216             # Workers with SMPC parameters tensors

TypeError: string indices must be integers
  • I'm still unable to run the Part 03 - Grid applied to Smart Cities and Smart Homes notebook. The following exception is thrown when calling the smart_city_aggregation method:
RuntimeError                              Traceback (most recent call last)
<ipython-input-7-ebad910c1f1d> in <module>
     12 
     13 energy_spent_by_houses = [ sum_energy_spent_by_home(home_id, results) for home_id in results.keys() ]
---> 14 total_spend = reduce(lambda x, y: smart_city_aggregation(x,y), energy_spent_by_houses)

<ipython-input-7-ebad910c1f1d> in <lambda>(x, y)
     12 
     13 energy_spent_by_houses = [ sum_energy_spent_by_home(home_id, results) for home_id in results.keys() ]
---> 14 total_spend = reduce(lambda x, y: smart_city_aggregation(x,y), energy_spent_by_houses)

<ipython-input-7-ebad910c1f1d> in smart_city_aggregation(x, y)
      9     print("Sending X(", x.location, ") to Y(", y.location, ").")
     10     x.location.connect_nodes(y.location)
---> 11     return x.move(y.location) + y
     12 
     13 energy_spent_by_houses = [ sum_energy_spent_by_home(home_id, results) for home_id in results.keys() ]

~/openmined/PySyft/syft/frameworks/torch/tensors/interpreters/native.py in move(self, location, requires_grad)
    702             A pointer to the worker location
    703         """
--> 704         self.child = self.child.move(location, requires_grad)
    705         # We get the owner from self.child because the owner of a wrapper is
    706         # not reliable and sometimes end up being the syft.local_worker

~/openmined/PySyft/syft/generic/pointers/pointer_tensor.py in move(self, destination, requires_grad)
    280             return self.get()
    281 
--> 282         ptr = self.remote_send(destination, requires_grad=requires_grad)
    283 
    284         # We make the pointer point at the remote value. As the id doesn't change,

~/openmined/PySyft/syft/generic/pointers/pointer_tensor.py in remote_send(self, destination, requires_grad)
    304             self.id_at_location, self.location.id, [destination.id], kwargs_
    305         )
--> 306         self.owner.send_msg(message=message, location=self.location)
    307         return self
    308 

~/openmined/PySyft/syft/workers/base.py in send_msg(self, message, location)
    285 
    286         # Step 2: send the message and wait for a response
--> 287         bin_response = self._send_msg(bin_message, location)
    288 
    289         # Step 3: deserialize the response

~/openmined/PySyft/syft/workers/virtual.py in _send_msg(self, message, location)
     13             sleep(self.message_pending_time)
     14 
---> 15         return location._recv_msg(message)
     16 
     17     def _recv_msg(self, message: bin) -> bin:

~/openmined/PySyft/syft/workers/websocket_client.py in _recv_msg(self, message)
    103             if not self.ws.connected:
    104                 raise RuntimeError(
--> 105                     "Websocket connection closed and creation of new connection failed."
    106                 )
    107         return response

RuntimeError: Websocket connection closed and creation of new connection failed.
  • Other notebooks can be executed with no issues
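For reference, the `TypeError: string indices must be integers` in `_query_encrypted_models` above is the usual signature of indexing a raw string where a parsed dict was expected, e.g. a JSON response body that was never deserialized. A minimal reproduction of the pattern (the payload below is hypothetical, not the actual gateway response):

```python
import json

response_body = '{"bob": {"address": "http://bob:3000"}}'

# Bug pattern: the response is still a str, so indexing with a
# string key raises the TypeError seen in the traceback above.
match_nodes = response_body
try:
    match_nodes["bob"]["address"]
except TypeError as err:
    print(err)  # the payload was never parsed into a dict

# After deserializing, the same lookup works as intended.
match_nodes = json.loads(response_body)
print(match_nodes["bob"]["address"])
```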

@thiessl

thiessl commented May 7, 2020

Hi @IonesioJunior,

After building the Docker images from the newest version and using PySyft 0.2.5, there is no runtime error anymore! Thanks for fixing!

However, grid.search(...) currently does not return anything in the tutorials. I commented out the Redis URLs (environment variables) in the docker-compose file, as you suggested in the Slack channel.
Looking forward to using the persistent version with Redis.
Thanks in advance for fixing this!

@eric-yates
Contributor

I'm seeing this same issue. How do I update the node container to PySyft 0.2.5? Ideally, I would still use docker-compose for simplicity.

@eric-yates
Contributor

Never mind, it looks like the grid-node container was updated just 4 hours ago to include PySyft 0.2.5! For anyone else who has this problem, you'll need to make sure docker-compose is using this updated image. Run this to do so:

docker-compose pull && docker-compose up

@KCC13

KCC13 commented May 11, 2020

Hi @IonesioJunior,

Thank you for your great efforts. After updating the Docker images and PySyft to the latest version, the PyGrid examples can now be executed successfully.

However, as some of the comments above mentioned, I also failed to run the second and third grid examples in PySyft. The error messages I encountered are the same as those in @santteegt's comment, but here I want to further provide the error messages shown in the grid-node when running cloud_grid_service.run_remote_inference() in Part 02 - Grid as a Secure MLaaS to Cloud Providers.ipynb.

bob_1      | [2020-05-11 11:48:19,315] ERROR in app: Exception on /search-encrypted-models [POST]
bob_1      | Traceback (most recent call last):
bob_1      |   File "/root/.local/lib/python3.7/site-packages/flask_sockets.py", line 40, in __call__
bob_1      |     handler, values = adapter.match()
bob_1      |   File "/root/.local/lib/python3.7/site-packages/werkzeug/routing.py", line 1784, in match
bob_1      |     raise NotFound()
bob_1      | werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
bob_1      | 
bob_1      | During handling of the above exception, another exception occurred:
bob_1      | 
bob_1      | Traceback (most recent call last):
bob_1      |   File "/root/.local/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
bob_1      |     response = self.full_dispatch_request()
bob_1      |   File "/root/.local/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
bob_1      |     rv = self.handle_user_exception(e)
bob_1      |   File "/root/.local/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
bob_1      |     reraise(exc_type, exc_value, tb)
bob_1      |   File "/root/.local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
bob_1      |     raise value
bob_1      |   File "/root/.local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
bob_1      |     rv = self.dispatch_request()
bob_1      |   File "/root/.local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
bob_1      |     return self.view_functions[rule.endpoint](**req.view_args)
bob_1      |   File "/root/.local/lib/python3.7/site-packages/flask_cors/decorator.py", line 128, in wrapped_function
bob_1      |     resp = make_response(f(*args, **kwargs))
bob_1      |   File "/app/app/main/routes.py", line 223, in search_encrypted_models
bob_1      |     for state_id in model.state.state_ids:
bob_1      | AttributeError: 'State' object has no attribute 'state_ids'

Hope this helps. Thanks again for your efforts.
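The `AttributeError` above arises because older PySyft exposed `State.state_ids` while newer versions dropped it (the earlier tracebacks show `State.tensors()` instead). Until a fix lands, a version-tolerant accessor like the sketch below can bridge the rename; the classes here are stand-ins, not PySyft's real ones:

```python
# Stand-in for the older State API, which exposed state_ids.
class OldState:
    state_ids = [11, 22, 33]

# Stand-in for the newer State API, which exposes tensors() instead.
class NewState:
    def tensors(self):
        return ["t0", "t1"]

def state_entry_count(state):
    """Count state entries regardless of which API the State object has."""
    ids = getattr(state, "state_ids", None)
    if ids is not None:
        return len(ids)
    return len(state.tensors())

print(state_entry_count(OldState()))  # 3
print(state_entry_count(NewState()))  # 2
```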

kuronosec added a commit to kuronosec/GridNode that referenced this issue May 26, 2020
It fixes the following error:
AttributeError: 'State' object has no attribute 'state_ids'
when trying to run a remote inference.

refs #OpenMined/PyGrid-deprecated---see-PySyft-#543
@kuronosec

Hi guys, thanks for your amazing work in OpenMined! I sent a commit with a proposed fix for this issue: https://github.com/OpenMined/GridNode/pull/14

jmaunon added a commit to jmaunon/GridNode that referenced this issue May 30, 2020
commit 2093fcc
Author: IonesioJunior <ionesiojr@gmail.com>
Date:   Wed May 27 22:36:13 2020 -0300

    Update syft version checking

commit 7bb4914
Author: Ionésio Junior <ionesiojr@gmail.com>
Date:   Wed May 27 21:23:19 2020 -0300

    Update Dockerfile

commit bf17b4f
Author: Ionésio Junior <ionesiojr@gmail.com>
Date:   Wed May 27 17:20:56 2020 -0300

    Update requirements.txt

commit 73f40c9
Author: Ionésio Junior <ionesiojr@gmail.com>
Date:   Wed May 27 17:20:27 2020 -0300

    Update Dockerfile

commit ee3c9c3
Author: kurono <andresgomezram7@gmail.com>
Date:   Tue May 26 18:41:26 2020 +0000

    fixes state_ids error when using PySyft v0.2.5. (#14)

    It fixes the following error:
    AttributeError: 'State' object has no attribute 'state_ids'
    when trying to run a remote inference.

    refs #OpenMined/PyGrid-deprecated---see-PySyft-#543

    Co-authored-by: kurono <kurono@riseup.net>