
RPC Server Wrapper #19

Closed · tom-nslt (Contributor) opened this issue Feb 24, 2022 · 1 comment
Describe the solution you'd like
Use a Python RPC library to create an RPC server that automatically exposes all methods of the model implementation it wraps.
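
A minimal sketch of what this could look like, using the standard library's xmlrpc.server; the ModelImplementation class and its methods are placeholders, not the project's actual names:

```python
from xmlrpc.server import SimpleXMLRPCServer


class ModelImplementation:
    """Placeholder for the wrapped model implementation."""

    def predict(self, data):
        return {"prediction": data}

    def info(self):
        return {"name": "demo-model"}


def main():
    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    # register_instance exposes every public method of the instance,
    # so new model methods are picked up without any extra wiring.
    server.register_instance(ModelImplementation())
    server.register_introspection_functions()
    server.serve_forever()


if __name__ == "__main__":
    main()
```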

Describe alternatives you've considered
We could also consider wrapping the model implementation in an RPC client. However, this would cause significant overhead on the prediction backend as well as considerable traffic, since we would need to constantly poll the prediction backend for available jobs.
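
To illustrate why that alternative is costly, here is a hedged sketch of the polling loop it would require (the backend URL and the get_job/submit_result methods are assumptions, not an existing API):

```python
import time
from xmlrpc.client import ServerProxy

backend = ServerProxy("http://prediction-backend:8000")  # assumed URL

while True:
    job = backend.get_job()  # hypothetical method; fires even when no work exists
    if job:
        backend.submit_result(job["id"], {"prediction": job["data"]})
    time.sleep(1)  # every idle cycle still costs a full round trip
```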

Additional context
This will be part of the next milestone.
Python3 RPC Server

Tasks

  • Load all functions of the underlying model implementation
  • Use TLS
  • Only accept connections from the prediction backend (e.g., verify the prediction backend's public key; a mutual-TLS sketch follows this list)
  • Accept client requests and execute function calls
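
A hedged sketch of the TLS and client-authentication tasks, assuming self-managed certificates at hypothetical paths (server.crt, server.key, backend-ca.crt); requiring a client certificate signed by our own CA is one way to only accept the prediction backend:

```python
import ssl
from xmlrpc.server import SimpleXMLRPCServer

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server.crt", "server.key")  # server identity (assumed paths)
context.load_verify_locations("backend-ca.crt")      # CA that signed the backend's cert
context.verify_mode = ssl.CERT_REQUIRED              # reject clients without a valid cert

server = SimpleXMLRPCServer(("0.0.0.0", 8443), allow_none=True)
# Wrap the listening socket so every accepted connection must complete
# a TLS handshake and present a certificate signed by our CA.
server.socket = context.wrap_socket(server.socket, server_side=True)
# Register the model instance as in the sketch above, then:
server.serve_forever()
```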
tom-nslt added this to the Docker - Orchestration milestone Feb 24, 2022
tom-nslt added a commit that referenced this issue Jul 9, 2022
tom-nslt (Contributor, Author) commented:

We chose not to use RPC for the sake of cleaner code and less overhead.
Instead, we use FastAPI with two separate routes, which are added depending on the settings.node_type variable.
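
A minimal sketch of that setup, assuming a settings object with a node_type field; the router names, paths, and node_type values are illustrative assumptions, not the project's actual ones:

```python
import os

from fastapi import APIRouter, FastAPI


class Settings:
    """Illustrative stand-in for the project's settings object."""
    node_type: str = os.getenv("NODE_TYPE", "model")


settings = Settings()

model_router = APIRouter(prefix="/model")
backend_router = APIRouter(prefix="/backend")


@model_router.post("/predict")
def predict(payload: dict):
    # Forward to the wrapped model implementation here.
    return {"prediction": payload}


@backend_router.get("/jobs")
def jobs():
    return {"jobs": []}


app = FastAPI()
# Only the route set matching this node's role is mounted.
if settings.node_type == "model":
    app.include_router(model_router)
else:
    app.include_router(backend_router)
```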

It might be possible to implement the middleware approach to routing again, if an alternative way of consuming the request body (encode/starlette#495) is found. However, the two-route setup makes the code easier to understand, which might also be preferable.

Closing, as this was done in 84b37a8 and ultimately finished in #24.

TLS should be handled by a TLS termination proxy (see https://fastapi.tiangolo.com/deployment/https/).
