Sharing model among worker processes #2
This is a question rather than an issue.

I have gone over your code to see whether there is a way to share the model (nlp, prediction, etc.) among worker processes, so that the model is not loaded separately by every worker, and to make use of async definitions (which is a separate subject/problem), but I could not find a solution.

Is there anything you can advise or apply in this skeleton?

Thanks.

Comments
Hi @mehmetilker, when talking about distributed workers, you can load the model as a singleton in each worker. For large models, I recommend prefetching them from object storage (e.g. S3, COS) into memory (e.g. using Redis), so that each worker can load the model quickly at startup. Does this help?
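A minimal sketch of that pattern, assuming the pickled model bytes have already been pushed from S3/COS into Redis (the key name `model:latest`, the Redis location, and the `get_model` helper are illustrative, not part of this skeleton):

```python
import pickle

import redis

REDIS_HOST = "localhost"    # illustrative; point at your Redis instance
MODEL_KEY = "model:latest"  # illustrative key holding the pickled model

_model = None  # per-worker singleton


def get_model():
    """Load the model once per worker process from Redis."""
    global _model
    if _model is None:
        client = redis.Redis(host=REDIS_HOST, port=6379)
        raw = client.get(MODEL_KEY)  # bytes previously prefetched from S3/COS
        if raw is None:
            raise RuntimeError(f"model not found under {MODEL_KEY!r}")
        _model = pickle.loads(raw)
    return _model
```

Each worker then pays one fast Redis fetch instead of a slow object-storage download, but it still ends up holding its own in-memory copy of the model.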
Hi @eightBEC. Using the app's state lets you load an instance once for the whole application lifetime, but that is only meaningful when the application runs on a single worker.
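For reference, a minimal sketch of that app-state pattern in FastAPI (the `model.pkl` path is illustrative, not from this skeleton). Every gunicorn/uvicorn worker runs its own startup handler, so each worker still loads its own copy:

```python
import pickle

from fastapi import FastAPI, Request

app = FastAPI()


@app.on_event("startup")
def load_model():
    # Runs once per worker process, not once per application:
    # every gunicorn/uvicorn worker executes its own startup handler.
    with open("model.pkl", "rb") as f:  # illustrative model path
        app.state.model = pickle.load(f)


@app.get("/predict")
def predict(request: Request, text: str):
    model = request.app.state.model  # reuse this worker's instance
    return {"prediction": str(model.predict([text])[0])}
```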
Your suggested solution (loading from object storage into memory) does not change the situation, I think, if I understood you right. Here is another question on SO that I haven't tried but think is related:
Hi @mehmetilker, have you found a solution for this? I'm facing the same issue here.
@viniciusdsmello No, unfortunately...
@mehmetilker Did the above solution work? If so, can you close this issue?
No. I also haven't worked on the project for a long time. Since a few years have passed, I assume there is now a better way to solve this problem.