How to handle long polling process #497
Thanks Barret for posting to GitHub. I'd appreciate any advice on hosting a plumber API on a Windows server (based on my most recent post above) where it can leverage all available cores to allow parallel calls. Thanks for your help!
I've found that staying within R as much as possible works the smoothest. Having a background R process for each new job can be a blessing and a curse, as you will launch an independent R process for each new job. This can be bad if you launch too many processes. However, you could add queueing logic into your API to prevent machine overload.
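As a rough sketch of that idea (assuming the `callr` package; the endpoint name, cap value, and id scheme are all illustrative, not from this thread):

```r
library(plumber)
library(callr)

jobs <- list()       # running background processes, keyed by job id
job_counter <- 0
MAX_JOBS <- 4        # illustrative cap to prevent machine overload

#* Launch a long-running job in a background R process
#* @post /job
function(res) {
  # prune finished processes before counting running ones
  jobs <<- Filter(function(p) p$is_alive(), jobs)
  if (length(jobs) >= MAX_JOBS) {
    res$status <- 429
    return(list(error = "too many jobs running; try again later"))
  }
  job_counter <<- job_counter + 1
  id <- as.character(job_counter)
  jobs[[id]] <<- r_bg(function() {
    Sys.sleep(30)    # placeholder for the real long-running work
    "done"
  })
  list(job_id = id)
}
```

The `Filter()` call is the crude queueing logic: finished processes are dropped, and new work is refused with a 429 once the cap is reached.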
@shrektan will have more advice here, but I strongly recommend running your plumber instance within Docker. While plumber may work on Windows, we do not actively support it.
Hosting plumber on Windows by using the R session directly is OK. But keep in mind that Docker is natively supported on Windows 10. It may be a better option because Docker containers are easier to manage and scale.
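For the Docker route, a minimal setup might look like the following (the `rstudio/plumber` base image and the `api.R` filename are assumptions; the image's entrypoint runs plumber against the file given as `CMD`):

```dockerfile
FROM rstudio/plumber
COPY api.R /api/api.R
EXPOSE 8000
CMD ["/api/api.R"]
```

Built and run with `docker build -t my-api . && docker run -p 8000:8000 my-api`, this serves the API on port 8000 regardless of the host OS, which is what makes the container easier to manage and scale than a bare Windows R session.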
I don't have enough experience personally with this. Use-case: if an API endpoint accepts some unique ID as a parameter (for which only one execution should occur at a time), then we can store that ID within the main process. If the main R process crashes or is restarted (e.g., an updated deployment), my guess is that the child processes would either be interrupted (killed) or orphaned (output goes nowhere). I'd think that the "orphaned" outcome is a matter of what the background process does: if it works by side-effect (e.g., inserting data into a db), then it will likely do its thing, but nothing is notified on exit unless/until somebody checks to see if the side-effect is done (e.g., queries the db). However, if in the meantime another caller tries to start this API with that same ID, then it is started again. Any thoughts on external IPC? I can see utility in filesystem- or NoSQL-based (Redis?) operations. Thoughts?
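One sketch of the in-process version of that use-case, before reaching for external IPC (assuming `callr`; the route and registry names are illustrative):

```r
library(plumber)
library(callr)

# Registry of in-flight job ids. It lives only in the main R process,
# which is exactly the crash/orphan concern described above: if the
# main process dies, this map is lost while children may keep running.
running <- new.env()

#* Start work for a unique id; only one execution per id at a time
#* @post /run/<id>
function(id, res) {
  p <- running[[id]]
  if (!is.null(p) && p$is_alive()) {
    res$status <- 409
    return(list(error = "job already running for this id"))
  }
  running[[id]] <- r_bg(function(id) {
    # side-effect work, e.g. insert into a db; nothing is notified
    # on exit unless somebody later checks the side-effect
    Sys.sleep(10)
  }, args = list(id = id))
  list(status = "started", id = id)
}
```

An external store (a filesystem lock file, or a Redis key per ID) would make the "already running" check survive a restart of the main process, at the cost of having to expire stale entries when a child dies without cleaning up.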
I believe the original intent was to turn a long execution process into something that can be inspected for a status and result. I believe the trick is to offload the work to someplace other than the main R thread. There are definitely many different approaches and considerations to be aware of when offloading processing to somewhere other than the main R thread (similar to the communication issues that can occur with distributed databases as compared to accessing a single local database). Going to close the issue for now.
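The status/result inspection pattern described above might look like this (a sketch, again assuming `callr` and a `jobs` list populated by a start endpoint such as the one earlier in the thread; error handling kept minimal):

```r
#* Poll the status of a previously started job
#* @get /job/<id>/status
function(id) {
  p <- jobs[[id]]
  if (is.null(p)) return(list(status = "unknown"))
  if (p$is_alive()) list(status = "running") else list(status = "finished")
}

#* Fetch the result once the job has finished
#* @get /job/<id>/result
function(id, res) {
  p <- jobs[[id]]
  if (is.null(p) || p$is_alive()) {
    res$status <- 404
    return(list(error = "result not ready"))
  }
  p$get_result()   # callr returns the background process's return value
}
```

A client then long-polls `/job/<id>/status` until it reports `finished` and fetches `/job/<id>/result` once, keeping the main plumber thread free to serve other requests.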
Copying an email thread here to continue discussion
@mftokic - Oct 7, 2019
@schloerke
@mftokic
@schloerke
`tokic.R`
@mftokic