Rearchitecturing the backend evaluation #439
This will also fix #367.
There would be two or more processes. This would be the lifecycle for each process:

```mermaid
flowchart TD
    P[process created] --> S[Starting] -->|Start result received| F[Free] -->|incoming code| E[Executing] -->|Result received OR incoming code| S
```
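That lifecycle can be sketched as a tiny state machine. This is only an illustration of the flowchart above; the names `ProcessState` and `ManagedProcess` are hypothetical, not from the AREPL codebase:

```javascript
// Hypothetical sketch of the per-process lifecycle in the flowchart.
const ProcessState = {
  STARTING: "Starting",
  FREE: "Free",
  EXECUTING: "Executing",
};

class ManagedProcess {
  constructor() {
    // process created -> Starting
    this.state = ProcessState.STARTING;
  }
  onStartResult() {
    // Start result received -> Free
    this.state = ProcessState.FREE;
  }
  execCode(code) {
    // incoming code while Free -> Executing
    this.state = ProcessState.EXECUTING;
  }
  onResultOrNewCode() {
    // Result received OR more incoming code -> restart, back to Starting
    this.state = ProcessState.STARTING;
  }
}
```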
Not necessarily in order:
Note that restarts are not immediate. The AREPL backend gives a slight delay before force-killing to let any end handlers finish up. Another diagram: in this scenario three processes start, code comes in, that code finishes, more code comes in, and then further code comes in while the previous code is still executing:
So that's the theory, but how to implement it? I could set a setInterval, checking the executors every X milliseconds until one of them is free. New code would clear the interval and set a new one. Or maybe I could somehow use https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/any? The problem is I'm dealing with several different functions; the handleResult function is different from the onUserInput function. I suppose I could refactor the backend to return a promise when execCode is started, and store that promise somewhere. Then, something like:

```js
if (freeProcesses) {
    exec()
} else {
    Promise.any([startingPromise, executingPromise]).then(process => process.exec(code))
}
```

The first solution seems simpler.
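The first (polling) solution could be sketched like this. The names `executors`, `isFree`, and `execCode` are hypothetical stand-ins for whatever the backend actually exposes, and the 50 ms interval is arbitrary:

```javascript
// Sketch of the polling approach: check periodically for a free executor.
// Newer incoming code cancels any pending poll and starts its own.
let pendingPoll = null;

function runWhenFree(executors, code) {
  if (pendingPoll !== null) clearInterval(pendingPoll); // newer code wins
  const free = executors.find((e) => e.isFree());
  if (free) {
    free.execCode(code); // fast path: someone is free right now
    return;
  }
  pendingPoll = setInterval(() => {
    const exec = executors.find((e) => e.isFree());
    if (exec) {
      clearInterval(pendingPoll);
      pendingPoll = null;
      exec.execCode(code);
    }
  }, 50);
}
```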
This reminds me of the elevator problem: https://leetcode.com/discuss/interview-question/object-oriented-design/124936/design-an-elevator-system. It also reminds me of load balancers, although that's actually simpler because you don't have to cancel existing processing when another request comes in.
Any way this can be manually worked around for the time being?
@ben-pelletier which issue are you experiencing?
Is your feature request related to a problem? Please describe.
There are a number of problems that can appear when doing the second run of AREPL. For example, #438 and #436. I've tried adding code in my backend to address these issues, but it adds complexity to the backend and does not fully solve all issues.
Describe the solution you'd like
A better solution would be a double-process approach. When the extension starts, two processes are spawned, A and B. When the debounce finishes, code is passed to A. When A returns, A is restarted. To prevent code from waiting on A's restart, code is passed to B whenever A is running or restarting. The backend starts fast, so by the time the next input rolls around, A will be ready to go. Execution alternates: A, then B, then A, and so on.
This means I can get rid of a lot of backend logic, because each process will start entirely fresh each time. It will also completely eliminate any bugs or cases not handled by said backend logic.
Also note that previously AREPL would throw away input that came in while AREPL was executing. Now AREPL will always show the results from the last input, which is a plus.
Scenarios:
I might be able to simplify the above logic by just tossing enough executors in there that one will always be available. The downside is that this uses more memory (~40 MB per process).
Describe alternatives you've considered
The simplest solution would be to just restart the process on every run. However, I tried it, and it adds to the time you wait for execution. This slowness might be acceptable, but in the absence of user feedback I'm going with my personal feeling, which leans against it.
Additional context
numpy/numpy#16241