I am testing Bree on AWS. The problem I am having is that when I roll out my code with more than 2 workers, my VM crashes. The machine has 4 GB of memory.
How can I find out how much memory each worker consumes? And is it possible to assign each worker a maximum amount of memory to use?
Kinda stuck on what to do next. The code running in the workers connects to a database and does some reads and writes. Could it be that I should release the database handle after the worker is done?
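In case it helps frame the question, this is roughly what I mean by releasing the handle. A sketch of a job that always closes its connection before signaling completion; `createPool` and `doReadsAndWrites` are hypothetical stand-ins for a real driver's calls (e.g. `pg`'s `Pool` and your queries), and posting `'done'` to `parentPort` is how a Bree job tells the main thread it has finished:

```javascript
const { parentPort } = require('node:worker_threads');

// Hypothetical stand-ins for a real DB driver:
let poolClosed = false;
const createPool = () => ({ end: async () => { poolClosed = true; } });
const doReadsAndWrites = async (pool) => { /* SELECTs and INSERTs here */ };

async function run() {
  const pool = createPool();
  try {
    await doReadsAndWrites(pool);
  } finally {
    // Release the DB handle even if the job throws, so the worker
    // holds no open connections when it exits.
    await pool.end();
  }
  // Signal the main thread that the job finished so it can clean up
  // this worker (parentPort is null when run outside a worker).
  if (parentPort) parentPort.postMessage('done');
}

run();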
In general, here is how I understood it: I set a job to execute every 1 min. When the worker completes the job, it signals the main thread that it is done, and the main thread then kills the worker thread. After 1 min it starts all over: the job executes in a newly spawned worker thread, and after the done signal that thread is killed. Is this what is happening?
In my case I have 2 jobs running in parallel, each repeating every 1 min. Do worker threads allocate the maximum available memory when spawned?
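Here is my setup as a sketch, assuming Bree's documented job options (`interval`, plus a per-job `worker` object that Bree passes through to the `Worker` constructor); the `resourceLimits` line ties back to my memory-cap question above and its value is illustrative:

```javascript
const Bree = require('bree');

const bree = new Bree({
  jobs: [
    // Each run spawns a fresh worker thread for the job, which is torn
    // down when the job signals completion or exits.
    { name: 'job-a', interval: '1m' },
    {
      name: 'job-b',
      interval: '1m',
      // Per-job worker options, passed to the Worker constructor;
      // this would cap this job's heap (value is illustrative):
      worker: { resourceLimits: { maxOldGenerationSizeMb: 256 } },
    },
  ],
});

bree.start();
```

(This assumes `jobs/job-a.js` and `jobs/job-b.js` exist in the default `jobs` root.)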