Issue with puma concurrency #89
Comments
I think the problem is that ember-cli-rails is creating N Ember instances, one for each worker. While debugging another problem I had, I found that we shouldn't have more than one `ember` watch running over the same directory at a time. Ember CLI seems to use tmp as a place to store some cache-related data. The result is like accessing the same non-thread-safe data set from two threads: a race condition. I don't know the code well enough to say whether there is a way to run one global ember process per folder (similar to how zeus uses a .zeus.sock file) instead of one ember process per Rails worker process. That could be a better solution than the workaround.
That is correct. Honestly, I don't know how to detect a multiple-worker situation properly and consistently. The assumption was that this should only happen in the development environment, and people usually don't run multiple workers in development. Apparently that's false :)
I think the problem is that, sometimes, it's easiest to just put

    gem "webrick", group: :development
    gem "puma", group: :production

But anyway, there is no assumption of just one worker, and in the future there could be a server that uses many workers everywhere, so it would help to cover the situation of accidentally creating many ember processes. What do you think of using an empty lock file to detect whether an ember process is already running?
I've created a new issue to consolidate these two: #94. Let's all move there.
When developing, I get the error:
Workaround:
Found that reducing the max concurrent workers to 1 fixed the issue.
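The workaround can be expressed in Puma's own configuration DSL. This is a sketch assuming the conventional `config/puma.rb` location; the thread count is illustrative.

```ruby
# config/puma.rb — development sketch: cap Puma at a single worker process so
# only one ember process ends up watching the app directory.
workers 1
# Threads share the single worker process, so they don't spawn extra ember
# instances; 1..5 here is an arbitrary example range.
threads 1, 5
```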