Use therubyracer or node.js ExecJS backend in production? #222

Closed
tomtaylor opened this issue Mar 25, 2015 · 13 comments
@tomtaylor

The README used to mention that a "high performance in process JS VM like therubyracer" should be used with react-rails, but I notice this has been removed in recent revisions. I'm currently running an app on Heroku and the memory usage is pretty high, something I attribute to therubyracer.

Does anyone have any experience of using react-rails in production with the node.js ExecJS backend? Is there a reason why this note was removed from the README? Thanks!
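
A minimal sketch of how the backend gets picked, assuming standard ExecJS runtime detection; the Gemfile line and environment variable below are illustrative, not something from this thread:

    # Gemfile: having the gem present is enough for ExecJS to prefer
    # the in-process V8 runtime over an external node binary.
    gem 'therubyracer', platforms: :ruby

    # Without it, ExecJS falls back to whatever runtime it detects,
    # e.g. a `node` executable on the PATH. A specific runtime can also
    # be forced through the environment:
    #   EXECJS_RUNTIME=Node bundle exec rails server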

@rmosolgo
Member

I removed it after reading this note: https://devcenter.heroku.com/articles/rails-asset-pipeline#therubyracer

@tomtaylor
Author

Thanks. Have you noticed any performance issues on Heroku?


@rmosolgo
Member

Sorry, I don't use heroku and I don't use prerender! I just wanted to take that warning into consideration.

This guy might have more to say: #156

Sorry I don't have much!

@bowd
Contributor

bowd commented Mar 25, 2015

@tomtaylor I've been using react-rails with pre-rendering on Heroku for a few months now. The sad part is I've been having memory-related performance issues with Ruby 2.1.2 for a while, so I didn't have a clean slate to compare against after adding therubyracer, but I don't think the memory usage changed dramatically (though with it being bad to begin with, it's hard to say).

One thing I can say for sure is that the "node.js" backend (which I used at first, before going live) is much, much slower. After I looked into it, it made sense to me, but I might be wrong, so take this with a grain of salt:

The way react-rails works is by keeping a pool of ExecJS JavaScript contexts to use for rendering. These are (usually) warmed up by evaluating the component.js you pass in, and then the React.renderToString calls are evaluated in those warmed-up contexts. The thing is, with therubyracer, because it's a JS VM inside Ruby, the ExecJS contexts actually stay warm in memory. But with node.js as a backend, each time you render a component it actually re-evaluates all the JavaScript from component.js, because the ExecJS context defers to an actual node process each time and parses the results. So it can't keep those warmed contexts in which it continuously evaluates new render calls.

Again, this might be down to something wrong with my setup, but I remember following the code all the way through back then. With the node backend I was spending ~5s rendering views, whereas with therubyracer it was ~150ms.
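
A rough sketch of the pattern described above, using the public ExecJS API; the bundle path, component name, and render call are placeholders rather than react-rails internals:

    require "execjs"

    # Compile the component bundle once. react-rails keeps a pool of
    # contexts like this around for server rendering.
    source  = File.read("app/assets/javascripts/components.js")
    context = ExecJS.compile(source)

    # With an in-process runtime (therubyracer), `context` wraps a live
    # V8 context, so repeated evals reuse the already-parsed bundle.
    html = context.eval("React.renderToString(React.createElement(MyWidget, {}))")

    # With the external node runtime there is no long-lived process to
    # hold that state: each eval writes the source back out and shells
    # out to `node`, re-parsing component.js on every call.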

@rmosolgo
Member

wow, so maybe I was wrong to remove that suggestion from the readme?


@bowd
Contributor

bowd commented Mar 25, 2015

@rmosolgo Like I said, I might have been wrong about this, but that's my understanding right now, and the result of the last time I investigated it. If anybody knows whether ExecJS can keep a warm node context in memory, I'm all ears 🍰

@tomtaylor
Author

Thanks @bogdan-dumitru, that's really useful to know. The high memory usage we're seeing with therubyracer isn't causing any particular performance problems; I was just surprised at how much it uses. We're using Ruby 2.2.1.

(We'd like to run more processes in each dyno if possible, but each process seems to stabilise around ~300MB at the moment, so we can't run more than one in a 1X dyno (512MB max). Switching to 2X dynos might help, and might give us enough room to run 3 worker processes in each, saving overall.)

@bowd
Contributor

bowd commented Mar 25, 2015

@tomtaylor you might also wanna fiddle with the number of renderers react-rails keeps in the pool ('cause that should have a linear impact on the extra memory). So if you're running a single process with X threads, you don't need more than X renderers, and I think the default is 10.

@tomtaylor
Author

@bogdan-dumitru yeah, we're using 5 threads in Puma, and max_renderers is set to that too.
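
A sketch of how those two settings line up; the file locations are assumptions, and the exact react-rails config key can differ between versions (max_renderers is the name used above):

    # config/puma.rb: one process with 5 threads, matching the numbers above.
    workers 1
    threads 5, 5

    # config/initializers/react_rails.rb (assumed location): keep the
    # renderer pool no bigger than the thread count, since each extra
    # warmed-up context costs memory without ever being used.
    Rails.application.config.react.max_renderers = 5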

@glittershark
Contributor

To add my anecdotal evidence to the conversation: we just deployed a change to our production (Heroku) app adding prerendering for all of our react-rails components. Load time shot up to 35 seconds (yes, seconds) on pages with a lot of React components, and we had to roll it back. Going to try this out with therubyracer and see if it's still the case.
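
For reference, prerendering is opt-in per helper call, so it can be rolled back selectively; the component name and props below are placeholders:

    # In a view (written here as plain helper calls; in ERB each would
    # sit inside <%= ... %>).

    # Server-rendered: goes through the ExecJS backend discussed above.
    react_component('CommentList', { comments: @comments }, prerender: true)

    # Client-side only: skips the ExecJS render entirely.
    react_component('CommentList', { comments: @comments })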

@rmosolgo
Member

Check out #290 for some comparisons of ExecJS backends & discussion too

@glittershark
Contributor

I'm sure this is covered over there, but I can definitely confirm that using therubyracer cleared the performance problem right up. It doesn't look to be using that much more memory, either.

@rmosolgo
Member

seems like this has been settled :)
