Meteor 1.7.1-rc.11 slower than rc.10 due to Node 8.12.0 update. #10216

Closed
p3pp8 opened this Issue Sep 12, 2018 · 58 comments

@p3pp8

p3pp8 commented Sep 12, 2018

I've noticed a huge slowdown of my app after upgrading from the RC.10 release to RC.11.

@abernix abernix changed the title from Meteor 1.7.1-rc.11 Huge slowdown. to Meteor 1.7.1-rc.11 slower than rc.10. Sep 12, 2018

@benjamn benjamn changed the title from Meteor 1.7.1-rc.11 slower than rc.10. to Meteor 1.7.1-rc.11 slower than rc.10 due to Node 8.12.0 update. Sep 12, 2018

benjamn added a commit that referenced this issue Sep 12, 2018

Downgrade Node from 8.12.0 back to 8.11.4, for now.
This minor update was evidently too risky to slip into a release candidate
of Meteor 1.7.1: #10216

You can still use Node 8.12.0 to run your app in production, and thus get
the benefits of #10090, even if it's
not the version used in development.

@benjamn benjamn added this to the Release 1.7.1 milestone Sep 12, 2018

@benjamn benjamn self-assigned this Sep 12, 2018

@veered

Contributor

veered commented Sep 12, 2018

A few questions:
(a) Does your app now have high CPU usage?
(b) Did your app (before the upgrade) already have high CPU usage?
(c) How are you measuring the performance issues?

It would be extremely helpful if you could take a CPU profile before and after the Node upgrade using https://github.com/qualialabs/profile

FWIW, Qualia has been running Node 8.12 RC1 in production for a few weeks and we haven't noticed any performance penalty. However, we aren't using Meteor 1.7.1; we're still on 1.6.0.1 until 1.7.1 is officially released.

@KoenLav

KoenLav commented Sep 12, 2018

Something I did notice, and which might be related: after I upgraded to 1.7.1-rc.11, Meteor maxed out a core on my development machine the first time I ran it.

The second time I ran it this didn't happen. I don't have more to go by than this, and I haven't put Meteor 1.7.1 on a (production) server yet.

@macrozone

macrozone commented Sep 13, 2018

Maybe a stupid question, but do you really run Node 8.12.0 in production? On Galaxy this is surely the case, but if you, like me, deploy to a custom environment, it's not guaranteed, since Node is not part of Meteor's bundle.

Do the slowdowns also occur in development?
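
For illustration, a minimal sketch (assuming a standard Meteor server entry point) that logs the Node version actually running, so you can verify which runtime your production deployment uses:

import { Meteor } from 'meteor/meteor';
// Log the runtime Node version at startup so the production environment
// can be checked against the version Meteor was tested with.
Meteor.startup(() => {
  console.log('Running on Node', process.version);
});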

@KoenLav

KoenLav commented Sep 13, 2018

@macrozone very good point!

@macrozone

macrozone commented Sep 13, 2018

@jimrandomh since you opened issue #10117, it would also be interesting to hear your observations with Node 8.12.

Could #9796 be resolved by 8.12, or was the fiber spike issue not related to that issue at all?

@nick-gudumac

Contributor

nick-gudumac commented Sep 13, 2018

In development and in a local build I haven't seen anything unusual; only when deployed to production did the CPU spike like crazy. I suspect it's related to Fibers / oplog.

@benjamn I followed your instructions and did CPU profiling on a local build. I haven't seen anything outstanding. Check the profiles out below, and let me know if you need more information.

1.7.1 RC.11 build (Node v8.12.0)

CPU-20180913T165128-RC11-proper.cpuprofile.zip

1.7.1 RC.12 build (Node v8.11.4)
CPU-20180913T170450-RC12-proper.cpuprofile.zip

@benjamn

Member

benjamn commented Sep 13, 2018

@nick-gudumac Ah yes, those profiles don't seem to be showing anything suspicious.

Maybe you could try using qualia:profile from meteor shell in production? https://github.com/qualialabs/profile#runtime-profiling

@nick-gudumac

Contributor

nick-gudumac commented Sep 13, 2018

@benjamn unfortunately I can't ssh into Galaxy, is there any other way to test this?

@sebakerckhof

Contributor

sebakerckhof commented Sep 13, 2018

@nick-gudumac , if you can reproduce it when spawning a second project with the same code base, you could try: https://github.com/qualialabs/web-shell by cloning it manually and removing this line: https://github.com/qualialabs/web-shell/blob/master/package.js#L7 . But you obviously don't want to push this to your production environment.

@veered

Contributor

veered commented Sep 13, 2018

To get a profile in production, you could also create a route (with a random, secure name) that triggers a profile and then returns the file so you can download it. Maybe I can try to whip something like that up this weekend, but you could also easily do it on your own.

Pretty rough that people using Galaxy can't use prod-shell... it's not like allowing SSH into Galaxy containers would change the security model much. The Meteor server running in the container can already run arbitrary shell commands.
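
For illustration, a minimal sketch of such an endpoint, using Node's built-in inspector module (available since Node 8) rather than qualia:profile; the route path and the PROFILE_SECRET environment variable are hypothetical names:

import { WebApp } from 'meteor/webapp';
import inspector from 'inspector';
// Hypothetical route: records a 30-second CPU profile and returns it as a
// .cpuprofile file, guarded by a shared secret passed in a request header.
WebApp.connectHandlers.use('/debug-cpu-profile', (req, res) => {
  if (req.headers['x-profile-secret'] !== process.env.PROFILE_SECRET) {
    res.writeHead(403);
    return res.end();
  }
  const session = new inspector.Session();
  session.connect();
  session.post('Profiler.enable', () => {
    session.post('Profiler.start', () => {
      setTimeout(() => {
        session.post('Profiler.stop', (err, result) => {
          session.disconnect();
          if (err) {
            res.writeHead(500);
            return res.end(String(err));
          }
          res.writeHead(200, {
            'Content-Type': 'application/json',
            'Content-Disposition': 'attachment; filename="server.cpuprofile"',
          });
          res.end(JSON.stringify(result.profile));
        });
      }, 30 * 1000);
    });
  });
});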

@MylesBorins

MylesBorins commented Sep 14, 2018

Please let me know if we caused major regressions in 8.12.0. I can't think of anything that stands out that could have caused this, but we can easily revert something and cut an 8.12.1.

Please let me know if we need to bisect, and I can help.

@benjamn

Member

benjamn commented Sep 18, 2018

@nick-gudumac Any luck with @veered's suggestion? You could also use a Meteor.call method to start the profile, and restrict it to a specific user (i.e. you, or any admin).
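
A sketch of that method-based variant, again using Node's built-in inspector module; the method name and the isAdmin flag on the user document are hypothetical:

import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import inspector from 'inspector';
// Hypothetical admin-only method: records a CPU profile for `seconds`
// seconds and returns the .cpuprofile JSON to the calling client.
Meteor.methods({
  async 'admin.cpuProfile'(seconds) {
    check(seconds, Number);
    const user = Meteor.users.findOne(this.userId);
    if (!user || !user.isAdmin) {
      throw new Meteor.Error('not-authorized');
    }
    const session = new inspector.Session();
    session.connect();
    const post = (method) => new Promise((resolve, reject) =>
      session.post(method, (err, result) => (err ? reject(err) : resolve(result))));
    await post('Profiler.enable');
    await post('Profiler.start');
    await new Promise((resolve) => setTimeout(resolve, seconds * 1000));
    const result = await post('Profiler.stop');
    session.disconnect();
    // The caller can save the returned string as a .cpuprofile file.
    return JSON.stringify(result.profile);
  },
});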

@nick-gudumac

Contributor

nick-gudumac commented Sep 18, 2018

@benjamn I've managed to take a CPU profile in a production environment under load, below are the results. Can you take a look?

cpu-profile-meteor-1.7.1-RC-11.cpuprofile.zip

Let me know if you need more information, or if I need to take another CPU profile.

@benjamn

Member

benjamn commented Sep 18, 2018

Thanks @nick-gudumac!

Wow, a ton of time is being spent in the garbage collector! 87% of the profile, or 9.6 out of 11 seconds. Nowhere close to 11 seconds of "real work" is happening here, but everything in the chart seems to bottom out in long GC pauses. There's a reasonable diversity of callers, too, so I can't blame this on (say) Kadira, though it does show up in this profile.

@veered

Contributor

veered commented Sep 18, 2018

@nick-gudumac It's worth checking that using the profiler didn't increase the memory usage on the server (i.e. memory usage was roughly the same before and after running the profiler).

This is only relevant if the profile you uploaded wasn't the first one you triggered since the server booted up. I've seen the profiler increase mem usage a couple times (but it's rare). If your server was already close to running out of memory, then it's possible running the profiler put it over the edge and the GC started running all the time.

If the mem or CPU usage didn't increase after running the profiler or if your uploaded profile was the first profile you recorded, then the profiler definitely isn't the issue.

@nick-gudumac

Contributor

nick-gudumac commented Sep 18, 2018

@benjamn Yeah, I've seen the long GC pauses too. Any idea why this might happen? How can we trace this further, perhaps by trying different configurations for the garbage collector?

@veered the above profile was not taken at server startup but initiated via a method call. There was almost no change in memory usage during or after profiling, and the profiler alone did not cause any CPU spikes. It seems the spikes are created by subscriptions and method calls (even a few calls), and the app becomes unresponsive for a while after that.
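
One low-effort way to trace this further (a sketch, not specific to Meteor) is to log V8 heap statistics periodically and watch whether the heap is running close to its limit; running the node process with the --trace-gc flag also prints a line per collection:

const v8 = require('v8');
// Log heap usage every 30 seconds so long GC pauses can be correlated with
// old-space growth; the interval is an arbitrary choice.
const mb = (n) => Math.round(n / 1024 / 1024);
setInterval(() => {
  const heap = v8.getHeapStatistics();
  console.log('heap used/total/limit (MB):',
    mb(heap.used_heap_size), mb(heap.total_heap_size), mb(heap.heap_size_limit));
}, 30 * 1000);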

@veered

Contributor

veered commented Sep 18, 2018

@nick-gudumac Ok cool, then the profile you uploaded should accurately reflect what's happening to your app in production.

@csillag

csillag commented Sep 24, 2018

Since this issue seems to be blocking a critical path leading to a major performance improvement in our use case, may I ask who has the ball on this one? Thank you.

@benjamn

Member

benjamn commented Sep 24, 2018

@csillag Is your performance improvement related to Node 8.12.0, or just general Meteor 1.8 improvements (or both)?

@csillag

csillag commented Sep 24, 2018

We are waiting for Node 8.12.0, for the Fiber fix.

@MylesBorins

MylesBorins commented Sep 25, 2018

@benjamn have you figured out whether the GC stuff is related to a change in 8.12.0? Here are the changes that landed in V8 in that version:

* 5294919d05 - deps: V8: cherry-pick 9040405 from upstream (5 weeks ago)<Junliang Yan>
* ae63db8624 - deps: backport 804a693 from upstream V8 (6 weeks ago)<Matheus Marchini>
* d9ab189f55 - deps: cherry-pick b767cde1e7 from upstream V8 (6 weeks ago)<Ben Noordhuis>
* 31883368c7 - deps: cherry-pick 0c35b72 from upstream V8 (6 weeks ago)<Gus Caplan>
* ffb72f810e - deps: cherry-pick 09b53ee from upstream V8 (6 weeks ago)<Anna Henningsen>
* 8e0f28b8f0 - deps: V8: backport 49712d8a from upstream (6 weeks ago)<Ali Ijaz Sheikh>
* efe28b8581 - deps: V8: fix bug in InternalPerformPromiseThen (6 weeks ago)<Ali Ijaz Sheikh>
* 9aeffab452 - deps: V8: cherry-pick 8361fa58 from upstream (6 weeks ago)<Ali Ijaz Sheikh>
* f987a512d4 - deps: V8: backport b49206d from upstream (6 weeks ago)<Ali Ijaz Sheikh>
* e6cd7e57b3 - deps: V8: cherry-pick 5ebd6fcd from upstream (6 weeks ago)<Ali Ijaz Sheikh>
* d868eb784c - deps: V8: cherry-pick 502c6ae6 from upstream (6 weeks ago)<Ali Ijaz Sheikh>
* 656ceea393 - deps: cherry-pick dbfe4a49d8 from upstream V8 (6 weeks ago)<Jan Krems>

@benjamn

Member

benjamn commented Sep 26, 2018

@csillag What if we put out Meteor 1.8 with Node 8.11.4 (same as currently), and then immediately release a Meteor 1.8.1 with Node 8.12.0, without "recommending" that release until we can get to the bottom of these problems, so that your team could update?

@benjamn

Member

benjamn commented Oct 6, 2018

For those who have been wanting to run Node 8.12.0 in production on Galaxy, you can now update your apps to Meteor 1.8.1-beta.0 by running the following command:

meteor update --release 1.8.1-beta.0

If you deploy to Galaxy with this version of Meteor, please keep an eye on your CPU usage, and report your findings (good or bad) here.

@csillag

csillag commented Oct 10, 2018

We have been eagerly waiting for this, although we don't use Galaxy (company policy reasons). Unfortunately, we can't provide any meaningful feedback right now, because we have hit another (independent) performance problem. (This is a live system, and data patterns are also changing.) I will report back when we have resolved that conundrum and are in a better position to take some measurements.

xet7 added a commit to wekan/wekan that referenced this issue Oct 10, 2018

- [Upgrade](#1522) to [Meteor](https://blog.meteor.com/meteor-1-8-erases-the-debts-of-1-7-77af4c931fe3) [1.8.1-beta.0](meteor/meteor#10216) with [these](079e45e) [commits](dd47d46). So now it's possible to use MongoDB 2.6 - 4.0.

Thanks to xet7 !

tab00 added a commit to tab00/prebuiltdeploy that referenced this issue Oct 11, 2018

Set NPM version to 8.11.x; added "fix" to "audit"
Change back to 8.x.x after this issue is resolved: meteor/meteor#10216

@elie222

Contributor

elie222 commented Oct 12, 2018

Just to add to this conversation from a different angle: we were experiencing memory leaks on our 1.6 app, likely to do with our Apollo resolvers, though we haven't figured it out yet. Upon upgrading to Meteor 1.7 these memory issues seem to have stopped, and memory stays around 500 MB for each instance. It does hit 800 MB or so occasionally, but then drops back down after a while. Before the upgrade I was seeing instances randomly jump from 500 MB to 2 GB of RAM usage in a matter of hours.

I don't quite understand how this happened, but if the GC is doing more work now, that could be why our memory is better controlled.

@oohaysmlm

oohaysmlm commented Oct 15, 2018

Possibly posted this in the wrong place earlier, but I think we had a similar issue which I was describing over here: #10032

@erikolofsson

erikolofsson commented Oct 20, 2018

We have this issue now running Meteor 1.8 with node 8.12.0. Some data:

It seems dependent on how long the node process has been running (sample of three):

Time                 Duration  Action
2018-10-15 06:22:29            New launch
2018-10-16 21:11:10  38h 49m   Killed due to 100% CPU usage
2018-10-17 06:27:53            New launch
2018-10-18 18:24:03  35h 56m   Killed due to 100% CPU usage
2018-10-20 05:31:18  35h 7m    Killed due to 100% CPU usage

After the first process reaches 100% CPU usage, other node processes launched at the same time seem to also go to 100% soon after. After I noticed this I started killing all the node processes at the same time.

Memory usage is nowhere near the limit. This is the most recent instance of the problem:

Running on Ubuntu 16.04
Resident Size: 610 MB
Started with: --max_old_space_size=2000

Callstack when breaking process in gdb:
#0  0x00007f5a8da0c827 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x306c530) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1  do_futex_wait (sem=sem@entry=0x306c530, abstime=0x0) at sem_waitcommon.c:111
#2  0x00007f5a8da0c8d4 in __new_sem_wait_slow (sem=0x306c530, abstime=0x0) at sem_waitcommon.c:181
#3  0x00007f5a8da0c97a in __new_sem_wait (sem=<optimized out>) at sem_wait.c:29
#4  0x00000000012ffaa8 in v8::base::Semaphore::Wait() ()
#5  0x0000000000dc3054 in v8::internal::ItemParallelJob::Run() ()
#6  0x0000000000ddd999 in v8::internal::MarkCompactCollector::Evacuate() [clone .constprop.846] ()
#7  0x0000000000de3fb5 in v8::internal::MarkCompactCollector::CollectGarbage() ()
#8  0x0000000000d95a3a in v8::internal::Heap::MarkCompact() ()
#9  0x0000000000daf1b9 in v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) ()
#10 0x0000000000daffab in v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [clone .constprop.960] ()
#11 0x0000000000db069b in v8::internal::Heap::HandleGCRequest() ()
#12 0x0000000000d4bf94 in v8::internal::StackGuard::HandleInterrupts() ()
#13 0x000000000100dc94 in v8::internal::Runtime_StackGuard(int, v8::internal::Object**, v8::internal::Isolate*) ()
...
@KoenLav

KoenLav commented Oct 20, 2018

@erikolofsson

https://pages.mtu.edu/~shene/NSF-3/e-Book/SEMA/basics.html

It seems that garbage collection is triggered and waits for access to a resource, which somehow causes garbage collection to run again, waiting for access to that same resource (which it apparently never gets).

@MylesBorins I am not sure whether my interpretation above is correct; would you care to shed some more light on this?

@benjamn am I correct in assuming this will probably have something to do with the use of Fibers?

If so, then maybe @laverdet can help pinpoint the problem?

@jimrandomh

jimrandomh commented Oct 24, 2018

This could be a red herring; 100% CPU usage and a stack trace in the garbage collector is also what I would expect if some unrelated code had gone into an infinite loop and that loop was allocation-heavy.

@erikolofsson

erikolofsson commented Oct 24, 2018

I downgraded to node 8.11.4 and the problem persists, now with a slightly different call stack:

#0  0x00007fc8e0e33a70 in std::_Rb_tree_increment(std::_Rb_tree_node_base const*) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x0000000000da8420 in v8::internal::RememberedSetUpdatingItem<v8::internal::MajorNonAtomicMarkingState>::Process() ()
#2  0x0000000000d8775b in v8::internal::PointersUpatingTask::RunInParallel() ()
#3  0x0000000000d89c11 in v8::internal::ItemParallelJob::Run() ()
#4  0x0000000000da44d9 in v8::internal::MarkCompactCollector::Evacuate() [clone .constprop.846] ()
#5  0x0000000000daaaf5 in v8::internal::MarkCompactCollector::CollectGarbage() ()
#6  0x0000000000d5c5ca in v8::internal::Heap::MarkCompact() ()
#7  0x0000000000d75d09 in v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) ()
#8  0x0000000000d76afb in v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [clone .constprop.960] ()
#9  0x0000000000d771db in v8::internal::Heap::HandleGCRequest() ()
#10 0x0000000000d12b24 in v8::internal::StackGuard::HandleInterrupts() ()
#11 0x0000000000fd44c4 in v8::internal::Runtime_StackGuard(int, v8::internal::Object**, v8::internal::Isolate*) ()

It's stuck in this loop, with rdx not changing between iterations, so it seems that the rbtree is corrupted.

>│0x7fc8e0e33a70 <_ZSt18_Rb_tree_incrementPKSt18_Rb_tree_node_base+16>            mov    %rdx,%rax
 │0x7fc8e0e33a73 <_ZSt18_Rb_tree_incrementPKSt18_Rb_tree_node_base+19>            mov    0x10(%rax),%rdx
 │0x7fc8e0e33a77 <_ZSt18_Rb_tree_incrementPKSt18_Rb_tree_node_base+23>            test   %rdx,%rdx
 │0x7fc8e0e33a7a <_ZSt18_Rb_tree_incrementPKSt18_Rb_tree_node_base+26>            jne    0x7fc8e0e33a70 <_ZSt18_Rb_tree_incrementPKSt18_Rb_tree_node_base+16>
@KoenLav

KoenLav commented Oct 24, 2018

Judging from the report above, I guess it is now more likely that the error originates in Meteor or one of its dependencies.

@jimrandomh

jimrandomh commented Oct 25, 2018

That sounds like memory corruption, which narrows the scope for where the bug could be to things that involve native code.

menelike added a commit to risetechnologies/meteor that referenced this issue Nov 5, 2018

@AndrewMorsillo

AndrewMorsillo commented Nov 21, 2018

I'm also seeing problems on Meteor 1.8 and Node 8.12.0. If too many users connect at once, my instance goes into some sort of garbage collection loop that it never recovers from. RAM usage climbs up to max_old_space_size, CPU stays pegged at 100%, and 80%+ of the time is spent in the GC. Most requests get dropped and the app is unusable. This only happens if lots of users connect at once, however; the same number of users connecting over a period of time works just fine and the server hums along without problems.

Here is a CPU profile and some screenshots to illustrate. I scaled my app down from 6 pods to 1 to trigger this problem. As soon as the users from the other pods connect to the single remaining one, it goes haywire. If I scale back up then things calm down again.

screen shot 2018-11-21 at 2 28 11 pm

screen shot 2018-11-21 at 2 27 39 pm

2018-11-21T19_27_28.302Z.cpuprofile.zip

Edit: I tried rolling back to Node 8.11.4 with no success: same symptoms, nearly the same CPU profile.
2018-11-21T20_45_16.234Z.cpuprofile.zip

@AndrewMorsillo

AndrewMorsillo commented Nov 21, 2018

I think my problem is unrelated to this issue. It seems what I am experiencing is a feedback loop of disconnects/reconnects that places continually increasing load on my servers until they crash. See the issue here: #10346

@AndrewMorsillo

AndrewMorsillo commented Nov 27, 2018

After fixing that feedback loop issue I'm still seeing strange behavior. I'm not entirely sure whether it is due to Meteor 1.8 or Node 8.12.0, but some subscriptions appear to leak a lot of memory. This heap analysis represents memory usage growth of ~2 GB over a period of 10 minutes, during which my connected client count went from 29 to 32. Three clients joined, 2 GB of memory was added and never released. It looks like all the allocation is in subscriptions and Mongo fetching data. Most of my clients use only a few MB of RAM per connection, and while I do have some accounts that fetch a lot of extra data, it is not 100+ times as much as a normal client.

I need to do some more digging, but something seems fishy here.

screen shot 2018-11-27 at 11 22 28 am
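
For anyone wanting to capture a comparable heap snapshot from a running server, here is a sketch using Node's built-in inspector module (Node 8+); note that snapshotting a multi-GB heap is itself expensive and can stall a loaded server:

const inspector = require('inspector');
const fs = require('fs');
// Write a V8 heap snapshot to `path`; the resulting file can be loaded in
// the Chrome DevTools Memory tab for analysis.
function writeHeapSnapshot(path) {
  const session = new inspector.Session();
  session.connect();
  const file = fs.openSync(path, 'w');
  session.on('HeapProfiler.addHeapSnapshotChunk', (m) => {
    fs.writeSync(file, m.params.chunk);
  });
  session.post('HeapProfiler.takeHeapSnapshot', null, (err) => {
    fs.closeSync(file);
    session.disconnect();
    if (err) console.error(err);
  });
}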

@laverdet

laverdet commented Nov 27, 2018

@AndrewMorsillo do you have the ability to build fibers from source on your system? If so, could you please try rebuilding from source while on nodejs v8.12.0 and see if that changes anything? You can do this with cd node_modules/fibers; node build -f.

@AndrewMorsillo

AndrewMorsillo commented Nov 27, 2018

@laverdet I can't reproduce this issue in development; I only see it in production. I suppose I could try adding a rebuild of fibers to my Dockerfile. What change would you expect to see from rebuilding fibers? Is there a specific issue you think it would solve that you can point me to?

@laverdet

laverdet commented Nov 27, 2018

nodejs/node#21021 describes an issue with the v8 function AdjustAmountOfExternalAllocatedMemory, which fibers invokes pretty often. The fix for this was backported from v8 and landed in nodejs v8.12.0, but you have to recompile fibers from source to pick up the patch.

Actually, if you're on Linux you can pick up this change by installing fibers@3.1.0. I originally learned about this issue because the Node team's fix introduced an ABI incompatibility where modules compiled for v8.12.0 would not work on other 8.x releases of Node, so their fix was modified for v8.13.0. When I pushed fibers@3.1.0 I broke everyone's stuff because I had compiled it against nodejs v8.12.0, so I had to push 3.1.1 shortly after to fix fibers on earlier versions from the same LTS branch.

@AndrewMorsillo

AndrewMorsillo commented Nov 27, 2018

OK, I've deployed a new version with fibers@3.1.0 and Node v8.12.0. So far things seem somewhat more stable: CPU usage is lower, more consistent, and less spiky. The initial connection spike that used to keep my server pegged for several minutes only lasted about a minute this time. Memory usage still seems high, but not as high as before. I also set Fiber.poolSize = 1e9. I will need some time to watch and analyze this to see if strange behavior pops up again.

Edit: a question about the Node garbage collector. How "lazy" is it? If I set max_old_space_size to a high value, will it never bother to release memory that it could release, as long as usage stays below max_old_space_size?

@KoenLav

KoenLav commented Dec 1, 2018

@AndrewMorsillo thanks for digging into this!

What are your findings so far?

@AndrewMorsillo

AndrewMorsillo commented Dec 3, 2018

@KoenLav I seem to have achieved some stability by removing all "reactive publishing" from my app. I previously had issues with the peerlibrary:reactive-publish package, so I removed that and replaced it with a custom solution using this.added and observeChanges. After a lot of fighting with heap dumps, which always crashed Node, I decided to modify my app to remove any of this reactive publishing. After doing so my app is much more stable. I'm running about 50 users per instance, with each instance hovering around 2 GB of memory and 30% CPU. 2 GB and 30% still seems a little high, so I may do more digging, but it may also be good enough for our purposes.

I think the next move for me will be to move some data loading to a non-reactive solution like Apollo. I've spent so much time fighting with Meteor and its opaque behavior in terms of resource usage and performance; it may save time to switch some data loading to a solution with more manageable performance characteristics rather than trying to further optimize Meteor publications.
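
For reference, a minimal sketch of the low-level publish pattern described above (the Items collection and publication name are hypothetical):

import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
const Items = new Mongo.Collection('items');
// Publish a per-user cursor using observeChanges and the low-level
// this.added / this.changed / this.removed API instead of reactive-publish.
Meteor.publish('items.mine', function () {
  const handle = Items.find({ userId: this.userId }).observeChanges({
    added: (id, fields) => this.added('items', id, fields),
    changed: (id, fields) => this.changed('items', id, fields),
    removed: (id) => this.removed('items', id),
  });
  this.onStop(() => handle.stop());
  this.ready();
});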

@KoenLav

KoenLav commented Dec 3, 2018

@AndrewMorsillo thanks for your insights!

I definitely don't think Meteor/DDP's design is flawed at its core; however, it seems it could use some re-architecture.

@AndrewMorsillo

AndrewMorsillo commented Dec 3, 2018

I don't think it is flawed at its core either, just that it is more suitable for some jobs than others, and that isn't very clear. In my case, publishing a lot of data in a complex app where data is strictly per-user seems to bring out some of the less desirable performance characteristics of DDP. I think that in a simpler app that shared more data between clients, things would probably look a lot better.

@evolross

evolross commented Dec 3, 2018

Pub-sub definitely has overhead. For example, every single user's current published data set is kept in memory on the server despite cursor reuse. So if each user has megabytes of data in your case... that could add up.

@nick-gudumac

Contributor

nick-gudumac commented Dec 7, 2018

@benjamn I've just tested 1.8.1 beta 8 and it looks like there's no more CPU pegging like before. That said, I haven't seen any performance improvements over the 1.8 version (with Node 8.11.2). Also, the CPU still spikes when there are bulk updates in MongoDB (using oplog), similar to previous versions.

@benjamn

Member

benjamn commented Dec 7, 2018

Thanks for that confirmation @nick-gudumac! I was just about to comment here asking folks to run

meteor update --release 1.8.1-beta.8

Note that we've moved two minor versions beyond Node 8.12.0 at this point, to Node 8.14.0!

@benjamn

Member

benjamn commented Dec 7, 2018

In order to reflect what's been fixed in the Meteor 1.8.1 milestone so far, I'm going to close this issue now, though of course you'll need to run

meteor update --release 1.8.1-beta.8

to get Node 8.14.0. If you're not using Galaxy, and you have the freedom to choose the Node version you're using in production independently of your Meteor version, you may be able to run just

meteor update promise

instead of updating to Meteor 1.8.1 (yet).

@benjamn benjamn closed this Dec 7, 2018
