2.0: Replication of 1M documents never completes and high CPU/memory usage #1970
@raghusarangapani and @sridevi-15 Do we have a similar functional test to this?
@hrajput89 and @sridevi-15 were working on setting up a system test with 1 million docs. They might be able to answer more.
We have a system test; it just hasn't run with 1 million docs yet due to other priorities.
I have some large databases on S3, including a 20GB subset of Wikipedia, but I'm not sure if any have 1M documents.
My application crashed with the error below -
I was able to both push and pull 1M docs (each doc has
The memory and CPU usage seemed high. I will need to retest the PULL replicator to see why the memory didn't return to normal after the replication finished. Note:
@pasin, could you save a copy of the 1M-doc database somewhere? The performance is slower than I'd expect; were you using a debug or release build?
I have retested with the current 2.0-dev binary. I'm using SGW with persistent Walrus.
PUSH (PUSH AND PULL):
PULL (PUSH AND PULL):
PUSH (PUSH ONLY):
PULL (PULL ONLY):
Walrus db file: https://www.dropbox.com/s/sjkgbbxd6wxqc8d/db.zip?dl=0
The pull replicator is very slow when using persistent Walrus. I'm testing with Couchbase Server to see if the result is different.
The result is much better, especially for pull replication, when testing with Couchbase Server (v5.0.1). There seems to be a memory issue with the pull replicator.
PUSH (PUSH ONLY):
PULL (PULL ONLY):
Strange that pull would be slower than push with Walrus. The performance problem with persistent Walrus stores is on writes, since they rewrite the entire database every time.
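To see why rewriting the whole file on every commit hurts, here is a back-of-envelope write-amplification model. This is a hypothetical sketch, not Walrus internals: the function names, batch size, and doc size are illustrative assumptions. A full-rewrite store writes the entire file so far on each commit, so total bytes written grow quadratically with the number of commits, while an append-only store grows linearly.

```cpp
#include <cassert>
#include <cstdint>

// Bytes written by a store that rewrites its whole file after each
// batch of documents (hypothetical model): commit i rewrites all
// i*batchSize docs written so far.
uint64_t fullRewriteBytes(uint64_t nDocs, uint64_t docSize, uint64_t batchSize) {
    uint64_t total = 0;
    for (uint64_t written = batchSize; written <= nDocs; written += batchSize)
        total += written * docSize;   // the whole file so far, rewritten
    return total;
}

// Bytes written by an append-only store: each doc is written exactly once.
uint64_t appendOnlyBytes(uint64_t nDocs, uint64_t docSize) {
    return nDocs * docSize;
}
```

Under these assumed numbers, 1M docs of 1KB committed in 10,000-doc batches costs the full-rewrite store about 48GB of writes versus about 1GB for an append-only log, roughly a 50x amplification.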
@pasin, what client device were your tests run on? I just ran a test using the
And with the client on my iMac, but SG still on the MBP, over WiFi:
I used an iPhone 6 device.
I fixed couchbase/couchbase-lite-core#404, which will greatly reduce push memory usage.
Still TBD: testing performance of the replication on a real iOS device. @pasin said he'd send me the app project he used to run the replication, since we don't have any existing tool to do this.
@snej here is the app: https://github.com/pasin/TestMDocs. Note:
Results on an iPhone 6s+ using Pasin's test app with the latest CBL, and SG with a non-persistent Walrus bucket:
Since the pull memory leak does not occur with LiteCore alone, I'm wondering if there's a leak in CBLReplication. |
Ah, the memory usage looks to come mostly from SequenceTracker; there are about 400k 112-byte malloc blocks created by the following backtrace:
Found a sort-of memory leak in LiteCore's SequenceTracker class -- sequence entries were being added to the main thread's SequenceTracker (copied from the replicator thread's), but the main thread didn't have any observers, so it never cleaned up obsolete entries. Now that I've fixed the bug, pull memory usage stays in the range of 30-40MB. Pulls are slightly faster too, about 430sec (~2300 docs/sec).
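The bug pattern above can be sketched in a few lines. This is a hypothetical illustration, not the real LiteCore API: the struct layout, field names, and `removeObsoleteEntries` are all assumed for the example. A SequenceTracker buffers one entry per document change so observers can catch up; entries older than the slowest observer are obsolete. The buggy version only pruned relative to existing observers, so a tracker with zero observers accumulated one entry per pulled document forever.

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <vector>

// Hypothetical sketch of the SequenceTracker pruning logic.
struct SequenceTracker {
    struct Entry { uint64_t sequence; };
    std::deque<Entry> entries;                // buffered change entries
    std::vector<uint64_t> observerPositions;  // last sequence each observer consumed

    void documentChanged(uint64_t seq) {
        entries.push_back({seq});
        removeObsoleteEntries();
    }

    // Fixed behavior: treat "no observers" as "everything is obsolete".
    // The buggy version skipped pruning when there were no observers,
    // so the main thread's tracker grew without bound during a pull.
    void removeObsoleteEntries() {
        uint64_t oldest = UINT64_MAX;
        for (uint64_t pos : observerPositions)
            if (pos < oldest) oldest = pos;
        while (!entries.empty() && entries.front().sequence <= oldest)
            entries.pop_front();
    }
};
```

At roughly 112 bytes per entry (per the backtrace above), hundreds of thousands of retained entries account for tens of MB, which is consistent with the pull-side memory growth observed.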
From the forums: https://forums.couchbase.com/t/couchbaselite-2-0-swift-and-gateway-1-5-slow-pulling/14902/2