-
Hey! This is a question about a performance loss after updating from CouchDB 2.3.1 to 3.1.1. We use CouchDB in a cluster configuration. After we upgraded to 3.1.1, our script, which reads documents from a large number of databases, became 4× slower: on average, 8 min with CouchDB 2 versus 25 min with CouchDB 3. We tried several things to solve it, without success, and we are considering others.
My question: can someone give us clues about parameters we could change to get back the performance we lost? Is there some kind of new throttling/concurrency limit in CouchDB 3?
-
First step: check the `[ioq]` settings and the auto-compaction daemon (smoosh), both of which ship enabled in CouchDB 3.
Both can be adjusted to be less aggressive, or, if desired, disabled completely.
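For concreteness, here is a minimal sketch of how those settings could be inspected and softened through the per-node `_config` API; the host, credentials, and the value `50` are assumptions for illustration, not recommendations:

```python
import requests

BASE = "http://localhost:5984/_node/_local/_config"
AUTH = ("admin", "secret")

# Show the current IOQ settings (concurrency defaults to 10 in-flight requests).
print(requests.get(f"{BASE}/ioq", auth=AUTH).json())

# Raise IOQ concurrency so reads queue less behind background I/O.
# Config values are JSON strings, hence json="50" rather than json=50.
requests.put(f"{BASE}/ioq/concurrency", auth=AUTH, json="50")

# Show the auto-compaction (smoosh) channel settings before tuning them.
print(requests.get(f"{BASE}/smoosh", auth=AUTH).json())
```

Note that changes made through this endpoint apply to a single node, so in a cluster they have to be repeated on every node.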
-
Hi! I have two new elements to add to this issue. First, we tried increasing the concurrency of our script, without any throughput improvement. Secondly, we made a small chart of the number of databases we can read per second: we do a read on each database and log the rate over time. Three gaps, at 1:15:00, 1:25:00, and 1:35:00, are due to a limitation of the logger and should be ignored. The chart shows the performance is constant over time, so we still don't know why there is this 4× performance decrease between CouchDB 2 and 3.
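For reference, the measurement loop is roughly like the sketch below (URL and credentials are placeholders for our actual setup):

```python
import time
import requests

BASE = "http://localhost:5984"
AUTH = ("admin", "secret")

# List every database, then read each one and log the running rate.
db_names = requests.get(f"{BASE}/_all_dbs", auth=AUTH).json()
start = time.monotonic()
for count, db in enumerate(db_names, start=1):
    requests.get(f"{BASE}/{db}", auth=AUTH)  # one read per database
    if count % 100 == 0:
        rate = count / (time.monotonic() - start)
        print(f"{count} databases read, {rate:.1f} dbs/s")
```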
-
Are you tracking individual request latencies? You're increasing concurrency without seeing any throughput improvement in your tests against 3.1.1, so I'd expect to see latencies rising as a function of concurrency. Do you happen to see any difference between the individual document requests and the ones requesting multiple keys? Just curious. This behavior of CouchDB running stably, not using all its CPU, and not delivering great throughput sounds like an aggressive IOQ to me. Can you share the server configuration you're using, or at least the `[ioq]` section?
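A quick sketch of what I mean by tracking latencies as a function of concurrency; the URL and document IDs are placeholders for whatever requests your script actually makes:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

AUTH = ("admin", "secret")

def timed_get(url):
    """Return the wall-clock latency of a single GET in seconds."""
    t0 = time.monotonic()
    requests.get(url, auth=AUTH)
    return time.monotonic() - t0

# Placeholder URLs: swap in the document reads your script performs.
urls = [f"http://localhost:5984/mydb/doc-{i}" for i in range(1000)]

# Run the same workload at several concurrency levels and compare latencies.
for workers in (1, 4, 16, 64):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(timed_get, urls))
    p95 = statistics.quantiles(latencies, n=20)[-1]
    print(f"{workers:3d} workers: median {statistics.median(latencies)*1000:.1f} ms, "
          f"p95 {p95*1000:.1f} ms")
```

If throughput is flat while the p95 latency climbs with the worker count, the requests are queuing somewhere, which is consistent with an I/O bottleneck.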
-
Hi! We finally understood what was happening. I’m posting the explanation here in the hope it can help other users running on the same hardware as we do. Our disks are AWS EBS volumes (io1 SSDs); they are “IOPS provisioned” with a user-defined maximum, in our case 300 disk I/O operations per second. AWS documentation explains how IOPS are counted.
It turned out that this 300 IOPS limit was the bottleneck. After raising it, CouchDB 3 now performs very well; for our usage it is as fast as CouchDB 2, possibly faster. @wohali, @kocolosk, and anyone else who has taken the time to read this: thanks for the help. We think that CouchDB 3 performs more disk operations than CouchDB 2; is that possible?
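For readers hitting the same wall, here is the back-of-the-envelope math, assuming AWS's rule that each SSD I/O of up to 256 KiB counts as one operation against the provisioned limit:

```python
# Throughput ceiling implied by a provisioned-IOPS limit, for several I/O sizes.
PROVISIONED_IOPS = 300
for op_kib in (4, 64, 256):  # small random reads vs. larger sequential ones
    ceiling_mib_s = PROVISIONED_IOPS * op_kib / 1024
    print(f"{op_kib:3d} KiB ops -> {ceiling_mib_s:6.1f} MiB/s ceiling")
```

With small 4 KiB operations, 300 IOPS allows barely more than 1 MiB/s, so a workload that issues more, smaller disk operations hits the cap long before the disk itself is saturated.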