Performance issue while serving responses of around 180Kb #4656
Comments
I don't fully understand what you're trying to measure. On one endpoint, you're serving a static JSON resource, for which you should not use toString. On the second endpoint, you're doing much more.

I'd expect some CPU usage here. Also, generating random UUIDs requires a lot of entropy, as the implementation relies on SecureRandom, which puts more load on the CPU. If the bottleneck of the application is the JSON encode/decode, perhaps you should consider using a different parser? For example, https://jsoniter.com/ can be used with Vert.x too, given that that parser can decode into a Vert.x object with `new JsonObject(Jsoniter.deserialize(input))`, and the reverse with `JsonStream.serialize(vertxJsonObject.getMap())`.
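A sketch of the conversion described above, assuming jsoniter's Java entry points are `com.jsoniter.JsonIterator` and `com.jsoniter.output.JsonStream` (the comment's `Jsoniter` shorthand may differ from the actual class names; not verified against a specific jsoniter or Vert.x version):

```java
import java.util.Map;

import com.jsoniter.JsonIterator;       // jsoniter decoder entry point (assumed API)
import com.jsoniter.output.JsonStream;  // jsoniter encoder entry point (assumed API)
import io.vertx.core.json.JsonObject;

public final class JsoniterBridge {

    // Decode the raw JSON text with jsoniter, then wrap the resulting Map in
    // a Vert.x JsonObject: JsonObject has a Map-based constructor, so no
    // second parse is needed.
    @SuppressWarnings("unchecked")
    public static JsonObject decode(String input) {
        return new JsonObject(JsonIterator.deserialize(input, Map.class));
    }

    // Encode by handing JsonObject's backing map straight to jsoniter,
    // bypassing Vert.x's default Jackson-based encoder.
    public static String encode(JsonObject obj) {
        return JsonStream.serialize(obj.getMap());
    }
}
```

Whether this actually helps depends on whether encoding/decoding is really the bottleneck, which is what the rest of this thread investigates.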
Hi @pmlopes, thanks for looking into the issue. I do not think toString is an issue, as it calls encode internally.

For the second point, I did try jsoniter a few days back, but the performance is still bad.
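The delegation described above can be illustrated with a simplified stand-in (this is not the actual Vert.x source, just a sketch of the toString-delegates-to-encode pattern, so both calls cost the same):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-in for io.vertx.core.json.JsonObject, illustrating why
// toString() and encode() cost the same: one simply delegates to the other.
public class MiniJsonObject {
    private final Map<String, Object> map = new LinkedHashMap<>();

    public MiniJsonObject put(String key, Object value) {
        map.put(key, value);
        return this;
    }

    // encode() does the actual serialization work (naive sketch, string values only).
    public String encode() {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, Object> e : map.entrySet()) {
            if (!first) sb.append(',');
            first = false;
            sb.append('"').append(e.getKey()).append("\":\"").append(e.getValue()).append('"');
        }
        return sb.append('}').toString();
    }

    @Override
    public String toString() {
        return encode(); // identical code path, identical CPU cost
    }

    public static void main(String[] args) {
        MiniJsonObject o = new MiniJsonObject().put("id", "42");
        System.out.println(o.encode());                       // prints {"id":"42"}
        System.out.println(o.toString().equals(o.encode()));  // prints true
    }
}
```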
Hi @himanshumps, I ran your reproducer in Apache JMeter and tested with 50 concurrent users. Here are my findings:

The CPU does in fact reach 100% when a few concurrent users call your simple endpoint. However, this only happens when the same user is re-used; if it's a new connection each time, the CPU does not exceed any unexpected thresholds. I was seeing highs of around 10% CPU on my system when I unticked "Same user on each iteration" in JMeter.

But the important thing is that it's not the Vert.x process that has the high CPU usage: it's Apache JMeter that is eating up my CPU. Therefore, I don't see any bugs in the code you have provided. I suspect plow is doing the same. You should confirm which process is eating up your CPU.

Hope this helps.
Thanks @surajkumar for your investigation. Performance testing should always be done using 2 distinct machines (or 3 if there is a database). I will close this issue.
Version
4.3.7
Context
I encountered an exception that looks suspicious while deserializing to JsonObject (without model classes); performance is also degraded, and CPU utilisation reaches 100%.
Do you have a reproducer?
https://github.com/himanshumps/large-response.git
Steps to reproduce
There are two endpoints: http://localhost:8080/noProcessing and http://localhost:8080/jsonProcessing
Here is the Dockerfile.

And here is the docker command to run it (note that host port 8888 maps to container port 8080):

docker run -d --memory="4g" --cpus="2" -p 8888:8080 large_response
I ran the test through plow, but you can use wrk as well (50 users for a duration of 5 minutes).
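For reference, an equivalent wrk run might look like the following (using wrk's standard -t/-c/-d flags; `<server-host>` is a placeholder, since per the advice above the load generator should run on a separate machine, and the docker command maps host port 8888):

```shell
# 2 threads, 50 open connections, 5-minute duration, with latency statistics
wrk -t2 -c50 -d5m --latency http://<server-host>:8888/jsonProcessing
```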
Extra
https://groups.google.com/g/vertx/c/j3IcS8b8nMo