# Performance issue with NATS transporter (v2.x.x) #1237
It looks like it's an issue with the …
For more context, libs: …

Used OS for local testing: macOS 13.5
Could you switch back to nats …?
Already tried. Initially, it was using nats …
What is the NATS server version?
Please try with the latest version, 2.9.21.
It appears that I've identified the root cause of the issue. While experimenting with various broker settings, I found that disabling the metrics feature resolved the problem. In my project I use a StatsD reporter; everything functions as anticipated with the Prometheus reporter. The StatsD reporter configuration looks like this:

```js
{
  type: 'StatsD',
  options: {
    // Server host
    host: 'localhost',
    // Server port
    port: 8125,
    // Maximum payload size.
    maxPayloadSize: 1300,
  }
}
```

So it definitely seems not to be a NATS issue. I'll turn off StatsD for now.
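For context, a minimal sketch of where such a reporter block sits in a Moleculer broker configuration; the `nodeID` and transporter URL are placeholders, not values from this issue:

```javascript
// Minimal sketch, assuming Moleculer v0.14-style broker options.
const { ServiceBroker } = require("moleculer");

const broker = new ServiceBroker({
  nodeID: "node-1", // placeholder node ID
  transporter: "nats://localhost:4222", // placeholder NATS URL
  metrics: {
    enabled: true,
    reporter: [
      {
        type: "StatsD",
        options: {
          host: "localhost",
          port: 8125,
          maxPayloadSize: 1300,
        },
      },
    ],
  },
});
```

Disabling metrics for a quick test then amounts to setting `metrics: { enabled: false }` (or omitting the `metrics` block entirely).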
It's strange, because I can reproduce this issue without any metrics, only with the 2.x.x nats lib.
I've found the problem inside the …
This is a significant improvement; waiting for this fix then 😃 Pushed my tests just in case: https://github.com/mrprigun/moleculer-benchmark-test. There are two dedicated nodes and NATS in docker-compose. With the StatsD reporter enabled I receive ~…

NATS service: 2.9.21
NATS fixed the issue in 2.16.0; my results: …
Discussed in #1235
Originally posted by mrprigun August 8, 2023
Hello, I encountered certain obstacles in my use case while attempting to execute Moleculer actions. Briefly, I have a service (a gateway for the main app with `moleculer-web`) that looks like this: …

`external.call` is located on another node, and invocation happens using the NATS transporter. By design, the actions `hello` and `second_call` are going to be loaded to process unique tasks. Before publishing to production I benchmarked each action and received the following results on my laptop:

- the `hello` action works pretty well, the result was ~600 rps
- `second_call` was very much degraded, I received just ~40 rps

Also, I received similar results after deployment to a prod-like environment. I believe this occurs due to the external triggering, which is anticipated, but why so much? I've tried different load-balancing strategies and `bulkhead`, but didn't receive any significant improvements. Is there a way to configure Moleculer to improve this behavior, or is it some kind of bug?