Memory leak in 2.5.5 #4491
Hi @vladzu.
Hi @renecannao,
Hi @vladzu, based on the provided memory dumps, there is no obvious memory usage increase that can be taken as an indication of a memory leak. This could mean that the memory dumps don't contain the leak scenario (memory usage didn't seem to increase much during the capture), or that there is no real leak and the memory is simply being legitimately used for:
Then several stats tables:
Also worth mentioning that the metrics you reported from that particular instance are a bit odd; the uptime is high enough (…)
And
I'm just curious whether this is an instance set up with the purpose of reproducing the leak, or a real-scenario instance in which the growth in memory usage was observed? To summarize, I don't see a leak being present in this instance; if you could please share more details about this one, or about other, more affected instances, I would gladly follow up. Thank you, Javier.
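As an aside that is not part of the original exchange: a minimal way to check which component the memory is actually going to is ProxySQL's own memory accounting, exposed through the admin interface. This is a sketch assuming the standard stats_memory_metrics table; nothing here is taken from the dumps attached to this issue:

```sql
-- Hypothetical check, run against the ProxySQL admin interface (default port 6032).
-- Shows per-component memory counters such as SQLite3_memory_bytes,
-- query_digest_memory and the jemalloc_* allocator figures.
SELECT * FROM stats_memory_metrics;

-- Narrow it to the counters most relevant to this report and sample it
-- periodically to see which one grows over time.
SELECT Variable_Name, Variable_Value
FROM stats_memory_metrics
WHERE Variable_Name IN ('SQLite3_memory_bytes', 'jemalloc_resident', 'query_digest_memory');
```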
Hi @vladzu, since it could add more helpful information, could you also paste the output of the following script?
If any table is retaining extra info, it would be easy to spot. Thank you, Javier.
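The script itself is not reproduced in this thread. Purely as an illustration of the kind of check being requested (the table list below is my assumption, not the original script), it amounts to counting rows in the admin and stats tables and re-running the counts over time:

```sql
-- Hypothetical sketch against the ProxySQL admin interface.
-- A table whose row count keeps growing between runs is a candidate
-- for the extra memory usage.
SELECT COUNT(*) FROM mysql_servers;
SELECT COUNT(*) FROM mysql_query_rules;
SELECT COUNT(*) FROM stats_mysql_query_digest;
SELECT COUNT(*) FROM stats_mysql_connection_pool;
SELECT COUNT(*) FROM stats_mysql_processlist;
```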
Hi @JavierJF, first of all, some comments on that: yes, the test was run only for 4 hours, but if you look at the 1st graph provided you can see it is growing constantly (the drops are restarts). Currently, within 10 days, memory has grown from 350 MB to 700 MB at a slow pace. It could be the SQLite3 footprint, but what is causing it? Providing the requested data:
Indeed, as you mentioned, this host currently does not serve any traffic, yet memory usage is increasing. But we see the same trend on machines that are serving traffic.
This is because we dynamically create and delete MySQL schemas and sync them to ProxySQL as hostgroups and mysql_servers. If you want, we can run the memory profiler for a longer period and on a machine with live traffic.
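To make that configuration churn concrete (this example is mine, not taken from the thread; hostgroup ids, hostnames and limits are invented), the sync described above typically boils down to admin-interface statements like these:

```sql
-- Hypothetical example of the dynamic schema-to-hostgroup sync described above.
INSERT INTO mysql_servers (hostgroup_id, hostname, port, max_connections)
VALUES (42, 'db-shard-42.internal', 3306, 200);

-- When a schema is dropped, the corresponding backend entry is removed again.
DELETE FROM mysql_servers WHERE hostgroup_id = 42;

-- Every change is pushed to the runtime config (and optionally persisted).
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```

Each LOAD MYSQL SERVERS TO RUNTIME regenerates the runtime copy of the servers table, so frequent create/delete cycles translate into frequent rewrites of the in-memory configuration.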
Hi @vladzu,
Yes, I understand; my comment was regarding the dumps themselves within that time frame. In them, nothing specific can be identified as the cause of the memory usage increase. This growth itself isn't usual, especially because it looks related to SQLite3 and it's an idle system (outside of the regular configuration updates). In the previous output I can't see any config that could cause this steady memory increase.
Thanks for clarifying. Could you please share with us the output of the previous script I provided? Just in case we are missing a growing table in one particular place. Thank you, Javier.
Sorry, missed that: 11770 : stats.stats_mysql_query_digest_reset
Any updates on that? Just to note here: once a day we do:
Hi @vladzu.
I want to assure you that we are committed to supporting all our users, including both our community participants and our paying subscribers. To ensure efficient and dedicated support, we prioritize direct inquiries from our paying customers through our private support channels. This model allows us to provide them with timely and extensive assistance, as part of the benefits of their subscription plan. To be more specific on this issue, we have been working on this and we are making interesting progress.
Please run
This only resets some counters; it is not relevant to memory usage.
Hi @renecannao, thank you for your proposal; we will test this out and come back with the results.
Hi @vladzu, good afternoon. As @renecannao mentioned, we decided to look back into this issue, since the information you provided pointed to something being off. Sadly, probably due to the steady allocation pattern of the instance and the default profiling mode of (…). Also, we have taken action to improve the granularity we get for (…). Thanks for the report and for the supplied information. Regards, Javier.
We have upgraded to and are now running ProxySQL version 2.5.5-10-g195bd70; package used: proxysql_2.5.5-ubuntu20_amd64.deb.
Before that we used version 2.5.3.
Distro we use: Ubuntu 20.04.6 LTS
We're having memory leak issues in all our ProxySQL instances. It was an issue before with 2.5.3 and persists with the current 2.5.5.
We do truncate stats_mysql_query_digest once a day. Also, from here you can see that it is not the main issue:
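The exact daily job used here isn't shown in the thread; purely as an illustration, one common way to clear the digest statistics via the admin interface is to read the *_reset variant of the table, which returns the current digests and clears them in the same step:

```sql
-- Hypothetical illustration of a daily digest reset via the admin interface:
-- selecting from the *_reset table returns the digest statistics and clears
-- the in-memory digest table at the same time.
SELECT * FROM stats_mysql_query_digest_reset;
```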
This is an example of one instance's memory growing; other instances show the same pattern.
As you can see from the picture, we did 2 restarts during this time.
proxysql.cnf
Worth mentioning, inside ProxySQL we have:
We utilise mysql_servers to set custom max_connections limits per hostgroup.
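As an illustration only (values and hostgroup ids are invented, not taken from this setup), per-hostgroup limits like the ones described above are normally carried in the max_connections column of mysql_servers:

```sql
-- Hypothetical illustration: connection limits per backend/hostgroup
-- live in mysql_servers.max_connections.
UPDATE mysql_servers SET max_connections = 150 WHERE hostgroup_id = 10;
UPDATE mysql_servers SET max_connections = 50  WHERE hostgroup_id = 20;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```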
Including memory profile files here:
proxysql_memory_dump.tar.gz
It includes 4 hours of memory dumps; the memory leakage trend can be seen here as well: