Ever increasing 'stats_mysql_query_digest' results in sporadic client timings #4531
Hi @flopex, thank you for your report.

Why would a large `stats_mysql_query_digest` table cause this? If the query digest table is responsible for these stalls, it means that the query digest table itself is blocked: some thread is performing some operation on it other than a lookup or an insert. Those first 2 operations are extremely fast. Because your test with 5000 simple `SELECT 1` queries shows spikes at regular intervals, I assume that the table is not being truncated, so truncation shouldn't apply to you.

I hope the above explains it.

I think the first step is to identify what is blocking the query digest table when the spikes happen. And finally, as you would expect, "upgrade" is often a solution :) In ProxySQL 2.4.0 the parsing of queries and query digests was extensively improved, and at least 3 new variables were introduced to control how query digests are generated. Also, ProxySQL 2.5.2 improves the generation of the digest table's resultset. As a final note, in the latest ProxySQL there are 5 operations performed on the query digest table, and they differ greatly in how long they block it; lookup and insert are by far the cheapest.
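A minimal sketch of checking and clearing the digest table through the ProxySQL admin interface, assuming the `pymysql` driver, the default admin credentials/port (`admin`/`admin` on 6032), and an arbitrary row threshold; none of these details come from this issue:

```python
# Sketch: watch the size of stats_mysql_query_digest via the ProxySQL admin
# interface and clear it once it grows past a threshold.
import pymysql

MAX_DIGEST_ROWS = 50_000  # arbitrary threshold, tune for your workload

admin = pymysql.connect(host="127.0.0.1", port=6032,
                        user="admin", password="admin")
try:
    with admin.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM stats_mysql_query_digest")
        (rows,) = cur.fetchone()
        print(f"stats_mysql_query_digest rows: {rows}")

        if rows > MAX_DIGEST_ROWS:
            # Reading the *_reset table returns the current stats and clears
            # them; the reporter used TRUNCATE instead, with the same effect
            # on the table size.
            cur.execute("SELECT COUNT(*) FROM stats_mysql_query_digest_reset")
            print("digest stats cleared")
finally:
    admin.close()
```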
Thank you for the very detailed response, it all makes more sense now. We'll look into further testing these to find the root cause, but the Prometheus exporter does sound like an interesting one: we do run it, and that could explain the predictable timings.
Hi @flopex
I am not sure if this was only for testing purposes, or if you identified that this (a lot of savepoints with unique identifiers) is the reason why you have a large table.
Not only for testing: we also noticed the query digest bloat was caused by Django, similar to this other report: https://groups.google.com/g/proxysql/c/MeCodsiqlo0. We found this to be the easiest way to fill up the stats metrics to levels similar to what we see in prod.
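To make the Django connection concrete, here is a rough sketch of the kind of load that inflates the digest table: savepoints with unique names, each of which produces its own digest entry because identifiers, unlike literal values, are not normalized away. The `pymysql` driver, port 6033, schema, and credentials are placeholders, not details from this thread:

```python
# Sketch: bloat stats_mysql_query_digest by issuing savepoints with unique
# names, similar to the pattern Django's transaction.atomic() generates.
import uuid
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=6033,
                       user="app", password="app", database="test")
try:
    with conn.cursor() as cur:
        conn.begin()
        for _ in range(10_000):
            # Every savepoint name is unique, so every statement ends up as a
            # separate row in stats_mysql_query_digest.
            name = "s_" + uuid.uuid4().hex
            cur.execute(f"SAVEPOINT {name}")
            cur.execute(f"RELEASE SAVEPOINT {name}")
        conn.rollback()
finally:
    conn.close()
```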
We recently ran into what seemed like very high connection and execution timings from clients to ProxySQL. After much debugging it turned out to be caused by the ever increasing size of the `stats_mysql_query_digest` data, and running `TRUNCATE` resolved our issues.

Stat metric at the time of the issues:
What we found interesting was the badly performing requests themselves: there was a clear pattern to when a bad request would happen.
This is the debugging test we ran: a series of 5000 tests against a ProxySQL host where we ran a simple `SELECT 1` using Python, with the resulting timing in seconds.
(Plot: per-query timing in seconds for the 5000 `SELECT 1` runs, with latency spikes at regular intervals.)
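A minimal sketch of a timing loop in the spirit of the test described above; the `pymysql` driver, host name, credentials, and the 0.5 s "slow" cutoff are assumptions, since the original script is not included in the issue:

```python
# Sketch: 5000 SELECT 1 queries through ProxySQL, timing each run in seconds.
# A fresh connection is opened per iteration so connect time is included.
import time
import pymysql

timings = []
for i in range(5000):
    start = time.monotonic()
    conn = pymysql.connect(host="proxysql-host", port=6033,
                           user="app", password="app")
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        cur.fetchall()
    conn.close()
    timings.append(time.monotonic() - start)

print(f"max: {max(timings):.3f}s  avg: {sum(timings) / len(timings):.3f}s")
slow = [(i, round(t, 3)) for i, t in enumerate(timings) if t > 0.5]
print("slow iterations:", slow)  # check whether spikes recur at a fixed interval
```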
Would be curious to know if there is an explanation for the pattern seen during these tests. What is the suggested mitigation/setup for keeping `stats_mysql_query_digest` at a reasonable size?

ProxySQL Version: 2.3.2
OS: Ubuntu 18.04
I'm aware of the other reports surrounding this issue, but wanted to get more insight on expected behaviour and possible "fixes".
Ref:
#3482 (comment)
#2368 (comment)