Abort too-large non-paged queries #5870
For paged queries we have already changed the code so that we accumulate up to 1MB in a result page and then return it, making sure that, at least on the coordinator, the query will not consume too much memory. We should abort queries that consume too much memory on the coordinator side and have a counter attached to that.
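A rough sketch of what such coordinator-side enforcement could look like (all names here, `result_accumulator`, the aborted-queries counter, are illustrative, not Scylla's actual code):

```cpp
// Hypothetical sketch of coordinator-side enforcement: accumulate result
// fragments and abort, bumping a metric, once the configured limit is crossed.
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <utility>
#include <vector>

struct result_fragment {
    std::vector<char> bytes;
};

class result_accumulator {
    std::size_t _size = 0;
    std::vector<result_fragment> _fragments;
    const std::size_t _max_size;          // e.g. 1MB per page
    std::atomic<std::uint64_t>& _aborted; // counter: queries aborted over the limit
public:
    result_accumulator(std::size_t max_size, std::atomic<std::uint64_t>& aborted)
        : _max_size(max_size), _aborted(aborted) {}

    void add(result_fragment f) {
        _size += f.bytes.size();
        if (_size > _max_size) {
            ++_aborted; // exported so operators can see how often this happens
            throw std::runtime_error("query result exceeds configured memory limit");
        }
        _fragments.push_back(std::move(f));
    }
};
```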
@slivne we have to do this on the replicas too, as by the time results get to the coordinator it might be too late; a single replica can have more data than memory.
@denesb according to @nyh MVs do not issue queries as such, and this feature should not break that functionality. See https://github.com/scylladb/scylla-enterprise/issues/1279#issuecomment-596234707
@dyasny rows can be big; even a single cell can be bigger than the limit (of 1MB). Since we know MVs read single rows, we can tag their queries as system and as such not subject to the limit.
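A minimal sketch of the tagging idea (the `query_class` enum and `result_size_limit` helper are hypothetical, not the actual Scylla implementation):

```cpp
// Illustrative sketch: internal (e.g. materialized-view) reads bypass the
// result-size limit that applies to user queries.
#include <cstddef>
#include <optional>

enum class query_class { user, system };

struct read_command {
    query_class cls = query_class::user;
};

// Returns the applicable size limit, or nullopt for exempt (system) reads.
std::optional<std::size_t> result_size_limit(const read_command& cmd,
                                             std::size_t user_limit) {
    if (cmd.cls == query_class::system) {
        return std::nullopt; // MV update reads touch single rows; never limited
    }
    return user_limit;
}
```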
On Tue, Mar 10, 2020 at 06:11:20AM -0700, Botond Dénes wrote:

> @dyasny rows can be big; even a single cell can be bigger than the limit (of 1MB). Since we know MVs read single rows, we can tag their queries as system and as such not subject to the limit.

I do not think we enforce a limit for one row, otherwise it would be impossible to read a large row.

Gleb.
@gleb-cloudius we don't. For paged queries we just close the page off whenever we go above the limit (by however much); for unpaged queries there is no limit. The plan is to introduce a limit for unpaged queries as well, and to be more strict about it: fail any queries that go above it.
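In other words, the proposal distinguishes a soft limit for paged queries (finish the current page, possibly overshooting) from a hard limit for unpaged ones (fail the query). A hedged sketch of that distinction, with hypothetical names:

```cpp
// Hypothetical sketch of the two limit semantics described above.
#include <cstddef>
#include <stdexcept>

enum class limit_kind {
    soft, // paged queries: close off the current page once the limit is crossed
    hard, // unpaged queries: fail the query outright
};

struct size_tracker {
    std::size_t accumulated = 0;
    std::size_t limit;
    limit_kind kind;

    // Account for one more row; returns true if result production should stop.
    bool account(std::size_t row_size) {
        accumulated += row_size; // the crossing row is kept, so a page may
                                 // overshoot the limit "by however much"
        if (accumulated <= limit) {
            return false;
        }
        if (kind == limit_kind::hard) {
            throw std::runtime_error("unpaged query exceeded result-size limit");
        }
        return true; // soft: end the page; the client fetches the next one
    }
};
```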
On Tue, Mar 10, 2020 at 06:14:11AM -0700, Botond Dénes wrote:

> The plan is to introduce a limit for unpaged queries as well, and to be more strict about it: fail any queries that go above it.

We can punish unpaged queries all we want, but if we apply a limit (for paged queries) before there is a row ready to be returned, it means there will be no way to read the row. Of course, there is already such a limit: shard memory.

Gleb.
So maybe
I don't plan to make any changes for paged queries. For those, the effective limit is already the shard's memory.
What if there is another unpaged query executing already? In general, determining how much memory can safely be consumed is very hard. Users just shouldn't do unpaged queries.

This I planned to do anyway; in fact, there is already such a limit, introduced in 75efa70 in the scope of #5804. The limit is just not used for unpaged queries yet.
This is a request similar to #5804, but with respect to non-paged queries.
We keep receiving reports about memory allocation issues on nodes when, in fact, the client was running a non-paged query.
I'd like to request adding special treatment for non-paged queries, so that when they are too large they get aborted and an error is sent to the client, instead of crashing nodes with bad_allocs.
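Until such a limit exists, the practical client-side mitigation is to enable paging so the server never has to materialize the whole result at once. For example, with the DataStax C/C++ driver (assuming an already-connected `session`; error handling elided for brevity):

```cpp
// Fetch a large table page by page instead of as one unpaged result.
// Assumes an established CassSession* named `session`; error checks elided.
#include <cassandra.h>

void read_paged(CassSession* session) {
    CassStatement* statement = cass_statement_new("SELECT * FROM ks.tbl", 0);
    cass_statement_set_paging_size(statement, 1000); // rows per page

    cass_bool_t more_pages = cass_true;
    while (more_pages) {
        CassFuture* future = cass_session_execute(session, statement);
        const CassResult* result = cass_future_get_result(future);

        CassIterator* rows = cass_iterator_from_result(result);
        while (cass_iterator_next(rows)) {
            /* process cass_iterator_get_row(rows) */
        }
        cass_iterator_free(rows);

        more_pages = cass_result_has_more_pages(result);
        if (more_pages) {
            // Resume the next page where this one left off.
            cass_statement_set_paging_state(statement, result);
        }
        cass_result_free(result);
        cass_future_free(future);
    }
    cass_statement_free(statement);
}
```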