Memory leak in long running query crashes node #66849
Labels: A-cdc (Change Data Capture), C-bug (Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior.), O-community (Originated from the community), T-cdc, T-sql-foundations (SQL Foundations Team (formerly SQL Schema + SQL Sessions)), X-blathers-triaged (blathers was able to find an owner)

Comments
joesankey added the C-bug (Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior.) label on Jun 24, 2021
Hello, I am Blathers. I am here to help you get the issue triaged. Hoot - a bug! Though bugs are the bane of my existence, rest assured the wretched thing will get the best of care here. I have CC'd a few people who may be able to assist you:
If we have not gotten back to your issue within a few business days, you can try the following:
🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is otan.
blathers-crl bot added the A-cdc (Change Data Capture), O-community (Originated from the community), and X-blathers-triaged (blathers was able to find an owner) labels on Jun 24, 2021
blathers-crl bot added the T-sql-foundations (SQL Foundations Team (formerly SQL Schema + SQL Sessions)) label on Jul 1, 2021
joesankey added a commit to joesankey/cockroach that referenced this issue on Jul 1, 2021
The connection statement buffer grows indefinitely when the client uses the execute-portal-with-limit feature of the Postgres protocol, eventually causing the node to crash out of memory. Any long-running query that uses the limit feature, such as an `EXPERIMENTAL CHANGEFEED FOR` statement, will trigger this leak. The feature is used by the JDBC Postgres driver to fetch a limited number of rows at a time.

The leak is caused by commands accumulating in the buffer and never being cleared out. The client sends two commands every time it wants to receive more rows:

- `Execute {"Portal": "C_1", "MaxRows": 1}`
- `Sync`

The server processes the commands but leaves them in the buffer, so every iteration leaks two more commands.

A similar memory leak was fixed by cockroachdb#48859; however, the execute-with-limit feature is implemented by a side state machine in limitedCommandResult. The cleanup routine added by cockroachdb#48859 is never executed for suspended portals, because they never return to the main conn_executor loop.

After this change, the statement buffer is trimmed to reclaim memory after each client command is processed in the limitedCommandResult side state machine. The StmtBuf.Ltrim function was made public to enable this. While this is not ideal, it scopes the fix to the limitedCommandResult side state machine and can be revisited when the limitedCommandResult functionality is refactored into the conn_executor.

Added a unit test which causes the leak. The test uses the PGWire client, since neither the pg nor pgx clients use execute with limit and so can't be used to demonstrate the leak. Also tested the fix in a cluster by following the steps outlined in cockroachdb#66849.

Resolves: cockroachdb#66849
See also: cockroachdb#48859

Release note (bug fix): fix statement buffer memory leak when using suspended portals.
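The accumulate-then-trim behavior described above can be sketched with a toy buffer. This is a hypothetical Go simplification: the real StmtBuf in sql/pgwire is synchronized and far richer, and `push`, `advance`, and `ltrim` here only mirror its shape.

```go
package main

import "fmt"

// cmd stands in for one pgwire command (Execute, Sync, ...).
type cmd struct{ name string }

// stmtBuf is a toy model of a statement buffer: commands are appended
// as they arrive and a cursor advances as they are executed, but
// executed entries stay alive until the buffer is trimmed.
type stmtBuf struct {
	data   []cmd
	curPos int // index of the next command to execute
}

func (b *stmtBuf) push(c cmd) { b.data = append(b.data, c) }
func (b *stmtBuf) advance()   { b.curPos++ }

// ltrim drops every already-executed command by copying the tail into
// a fresh slice, so the old backing array (and the leaked commands)
// can be garbage collected.
func (b *stmtBuf) ltrim() {
	b.data = append([]cmd(nil), b.data[b.curPos:]...)
	b.curPos = 0
}

func main() {
	b := &stmtBuf{}
	// Each fetch iteration appends an Execute/Sync pair; without a
	// trim, the buffer grows by two commands per iteration forever.
	for i := 0; i < 1000; i++ {
		b.push(cmd{name: `Execute {"Portal": "C_1", "MaxRows": 1}`})
		b.push(cmd{name: "Sync"})
		b.advance()
		b.advance()
	}
	fmt.Println("untrimmed length:", len(b.data)) // prints: untrimmed length: 2000
	b.ltrim()
	fmt.Println("after ltrim:", len(b.data)) // prints: after ltrim: 0
}
```

The fix amounts to calling the trim after every command handled by the suspended-portal path, instead of only on return to the main loop.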
craig bot pushed a commit that referenced this issue on Jul 7, 2021
66919: sql: cache hashed password and auth info during authentication r=knz,RichardJCai a=rafiss

See individual commits. Fixes #58869

This improves authentication performance in multiregion clusters and improves availability when the system range leaseholder goes down. I ran the test from #66768 and observed that there was no period of authentication unavailability.

Previously, each authentication attempt required one read from each of system.users and system.role_options. In multiregion clusters, this adds quite a bit of excess latency. Also, if those tables' leaseholders were not available, then authentication to all nodes would be affected. This cache is meant to alleviate both problems. It is kept up to date by updating the table descriptors' versions whenever authentication-related information is changed with CREATE/ALTER/DROP ROLE.

Release note (performance improvement): The latency of authenticating a user has been improved by adding a cache for lookups of authentication-related information.

Release note (ops change): There is now a server.authentication_cache.enabled cluster setting, which defaults to true. When enabled, this cache stores authentication-related data and improves the latency of authentication attempts. Keeping the cache up to date adds overhead to the CREATE, ALTER, and DROP ROLE commands. To minimize the overhead, any bulk ROLE operations should be run inside a transaction. To make the cache more effective, regularly scheduled ROLE updates should be batched together rather than spread throughout the day.

Release note (security update): There is now a cache for per-user authentication-related information. The data in the cache is always kept up to date because the cache checks whether any change to the underlying authentication tables has been made since the last time it was updated. The cached data includes the user's hashed password, the NOLOGIN role option, and the VALID UNTIL role option.

67135: sql/pgwire: fix statement buffer memory leak when using suspended portals r=rafiss a=joesankey

Resolves: #66849 See also: #48859

Release note (bug fix): fix statement buffer memory leak when using suspended portals.

@asubiotto @andreimatei

Co-authored-by: Rafi Shamim <rafi@cockroachlabs.com>
Co-authored-by: joesankey <joesankey@gmail.com>
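The version-gated invalidation described for the authentication cache can be sketched as follows. This is a hypothetical Go simplification: `authCache`, `authInfo`, and the integer versions are illustrative stand-ins for the table-descriptor versions of system.users and system.role_options, not CockroachDB's actual API.

```go
package main

import "fmt"

// authInfo stands in for the cached per-user data (hashed password,
// NOLOGIN, VALID UNTIL).
type authInfo struct{ hashedPassword string }

// authCache caches auth lookups; entries are valid only while the
// versions of the underlying system tables are unchanged.
type authCache struct {
	usersVersion, roleOptionsVersion int
	entries                          map[string]authInfo
}

// get returns cached data for user, first checking the current table
// versions. On a version bump (e.g. after CREATE/ALTER/DROP ROLE) the
// cache is discarded and lookup re-reads the tables.
func (c *authCache) get(user string, usersV, roleOptsV int, lookup func(string) authInfo) authInfo {
	if usersV != c.usersVersion || roleOptsV != c.roleOptionsVersion {
		c.entries = map[string]authInfo{}
		c.usersVersion, c.roleOptionsVersion = usersV, roleOptsV
	}
	if ai, ok := c.entries[user]; ok {
		return ai // cache hit: no table read needed
	}
	ai := lookup(user)
	c.entries[user] = ai
	return ai
}

func main() {
	lookups := 0
	lookup := func(user string) authInfo {
		lookups++ // counts simulated reads of the system tables
		return authInfo{hashedPassword: "hash-of-" + user}
	}
	c := &authCache{entries: map[string]authInfo{}}
	c.get("rafi", 1, 1, lookup) // miss: reads the tables
	c.get("rafi", 1, 1, lookup) // hit: versions unchanged
	c.get("rafi", 2, 1, lookup) // version bump: re-reads the tables
	fmt.Println("table reads:", lookups) // prints: table reads: 2
}
```

This is why batching ROLE changes helps: each change bumps a version and invalidates the cache, so spreading changes across the day forces repeated re-reads.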
craig bot pushed a commit that referenced this issue on Jul 8, 2021
67135: sql/pgwire: fix statement buffer memory leak when using suspended portals r=rafiss a=joesankey

Resolves: #66849 See also: #48859

Release note (bug fix): fix statement buffer memory leak when using suspended portals.

@asubiotto @andreimatei

67193: opt: only prune synthesized check constraint columns for UPDATEs r=mgartner a=mgartner

Check constraint columns are synthesized columns that allow the execution engine to enforce check constraints. Previously, the optimizer would prune these columns for INSERT and UPSERT mutations if the check expression did not reference any mutating columns. Normally, these columns were never pruned, because inserts mutate all columns in the table. However, if the check expression did not reference any columns, as in `CHECK (false)`, the columns were pruned and the check constraint was not enforced. This allowed rows to be inserted into the table when they should instead have violated the check constraint. This commit fixes the issue by ensuring that synthesized check constraint columns are only pruned for UPDATE mutations.

Fixes #67100

Release note (bug fix): A bug has been fixed that allowed rows to be inserted into a table with a check constraint that always evaluated to false, such as `CHECK (false)`. This bug was present since version 21.1.0.

67327: cluster-ui: fix statement name label on column selector r=maryliag a=maryliag

The label for Statement Time wasn't properly showing on the column selector on the Statements page.

Release note (bug fix): Display the Statement Time label on the column selector on the Statements page.

Co-authored-by: joesankey <joesankey@gmail.com>
Co-authored-by: Marcus Gartner <marcus@cockroachlabs.com>
Co-authored-by: Marylia Gutierrez <marylia@cockroachlabs.com>
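The corrected pruning rule in 67193 amounts to a small predicate. The sketch below is a hypothetical simplification in Go: `mutationKind` and `canPruneCheckColumn` are illustrative names, not the optimizer's real API.

```go
package main

import "fmt"

type mutationKind int

const (
	insertMutation mutationKind = iota
	upsertMutation
	updateMutation
)

// canPruneCheckColumn sketches the corrected rule: a synthesized
// check-constraint column may be dropped from the plan only for an
// UPDATE whose check expression references no updated column. For
// INSERT and UPSERT the check must always be evaluated; otherwise a
// constraint like CHECK (false), which references no columns at all,
// would be silently skipped.
func canPruneCheckColumn(kind mutationKind, checkRefsUpdatedCols bool) bool {
	return kind == updateMutation && !checkRefsUpdatedCols
}

func main() {
	// INSERT with CHECK (false): references no columns, but the
	// constraint must still be enforced, so the column is kept.
	fmt.Println(canPruneCheckColumn(insertMutation, false)) // prints: false
	// UPDATE that touches no column in the check expression: the
	// constraint cannot change truth value, so pruning is safe.
	fmt.Println(canPruneCheckColumn(updateMutation, false)) // prints: true
}
```

The buggy behavior corresponds to applying the `checkRefsUpdatedCols` test alone, without the mutation-kind guard.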
joesankey added a commit to joesankey/cockroach that referenced this issue on Jul 8, 2021
…tals
joesankey added a commit to joesankey/cockroach that referenced this issue on Jul 8, 2021
…tals
joesankey added a commit to joesankey/cockroach that referenced this issue on Jul 8, 2021
…tals
joesankey added a commit to joesankey/cockroach that referenced this issue on Jul 9, 2021
…tals
Describe the problem

Executing a long-running query, such as a core changefeed, results in a memory leak which crashes the node it is running on.

Notes

The leak appears to be caused by the connection statement buffer accumulating statements that are not cleared out after being processed; see the flame graph screenshot. A similar bug was fixed by this commit by clearing out the statement buffer. That fix appears not to work for long-running queries, possibly because of the limitedCommandResult side state machine which handles the extended Postgres protocol. Each `FETCH` command of the core changefeed query is likely accumulating in the statement buffer.

To Reproduce
Expected behavior
Executing a long-running query does not leak memory, and the query can continue indefinitely.
Additional data / screenshots
Cluster memory usage. The core changefeed query is running on node 3 (red line)
Node 3 memory usage breakdown
Flame graph showing statement buffer growing
Environment: `cockroach sql` / JDBC
Additional context