
sql stats: reduce flush count check #109619

Closed
j82w opened this issue Aug 28, 2023 · 0 comments · Fixed by #109696 or #110173
Labels
A-cluster-observability Related to cluster observability C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception)

Comments

@j82w
Contributor

j82w commented Aug 28, 2023

Currently the check that validates the stats have not exceeded the max size runs on every flush, which happens on every node every 10 minutes by default. The check is unnecessary most of the time and can cause errors, since it counts rows across the entire table. The check should run at most once an hour, which will reduce the overhead and locking.

Jira issue: CRDB-31021

@j82w j82w added C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception) T-cluster-observability A-cluster-observability Related to cluster observability labels Aug 28, 2023
@blathers-crl blathers-crl bot added this to Triage in Cluster Observability Aug 28, 2023
craig bot pushed a commit that referenced this issue Aug 29, 2023
109696: sql: optimize persistedsqlstats flush size check r=j82w a=j82w

Problem:
The `persistedsqlstats` size check, which makes sure the table is not more than 1.5x the max size, runs on every flush, which happens on every node every 10 minutes by default. Because the check spans the entire table, it can cause serialization issues. It is unnecessary most of the time, because it should only fail if the compaction job is failing.

Solution:
1. Reduce the check interval so the check runs only once an hour by default, and make it configurable.
2. The system table is split into 8 shards. Instead of counting the entire table, limit the check to a single shard. This reduces the scope of the check and the chance of serialization issues.

Fixes: #109619

Release note (sql change): The persistedsqlstats table max size check is now done once an hour instead of every 10 minutes. This reduces the risk of serialization errors on the statistics tables.

Co-authored-by: j82w <jwilley@cockroachlabs.com>
@craig craig bot closed this as completed in 213a64b Aug 30, 2023
Cluster Observability automation moved this from Triage to Done Aug 30, 2023
blathers-crl bot pushed a commit that referenced this issue Aug 30, 2023
@j82w j82w reopened this Sep 1, 2023
Cluster Observability automation moved this from Done to Triage Sep 1, 2023
@maryliag maryliag moved this from Triage to Active Issues in Cluster Observability Sep 11, 2023
craig bot pushed a commit that referenced this issue Sep 12, 2023
110150: cli: fix debug pebble commands on encrypted stores r=RaduBerinde a=RaduBerinde

Currently the debug pebble commands only work correctly on an
encrypted store if the encrypted store's path is `cockroach-data` or
the store directory is passed using `--store` (in addition to being
passed to the pebble subcommand itself). What's worse, knowledge of
this subtle fact was lost among team members.

The root cause is that we are trying to resolve encryption options
using the server config. The difficulty is that there are many
different commands and no unified way to obtain the store directory
of interest.

To fix this, we create `autoDecryptFS`. This is a `vfs.FS`
implementation which is able to automatically detect encrypted paths
and use the correct unencrypted FS. It does this by having a list of
known encrypted stores (the ones in the `--enterprise-encryption`
flag), and looking for any of these paths as ancestors of any path in
an operation. This new implementation replaces `swappableFS` and
`absoluteFS`.
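The ancestor-path lookup described above might look something like the sketch below; `encryptedStoreFor` is a hypothetical helper for illustration, not the actual `autoDecryptFS` code:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// encryptedStoreFor returns the registered encrypted store directory
// that is an ancestor of (or equal to) path, or "" when the path does
// not belong to any encrypted store. An autoDecryptFS-style wrapper
// would use a lookup like this to decide which underlying FS to
// delegate each operation to.
func encryptedStoreFor(encryptedStores []string, path string) string {
	path = filepath.Clean(path)
	for _, store := range encryptedStores {
		store = filepath.Clean(store)
		// Match the store itself, or any path strictly beneath it.
		if path == store || strings.HasPrefix(path, store+string(filepath.Separator)) {
			return store
		}
	}
	return ""
}

func main() {
	// The store list would come from the --enterprise-encryption flag.
	stores := []string{"/data/encrypted"}
	fmt.Println(encryptedStoreFor(stores, "/data/encrypted/000001.sst"))
	fmt.Println(encryptedStoreFor(stores, "/data/plain/000001.sst"))
}
```

Note the separator check: a naive `strings.HasPrefix(path, store)` would wrongly match sibling directories such as `/data/encrypted-other`.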

We also improve the error message when we try to open an encrypted
store without setting up the key correctly.

Fixes: #110121

Release note (bug fix): `cockroach debug pebble` commands now work
correctly with encrypted stores which don't use the default
`cockroach-data` path without having to also pass `--store`.

110173: sql: optimize persistedsqlstats flush size check r=j82w a=j82w

Problem:
The `persistedsqlstats` size check, which makes sure the table is not more than 1.5x the max size, runs on every flush, which happens on every node every 10 minutes by default. Because the check spans the entire table, it can cause serialization issues. It is unnecessary most of the time, because it should only fail if the compaction job is failing.

Solution:
1. Reduce the check interval so the check runs only once an hour by default, and make it configurable.
2. The system table is split into 8 shards. Instead of counting the entire table, limit the check to a single shard. This reduces the scope of the check and the chance of serialization issues.

This was previously reverted because of a flaky test, since the size check is only done on a single shard. The tests are updated to increase the limit and the number of statements to make sure every shard has data.

Fixes: #109619

Release note (sql change): The persistedsqlstats table max size check is now done once an hour instead of every 10 minutes. This reduces the risk of serialization errors on the statistics tables.

110264: c2c: add region constraints replication test r=msbutler a=msbutler

This patch adds a test that ensures that a replicating tenant's regional
constraints are obeyed in the destination cluster. This test serves as an
end-to-end test of the span config replication work tracked in #106823.

This patch also sets the following source system tenant cluster settings in
the c2c e2e framework: `kv.rangefeed.closed_timestamp_refresh_interval: 200ms`
and `kv.closed_timestamp.side_transport_interval: 50ms`. CDC e2e tests also
set these cluster settings.

Informs #109059

Release note: None

110334: roachtest: ensure c2c/shutdown tests start destination tenant with online node r=stevendanna a=msbutler

An earlier patch #110033 introduced a change that starts the destination tenant from any destination node, but did not consider whether that node had been shut down. If the driver attempts to connect to the shut-down node, the roachtest fails. This patch ensures that the tenant is started on a node that will be online.

Fixes #110317

Release note: None

110364: upgrade: remove buggy TTL repair r=rafiss a=ecwall

Fixes #110363

The TTL descriptor repair in FirstUpgradeFromReleasePrecondition incorrectly
removes TTL fields from table descriptors after incorrectly comparing the
table descriptor's TTL job schedule ID to a set of job IDs.

This change removes the repair until tests are properly added.

Release note (bug fix): Remove buggy TTL descriptor repair. Previously,
upgrading from 22.2.X to 23.1.9 incorrectly removed TTL storage params from
tables (visible via `SHOW CREATE TABLE <ttl-table>;`) while attempting to
repair table descriptors. This resulted in the node that attempts to run the
TTL job crashing due to a panic caused by the missing TTL storage params.
Clusters currently on 22.2.X should NOT be upgraded to 23.1.9 and should
be upgraded to 23.1.10 or later directly.

110431: workflows: stale.yml: update action version r=RaduBerinde a=RaduBerinde

The stale bot closes issues as "completed" instead of "not planned". More recent versions have added a configuration setting for this, and it defaults to "not planned". This commit updates the action to the latest version.

Epic: none
Release note: None
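For illustration, recent versions of `actions/stale` expose the close reason through the `close-issue-reason` input. The version tag and explicit value below are assumptions for the sketch, not the exact contents of the PR:

```yaml
# Hypothetical excerpt of .github/workflows/stale.yml.
- uses: actions/stale@v8
  with:
    # Recent versions default to "not_planned"; shown explicitly here
    # for clarity.
    close-issue-reason: not_planned
```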

110451: engineccl: skip BenchmarkTimeBoundIterate r=RaduBerinde a=jbowens

This benchmark's assertions have recently become flaky.

Epic: none
Informs: #110299
Release note: none

Co-authored-by: Radu Berinde <radu@cockroachlabs.com>
Co-authored-by: j82w <jwilley@cockroachlabs.com>
Co-authored-by: Michael Butler <butler@cockroachlabs.com>
Co-authored-by: Evan Wall <wall@cockroachlabs.com>
Co-authored-by: RaduBerinde <radu@cockroachlabs.com>
Co-authored-by: Jackson Owens <jackson@cockroachlabs.com>
@craig craig bot closed this as completed in 23f829b Sep 12, 2023
Cluster Observability automation moved this from Active Issues to Done Sep 12, 2023