
opt: update scan cost model with column size from table statistics #72332

Closed
rharding6373 opened this issue Nov 2, 2021 · 1 comment · Fixed by #77019
Labels: C-enhancement (Solution expected to add code/behavior + preserve backward-compat; pg compat issues are exception), T-sql-queries (SQL Queries Team)

Comments


rharding6373 commented Nov 2, 2021

This is follow-up work to #55697 to use column size information in the optimizer cost model for scans.

Epic: CRDB-10034

Jira issue: CRDB-13893

@rharding6373 rharding6373 added C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception) T-sql-queries SQL Queries Team labels Nov 2, 2021
rharding6373 (author) commented:

Before closing out this issue we should rerun TPC-C and TPC-H stats and stats quality tests. It's possible (but unlikely) that query plans will change once the cost model is updated.

rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 11, 2022
We recently added a new table stat, `avgSize`, that is the average size,
in bytes, of a table column. This PR is the first in a series of commits
to use the new stat for more accurate cost modeling in the optimizer.

This commit applies `avgSize` to `statisticsBuilder` in the following
ways:
* It loads `avgSize` when it fetches table statistics.
* It modifies the `avgSize` for some operators (e.g., union) which
  may affect the average size of a column.
* If the `avgSize` is not found for a column (e.g., if it is not in the
  table statistics or the column is synthesized), a default value of 4
  bytes is applied unless all the rows are known to be `NULL`.

This change also prints out `avgSize` as part of `EXPLAIN` if stats are
requested. It does not affect how queries are costed.
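
The fallback rule above can be sketched as a small helper (hypothetical names; this is a sketch only, not the actual `statisticsBuilder` code — in particular, the value assumed for an all-`NULL` column is a guess, since the commit message does not specify one):

```python
# Sketch of the avgSize lookup fallback described above. Hypothetical,
# not the real CockroachDB implementation.

DEFAULT_AVG_SIZE = 4  # bytes, applied when no statistic is available

def avg_size_for_column(table_stats, col, all_rows_null=False):
    """Average size in bytes assumed for `col`.

    table_stats: dict mapping column name -> avgSize from table statistics.
    Missing columns (not in the stats, or synthesized) fall back to
    4 bytes, unless all rows are known to be NULL.
    """
    if all_rows_null:
        return 0  # assumption: a NULL-only column contributes no bytes
    return table_stats.get(col, DEFAULT_AVG_SIZE)
```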

Informs: cockroachdb#72332

Release note: None
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 13, 2022
In this PR we add a session setting to gate usage of an upcoming feature
in which the optimizer will use `AvgSize`, the average column size, from
table stats to cost scans and index joins. When the setting is enabled,
the optimizer reverts to the old default method of costing scans, in
which each column is treated as having the same size. By default, the
setting is off.

Informs: cockroachdb#72332

Release note: None
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 13, 2022
Before this change, the optimizer costed scans per row regardless of the
size of the columns comprising the row. This failed to account for the
time needed to read or transport a large number of bytes over the
network, and could lead to undesirable plans when there are multiple
options for scans or joins that read directly from tables.

For example, let's say we have the following table with secondary
indexes, and the following query.

```
CREATE TABLE t (
  k INT PRIMARY KEY,
  x INT,
  y INT,
  z INT,
  j JSONB,
  INDEX xj (x, j),
  INDEX xy (x, y));

SELECT k, x, z FROM t WHERE x > 3;
```

Before this change, the optimizer may choose to scan index xj and
perform an index join, even if the average column size of j (and
therefore the number of bytes scanned reading index xj) is much
greater than the average column size of y (and therefore the number
of bytes scanned reading index xy).

This change utilizes the `avg_size` table statistic to cost scans and
relevant joins relative to the average column size of the columns
being scanned. If the table does not have an average size statistic
available for a column, the default value of 4 bytes results in the same
cost as before this change.
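
As a toy illustration of the property described above (all numbers invented; the real cost model has many more terms), a size-aware per-row scan cost might look like this, normalized so that all-default-size columns cost the same as under the old model:

```python
# Toy sketch of size-aware scan costing, not CockroachDB's actual model.

DEFAULT_AVG_SIZE = 4  # bytes assumed per column when no avg_size stat exists

def scan_cost(row_count, col_avg_sizes, use_default_col_size=False):
    """Toy scan cost scaled by the average bytes read per row.

    col_avg_sizes: avg_size (bytes) of each column read by the scan.
    use_default_col_size: revert to the old model, treating every
    column as DEFAULT_AVG_SIZE bytes.
    """
    if use_default_col_size:
        bytes_per_row = DEFAULT_AVG_SIZE * len(col_avg_sizes)
    else:
        bytes_per_row = sum(col_avg_sizes)
    # Dividing by the default size keeps the cost unchanged whenever
    # every column is (or is assumed to be) 4 bytes.
    return row_count * bytes_per_row / DEFAULT_AVG_SIZE

# With an avg_size stat of ~100 bytes for the JSONB column j, scanning
# index xj costs far more than index xy, steering plans toward xy.
cost_xj = scan_cost(1000, [4, 100])  # columns x, j
cost_xy = scan_cost(1000, [4, 4])    # columns x, y
```

Passing `use_default_col_size=True` makes the two costs equal again, mirroring how the gating setting restores pre-change plans.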

Informs: cockroachdb#72332

Release note (sql change): Modifies query cost based on the `avg_size`
table statistic, which may change query plans. The new costing is gated
by the cluster setting
`sql.defaults.cost_scans_with_default_col_size.enabled`, and can be
disabled by setting it to true via `SET CLUSTER SETTING
sql.defaults.cost_scans_with_default_col_size.enabled = true`.
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 18, 2022
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 18, 2022
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 18, 2022
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 18, 2022
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 21, 2022
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 21, 2022
craig bot pushed a commit that referenced this issue Jan 25, 2022
72665: sql: remove invalid database privileges  r=rafiss a=RichardJCai

sql: remove invalid database privileges 

Release note (sql change): SELECT, INSERT, DELETE, and UPDATE can no
longer be granted or revoked on databases. Previously, these privileges
were converted to ALTER DEFAULT PRIVILEGES on GRANT and were revokable;
now they are no longer revokable either.

Resolves #68731

74251: opt: add avgSize stat to statisticsBuilder r=rharding6373 a=rharding6373

We recently added a new table stat, `avgSize`, that is the average size,
in bytes, of a table column. This PR is the first in a series of commits
to use the new stat for more accurate cost modeling in the optimizer.

This commit applies `avgSize` to `statisticsBuilder` in the following
ways:
* It loads `avgSize` when it fetches table statistics.
* It modifies the `avgSize` for some operators (e.g., union) which
  may affect the average size of a column.
* If the `avgSize` is not found for a column (e.g., if it is not in the
  table statistics or the column is synthesized), a default value of 4
is applied unless all the rows are known to be `NULL`.

This change also prints out `avgSize` as part of `EXPLAIN` if stats are
requested. It does not affect how queries are costed.

Informs: #72332

Release note: None

74831: sql, server: fix errors seen from combined statements endpoint r=Azhng,maryliag a=xinhaoz

## Commit 1 sql, server: use a view to join persisted and in-mem sql stats
Partially addresses: #71245

Previously, we created virtual tables to join in-memory and
persisted disk statement and transaction statistics. This
proved to be inefficient, as requests to the virtual
tables led to full scans of the underlying system tables.

This commit utilizes virtual views to combine in-memory and disk
stats. The view queries in-memory stats from the virtual tables
`crdb_internal.cluster_{statement,transaction}_statistics`
and combines them with the results from the system tables.
This allows us to push down query filters into the system
tables, leveraging their existing indexes.

Release note: None

## Commit 2 sql: decrease stats flush interval to every 10 mins 

Previously, we flushed in-memory sql stats collected by each
node on an hourly interval. We have found that this hourly
interval might be too conservative, and the size of the
returned cluster-wide stats after an hour can also be
quite large, sometimes exceeding the gRPC max message
size.

This commit lowers the flush interval to every 10 minutes.
Since we want to continue to aggregate stats on an hourly
interval, we introduce a new cluster setting
`sql.stats.aggregation.interval` to control the
aggregation interval separately from the flush frequency.

Release note (sql change): The default sql stats flush interval
is now 10 minutes. A new cluster setting
`sql.stats.aggregation.interval` controls the aggregation
interval of sql stats, with a default value of 1 hour.

## Commit 3 server: allow statements EP to optionally exclude stmt or txns stats

Closes: #71829

Previously, the /statements endpoint returned cluster-wide
in-memory stats, containing both statement and transaction stats.
In the past, we've observed the Statements endpoint response being
too large for gRPC. Because this endpoint is used by the virtual
tables that power our combined stats api,
cluster_{statement,transaction}_stats, we might continue to surpass
the gRPC message size in the new api. However, each virtual table
only uses roughly half the response size (either statement or txn
stats).

This commit allows the virtual tables to exclude statement or txn
stats from the Statements endpoint response by introducing new
request parameters to /statements. This reduces the response size
in the stats virtual tables.

Release note: None

75058: sql: native evaluation support for NotExpr r=yuzefovich a=RajivTS

The commit includes the following changes to provide native evaluation support for tree.NotExpr:
1. Defined new operators NotExprProjOp for projection and NotExprSelOp for selection when evaluating the result of a NotExpr
2. Defined NotNullProjOp for projection when the underlying type is non-bool and contains only Nulls.
3. Defined the Next method for both the projection and selection operators.
4. Added test cases for testing the functionality of NotExprProjOp, NotNullProjOp and NotExprSelOp operators.

Fixes: #70713

Release note (performance improvement): queries using `NOT expr` syntax can now be evaluated faster in some cases.

75076: schemachanger: columns are not always backfilled in transactions r=fqazi a=fqazi

Fixes: #75074

Previously, when multiple columns were added in a transaction,
the schema changer incorrectly determined if a backfill was
required based on the last column that was added. This was
inadequate because in a transaction multiple columns can
be added concurrently, where some may require a backfill,
and others may not. To address this, this patch checks
if any of the columns added need a backfill and uses that
to determine if a backfill is required.
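
The fix described above boils down to replacing a last-column check with an any-column check, roughly (an illustrative sketch, not the schema changer's actual code):

```python
# Illustrative sketch of the backfill-decision bug and fix. Each entry in
# the list says whether that added column requires a backfill.

def needs_backfill_buggy(columns_require_backfill):
    # Old behavior: decided by the last column added in the transaction.
    return columns_require_backfill[-1]

def needs_backfill_fixed(columns_require_backfill):
    # Fix: a backfill is required if any added column needs one.
    return any(columns_require_backfill)
```

For a transaction adding a column that needs a backfill followed by one that does not, the old logic skipped the backfill entirely; the fixed logic performs it.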

Release note (bug fix): Fixed a bug where, if multiple columns were
added to a table inside a transaction, none of the columns would be
backfilled if the last column did not require a backfill.

75465: ci: add `go_transition_test` support to `bazci` r=rail a=rickystewart

These targets have special bespoke output directories for `testlogs`, so
we can't find them in the standard location.

Also allow `bazci run`.

Closes #75184.

Release note: None

75504: pkg/sql: add `-linecomment` when building `roleoption` `stringer` file r=maryliag a=rickystewart

Release note: None

Co-authored-by: richardjcai <caioftherichard@gmail.com>
Co-authored-by: rharding6373 <rharding6373@users.noreply.github.com>
Co-authored-by: Xin Hao Zhang <xzhang@cockroachlabs.com>
Co-authored-by: RajivTS <rajshar.email@gmail.com>
Co-authored-by: Faizan Qazi <faizan@cockroachlabs.com>
Co-authored-by: Ricky Stewart <ricky@cockroachlabs.com>
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 25, 2022
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 25, 2022
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 26, 2022
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Jan 26, 2022
craig bot pushed a commit that referenced this issue Jan 27, 2022
74551: sql, opt: add AvgSize stat to scan row cost and session setting to enable/disable the old cost methodology r=rharding6373 a=rharding6373

sql: add session setting for scan costing methodology
    
In this PR we add a session setting to gate usage of an upcoming feature
in which the optimizer will use `AvgSize`, the average column size, from
table stats to cost scans and index joins. When enabled, the optimizer
will revert to the old default method of costing scans, where each
column is treated as the same size. By default, this setting will be
off.

Informs: #72332
    
Release note: None


opt: add AvgSize to scan row cost
    
Before this change, the optimizer cost scans per row regardless of the
size of the columns comprising the row. This fails to account for time
to read or transport a large number of bytes over the network. It can
also lead to undesirable plans when there are multiple options for scans
or joins that read directly from tables.
    
For example, let's say we have the following table with secondary
indexes, and the following query.
    
```
CREATE TABLE t (
  k INT PRIMARY KEY,
  x INT,
  y INT,
  z INT,
  j JSONB,
  INDEX xj (x, j),
  INDEX xy (x, y));

SELECT k, x, z FROM t WHERE x > 3;
```
    
Before this change, the optimizer may choose to scan index xj and
perform an index join, even if the average column size of j (and
therefore the number of bytes scanned reading index xj) is much
greater than the average column size of y (and therefore the number
of bytes scanned reading index xy).
    
This change utilizes the `avg_size` table statistic to cost scans and
relevant joins relative to the average column size of the columns
being scanned. If the table does not have an average size statistic
available for a column, the default value of 4 bytes results in the same
cost as before this change.
    
Informs: #72332

Release note (sql change): Modifies query cost based on the `avg_size`
table statistic, which may change query plans. This is gated by the
session setting `cost_scans_with_default_col_size`, and can be disabled by
setting it to true via `SET cost_scans_with_default_col_size=true`.



Co-authored-by: rharding6373 <rharding6373@users.noreply.github.com>
rharding6373 added a commit to rharding6373/cockroach that referenced this issue Feb 25, 2022
This change rewrites the stats for tpcc and tpch to include the new
table statistic avg_size.

Fixes: cockroachdb#72332

Release note: None
craig bot pushed a commit that referenced this issue Feb 25, 2022
…77019 #77045 #77047 #77049

72925: sql, cli: support basic auto complete for sql keywords r=rafiss a=RichardJCai

sql: add SHOW COMPLETIONS AT offset FOR syntax 

Release note (sql change): Support
SHOW COMPLETIONS AT OFFSET <offset> FOR <stmt> syntax that
returns a set of SQL keywords that can complete the keyword at
<offset> in the given <stmt>.

If the offset is in the middle of a word, then it returns the
full word.
For example SHOW COMPLETIONS AT OFFSET 1 FOR "SELECT" returns select.

cli: support autocomplete 

Release note (cli change): CLI now auto completes on tab
by using `SHOW COMPLETIONS AT OFFSET`.

76539: cli: Enable profiles and other debug info for tenants r=rimadeodhar a=rimadeodhar

This PR updates debug.zip functionality to
collect goroutine stacks and profiles for all
active SQL instances for a tenant.
This PR also addresses a bug where the nodes.json
and status.json data was not getting populated
correctly due to the switch to the `NodesList` API.
This bug has been addressed by using the `Nodes` API
when the debug zip command is run against a storage server.

Release note: None

76676: telemetry,sql: remove redaction from operational sql data r=abarganier a=dhartunian

Previously, when redaction was introduced into CRDB, all unidentifiable
strings were marked as redacted since that was the safer approach. We
expected to later return with a closer look and differentiate more
carefully between what should be redacted and what shouldn't.

SQL names have been identified as operationally sensitive data that should
not be redacted, since they provide very useful debugging information and,
while user-controlled, do not typically contain user data, since that
would be stored in a Datum. This commit marks names as safe from
redaction for telemetry logging in cases where the
`sql.telemetry.query_sampling.enabled` cluster setting is enabled.

Additionally, some log tags such as client IP addresses are not to be
considered sensitive and are critical to debugging operational issues.
They have also been marked as safe.

In order to help with documenting these cases, a helper
`SafeOperational()` has been added to the `log` package. This helps us
mark strings as safe while documenting *why* we're doing so.

Resolves #76595

Release note (security update, ops change): When the
`sql.telemetry.query_sampling.enabled` cluster setting is enabled, SQL
names and client IPs are no longer redacted in telemetry logs.

76754: physicalplan: add support for multi-stage execution of corr, covar_samp, sqrdiff, and regr_count aggregate functions. r=yuzefovich a=mneverov

Fixes: #58347.

Release note (performance improvement): corr, covar_samp, sqrdiff, and
regr_count aggregate functions are now evaluated more efficiently in a
distributed setting

76908: roachtest: update 22.1 version map to v21.2.6 r=bananabrick a=bananabrick

Release note: None

76948: spanconfigreconciler{ccl}: apply system span config diffs to the store r=arulajmani a=adityamaru

This change teaches the reconciler about system span configs. Concretely,
we make the following changes:

- A full reconciliation when checking for existing span configurations now
asks for SpanConfigs corresponding to the SystemTargets relevant to the tenant.

For the host tenant this includes the SystemTarget for the `entire-keyspace` as
well as the SystemTarget for span configs installed by the host tenant on its
tenant keyspace, and on other secondary tenant keyspaces.

For secondary tenants this only includes the SystemTarget for span configs installed
by it on its own tenant keyspace.

- During incremental reconciliation, before applying our updates to the Store,
we now also check for "missing protected timestamp system targets". These correspond
to protected timestamp records that target a `Cluster` or a `Tenant` but no longer
exist in the system.protected_ts_records table as they have been released by the client.
For every such unique missing system target we apply a spanconfig.Deletion to the Store.

In order to make the above possible, this change moves the ptsStateReader from the
`spanconfigsqltranslator` package, to the top level `spanconfig` package.

Informs: #73727

Release note: None

76990: sql, geo: Fix upper case geohash parsing and allow NULL arguments r=otan a=RichardJCai

Release note (sql change): ST_Box2DFromGeoHash now accepts NULL arguments,
the precision being NULL is the same as no precision being passed in at all.

Upper case characters are now parsed as lower case characters for geohash;
this matches PostGIS behavior.

Resolves #75537

77006: changefeedccl: Fix data race in a test. r=miretskiy a=miretskiy

Fix data race in TestChangefeedSendError.

Release Notes: None

77014: log: Set content type header for http sink r=dhartunian a=rimadeodhar

The content type header for the output of HTTP log sink
is always set to text/plain irrespective of the log format.
If the log format is JSON, we should set the content
type to be application/json.

Release note (bug fix): The content type header for the
HTTP log sink is set to application/json if the format of
the log output is JSON.

77019: sql: update tpcc and tpch stats r=rharding6373 a=rharding6373

This change rewrites the stats for tpcc and tpch to include the new
table statistic avg_size.

Fixes: #72332

Release note: None

77045: bazel: bump `rules_go` to pick up cockroachdb/rules_go#4 r=irfansharif a=rickystewart

Closes #77037.

Release note: None

77047: dev: make sure we inherit `stdout` and `stderr` when appropriate r=postamar,irfansharif a=rickystewart

Otherwise `bazel build`/`test` output will be ugly (won't be colored/
use `ncurses`/etc.)

Release note: None

77049: sql: skip flaky schema_changer/drop_database_cascade test r=postamar a=RichardJCai

Release note: None

Co-authored-by: richardjcai <caioftherichard@gmail.com>
Co-authored-by: rimadeodhar <rima@cockroachlabs.com>
Co-authored-by: David Hartunian <davidh@cockroachlabs.com>
Co-authored-by: Max Neverov <neverov.max@gmail.com>
Co-authored-by: Arjun Nair <nair@cockroachlabs.com>
Co-authored-by: Aditya Maru <adityamaru@gmail.com>
Co-authored-by: Yevgeniy Miretskiy <yevgeniy@cockroachlabs.com>
Co-authored-by: rharding6373 <harding@cockroachlabs.com>
Co-authored-by: Ricky Stewart <ricky@cockroachlabs.com>
craig bot closed this as completed in 0048936 (#77019) on Feb 25, 2022
RajivTS pushed a commit to RajivTS/cockroach that referenced this issue Mar 6, 2022