
what exactly does disallow_full_table_scans do? / is it working correctly? #70795

Closed
davepacheco opened this issue Sep 27, 2021 · 7 comments · Fixed by #71317
Labels
A-sql-optimizer SQL logical planning and optimizations. C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior. E-quick-win Likely to be a quick win for someone experienced. good first issue O-community Originated from the community T-sql-queries SQL Queries Team X-blathers-triaged blathers was able to find an owner

Comments

davepacheco commented Sep 27, 2021

I apologize if this is the wrong venue for this. I can't tell if this behavior is buggy or just not what I expected.

Describe the problem

The disallow_full_table_scans option seems so close to what I'm looking for, but not quite right.

My team is building an application atop CockroachDB with the intent of scaling horizontally. Naturally, we want to avoid queries that require full scans on tables that might grow arbitrarily large, and we're trying to figure out the best way to avoid accidentally introducing such a query. Ideally, we'd like to find out early during development that a query is problematic, not later when we go to do large-scale testing. I was excited to find disallow_full_table_scans because I thought it might help us identify queries that require a table scan. But it doesn't seem to work quite the way I'd expect, in two ways.

First, it seems to allow some full scans? Take this query:

root@127.0.0.1:32221/omicron> set disallow_full_table_scans=on;
SET

Time: 52ms total (execution 0ms / network 51ms)

root@127.0.0.1:32221/omicron> EXPLAIN SELECT * FROM
  vpcsubnet,
  vpc,
  project
WHERE
  project.name = 'test1' AND
  project.time_deleted IS NULL AND
  project.id = vpc.project_id AND
  vpc.name = 'vpc1' AND
  vpc.time_deleted IS NULL AND
  vpc.id = vpcsubnet.vpc_id AND
  vpcsubnet.time_deleted IS NULL
ORDER BY
  vpcsubnet.name ASC
LIMIT
  100
;
              tree              |        field        |                 description
--------------------------------+---------------------+----------------------------------------------
                                | distribution        | full
                                | vectorized          | false
  limit                         |                     |
   │                            | count               | 100
   └── sort                     |                     |
        │                       | order               | +name
        └── hash join           |                     |
             │                  | equality            | (vpc_id) = (id)
             │                  | right cols are key  |
             ├── filter         |                     |
             │    │             | filter              | time_deleted IS NULL
             │    └── scan      |                     |
             │                  | estimated row count | 1
             │                  | table               | vpcsubnet@primary
             │                  | spans               | FULL SCAN
             └── hash join      |                     |
                  │             | equality            | (project_id) = (id)
                  │             | right cols are key  |
                  ├── filter    |                     |
                  │    │        | filter              | (name = 'vpc1') AND (time_deleted IS NULL)
                  │    └── scan |                     |
                  │             | estimated row count | 1
                  │             | table               | vpc@primary
                  │             | spans               | FULL SCAN
                  └── filter    |                     |
                       │        | filter              | (name = 'test1') AND (time_deleted IS NULL)
                       └── scan |                     |
                                | estimated row count | 1
                                | table               | project@primary
                                | spans               | FULL SCAN
(30 rows)

Time: 54ms total (execution 3ms / network 51ms)

There are three FULL SCAN nodes there. Why is that allowed with disallow_full_table_scans=on?

Note that when I ran this, all three of these tables were empty:

root@127.0.0.1:32221/omicron> select count(*) from vpcsubnet;
ERROR: query `select count(*) from vpcsubnet` contains a full table/index scan which is explicitly disallowed
SQLSTATE: P0003
HINT: try overriding the `disallow_full_table_scans` cluster/session setting
root@127.0.0.1:32221/omicron> set disallow_full_table_scans=off;
SET

Time: 53ms total (execution 0ms / network 53ms)

root@127.0.0.1:32221/omicron> select count(*) from vpcsubnet;
  count
---------
      0
(1 row)

Time: 53ms total (execution 0ms / network 53ms)

root@127.0.0.1:32221/omicron> select count(*) from vpc;
  count
---------
      0
(1 row)

Time: 52ms total (execution 1ms / network 52ms)

root@127.0.0.1:32221/omicron> select count(*) from project;
  count
---------
      0
(1 row)

Time: 53ms total (execution 1ms / network 52ms)

If you want the schema to reproduce this:

root@127.0.0.1:32221/omicron> SHOW CREATE TABLE vpc;
  table_name |                                               create_statement
-------------+----------------------------------------------------------------------------------------------------------------
  vpc        | CREATE TABLE public.vpc (
             |     id UUID NOT NULL,
             |     name STRING(63) NOT NULL,
             |     description STRING(512) NOT NULL,
             |     time_created TIMESTAMPTZ NOT NULL,
             |     time_modified TIMESTAMPTZ NOT NULL,
             |     time_deleted TIMESTAMPTZ NULL,
             |     project_id UUID NOT NULL,
             |     dns_name STRING(63) NOT NULL,
             |     CONSTRAINT "primary" PRIMARY KEY (id ASC),
             |     UNIQUE INDEX vpc_project_id_name_key (project_id ASC, name ASC) WHERE time_deleted IS NULL,
             |     FAMILY "primary" (id, name, description, time_created, time_modified, time_deleted, project_id, dns_name)
             | )
(1 row)

Time: 56ms total (execution 5ms / network 51ms)

root@127.0.0.1:32221/omicron> SHOW CREATE TABLE vpcsubnet;
  table_name |                                                    create_statement
-------------+--------------------------------------------------------------------------------------------------------------------------
  vpcsubnet  | CREATE TABLE public.vpcsubnet (
             |     id UUID NOT NULL,
             |     name STRING(63) NOT NULL,
             |     description STRING(512) NOT NULL,
             |     time_created TIMESTAMPTZ NOT NULL,
             |     time_modified TIMESTAMPTZ NOT NULL,
             |     time_deleted TIMESTAMPTZ NULL,
             |     vpc_id UUID NOT NULL,
             |     ipv4_block INET NULL,
             |     ipv6_block INET NULL,
             |     CONSTRAINT "primary" PRIMARY KEY (id ASC),
             |     UNIQUE INDEX vpcsubnet_vpc_id_name_key (vpc_id ASC, name ASC) WHERE time_deleted IS NULL,
             |     FAMILY "primary" (id, name, description, time_created, time_modified, time_deleted, vpc_id, ipv4_block, ipv6_block)
             | )
(1 row)

Time: 56ms total (execution 5ms / network 51ms)

root@127.0.0.1:32221/omicron> SHOW CREATE TABLE project;
  table_name |                                    create_statement
-------------+------------------------------------------------------------------------------------------
  project    | CREATE TABLE public.project (
             |     id UUID NOT NULL,
             |     name STRING(63) NOT NULL,
             |     description STRING(512) NOT NULL,
             |     time_created TIMESTAMPTZ NOT NULL,
             |     time_modified TIMESTAMPTZ NOT NULL,
             |     time_deleted TIMESTAMPTZ NULL,
             |     CONSTRAINT "primary" PRIMARY KEY (id ASC),
             |     UNIQUE INDEX project_name_key (name ASC) WHERE time_deleted IS NULL,
             |     FAMILY "primary" (id, name, description, time_created, time_modified, time_deleted)
             | )
(1 row)

Time: 56ms total (execution 5ms / network 51ms)

I suspected that maybe this was allowed because the query planner chose to use a full scan even when it wasn't needed (because it knows the table is tiny -- empty, really). However, I ran into a different case where the query planner chose to do a table scan when an index was available, and that query was disallowed by disallow_full_table_scans. This was the second thing I found surprising.

This time, I'm using this table:

root@127.0.0.1:32221/omicron> SHOW CREATE TABLE saga;
  table_name |                                                          create_statement
-------------+--------------------------------------------------------------------------------------------------------------------------------------
  saga       | CREATE TABLE public.saga (
             |     id UUID NOT NULL,
             |     creator UUID NOT NULL,
             |     template_name STRING(127) NOT NULL,
             |     time_created TIMESTAMPTZ NOT NULL,
             |     saga_params JSONB NOT NULL,
             |     saga_state STRING(31) NOT NULL,
             |     current_sec UUID NULL,
             |     adopt_generation INT8 NOT NULL,
             |     adopt_time TIMESTAMPTZ NOT NULL,
             |     CONSTRAINT "primary" PRIMARY KEY (id ASC),
             |     UNIQUE INDEX saga_current_sec_id_key (current_sec ASC, id ASC) WHERE saga_state != 'done':::STRING,
             |     FAMILY "primary" (id, creator, template_name, time_created, saga_params, saga_state, current_sec, adopt_generation, adopt_time)
             | )
(1 row)

Time: 60ms total (execution 8ms / network 52ms)

Again, starting from an empty table:

root@127.0.0.1:32221/omicron> select count(*) from saga;
  count
---------
      0
(1 row)

Time: 51ms total (execution 1ms / network 50ms)

I was surprised that this query didn't work with disallow_full_table_scans:

root@127.0.0.1:32221/omicron> SELECT * from saga WHERE current_sec = '971E7E1D-3DBB-48A2-9692-646B825A8560'
AND saga_state !=
'done' ORDER BY id ASC;
ERROR: query `SELECT * from saga WHERE current_sec = '971E7E1D-3DBB-48A2-9692-646B825A8560' AND saga_state !=
'done' ORDER BY id ASC` contains a full table/index scan which is explicitly disallowed
SQLSTATE: P0003
HINT: try overriding the `disallow_full_table_scans` cluster/session setting

It's not surprising that the query planner would choose to do a table scan here, since the table is empty, and sure enough:

root@127.0.0.1:32221/omicron> EXPLAIN SELECT * from saga WHERE current_sec = '971E7E1D-3DBB-48A2-9692-646B825A8560' AND saga_s
tate != 'done' ORDER BY id ASC;
     tree    |        field        |                                     description
-------------+---------------------+--------------------------------------------------------------------------------------
             | distribution        | local
             | vectorized          | false
  index join |                     |
   │         | table               | saga@primary
   └── scan  |                     |
             | estimated row count | 0
             | table               | saga@saga_current_sec_id_key (partial index)
             | spans               | [/'971e7e1d-3dbb-48a2-9692-646b825a8560' - /'971e7e1d-3dbb-48a2-9692-646b825a8560']
(8 rows)

Time: 53ms total (execution 1ms / network 53ms)

So I inserted 500-1000 records using:

seq 0 500 | while read i; do cockroach sql --url postgresql://root@127.0.0.1:32221/omicron?sslmode=disable -e "INSERT INTO Saga (id, creator, template_name, time_created, saga_params, saga_state, adopt_generation, adopt_time) VALUES ('$(uuidgen)', '$(uuidgen)', 'dummy-template', NOW(), '{}', 'bogus', 0, NOW())"; done

and sure enough I get a new plan that uses the index and I can run it even with disallow_full_table_scans:

root@127.0.0.1:32221/omicron> EXPLAIN SELECT * from saga WHERE current_sec = '971E7E1D-3DBB-48A2-9692-646B825A8560' AND saga_s
tate != 'done' ORDER BY id ASC;
     tree    |        field        |                                     description
-------------+---------------------+--------------------------------------------------------------------------------------
             | distribution        | local
             | vectorized          | false
  index join |                     |
   │         | table               | saga@primary
   └── scan  |                     |
             | estimated row count | 0
             | table               | saga@saga_current_sec_id_key (partial index)
             | spans               | [/'971e7e1d-3dbb-48a2-9692-646b825a8560' - /'971e7e1d-3dbb-48a2-9692-646b825a8560']
(8 rows)

Time: 53ms total (execution 1ms / network 53ms)

root@127.0.0.1:32221/omicron> show disallow_full_table_scans;
  disallow_full_table_scans
-----------------------------
  on
(1 row)

Time: 52ms total (execution 1ms / network 51ms)

root@127.0.0.1:32221/omicron> SELECT * from saga WHERE current_sec = '971E7E1D-3DBB-48A2-9692-646B825A8560' AND saga_state !=
'done' ORDER BY id ASC;
  id | creator | template_name | time_created | saga_params | saga_state | current_sec | adopt_generation | adopt_time
-----+---------+---------------+--------------+-------------+------------+-------------+------------------+-------------
(0 rows)

Time: 51ms total (execution 1ms / network 50ms)

More on this below, but the fact that this behavior depends on which plan the query planner chooses makes it a lot less useful for what we're trying to do.

To Reproduce

Set up a CockroachDB cluster with default configuration.

Use the above CREATE TABLE statements to set up the schema, then run the above EXPLAIN and observe the query plan. (If you need the example where a query was disallowed, even when an index was available, let me know.) Observe it doing a full scan. You can do this again with the second example and observe it disallow the query.

Expected behavior

In the first case, given that CockroachDB chose a full scan, I expected it to disallow the query. Instead, it executed the query.

Alternatively, if the expected behavior of disallow_full_table_scans is that it only disallows queries that can only be completed with a full scan, then I expected my second query to not produce an error.

Additional data / screenshots


See above.

Environment:

Server:

$ cockroach version
Build Tag:        v20.2.5
Build Time:       2021/03/17 21:00:51
Distribution:     OSS
Platform:         illumos amd64 (x86_64-pc-solaris2.11)
Go Version:       go1.16.2
C Compiler:       gcc 9.3.0
Build Commit ID:  162c5ac4968cf31c0ed54cd29aa8aeccd66247bb
Build Type:       release

Client: cockroach sql:

dap@zathras ~ $ cockroach version
Build Tag:        v20.2.5
Build Time:       2021/02/16 12:57:34
Distribution:     CCL
Platform:         darwin amd64 (x86_64-apple-darwin14)
Go Version:       go1.13.14
C Compiler:       4.2.1 Compatible Clang 3.8.0 (tags/RELEASE_380/final)
Build Commit ID:  162c5ac4968cf31c0ed54cd29aa8aeccd66247bb
Build Type:       release

Additional context


Between these two examples, I can't tell exactly what the behavior of disallow_full_table_scans is supposed to be. Is it supposed to change nothing about the query plan and simply deny queries that get planned with a full scan? If so, why did the first one work? Or is it supposed to encourage the query planner to avoid a table scan and fail only if it couldn't? If so, why did the second query get denied?

Our real goal is to avoid building our application using queries that will clearly require table scans as tables get large. So it's tempting to set disallow_full_table_scans=on in the application. That way, if we have good test coverage, we can have confidence that we're not going to run into scaling limitations due to table scans. (Obviously, this isn't a substitute for actual scale testing, but anything we can do to catch this early in development would be a big help.) I'm also excited by #67964, since that would allow us to set this only on tables that we know will be large.

But it doesn't seem like this will work. Because disallow_full_table_scans=on disallows queries for which the planner happens to have chosen a table scan, not just those that require a table scan, it produces false positives any time a table is small.
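One way we could surface the risky queries in our own tests, independent of the session setting, is to run EXPLAIN and look for FULL SCAN spans in the output. A minimal sketch of such a checker, assuming the text-format EXPLAIN layout shown above (the helper name and the parsing are illustrative, not a CockroachDB API):

```python
import re

def full_scan_tables(explain_text: str) -> list[str]:
    """Collect tables whose spans are FULL SCAN, from text-format
    EXPLAIN output in the tree | field | description layout."""
    tables = []
    current = None
    for line in explain_text.splitlines():
        m = re.search(r"\|\s*table\s*\|\s*(\S+)", line)
        if m:
            # Remember the most recent "table" row; its "spans" row follows.
            current = m.group(1)
        elif "FULL SCAN" in line and current is not None:
            tables.append(current)
            current = None
    return tables
```

A test harness could run EXPLAIN for each application query and fail if this returns anything for tables expected to grow large, though it would share the same limitation discussed here: the planner may only choose the full scan while the tables are small.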

Thanks for reading this far. If there's a better way to achieve what we're trying to do, please let me know!

@davepacheco davepacheco added the C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior. label Sep 27, 2021

blathers-crl bot commented Sep 27, 2021

Hello, I am Blathers. I am here to help you get the issue triaged.

Hoot - a bug! Though bugs are the bane of my existence, rest assured the wretched thing will get the best of care here.

I have CC'd a few people who may be able to assist you:

If we have not gotten back to your issue within a few business days, you can try the following:

  • Join our community slack channel and ask on #cockroachdb.
  • Try find someone from here if you know they worked closely on the area and CC them.

🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is otan.

@blathers-crl blathers-crl bot added O-community Originated from the community X-blathers-triaged blathers was able to find an owner labels Sep 27, 2021
@michae2 michae2 added A-sql-optimizer SQL logical planning and optimizations. T-sql-queries SQL Queries Team labels Sep 27, 2021

michae2 commented Sep 27, 2021

Thanks for opening the issue, @davepacheco. No need to apologize. The initial implementation of disallow_full_table_scans does have problems (for example) which make it difficult to use in production. In 20.2, disallow_full_table_scans is implemented roughly as follows:

  1. At the end of planning, if the chosen plan has any full scans on non-virtual tables, flag it as a full-scan plan.
  2. Before execution, if disallow_full_table_scans is on and the plan is flagged as a full-scan plan, return an error instead of executing.

This means that disallow_full_table_scans does not currently influence the optimizer's plan choice, which I think is what you were (rightfully) expecting. And it is definitely a problem.
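That two-step check can be modeled with a toy Python sketch (the names and structures here are illustrative, not actual CockroachDB internals):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Scan:
    table: str
    full_scan: bool
    virtual: bool = False  # virtual tables (e.g. crdb_internal) are exempt

@dataclass
class Plan:
    scans: List[Scan] = field(default_factory=list)
    flagged: bool = False

def plan_query(scans: List[Scan]) -> Plan:
    # Step 1: at the end of planning, flag the plan if it contains
    # any full scan over a non-virtual table. The setting itself is
    # never consulted here, so the plan choice is unaffected by it.
    plan = Plan(scans=scans)
    plan.flagged = any(s.full_scan and not s.virtual for s in scans)
    return plan

def execute(plan: Plan, disallow_full_table_scans: bool) -> str:
    # Step 2: before execution, refuse flagged plans when the setting is on.
    if disallow_full_table_scans and plan.flagged:
        raise RuntimeError("query contains a full table/index scan "
                           "which is explicitly disallowed")
    return "executed"
```

In this model, EXPLAIN corresponds to running only step 1, which is why a plan full of FULL SCAN nodes can still be printed even though executing the same query would fail.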

First, it seems to allow some full scans? [...]

There are three FULL SCAN nodes there. Why is that allowed with disallow_full_table_scans=on?

The optimizer produces this plan because (currently) it doesn't know about disallow_full_table_scans. The setting is only checked when we go to execute. So EXPLAIN will succeed but the query without EXPLAIN should fail. (Did you observe the query without EXPLAIN succeeding?)

I suspected that maybe this was allowed because the query planner chose to use a full scan even when it wasn't needed (because it knows the table is tiny -- empty, really). However, I ran into a different case where the query planner chose to do a table scan when an index was available, and that query was disallowed by disallow_full_table_scans. This was the second thing I found surprising.

Yes, sometimes the optimizer chooses to use a full scan for tiny tables, even if disallow_full_table_scans is on. This is a problem.

Between these two examples, I can't tell exactly what the behavior of disallow_full_table_scans is supposed to be. Is it supposed to change nothing about the query plan and simply deny queries that get planned with a full scan? If so, why did the first one work?

Yes, this is the current implementation. I think the first one worked because EXPLAIN (without ANALYZE) does not actually execute the query.

Or is it supposed to encourage the query planner to avoid a table scan and fail only if it couldn't? If so, why did the second query get denied?

This is what we probably should be doing, but are not.

But it doesn't seem like this will work. Because disallow_full_table_scans=on disallows queries for which the planner happens to have chosen a table scan, not just those that require a table scan, it produces false positives any time a table is small.

Thanks for reading this far. If there's a better way to achieve what we're trying to do, please let me know!

You've summed it up well.

I might have some good news: we recently changed the implementation a little, so that now:

  1. At the end of planning, if the chosen plan has any full scans on non-virtual tables expected to read more than large_full_scan_rows rows, flag it as a large-full-scan plan.
  2. Before execution, if disallow_full_table_scans is on and the plan is flagged as a large-full-scan plan, return an error instead of executing.

(This change will be released in 21.1.10 with large_full_scan_rows defaulting to 0 and in 21.2.0 with large_full_scan_rows defaulting to 1000.)

This means that disallow_full_table_scans should no longer affect scans of tiny tables. This isn't exactly a direct fix, but since the problems usually only occur with tiny tables, it might work as an indirect fix. I'm curious to hear what you think. We can keep this issue open to track the fact that the optimizer does not know about disallow_full_table_scans.
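To make the new semantics concrete, here is a toy sketch of the revised check (illustrative only, not the actual implementation): a plan is flagged only when some full scan's estimated row count exceeds large_full_scan_rows.

```python
from typing import Iterable, Tuple

def is_large_full_scan_plan(
    scans: Iterable[Tuple[bool, int]],  # (is_full_scan, estimated_rows) per scan
    large_full_scan_rows: int,
) -> bool:
    # Flag only full scans expected to read more than the threshold,
    # so small (e.g. empty) tables no longer trip the error.
    return any(full and est > large_full_scan_rows for full, est in scans)
```

With the 21.2 default of 1000, a full scan with an estimated row count of 1 (as in the plans above) would no longer be rejected, while a full scan estimated at tens of thousands of rows still would.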

@davepacheco
Author

Thanks for the quick and detailed reply!

So EXPLAIN will succeed but the query without EXPLAIN should fail. (Did you observe the query without EXPLAIN succeeding?)

Ah, you're right. I thought I had tried running this without EXPLAIN and seen it execute the query. But that's not what I have in my notes and I can't reproduce it, so I'm probably misremembering.

I think the first one worked because EXPLAIN (without ANALYZE) does not actually execute the query.

For what it's worth, EXPLAIN ANALYZE does produce output.

I might have some good news...
This means that disallow_full_table_scans should no longer affect scans of tiny tables. This isn't exactly a direct fix, but since the problems usually only occur with tiny tables, it might work as an indirect fix. I'm curious to hear what you think.

Cool! Yes, this sounds like an improvement in the ability to identify at runtime that a query is going to run badly, before actually running it.

I don't think it quite captures what I'm hoping to do. The case I'd love to be able to catch is where we've got a small database (because we're in development) and we accidentally introduce a query into the application without an associated index to make that query run quickly. With the updated implementation, we won't get a false positive, but we also won't get a true positive, because the tables are small. One improvement, though, is that we can load the tables up with at least large_full_scan_rows rows and know that if that works, we should be in good shape, right? (We considered doing that before this change, but we wouldn't know what value to pick -- any value we picked felt like testing an implementation detail of CockroachDB.)

We can keep this issue open to track the fact that the optimizer does not know about disallow_full_table_scans.

Sounds good. From my understanding, that'd be necessary to reliably identify queries that require table scans.

I know we can't totally solve the problem of catching all pathological queries. But any assistance from the database in identifying queries that will fall apart at scale (before you get to that scale) would be invaluable for application developers. I'm excited to see a bunch of work around this. (Out of curiosity, is there other tooling in this area, or plans for it? Another example I can think of is when we have an index, but we forget to sort the results in index order, so the database can't just walk it in sorted order but needs to assemble the results first, then sort.)

@davepacheco
Author

I wrote:

One improvement, though, is that we can load the tables up with at least large_full_scan_rows rows and know that if that works, we should be in good shape, right?

Thinking about that more: I think we still have the same problem. The number of rows we'd need to insert in each table is the greater of large_full_scan_rows and the number of rows that causes the query planner to use the index, and we still don't know the latter.

We could use this mechanism to fail quickly at runtime instead of taking arbitrarily long. That might be an improvement. But for our app, we may want that to be latency-based anyway.


michae2 commented Sep 28, 2021

One improvement, though, is that we can load the tables up with at least large_full_scan_rows rows and know that if that works, we should be in good shape, right?

Thinking about that more: I think we still have the same problem. The number of rows we'd need to insert in each table is the greater of large_full_scan_rows and the number of rows that causes the query planner to use the index, and we still don't know the latter.

It's true, this latter number is difficult to know ahead of time, and it might even change depending on the table or the version of CRDB. (Anecdotally it's probably always below 1000 rows.) The good news is that in 21.1 the optimizer is less likely to pick full scans for small tables, so that should reduce the chance of false positives.

Out of curiosity, is there other tooling in this area, or plans for it?

We did recently add some guardrails which:

  • log or prevent excessively large rows (sql.guardrails.max_row_size_log and sql.guardrails.max_row_size_err)
  • log or prevent transactions reading too many rows (transaction_rows_read_log and transaction_rows_read_err)
  • log or prevent transactions writing too many rows (transaction_rows_written_log and transaction_rows_written_err)

(These should also be released in 21.1.10 and 21.2.0.) There are plans for more.

@davepacheco
Author

Thanks!


michae2 commented Sep 28, 2021

Note: @rytaft pointed out that we could use a "hints"-like mechanism to teach the optimizer about this.
