…db#75058 cockroachdb#75076 cockroachdb#75465 cockroachdb#75504

72665: sql: remove invalid database privileges  r=rafiss a=RichardJCai

sql: remove invalid database privileges 

Release note (sql change): SELECT, INSERT, DELETE, and UPDATE can no
longer be granted or revoked on databases. Previously, granting these
privileges on a database was automatically converted to ALTER DEFAULT
PRIVILEGES, and they remained revocable; now they can no longer be
revoked either.

Resolves cockroachdb#68731
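
A minimal sketch of the behavior change (the database and user names below are illustrative, not taken from the change itself):

```sql
-- Granting a table-level privilege on a database is now rejected
-- instead of being converted to default privileges.
GRANT SELECT ON DATABASE defaultdb TO app_user;

-- The supported replacement: set default privileges so that future
-- tables in the database are created with the desired grants.
USE defaultdb;
ALTER DEFAULT PRIVILEGES GRANT SELECT ON TABLES TO app_user;

-- Database-level privileges such as CONNECT remain grantable.
GRANT CONNECT ON DATABASE defaultdb TO app_user;
```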

74251: opt: add avgSize stat to statisticsBuilder r=rharding6373 a=rharding6373

We recently added a new table stat, `avgSize`, that is the average size,
in bytes, of a table column. This PR is the first in a series of commits
to use the new stat for more accurate cost modeling in the optimizer.

This commit applies `avgSize` to `statisticsBuilder` in the following
ways:
* It loads `avgSize` when it fetches table statistics.
* It modifies the `avgSize` for some operators (e.g., union) that may
  affect the average size of a column.
* If the `avgSize` is not found for a column (e.g., if it is not in the
  table statistics or the column is synthesized), a default value of 4
  is applied unless all the rows are known to be `NULL`.

This change also prints out `avgSize` as part of `EXPLAIN` if stats are
requested. It does not affect how queries are costed.
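
As a rough illustration (the table is made up and the exact output format may differ), the per-column average size becomes visible once statistics are collected and the optimizer's stats are requested:

```sql
CREATE TABLE t (k INT PRIMARY KEY, s STRING);
CREATE STATISTICS st FROM t;

-- EXPLAIN (OPT, VERBOSE) prints the optimizer's statistics for each
-- expression; after this change those statistics also carry the
-- average column size (avgSize), though it is not yet used for costing.
EXPLAIN (OPT, VERBOSE) SELECT s FROM t WHERE k > 10;
```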

Informs: cockroachdb#72332

Release note: None

74831: sql, server: fix errors seen from combined statements endpoint r=Azhng,maryliag a=xinhaoz

## Commit 1 sql, server: use a view to join persisted and in-mem sql stats
Partially addresses: cockroachdb#71245

Previously, we created virtual tables to join in-memory and
persisted, on-disk statement and transaction statistics. This
proved to be inefficient, as requests to the virtual
tables led to full scans of the underlying system tables.

This commit uses virtual views to combine in-memory and disk
stats. The views query in-memory stats from the virtual tables
`crdb_internal.cluster_{statement,transaction}_statistics`
and combine them with the results from the system tables.
This allows us to push query filters down into the system
tables, leveraging their existing indexes.
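
For example (a hedged sketch; the column names used in the predicate are assumptions about the combined view's schema):

```sql
-- The combined view unions in-memory stats from the cluster_* virtual
-- tables with the persisted system tables, so a predicate like this one
-- can be pushed down to the indexed system table instead of forcing a
-- full scan.
SELECT aggregated_ts, app_name, metadata
FROM crdb_internal.statement_statistics
WHERE aggregated_ts >= now() - INTERVAL '1 hour';
```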

Release note: None

## Commit 2 sql: decrease stats flush interval to every 10 mins 

Previously, we flushed the in-memory SQL stats collected by each
node on an hourly interval. We found that this hourly interval
might be too conservative, and that the cluster-wide stats
returned after an hour can be quite large, sometimes exceeding
the gRPC maximum message size.

This commit lowers the flush interval to every 10 minutes.
Since we want to continue to aggregate stats on an hourly
interval, we introduce a new cluster setting
`sql.stats.aggregation.interval` to control the
aggregation interval separately from the flush frequency.

Release note (sql change): The default SQL stats flush interval
is now 10 minutes. A new cluster setting
`sql.stats.aggregation.interval` controls the aggregation
interval of SQL stats, with a default value of 1 hour.
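
A quick sketch of the two settings working together (the values shown are simply the defaults described above):

```sql
-- Flush in-memory SQL stats to disk every 10 minutes (the new default)...
SET CLUSTER SETTING sql.stats.flush.interval = '10m';
-- ...while still aggregating stats into hourly buckets. The flush
-- interval must be less than or equal to the aggregation interval.
SET CLUSTER SETTING sql.stats.aggregation.interval = '1h';

SHOW CLUSTER SETTING sql.stats.aggregation.interval;
```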

## Commit 3 server: allow statements EP to optionally exclude stmt or txns stats

Closes: cockroachdb#71829

Previously, the /statements endpoint returned cluster-wide
in-memory stats containing both statement and transaction stats.
In the past, we've observed the Statements endpoint response being
too large for gRPC. Because this endpoint is used by the virtual
tables that power our combined stats API,
`crdb_internal.cluster_{statement,transaction}_statistics`, we might
continue to surpass the gRPC message size limit in the new API.
However, each virtual table only needs roughly half of the response
(either statement or txn stats).

This commit allows the virtual tables to exclude statement or txn
stats from the Statements endpoint response by introducing new
request parameters to /statements. This reduces the response size
seen by the stats virtual tables.

Release note: None

75058: sql: native evaluation support for NotExpr r=yuzefovich a=RajivTS

The commit includes the following changes to provide native evaluation support for tree.NotExpr:
1. Defined new operators, NotExprProjOp for projection and NotExprSelOp for selection, used when evaluating the result of a NotExpr.
2. Defined NotNullProjOp for projection when the underlying type is non-boolean and contains only NULLs.
3. Defined the Next method for both the projection and selection operators.
4. Added test cases covering the NotExprProjOp, NotNullProjOp, and NotExprSelOp operators.

Fixes: cockroachdb#70713

Release note (performance improvement): queries using `NOT expr` syntax can now be evaluated faster in some cases.
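
An illustrative sketch (the table is invented, and whether the new operators are used depends on the plan chosen for the query):

```sql
CREATE TABLE t (id INT PRIMARY KEY, active BOOL);

-- The NOT in the filter can now be evaluated natively by the
-- vectorized engine rather than falling back to row-at-a-time
-- expression evaluation.
SELECT id FROM t WHERE NOT active;

-- EXPLAIN (VEC) shows the vectorized operator tree chosen for the query.
EXPLAIN (VEC) SELECT id FROM t WHERE NOT active;
```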

75076: schemachanger: columns are not always backfilled in transactions r=fqazi a=fqazi

Fixes: cockroachdb#75074

Previously, when multiple columns were added in a transaction,
the schema changer incorrectly determined whether a backfill was
required based only on the last column that was added. This was
inadequate because, within a transaction, multiple columns can be
added where some may require a backfill and others may not. To
address this, this patch checks whether any of the added columns
needs a backfill and uses that to determine whether one is required.
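
A hedged sketch of the scenario (the table and columns are invented; which column needs a backfill depends on its default and nullability):

```sql
BEGIN;
-- Needs a backfill: existing rows must be rewritten with the default value.
ALTER TABLE t ADD COLUMN c1 INT NOT NULL DEFAULT 7;
-- Needs no backfill: a nullable column with no default.
ALTER TABLE t ADD COLUMN c2 STRING;
COMMIT;
-- Before this patch, only the last column added (c2 here) was consulted,
-- so the backfill needed for c1 could be skipped.
```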

Release note (bug fix): Previously, if multiple columns were added
to a table inside a transaction and the last column added did not
require a backfill, none of the columns would be backfilled. This
is now fixed.

75465: ci: add `go_transition_test` support to `bazci` r=rail a=rickystewart

These targets have bespoke output directories for `testlogs`, so
we can't find them in the standard location.

Also allow `bazci run`.

Closes cockroachdb#75184.

Release note: None

75504: pkg/sql: add `-linecomment` when building `roleoption` `stringer` file r=maryliag a=rickystewart

Release note: None

Co-authored-by: richardjcai <caioftherichard@gmail.com>
Co-authored-by: rharding6373 <rharding6373@users.noreply.github.com>
Co-authored-by: Xin Hao Zhang <xzhang@cockroachlabs.com>
Co-authored-by: RajivTS <rajshar.email@gmail.com>
Co-authored-by: Faizan Qazi <faizan@cockroachlabs.com>
Co-authored-by: Ricky Stewart <ricky@cockroachlabs.com>
7 people committed Jan 25, 2022
8 parents 0e932c3 + 8eaa225 + 6135b2c + 5d51963 + 37e74f0 + 8796833 + be0e69d + 83e5190 commit 81c447d
Showing 160 changed files with 5,250 additions and 4,107 deletions.
6 changes: 3 additions & 3 deletions build/STRINGER.bzl
@@ -1,6 +1,6 @@
# stringer lets us define the equivalent of `//go:generate stringer` files
# within bazel sandbox.
def stringer(src, typ, name):
def stringer(src, typ, name, additional_args=[]):
native.genrule(
name = name,
srcs = [src], # Accessed below using `$<`.
@@ -14,8 +14,8 @@ def stringer(src, typ, name):
GO_ABS_PATH=`cd $$GO_REL_PATH && pwd`
# Set GOPATH to something to workaround https://github.com/golang/go/issues/43938
env PATH=$$GO_ABS_PATH HOME=$(GENDIR) GOPATH=/nonexist-gopath \
$(location @com_github_cockroachdb_tools//cmd/stringer:stringer) -output=$@ -type={} $<
""".format(typ),
$(location @com_github_cockroachdb_tools//cmd/stringer:stringer) -output=$@ -type={} {} $<
""".format(typ, " ".join(additional_args)),
tools = [
"@go_sdk//:bin/go",
"@com_github_cockroachdb_tools//cmd/stringer",
12 changes: 7 additions & 5 deletions build/teamcity/cockroach/ci/tests/acceptance.sh
@@ -5,18 +5,20 @@ set -xeuo pipefail
dir="$(dirname $(dirname $(dirname $(dirname $(dirname "${0}")))))"
source "$dir/teamcity-support.sh"

tc_prepare

export ARTIFACTSDIR=$PWD/artifacts/acceptance
mkdir -p "$ARTIFACTSDIR"

tc_start_block "Run acceptance tests"
status=0
bazel run \
//pkg/acceptance:acceptance_test \
--config=crosslinux --config=test \

bazel build //pkg/cmd/bazci --config=ci
BAZCI=$(bazel info bazel-bin --config=ci)/pkg/cmd/bazci/bazci_/bazci

$BAZCI run --config=crosslinux --config=test --artifacts_dir=$PWD/artifacts \
//pkg/acceptance:acceptance_test -- \
--test_arg=-l="$ARTIFACTSDIR" \
--test_env=TZ=America/New_York \
--test_env=GO_TEST_WRAP_TESTV=1 \
--test_timeout=1800 || status=$?

# Some unit tests test automatic ballast creation. These ballasts can be
1 change: 1 addition & 0 deletions docs/generated/http/full.md
@@ -3437,6 +3437,7 @@ tenant pods.
| combined | [bool](#cockroach.server.serverpb.StatementsRequest-bool) | | If this field is set we will use the combined statements API instead. | [reserved](#support-status) |
| start | [int64](#cockroach.server.serverpb.StatementsRequest-int64) | | These fields are used for the combined statements API. | [reserved](#support-status) |
| end | [int64](#cockroach.server.serverpb.StatementsRequest-int64) | | | [reserved](#support-status) |
| fetch_mode | [StatementsRequest.FetchMode](#cockroach.server.serverpb.StatementsRequest-cockroach.server.serverpb.StatementsRequest.FetchMode) | | | [reserved](#support-status) |



4 changes: 3 additions & 1 deletion docs/generated/settings/settings-for-tenants.txt
@@ -151,16 +151,18 @@ sql.multiregion.drop_primary_region.enabled boolean true allows dropping the PRI
sql.notices.enabled boolean true enable notices in the server/client protocol being sent
sql.optimizer.uniqueness_checks_for_gen_random_uuid.enabled boolean false if enabled, uniqueness checks may be planned for mutations of UUID columns updated with gen_random_uuid(); otherwise, uniqueness is assumed due to near-zero collision probability
sql.spatial.experimental_box2d_comparison_operators.enabled boolean false enables the use of certain experimental box2d comparison operators
sql.stats.aggregation.interval duration 1h0m0s the interval at which we aggregate SQL execution statistics upon flush, this value must be greater than or equal to sql.stats.flush.interval
sql.stats.automatic_collection.enabled boolean true automatic statistics collection mode
sql.stats.automatic_collection.fraction_stale_rows float 0.2 target fraction of stale rows per table that will trigger a statistics refresh
sql.stats.automatic_collection.min_stale_rows integer 500 target minimum number of stale rows per table that will trigger a statistics refresh
sql.stats.cleanup.recurrence string @hourly cron-tab recurrence for SQL Stats cleanup job
sql.stats.flush.enabled boolean true if set, SQL execution statistics are periodically flushed to disk
sql.stats.flush.interval duration 1h0m0s the interval at which SQL execution statistics are flushed to disk
sql.stats.flush.interval duration 10m0s the interval at which SQL execution statistics are flushed to disk, this value must be less than or equal to sql.stats.aggregation.interval
sql.stats.histogram_collection.enabled boolean true histogram collection mode
sql.stats.multi_column_collection.enabled boolean true multi-column statistics collection mode
sql.stats.persisted_rows.max integer 1000000 maximum number of rows of statement and transaction statistics that will be persisted in the system tables
sql.stats.post_events.enabled boolean false if set, an event is logged for every CREATE STATISTICS job
sql.stats.response.max integer 20000 the maximum number of statements and transaction stats returned in a CombinedStatements request
sql.telemetry.query_sampling.enabled boolean false when set to true, executed queries will emit an event on the telemetry logging channel
sql.temp_object_cleaner.cleanup_interval duration 30m0s how often to clean up orphaned temporary objects
sql.temp_object_cleaner.wait_interval duration 30m0s how long after creation a temporary object will be cleaned up
4 changes: 3 additions & 1 deletion docs/generated/settings/settings.html
@@ -159,16 +159,18 @@
<tr><td><code>sql.notices.enabled</code></td><td>boolean</td><td><code>true</code></td><td>enable notices in the server/client protocol being sent</td></tr>
<tr><td><code>sql.optimizer.uniqueness_checks_for_gen_random_uuid.enabled</code></td><td>boolean</td><td><code>false</code></td><td>if enabled, uniqueness checks may be planned for mutations of UUID columns updated with gen_random_uuid(); otherwise, uniqueness is assumed due to near-zero collision probability</td></tr>
<tr><td><code>sql.spatial.experimental_box2d_comparison_operators.enabled</code></td><td>boolean</td><td><code>false</code></td><td>enables the use of certain experimental box2d comparison operators</td></tr>
<tr><td><code>sql.stats.aggregation.interval</code></td><td>duration</td><td><code>1h0m0s</code></td><td>the interval at which we aggregate SQL execution statistics upon flush, this value must be greater than or equal to sql.stats.flush.interval</td></tr>
<tr><td><code>sql.stats.automatic_collection.enabled</code></td><td>boolean</td><td><code>true</code></td><td>automatic statistics collection mode</td></tr>
<tr><td><code>sql.stats.automatic_collection.fraction_stale_rows</code></td><td>float</td><td><code>0.2</code></td><td>target fraction of stale rows per table that will trigger a statistics refresh</td></tr>
<tr><td><code>sql.stats.automatic_collection.min_stale_rows</code></td><td>integer</td><td><code>500</code></td><td>target minimum number of stale rows per table that will trigger a statistics refresh</td></tr>
<tr><td><code>sql.stats.cleanup.recurrence</code></td><td>string</td><td><code>@hourly</code></td><td>cron-tab recurrence for SQL Stats cleanup job</td></tr>
<tr><td><code>sql.stats.flush.enabled</code></td><td>boolean</td><td><code>true</code></td><td>if set, SQL execution statistics are periodically flushed to disk</td></tr>
<tr><td><code>sql.stats.flush.interval</code></td><td>duration</td><td><code>1h0m0s</code></td><td>the interval at which SQL execution statistics are flushed to disk</td></tr>
<tr><td><code>sql.stats.flush.interval</code></td><td>duration</td><td><code>10m0s</code></td><td>the interval at which SQL execution statistics are flushed to disk, this value must be less than or equal to sql.stats.aggregation.interval</td></tr>
<tr><td><code>sql.stats.histogram_collection.enabled</code></td><td>boolean</td><td><code>true</code></td><td>histogram collection mode</td></tr>
<tr><td><code>sql.stats.multi_column_collection.enabled</code></td><td>boolean</td><td><code>true</code></td><td>multi-column statistics collection mode</td></tr>
<tr><td><code>sql.stats.persisted_rows.max</code></td><td>integer</td><td><code>1000000</code></td><td>maximum number of rows of statement and transaction statistics that will be persisted in the system tables</td></tr>
<tr><td><code>sql.stats.post_events.enabled</code></td><td>boolean</td><td><code>false</code></td><td>if set, an event is logged for every CREATE STATISTICS job</td></tr>
<tr><td><code>sql.stats.response.max</code></td><td>integer</td><td><code>20000</code></td><td>the maximum number of statements and transaction stats returned in a CombinedStatements request</td></tr>
<tr><td><code>sql.telemetry.query_sampling.enabled</code></td><td>boolean</td><td><code>false</code></td><td>when set to true, executed queries will emit an event on the telemetry logging channel</td></tr>
<tr><td><code>sql.temp_object_cleaner.cleanup_interval</code></td><td>duration</td><td><code>30m0s</code></td><td>how often to clean up orphaned temporary objects</td></tr>
<tr><td><code>sql.temp_object_cleaner.wait_interval</code></td><td>duration</td><td><code>30m0s</code></td><td>how long after creation a temporary object will be cleaned up</td></tr>
4 changes: 4 additions & 0 deletions docs/generated/sql/functions.md
@@ -323,6 +323,10 @@
</span></td></tr>
<tr><td><a name="cardinality"></a><code>cardinality(input: anyelement[]) &rarr; <a href="int.html">int</a></code></td><td><span class="funcdesc"><p>Calculates the number of elements contained in <code>input</code></p>
</span></td></tr>
<tr><td><a name="crdb_internal.merge_statement_stats"></a><code>crdb_internal.merge_statement_stats(input: anyelement[]) &rarr; jsonb</code></td><td><span class="funcdesc"><p>Merge an array of roachpb.StatementStatistics into a single JSONB object</p>
</span></td></tr>
<tr><td><a name="crdb_internal.merge_transaction_stats"></a><code>crdb_internal.merge_transaction_stats(input: anyelement[]) &rarr; jsonb</code></td><td><span class="funcdesc"><p>Merge an array of roachpb.TransactionStatistics into a single JSONB object</p>
</span></td></tr>
<tr><td><a name="string_to_array"></a><code>string_to_array(str: <a href="string.html">string</a>, delimiter: <a href="string.html">string</a>) &rarr; <a href="string.html">string</a>[]</code></td><td><span class="funcdesc"><p>Split a string into components on a delimiter.</p>
</span></td></tr>
<tr><td><a name="string_to_array"></a><code>string_to_array(str: <a href="string.html">string</a>, delimiter: <a href="string.html">string</a>, null: <a href="string.html">string</a>) &rarr; <a href="string.html">string</a>[]</code></td><td><span class="funcdesc"><p>Split a string into components on a delimiter with a specified string to consider NULL.</p>
5 changes: 3 additions & 2 deletions pkg/ccl/backupccl/backup_test.go
@@ -5009,7 +5009,7 @@ func TestRestoredPrivileges(t *testing.T) {
// Explicitly don't restore grants when just restoring a database since we
// cannot ensure that the same users exist in the restoring cluster.
data2Grants := sqlDB.QueryStr(t, `SHOW GRANTS ON DATABASE data2`)
sqlDB.Exec(t, `GRANT SELECT, INSERT, UPDATE, DELETE ON DATABASE data2 TO someone`)
sqlDB.Exec(t, `GRANT CONNECT, CREATE, DROP, GRANT, ZONECONFIG ON DATABASE data2 TO someone`)

withGrants := sqlDB.QueryStr(t, `SHOW GRANTS ON data.bank`)

@@ -5031,7 +5031,8 @@ func TestRestoredPrivileges(t *testing.T) {
sqlDBRestore := sqlutils.MakeSQLRunner(tc.Conns[0])
sqlDBRestore.Exec(t, `CREATE DATABASE data`)
sqlDBRestore.Exec(t, `CREATE USER someone`)
sqlDBRestore.Exec(t, `GRANT SELECT, INSERT, UPDATE, DELETE ON DATABASE data TO someone`)
sqlDBRestore.Exec(t, `USE data`)
sqlDBRestore.Exec(t, `ALTER DEFAULT PRIVILEGES GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO someone`)
sqlDBRestore.Exec(t, `RESTORE data.bank FROM $1`, LocalFoo)
sqlDBRestore.CheckQueryResults(t, `SHOW GRANTS ON data.bank`, withGrants)
})
1 change: 0 additions & 1 deletion pkg/ccl/backupccl/full_cluster_backup_restore_test.go
@@ -156,7 +156,6 @@ CREATE TABLE data2.foo (a int);
)
// Populate system.role_members.
sqlDB.Exec(t, `CREATE ROLE system_ops;`)
sqlDB.Exec(t, `GRANT CREATE, SELECT ON DATABASE data TO system_ops;`)
sqlDB.Exec(t, `GRANT system_ops TO maxroach1;`)

// Populate system.scheduled_jobs table with a first run in the future to prevent immediate adoption.
6 changes: 3 additions & 3 deletions pkg/ccl/backupccl/show_test.go
@@ -333,7 +333,7 @@ GRANT agents TO agent_bond;
GRANT agents TO agent_thomas;
GRANT ALL ON DATABASE mi5 TO agents;
REVOKE UPDATE ON DATABASE mi5 FROM agents;
--REVOKE UPDATE ON DATABASE mi5 FROM agents;
GRANT ALL ON SCHEMA locator TO m;
GRANT ALL ON SCHEMA locator TO agent_bond;
@@ -355,8 +355,8 @@ GRANT UPDATE ON top_secret TO agent_bond;
sqlDB.Exec(t, `BACKUP DATABASE mi5 TO $1;`, showPrivs)

want := [][]string{
{`mi5`, `database`, `GRANT ALL ON mi5 TO admin; GRANT CONNECT, CREATE, DELETE, DROP, GRANT, INSERT, ` +
`SELECT, ZONECONFIG ON mi5 TO agents; GRANT CONNECT ON mi5 TO public; GRANT ALL ON mi5 TO root; `, `root`},
{`mi5`, `database`, `GRANT ALL ON mi5 TO admin; GRANT ALL ` +
`ON mi5 TO agents; GRANT CONNECT ON mi5 TO public; GRANT ALL ON mi5 TO root; `, `root`},
{`public`, `schema`, `GRANT ALL ON public TO admin; GRANT CREATE, USAGE ON public TO public; GRANT ALL ON public TO root; `, `admin`},
{`locator`, `schema`, `GRANT ALL ON locator TO admin; GRANT CREATE, GRANT ON locator TO agent_bond; GRANT ALL ON locator TO m; ` +
`GRANT ALL ON locator TO root; `, `root`},
7 changes: 2 additions & 5 deletions pkg/ccl/backupccl/testdata/backup-restore/backup-permissions
@@ -122,11 +122,8 @@ exec-sql
CREATE DATABASE d;
REVOKE CONNECT ON DATABASE d FROM public;
CREATE USER testuser;
GRANT SELECT ON DATABASE d TO testuser;
GRANT CONNECT ON DATABASE d TO testuser;
----
NOTICE: GRANT SELECT ON DATABASE is deprecated.
This statement was automatically converted to USE d; ALTER DEFAULT PRIVILEGES GRANT SELECT ON TABLES TO testuser;
Please use ALTER DEFAULT PRIVILEGES going forward

exec-sql server=s3 user=testuser
SHOW BACKUP 'http://COCKROACH_TEST_HTTP_SERVER/'
@@ -136,7 +133,7 @@ pq: only users with the admin role are allowed to SHOW BACKUP from the specified
exec-sql user=testuser
BACKUP DATABASE d TO 'nodelocal://0/test3'
----
pq: user testuser does not have CONNECT privilege on database d
pq: only users with the admin role are allowed to BACKUP to the specified nodelocal URI

# Test that http access is disallowed by disable http even if allow-non-admin is on.
new-server name=s4 allow-implicit-access disable-http
2 changes: 1 addition & 1 deletion pkg/ccl/logictestccl/testdata/logic_test/changefeed
@@ -16,7 +16,7 @@ user root
# Test granting CONTROLCHANGEFEED.
statement ok
ALTER USER testuser CONTROLCHANGEFEED;
GRANT SELECT ON DATABASE test TO testuser
GRANT CONNECT ON DATABASE test TO testuser

user testuser

2 changes: 2 additions & 0 deletions pkg/cli/zip_test.go
@@ -86,6 +86,8 @@ table_name NOT IN (
'session_trace',
'session_variables',
'tables',
'cluster_statement_statistics',
'cluster_transaction_statistics',
'statement_statistics',
'transaction_statistics',
'tenant_usage_details'
67 changes: 59 additions & 8 deletions pkg/cmd/bazci/bazci.go
@@ -16,6 +16,7 @@ import (
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strings"

"github.com/alessio/shellescape"
@@ -26,6 +27,7 @@

const (
buildSubcmd = "build"
runSubcmd = "run"
testSubcmd = "test"
mergeTestXMLsSubcmd = "merge-test-xmls"
mungeTestXMLSubcmd = "munge-test-xml"
@@ -100,8 +102,8 @@ func parseArgs(args []string, argsLenAtDash int) (*parsedArgs, error) {
if len(args) < 2 {
return nil, errUsage
}
if args[0] != buildSubcmd && args[0] != testSubcmd && args[0] != mungeTestXMLSubcmd && args[0] != mergeTestXMLsSubcmd {
return nil, errors.Newf("First argument must be `build`, `test`, `merge-test-xmls`, or `munge-test-xml`; got %v", args[0])
if args[0] != buildSubcmd && args[0] != runSubcmd && args[0] != testSubcmd && args[0] != mungeTestXMLSubcmd && args[0] != mergeTestXMLsSubcmd {
return nil, errors.Newf("First argument must be `build`, `run`, `test`, `merge-test-xmls`, or `munge-test-xml`; got %v", args[0])
}
var splitLoc int
if argsLenAtDash < 0 {
@@ -138,11 +140,21 @@ type buildInfo struct {
// into their component tests and all put in this list, so this may be
// considerably longer than the argument list.
tests []string
// Expanded set of go_transition_test targets to be run. The map is the full test target
// name -> the location of the corresponding `bazel-testlogs` directory for this test.
transitionTests map[string]string
}

func runBazelReturningStdout(subcmd string, arg ...string) (string, error) {
if subcmd != "query" {
arg = append(configArgList(), arg...)
var configArgs []string
// The `test` config is implied in this case.
if subcmd == "cquery" {
configArgs = configArgList("test")
} else {
configArgs = configArgList()
}
arg = append(configArgs, arg...)
arg = append(arg, "-c", compilationMode)
}
arg = append([]string{subcmd}, arg...)
@@ -155,7 +167,7 @@ func runBazelReturningStdout(subcmd string, arg ...string) (string, error) {
}

func getBuildInfo(args parsedArgs) (buildInfo, error) {
if args.subcmd != buildSubcmd && args.subcmd != testSubcmd {
if args.subcmd != buildSubcmd && args.subcmd != runSubcmd && args.subcmd != testSubcmd {
return buildInfo{}, errors.Newf("Unexpected subcommand %s. This is a bug!", args.subcmd)
}
binDir, err := runBazelReturningStdout("info", "bazel-bin")
@@ -168,8 +180,9 @@ func getBuildInfo(args parsedArgs) (buildInfo, error) {
}

ret := buildInfo{
binDir: binDir,
testlogsDir: testlogsDir,
binDir: binDir,
testlogsDir: testlogsDir,
transitionTests: make(map[string]string),
}

for _, target := range args.targets {
@@ -194,6 +207,33 @@
ret.goBinaries = append(ret.goBinaries, fullTarget)
case "go_test":
ret.tests = append(ret.tests, fullTarget)
case "go_transition_test":
// Run cquery to get the hash of the config.
res, err := runBazelReturningStdout("cquery", fullTarget, "--output=label_kind")
if err != nil {
return buildInfo{}, err
}
configHash := strings.Fields(res)[3]
// The hash will be surrounded with (), so trim those.
configHash = strings.TrimPrefix(configHash, "(")
configHash = strings.TrimSuffix(configHash, ")")
res, err = runBazelReturningStdout("config", configHash)
if err != nil {
return buildInfo{}, err
}
var testlogsDir string
for _, line := range strings.Split(res, "\n") {
if strings.Contains(line, "transition directory name fragment") {
fragmentLine := strings.Split(line, ":")
fragment := strings.TrimSpace(fragmentLine[1])
testlogsDir = filepath.Join(filepath.Dir(ret.testlogsDir)+"-"+fragment, filepath.Base(ret.testlogsDir))
break
}
}
if testlogsDir == "" {
return buildInfo{}, errors.Newf("could not find transition directory name fragment for target %s", fullTarget)
}
ret.transitionTests[fullTarget] = testlogsDir
case "test_suite":
// Expand the list of tests from the test suite with another query.
allTests, err := runBazelReturningStdout("query", "tests("+fullTarget+")")
@@ -289,10 +329,21 @@ func mergeTestXMLs(args parsedArgs) error {
return bazelutil.MergeTestXMLs(xmlsToMerge, os.Stdout)
}

func configArgList() []string {
// Return a list of the form --config=$CONFIG for every $CONFIG in configs,
// with the exception of every config in `exceptions`.
func configArgList(exceptions ...string) []string {
ret := []string{}
for _, config := range configs {
ret = append(ret, "--config="+config)
keep := true
for _, exception := range exceptions {
if config == exception {
keep = false
break
}
}
if keep {
ret = append(ret, "--config="+config)
}
}
return ret
}