build(deps): bump sequelize from 4.37.1 to 5.15.1 in /pkg/acceptance/testdata/node #2
Open

dependabot wants to merge 1 commit into master from dependabot/npm_and_yarn/pkg/acceptance/testdata/node/sequelize-5.15.1
Conversation
Bumps [sequelize](https://github.com/sequelize/sequelize) from 4.37.1 to 5.15.1.

- [Release notes](https://github.com/sequelize/sequelize/releases)
- [Commits](sequelize/sequelize@v4.37.1...v5.15.1)

Signed-off-by: dependabot[bot] <support@github.com>
cockroach-teamcity pushed a commit that referenced this pull request on Jul 1, 2020
This change introduces a new FileTableStorage flavor of ExternalStorage. FileTableStorage is backed by the UserFileTableSystem and allows a user to write and read file blobs to/from user-scoped SQL tables. Unit tests were also added. This is commit #2 of 2 to integrate the UserFileTableStorage.

Release note: None
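To make the shape of this concrete, here is a minimal Go sketch of a table-backed blob store. It compiles against the standard `database/sql` package (a driver must be registered to actually run it), and the table, column, and method names below are all hypothetical; the real UserFileTableSystem lives inside CockroachDB and is not shown here.

```go
package filetable

import "database/sql"

// Storage sketches the FileTableStorage idea: file blobs live in a
// user-scoped SQL table instead of an external blob store. All names
// here (table, columns, methods) are illustrative only.
type Storage struct {
	db   *sql.DB
	user string // files are scoped to this SQL user
}

// WriteFile upserts the blob under (user, filename). UPSERT is
// CockroachDB syntax; other databases would use INSERT ... ON CONFLICT.
func (s *Storage) WriteFile(name string, data []byte) error {
	_, err := s.db.Exec(
		`UPSERT INTO user_file_payload (username, filename, payload) VALUES ($1, $2, $3)`,
		s.user, name, data,
	)
	return err
}

// ReadFile fetches the blob back out of the same user-scoped table.
func (s *Storage) ReadFile(name string) ([]byte, error) {
	var data []byte
	err := s.db.QueryRow(
		`SELECT payload FROM user_file_payload WHERE username = $1 AND filename = $2`,
		s.user, name,
	).Scan(&data)
	return data, err
}
```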
cockroach-teamcity pushed a commit that referenced this pull request on Jan 11, 2021
Looks like the movr series can take a while to initialize on TestDockerCLI, so we start an empty database instead. This would still have caught the original regression.

Release note: None
cockroach-teamcity pushed a commit that referenced this pull request on Jan 11, 2021
58687: cli: deflake test_demo_global.tcl (attempt #2) r=knz a=otan

Looks like the movr series can take a while to initialize on TestDockerCLI, so we start an empty database instead. This would still have caught the original regression.

Release note: None

Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
cockroach-teamcity pushed a commit that referenced this pull request on Jan 11, 2021
Looks like the movr series can take a while to initialize on TestDockerCLI, so we start an empty database instead. This would still have caught the original regression.

Release note: None
cockroach-teamcity pushed a commit that referenced this pull request on Jan 11, 2021
release-20.2: cli: deflake test_demo_global.tcl (attempt #2)
cockroach-teamcity pushed a commit that referenced this pull request on Apr 2, 2021
We arrived at the previous default rate of 10% back in #59379. This was back when we were creating real tracing spans for all statements, and for sampled statements, we were propagating additional stats payloads. Consequently, what #59379 ended up measuring (and finding the overhead acceptable for) was the performance hit we would incur for propagating stats payloads for statements already using real tracing spans.

Since then, the landscape has changed. Notably, we introduced #61777, which made it so that we only use real tracing spans for sampled statements. This was done after performance analysis in #59424 showed that the use of real tracing spans in all statements resulted in tremendous overhead, for no real benefit. What this now leaves us with is a sampling rate that was tuned by only considering the stats payload overhead. What we want now is to also consider the overhead of using real tracing spans for sampled statements, vs. not. Doing this analysis gives us a very different picture for what the default rate should be.

---

To find out what the overhead for sampled statements currently is, we experimented with kv95/enc=false/nodes=1/cpu=32. It's a simple benchmark that does little more than one-off statements, so it should give us a concise picture of the sampling overhead. We ran six experiments in total (each corresponding to a pair of read+write rows), done in groups of three (each group corresponding to a table below). Each run in turn is comprised of 10 iterations of kv95, and what's varied between each run is the default sampling rate. We pin a sampling rate of 0.0 as the baseline that effectively switches off sampling entirely (and tracing), and measure the throughput degradation as we vary the sampling rate.

                     |     ops/sec      |     ops/sec
  rate / op   / grp  |  median   diff   |   mean    diff
 --------------------|------------------|------------------
  0.00 / read  / #1  | 69817.90         | 69406.37
  0.01 / read  / #1  | 69300.35  -0.74% | 68717.23  -0.99%
  0.10 / read  / #1  | 67743.35  -2.97% | 67601.81  -2.60%
  0.00 / write / #1  |  3672.55         |  3653.63
  0.01 / write / #1  |  3647.65  -0.68% |  3615.90  -1.03%
  0.10 / write / #1  |  3567.20  -2.87% |  3558.90  -2.59%

                     |     ops/sec      |     ops/sec
  rate / op   / grp  |  median   diff   |   mean    diff
 --------------------|------------------|------------------
  0.00 / read  / #2  | 69440.80         | 68893.24
  0.01 / read  / #2  | 69481.55  +0.06% | 69463.13  +0.82%  (probably in the noise margin)
  0.10 / read  / #2  | 67841.80  -2.30% | 66992.55  -2.76%
  0.00 / write / #2  |  3652.45         |  3625.24
  0.01 / write / #2  |  3657.55  +0.14% |  3654.34  +0.80%
  0.10 / write / #2  |  3570.75  -2.24% |  3526.04  -2.74%

The results above suggest that the current default rate of 10% is too high, and a 1% rate is much more acceptable.

---

The fact that the cost of sampling is largely dominated by tracing is extremely unfortunate. We have ideas for how that can be improved (prototyped in #62227), but they're much too invasive to backport to 21.1. It's unfortunate that we only discovered the overhead this late in the development cycle. That was due to two major reasons:

- #59992 landed late in the cycle, and enabled tracing for realsies (by propagating real tracing spans across RPC boundaries). We had done sanity checking for the tracing overhead before this point, but failed to realize that #59992 would merit re-analysis.
- The test that alerted us to the degradation (tpccbench) had been persistently failing for a myriad of other reasons, so we didn't learn until too late that tracing was the latest offender. tpccbench also doesn't deal with VM overload well (something #62361 hopes to address), and after tracing was enabled for realsies, this was the dominant failure mode. This resulted in perf data not making its way to roachperf, which further hid possible indicators that we had a major regression on our hands. We also didn't have a healthy process looking at roachperf on a continual basis, something we're looking to rectify going forward. We would've picked up on this regression had we been closely monitoring the kv95 charts.

Release note: None
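The mechanism being tuned here is a per-statement coin flip: only statements that win the flip pay for a real tracing span and the extra stats payloads. Below is a generic, runnable Go sketch of that gate; it is not CockroachDB's actual code, and the constant name is made up, but the 1% rate comes from the commit message above.

```go
package main

import (
	"fmt"
	"math/rand"
)

// defaultSampleRate is the new 1% default the commit argues for
// (down from the previous 10%). The name is hypothetical.
const defaultSampleRate = 0.01

// shouldSample is the per-statement coin flip: only sampled statements
// pay for a real tracing span plus the extra stats payloads.
func shouldSample(rng *rand.Rand, rate float64) bool {
	return rng.Float64() < rate
}

func main() {
	rng := rand.New(rand.NewSource(42))
	const stmts = 1_000_000
	sampled := 0
	for i := 0; i < stmts; i++ {
		if shouldSample(rng, defaultSampleRate) {
			sampled++ // the real implementation would start a tracing span here
		}
	}
	fmt.Printf("sampled %d of %d statements (~%.2f%%)\n",
		sampled, stmts, 100*float64(sampled)/float64(stmts))
}
```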
cockroach-teamcity pushed a commit that referenced this pull request on Apr 2, 2021
62998: sql: lower default sampling rate to 1% r=irfansharif a=irfansharif

We arrived at the previous default rate of 10% back in #59379. This was back when we were creating real tracing spans for all statements, and for sampled statements, we were propagating additional stats payloads. Consequently, what #59379 ended up measuring (and finding the overhead acceptable for) was the performance hit we would incur for propagating stats payloads for statements already using real tracing spans.

Since then, the landscape has changed. Notably, we introduced #61777, which made it so that we only use real tracing spans for sampled statements. This was done after performance analysis in #59424 showed that the use of real tracing spans in all statements resulted in tremendous overhead, for no real benefit. What this now leaves us with is a sampling rate that was tuned by only considering the stats payload overhead. What we want now is to also consider the overhead of using real tracing spans for sampled statements, vs. not. Doing this analysis gives us a very different picture for what the default rate should be.

---

To find out what the overhead for sampled statements currently is, we experimented with kv95/enc=false/nodes=1/cpu=32. It's a simple benchmark that does little more than one-off statements, so it should give us a concise picture of the sampling overhead. We ran six experiments in total (each corresponding to a pair of read+write rows), done in groups of three (each group corresponding to a table below). Each run in turn is comprised of 10 iterations of kv95, and what's varied between each run is the default sampling rate. We pin a sampling rate of 0.0 as the baseline that effectively switches off sampling entirely (and tracing), and measure the throughput degradation as we vary the sampling rate.

                     |     ops/sec      |     ops/sec
  rate / op   / grp  |  median   diff   |   mean    diff
 --------------------|------------------|------------------
  0.00 / read  / #1  | 69817.90         | 69406.37
  0.01 / read  / #1  | 69300.35  -0.74% | 68717.23  -0.99%
  0.10 / read  / #1  | 67743.35  -2.97% | 67601.81  -2.60%
  0.00 / write / #1  |  3672.55         |  3653.63
  0.01 / write / #1  |  3647.65  -0.68% |  3615.90  -1.03%
  0.10 / write / #1  |  3567.20  -2.87% |  3558.90  -2.59%

                     |     ops/sec      |     ops/sec
  rate / op   / grp  |  median   diff   |   mean    diff
 --------------------|------------------|------------------
  0.00 / read  / #2  | 69440.80         | 68893.24
  0.01 / read  / #2  | 69481.55  +0.06% | 69463.13  +0.82%  (probably in the noise margin)
  0.10 / read  / #2  | 67841.80  -2.30% | 66992.55  -2.76%
  0.00 / write / #2  |  3652.45         |  3625.24
  0.01 / write / #2  |  3657.55  +0.14% |  3654.34  +0.80%
  0.10 / write / #2  |  3570.75  -2.24% |  3526.04  -2.74%

The results above suggest that the current default rate of 10% is too high, and a 1% rate is much more acceptable.

---

The fact that the cost of sampling is largely dominated by tracing is extremely unfortunate. We have ideas for how that can be improved (prototyped in #62227), but they're much too invasive to backport to 21.1.

Release note: None

Co-authored-by: irfan sharif <irfanmahmoudsharif@gmail.com>
cockroach-teamcity pushed a commit that referenced this pull request on Apr 2, 2021
We arrived at the previous default rate of 10% back in #59379. This was back when we were creating real tracing spans for all statements, and for sampled statements, we were propagating additional stats payloads. Consequently, what #59379 ended up measuring (and finding the overhead acceptable for) was the performance hit we would incur for propagating stats payloads for statements already using real tracing spans.

Since then, the landscape has changed. Notably, we introduced #61777, which made it so that we only use real tracing spans for sampled statements. This was done after performance analysis in #59424 showed that the use of real tracing spans in all statements resulted in tremendous overhead, for no real benefit. What this now leaves us with is a sampling rate that was tuned by only considering the stats payload overhead. What we want now is to also consider the overhead of using real tracing spans for sampled statements, vs. not. Doing this analysis gives us a very different picture for what the default rate should be.

---

To find out what the overhead for sampled statements currently is, we experimented with kv95/enc=false/nodes=1/cpu=32. It's a simple benchmark that does little more than one-off statements, so it should give us a concise picture of the sampling overhead. We ran six experiments in total (each corresponding to a pair of read+write rows), done in groups of three (each group corresponding to a table below). Each run in turn is comprised of 10 iterations of kv95, and what's varied between each run is the default sampling rate. We pin a sampling rate of 0.0 as the baseline that effectively switches off sampling entirely (and tracing), and measure the throughput degradation as we vary the sampling rate.

                     |     ops/sec      |     ops/sec
  rate / op   / grp  |  median   diff   |   mean    diff
 --------------------|------------------|------------------
  0.00 / read  / #1  | 69817.90         | 69406.37
  0.01 / read  / #1  | 69300.35  -0.74% | 68717.23  -0.99%
  0.10 / read  / #1  | 67743.35  -2.97% | 67601.81  -2.60%
  0.00 / write / #1  |  3672.55         |  3653.63
  0.01 / write / #1  |  3647.65  -0.68% |  3615.90  -1.03%
  0.10 / write / #1  |  3567.20  -2.87% |  3558.90  -2.59%

                     |     ops/sec      |     ops/sec
  rate / op   / grp  |  median   diff   |   mean    diff
 --------------------|------------------|------------------
  0.00 / read  / #2  | 69440.80         | 68893.24
  0.01 / read  / #2  | 69481.55  +0.06% | 69463.13  +0.82%  (probably in the noise margin)
  0.10 / read  / #2  | 67841.80  -2.30% | 66992.55  -2.76%
  0.00 / write / #2  |  3652.45         |  3625.24
  0.01 / write / #2  |  3657.55  +0.14% |  3654.34  +0.80%
  0.10 / write / #2  |  3570.75  -2.24% |  3526.04  -2.74%

The results above suggest that the current default rate of 10% is too high, and a 1% rate is much more acceptable.

---

The fact that the cost of sampling is largely dominated by tracing is extremely unfortunate. We have ideas for how that can be improved (prototyped in #62227), but they're much too invasive to backport to 21.1. It's unfortunate that we only discovered the overhead this late in the development cycle. That was due to two major reasons:

- #59992 landed late in the cycle, and enabled tracing for realsies (by propagating real tracing spans across RPC boundaries). We had done sanity checking for the tracing overhead before this point, but failed to realize that #59992 would merit re-analysis.
- The test that alerted us to the degradation (tpccbench) had been persistently failing for a myriad of other reasons, so we didn't learn until too late that tracing was the latest offender. tpccbench also doesn't deal with VM overload well (something #62361 hopes to address), and after tracing was enabled for realsies, this was the dominant failure mode. This resulted in perf data not making its way to roachperf, which further hid possible indicators that we had a major regression on our hands. We also didn't have a healthy process looking at roachperf on a continual basis, something we're looking to rectify going forward. We would've picked up on this regression had we been closely monitoring the kv95 charts.

Release note: None
cockroach-teamcity pushed a commit that referenced this pull request on Apr 29, 2022
79911: opt: refactor and test lookup join key column and expr generation r=mgartner a=mgartner

#### opt: simplify fetching outer column in CustomFuncs.findComputedColJoinEquality

Previously, `CustomFuncs.findComputedColJoinEquality` used `CustomFuncs.OuterCols` to retrieve the outer columns of computed column expressions. `CustomFuncs.OuterCols` returns the cached outer columns in the expression if it is a `memo.ScalarPropsExpr`, and falls back to calculating the outer columns with `memo.BuildSharedProps` otherwise. Computed column expressions are never `memo.ScalarPropsExpr`s, so we now just use `memo.BuildSharedProps` directly.

Release note: None

#### opt: make RemapCols a method on Factory instead of CustomFuncs

Release note: None

#### opt: use partial-index-reduced filters when building lookup expressions

This commit makes a minor change to `generateLookupJoinsImpl`. Previously, equality filters were extracted from the original `ON` filters. Now they are extracted from filters that have been reduced by partial index implication. This has no effect on behavior because equality filters that reference columns in two tables cannot exist in partial index predicates, so they will never be eliminated during partial index implication.

Release note: None

#### opt: move some lookup join generation logic to lookup join package

This commit adds a new `lookupjoin` package. Logic for determining the key columns and lookup expressions for lookup joins has been moved to `lookupjoin.ConstraintBuilder`. The code was moved with as few changes as possible, and the behavior does not change in any way. This move will make it easier to test this code in isolation in the future, and allow for further refactoring.

Release note: None

#### opt: generalize lookupjoin.ConstraintBuilder API

This commit makes the `lookupjoin.ConstraintBuilder` API more general to make unit testing easier in a future commit.

Release note: None

#### opt: add data-driven tests for lookupjoin.ConstraintBuilder

Release note: None

#### opt: add lookupjoin.Constraint struct

The `lookupjoin.Constraint` struct has been added to encapsulate multiple data structures that represent a strategy for constraining a lookup join.

Release note: None

80511: pkg/cloud/azure: Support specifying Azure environments in storage URLs r=adityamaru a=nlowe-sx

The Azure Storage cloud provider learned a new parameter, AZURE_ENVIRONMENT, which specifies which Azure environment the storage account in question belongs to. This allows cockroach to back up and restore data to Azure Storage Accounts outside the main Azure Public Cloud. For backwards compatibility, this defaults to "AzurePublicCloud" if AZURE_ENVIRONMENT is not specified.

Fixes #47163

## Verification Evidence

I spun up a single node cluster:

```
nlowe@nlowe-z4l:~/projects/github/cockroachdb/cockroach [feat/47163-azure-storage-support-multiple-environments L|✚ 2] [🗓 2022-04-22 08:25:49]
$ bazel run //pkg/cmd/cockroach:cockroach -- start-single-node --insecure
WARNING: Option 'host_javabase' is deprecated
WARNING: Option 'javabase' is deprecated
WARNING: Option 'host_java_toolchain' is deprecated
WARNING: Option 'java_toolchain' is deprecated
INFO: Invocation ID: 11504a98-f767-413a-8994-8f92793c2ecf
INFO: Analyzed target //pkg/cmd/cockroach:cockroach (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //pkg/cmd/cockroach:cockroach up-to-date:
  _bazel/bin/pkg/cmd/cockroach/cockroach_/cockroach
INFO: Elapsed time: 0.358s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
*
* WARNING: ALL SECURITY CONTROLS HAVE BEEN DISABLED!
*
* This mode is intended for non-production testing only.
*
* In this mode:
* - Your cluster is open to any client that can access any of your IP addresses.
* - Intruders with access to your machine or network can observe client-server traffic.
* - Intruders can log in without password and read or write any data in the cluster.
* - Intruders can consume all your server's resources and cause unavailability.
*
*
* INFO: To start a secure server without mandating TLS for clients,
* consider --accept-sql-without-tls instead. For other options, see:
*
* - https://go.crdb.dev/issue-v/53404/dev
* - https://www.cockroachlabs.com/docs/dev/secure-a-cluster.html
*
*
* WARNING: neither --listen-addr nor --advertise-addr was specified.
* The server will advertise "nlowe-z4l" to other nodes, is this routable?
*
* Consider using:
* - for local-only servers:  --listen-addr=localhost
* - for multi-node clusters: --advertise-addr=<host/IP addr>
*
*
CockroachDB node starting at 2022-04-22 15:25:55.461315977 +0000 UTC (took 2.1s)
build:               CCL unknown @  (go1.17.6)
webui:               http://nlowe-z4l:8080/
sql:                 postgresql://root@nlowe-z4l:26257/defaultdb?sslmode=disable
sql (JDBC):          jdbc:postgresql://nlowe-z4l:26257/defaultdb?sslmode=disable&user=root
RPC client flags:    /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach <client cmd> --host=nlowe-z4l:26257 --insecure
logs:                /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data/logs
temp dir:            /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data/cockroach-temp4100501952
external I/O path:   /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data/extern
store[0]:            path=/home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data
storage engine:      pebble
clusterID:           bb3942d7-f241-4d26-aa4a-1bd0d6556e4d
status:              initialized new cluster
nodeID:              1
```

I was then able to view the contents of a backup hosted in an Azure government storage account:

```
root@:26257/defaultdb> SELECT DISTINCT object_name FROM [SHOW BACKUP 'azure://container/path/to/backup?AZURE_ACCOUNT_NAME=account&AZURE_ACCOUNT_KEY=***&AZURE_ENVIRONMENT=AzureUSGovernmentCloud'] WHERE object_type = 'database';
        object_name
------------------------------------------
  example_database
  ...
(17 rows)

Time: 5.859632889s
```

Omitting the `AZURE_ENVIRONMENT` parameter, we can see cockroach defaults to the public cloud, where my storage account does not exist:

```
root@:26257/defaultdb> SELECT DISTINCT object_name FROM [SHOW BACKUP 'azure://container/path/to/backup?AZURE_ACCOUNT_NAME=account&AZURE_ACCOUNT_KEY=***'] WHERE object_type = 'database';
ERROR: reading previous backup layers: unable to list files for specified blob: Get "https://account.blob.core.windows.net/container?comp=list&delimiter=path%2Fto%2Fbackup&restype=container&timeout=61": dial tcp: lookup account.blob.core.windows.net on 8.8.8.8:53: no such host
```

## Tests

Two new tests are added to verify that the storage account URL is correctly built from the provided Azure Environment name, and that the Environment defaults to the Public Cloud if unspecified, for backwards compatibility. I verified the existing tests pass against a government storage account after specifying `AZURE_ENVIRONMENT` as `AzureUSGovernmentCloud` in the backup URL query parameters:

```
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓 2022-04-22 17:38:26]
$ export AZURE_ACCOUNT_NAME=account
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓 2022-04-22 17:38:42]
$ export AZURE_ACCOUNT_KEY=***
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓 2022-04-22 17:39:25]
$ export AZURE_CONTAINER=container
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓 2022-04-22 17:39:48]
$ export AZURE_ENVIRONMENT=AzureUSGovernmentCloud
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓 2022-04-22 17:40:15]
$ bazel test --test_output=streamed --test_arg=-test.v --action_env=AZURE_ACCOUNT_NAME --action_env=AZURE_ACCOUNT_KEY --action_env=AZURE_CONTAINER --action_env=AZURE_ENVIRONMENT //pkg/cloud/azure:azure_test
INFO: Invocation ID: aa88a942-f3c7-4df6-bade-8f5f0e18041f
WARNING: Streamed test output requested. All tests will be run locally, without sharding, one at a time
INFO: Build option --action_env has changed, discarding analysis cache.
INFO: Analyzed target //pkg/cloud/azure:azure_test (468 packages loaded, 16382 targets configured).
INFO: Found 1 test target...
initialized metamorphic constant "span-reuse-rate" with value 28
=== RUN   TestAzure
=== RUN   TestAzure/simple_round_trip
=== RUN   TestAzure/exceeds-4mb-chunk
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#00
    cloud_test_helpers.go:226: read 3345 of file at 4778744
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#1
    cloud_test_helpers.go:226: read 7228 of file at 226589
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#2
    cloud_test_helpers.go:226: read 634 of file at 256284
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#3
    cloud_test_helpers.go:226: read 7546 of file at 3546208
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#4
    cloud_test_helpers.go:226: read 24123 of file at 4821795
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#5
    cloud_test_helpers.go:226: read 16899 of file at 403428
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#6
    cloud_test_helpers.go:226: read 29467 of file at 4886370
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#7
    cloud_test_helpers.go:226: read 11700 of file at 1876920
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#8
    cloud_test_helpers.go:226: read 2928 of file at 489781
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#9
    cloud_test_helpers.go:226: read 19933 of file at 1483342
=== RUN   TestAzure/read-single-file-by-uri
=== RUN   TestAzure/write-single-file-by-uri
=== RUN   TestAzure/file-does-not-exist
=== RUN   TestAzure/List
=== RUN   TestAzure/List/root
=== RUN   TestAzure/List/file-slash-numbers-slash
=== RUN   TestAzure/List/root-slash
=== RUN   TestAzure/List/file
=== RUN   TestAzure/List/file-slash
=== RUN   TestAzure/List/slash-f
=== RUN   TestAzure/List/nothing
=== RUN   TestAzure/List/delim-slash-file-slash
=== RUN   TestAzure/List/delim-data
--- PASS: TestAzure (34.81s)
    --- PASS: TestAzure/simple_round_trip (9.66s)
    --- PASS: TestAzure/exceeds-4mb-chunk (16.45s)
        --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats (6.41s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#00 (0.15s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#1 (0.64s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#2 (0.65s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#3 (0.60s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#4 (0.75s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#5 (0.80s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#6 (0.75s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#7 (0.65s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#8 (0.65s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#9 (0.77s)
    --- PASS: TestAzure/read-single-file-by-uri (0.60s)
    --- PASS: TestAzure/write-single-file-by-uri (0.60s)
    --- PASS: TestAzure/file-does-not-exist (1.05s)
    --- PASS: TestAzure/List (2.40s)
        --- PASS: TestAzure/List/root (0.30s)
        --- PASS: TestAzure/List/file-slash-numbers-slash (0.30s)
        --- PASS: TestAzure/List/root-slash (0.30s)
        --- PASS: TestAzure/List/file (0.30s)
        --- PASS: TestAzure/List/file-slash (0.30s)
        --- PASS: TestAzure/List/slash-f (0.30s)
        --- PASS: TestAzure/List/nothing (0.15s)
        --- PASS: TestAzure/List/delim-slash-file-slash (0.15s)
        --- PASS: TestAzure/List/delim-data (0.30s)
=== RUN   TestAntagonisticAzureRead
--- PASS: TestAntagonisticAzureRead (103.90s)
=== RUN   TestParseAzureURL
=== RUN   TestParseAzureURL/Defaults_to_Public_Cloud_when_AZURE_ENVIRONEMNT_unset
=== RUN   TestParseAzureURL/Can_Override_AZURE_ENVIRONMENT
--- PASS: TestParseAzureURL (0.00s)
    --- PASS: TestParseAzureURL/Defaults_to_Public_Cloud_when_AZURE_ENVIRONEMNT_unset (0.00s)
    --- PASS: TestParseAzureURL/Can_Override_AZURE_ENVIRONMENT (0.00s)
=== RUN   TestMakeAzureStorageURLFromEnvironment
=== RUN   TestMakeAzureStorageURLFromEnvironment/AzurePublicCloud
=== RUN   TestMakeAzureStorageURLFromEnvironment/AzureUSGovernmentCloud
--- PASS: TestMakeAzureStorageURLFromEnvironment (0.00s)
    --- PASS: TestMakeAzureStorageURLFromEnvironment/AzurePublicCloud (0.00s)
    --- PASS: TestMakeAzureStorageURLFromEnvironment/AzureUSGovernmentCloud (0.00s)
PASS
Target //pkg/cloud/azure:azure_test up-to-date:
  _bazel/bin/pkg/cloud/azure/azure_test_/azure_test
INFO: Elapsed time: 159.865s, Critical Path: 152.35s
INFO: 66 processes: 2 internal, 64 darwin-sandbox.
INFO: Build completed successfully, 66 total actions
//pkg/cloud/azure:azure_test PASSED in 139.9s
INFO: Build completed successfully, 66 total actions
```

80705: kvclient: fix gRPC stream leak in rangefeed client r=tbg,srosenberg a=erikgrinaker

When the DistSender rangefeed client received a `RangeFeedError` message and propagated a retryable error up the stack, it would fail to close the existing gRPC stream, causing stream/goroutine leaks.

Release note (bug fix): Fixed a goroutine leak when internal rangefeed clients received certain kinds of retriable errors.

80762: joberror: add ConnectionReset/ConnectionRefused to retryable err allow list r=miretskiy a=adityamaru

Bulk jobs will no longer treat `sysutil.IsErrConnectionReset` and `sysutil.IsErrConnectionRefused` as permanent errors. IMPORT, RESTORE, and BACKUP will treat this error as transient and retry.

Release note: None

80773: backupccl: break dependency to testcluster r=irfansharif a=irfansharif

Noticed we were building testing library packages when building CRDB binaries.

```
$ bazel query "somepath(//pkg/cmd/cockroach-short, //pkg/testutils/testcluster)"
//pkg/cmd/cockroach-short:cockroach-short
//pkg/cmd/cockroach-short:cockroach-short_lib
//pkg/ccl:ccl
//pkg/ccl/backupccl:backupccl
//pkg/testutils/testcluster:testcluster
```

Release note: None

Co-authored-by: Marcus Gartner <marcus@cockroachlabs.com>
Co-authored-by: Nathan Lowe <nathan.lowe@spacex.com>
Co-authored-by: Erik Grinaker <grinaker@cockroachlabs.com>
Co-authored-by: Aditya Maru <adityamaru@gmail.com>
Co-authored-by: irfan sharif <irfanmahmoudsharif@gmail.com>
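As a rough illustration of what the AZURE_ENVIRONMENT parameter from #80511 has to decide, here is a self-contained Go sketch that maps an environment name to a blob endpoint suffix and builds the storage URL, defaulting to the public cloud when the parameter is omitted. The map contents and function names are assumptions for illustration; the real implementation presumably resolves environments through the Azure SDK rather than a hand-written map.

```go
package main

import (
	"fmt"
	"net/url"
)

// blobEndpointSuffix maps an Azure environment name to the blob service
// endpoint suffix. Hand-written for illustration only.
var blobEndpointSuffix = map[string]string{
	"AzurePublicCloud":       "core.windows.net",
	"AzureUSGovernmentCloud": "core.usgovcloudapi.net",
	"AzureChinaCloud":        "core.chinacloudapi.cn",
}

// storageURL builds the blob service URL for an account. An empty env
// falls back to AzurePublicCloud, matching the backwards-compatible
// default described in the commit message.
func storageURL(account, env string) (string, error) {
	if env == "" {
		env = "AzurePublicCloud"
	}
	suffix, ok := blobEndpointSuffix[env]
	if !ok {
		return "", fmt.Errorf("unknown AZURE_ENVIRONMENT: %q", env)
	}
	u := url.URL{Scheme: "https", Host: account + ".blob." + suffix}
	return u.String(), nil
}

func main() {
	for _, env := range []string{"", "AzureUSGovernmentCloud"} {
		s, err := storageURL("account", env)
		if err != nil {
			panic(err)
		}
		// "" -> https://account.blob.core.windows.net (the host seen in
		// the DNS error above when AZURE_ENVIRONMENT is omitted)
		fmt.Printf("%q -> %s\n", env, s)
	}
}
```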
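The fix in #80705 boils down to releasing the stream before bubbling a retryable error up the stack. Below is a hypothetical Go sketch of the pattern: cancelling the context a gRPC stream was opened with is what tears the stream down, so deferring the cancel closes it on every exit path, including the retryable-error path that previously leaked. The interface and error type are stand-ins, not CockroachDB's types.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// eventStream stands in for a gRPC client stream; cancelling the context
// it was opened with is what actually tears the stream down.
type eventStream interface {
	Recv() (string, error)
}

var errRangefeedRetry = errors.New("RangeFeedError: retryable")

// runOnce opens one stream and consumes events until an error. The
// deferred cancel is the essence of the fix: previously, a retryable
// error returned here without closing the stream, leaking its goroutines.
func runOnce(ctx context.Context, open func(context.Context) eventStream) error {
	streamCtx, cancel := context.WithCancel(ctx)
	defer cancel() // closes the stream on every return path
	stream := open(streamCtx)
	for {
		ev, err := stream.Recv()
		if err != nil {
			return err
		}
		fmt.Println("event:", ev)
	}
}

// fakeStream emits one event, then a retryable error.
type fakeStream struct{ n int }

func (f *fakeStream) Recv() (string, error) {
	if f.n++; f.n > 1 {
		return "", errRangefeedRetry
	}
	return "checkpoint", nil
}

func main() {
	err := runOnce(context.Background(), func(context.Context) eventStream {
		return &fakeStream{}
	})
	fmt.Println("stream closed, got:", err) // caller may now retry safely
}
```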
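And for #80762, the change is an error-classification tweak: connection resets and refusals become transient rather than permanent. A minimal sketch of that kind of check, using plain `errors.Is` against syscall errnos instead of the `sysutil.IsErrConnectionReset`/`IsErrConnectionRefused` helpers the commit names:

```go
package main

import (
	"errors"
	"fmt"
	"syscall"
)

// isRetryableConnErr treats connection reset/refused as transient, the
// way the joberror allow list now does for bulk jobs (IMPORT, RESTORE,
// BACKUP). errors.Is against the errnos is a portable approximation of
// CockroachDB's sysutil helpers.
func isRetryableConnErr(err error) bool {
	return errors.Is(err, syscall.ECONNRESET) || errors.Is(err, syscall.ECONNREFUSED)
}

func main() {
	err := fmt.Errorf("exporting ranges: %w", syscall.ECONNRESET)
	fmt.Println(isRetryableConnErr(err)) // true: the job retries instead of failing
}
```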
Bumps sequelize from 4.37.1 to 5.15.1.

Release notes

Sourced from sequelize's releases.

Commits

- 9bd0bc1 fix(sequelize.json.fn): use common path extraction for mysql/mariadb/sqlite (...
- 83e263b feat(associations): source and target key support for belongs-to-many (#11311)
- 4f09899 feat: support include option in bulkInsert (#11307)
- de06ac3 docs(security): grammar mistakes
- 29eb1c8 docs(security): add responsible disclosure policy (#11300)
- 592099d fix(count): fix null count with includes (#11295)
- 80d3625 docs(query-interface): fix typo with remove-column parameter (#11294)
- a39c63a fix(types): return a usable type when using the sequelize.models lookup (#11293)
- 98a4089 fix(types): use correct `this` value in getterMethods and setterMethods (#11292)
- dd428a0 refactor(association): name model that association is missing from (#11290)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot ignore this [patch|minor|major] version` will close this PR and stop Dependabot creating any more for this minor/major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.