{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":362951059,"defaultBranch":"master","name":"cockroach","ownerLogin":"alyshanjahani-crl","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2021-04-29T21:24:52.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/63252420?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1707245596.0","currentOid":""},"activityList":{"items":[{"before":"d3c3d4e3497629b9891a047c1f2db780e0f34a3b","after":"00a6257022de52fb87821cd14154d2174e4d752a","ref":"refs/heads/master","pushedAt":"2024-03-25T18:41:44.000Z","pushType":"push","commitsCount":2973,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"Merge #120239 #121021\n\n120239: kvclient: add metrics for proxy requests r=erikgrinaker a=andrewbaptist\n\nProxy behavior was added in a previous commit. This commit adds 4 new\r\nmetrics to track the client and server side of proxying and the number\r\nof errors that occured as a result of the proxy. These statistics will\r\nnormally be zero or close to zero. While there is a partial partition\r\nthe metric will be increased.\r\n\r\nEpic: none\r\n\r\nRelease note: Adds four new metrics: `distsender.rpc.proxy.sent`,\r\n`distsender.rpc.proxy.err`, `distsender.rpc.proxy.forward.sent`,\r\n`distsender.rpc.proxy.forward.err` to track the number and outcome of\r\nproxy requests. 
Operators should monitor and alert on\r\n`distsender.rpc.proxy.sent` as it indicates there is likely a network\r\npartition in the system.\n\n121021: kvcoord: fix DistSender circuit breaker benchmark `nil` panic r=erikgrinaker a=erikgrinaker\n\nResolves #121020.\r\nEpic: none\r\nRelease note: None\n\nCo-authored-by: Andrew Baptist \nCo-authored-by: Erik Grinaker ","shortMessageHtmlLink":"Merge cockroachdb#120239 cockroachdb#121021"}},{"before":"9a7d2e7d59ab9da9f7f45ba6a9cf6798c682d663","after":"6beb0a7fd5e562ab757230eb742193be51e15049","ref":"refs/heads/ignore-flush-errors","pushedAt":"2024-02-08T19:15:50.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"pkg/util/log: Do not log errors when the sink has never been flushed to before\n\nPreviously, when a buffered sink such as a fluent-server is unavailable\n(ex/ connection refused) the runFlusher method would log this error to\nthe OPS channel. 
If the sink was configured to receive the OPS channel\nand remained unavailable, this would lead to a lot of noise as the error\nlogged to the OPS channel would eventually get flushed to this unavailable\nsink, and this would keep on repeating.\n\nThis commit eliminates that indefinite noise by only logging the error to the\nOPS channel if the sink has been flushed to successfully before.\n\nRelease note (ops change): Reduce noise when using dynamically provisioned logging sinks","shortMessageHtmlLink":"pkg/util/log: Do not log errors when the sink has never been flushed …"}},{"before":"f7ef9288130bbb05fcda1a0a6801a7d75b6ae395","after":"9a7d2e7d59ab9da9f7f45ba6a9cf6798c682d663","ref":"refs/heads/ignore-flush-errors","pushedAt":"2024-02-08T16:21:18.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"}},{"before":"64deac341a78d22d237cf8901daf15d31623f5a0","after":"f7ef9288130bbb05fcda1a0a6801a7d75b6ae395","ref":"refs/heads/ignore-flush-errors","pushedAt":"2024-02-07T20:24:55.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"}},{"before":"6e3e87aba5639f267062d20aa06841798f0c5669","after":"dee0f4cded07c10c3f4ade57d2e2683c7c08f385","ref":"refs/heads/ignore-flush-errors-23.1","pushedAt":"2024-02-07T20:10:12.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"pkg/util/log: Do not log errors when the sink has never been flushed to before\n\nPreviously, when a buffered sink such as a fluent-server is unavailable\n(ex/ connection refused) the runFlusher method would log this error to\nthe OPS channel. 
If the sink was configured to receive the OPS channel\nand remained unavailable, this would lead to a lot of noise as the error\nlogged to the OPS channel would eventually get flushed to this unavailable\nsink, and this would keep on repeating.\n\nThis commit eliminates that indefinite noise by only logging the error to the\nOPS channel if the sink has been flushed to successfully before.\n\nThis allows for the setup of dynamically provisioned logging sinks where\nCRDB is running but the logging sink is not available yet.\n\nRelease note (ops change): Reduce noise when using dynamically provisioned logging sinks","shortMessageHtmlLink":"pkg/util/log: Do not log errors when the sink has never been flushed …"}},{"before":"7d1b8c70cb778a72da9820095a640c05cf19fdc3","after":"6e3e87aba5639f267062d20aa06841798f0c5669","ref":"refs/heads/ignore-flush-errors-23.1","pushedAt":"2024-02-07T19:15:44.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"}},{"before":"cd4465517d2d3dff0b32f3851b689db5a7baa0f8","after":"7d1b8c70cb778a72da9820095a640c05cf19fdc3","ref":"refs/heads/ignore-flush-errors-23.1","pushedAt":"2024-02-06T20:14:06.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"}},{"before":null,"after":"cd4465517d2d3dff0b32f3851b689db5a7baa0f8","ref":"refs/heads/ignore-flush-errors-23.1","pushedAt":"2024-02-06T18:53:16.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"}},{"before":null,"after":"8c42affb27bdcef254583d096466b23800f9ca13","ref":"refs/heads/release-23.1","pushedAt":"2024-02-06T18:40:47.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"Merge pull request #117829 from cockroachdb/blathers/backport-release-23.1-117139\n\nrelease-23.1: multiregionccl: deflake TestMrSystemDatabase","shortMessageHtmlLink":"Merge pull request cockroachdb#117829 from cockroachdb/blathers/backp…"}},{"before":"d3c3d4e3497629b9891a047c1f2db780e0f34a3b","after":"64deac341a78d22d237cf8901daf15d31623f5a0","ref":"refs/heads/ignore-flush-errors","pushedAt":"2024-02-06T18:40:13.000Z","pushType":"push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"}},{"before":null,"after":"d3c3d4e3497629b9891a047c1f2db780e0f34a3b","ref":"refs/heads/ignore-flush-errors","pushedAt":"2024-02-06T18:31:03.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"Merge #118299 #118543\n\n118299: kv/tscache: implement timestamp cache serialization r=nvanbenschoten a=nvanbenschoten\n\nInforms #61986.\r\n\r\nThis commit adds a new `Serialize` function to `tscache.Cache` implementations. This serialization uses the `readsummary/rspb.Segment` representation added in 0950a1e0. Serialization of the production `sklImpl` uses the Segment merging logic added in e4fc6f1e in order to merge together partial serializations of each individual `sklPage` in the data structure.\r\n\r\nTimestamp cache serialization will be used to address #61986.\r\n\r\nRelease note: None\n\n118543: catalog/lease: detect if synchronous lease releases are successful r=fqazi a=fqazi\n\nPreviously, for unit testing, we added support for synchronously releasing leases. 
If the context was cancelled when releasing a lease synchronously, it was possible for the lease to be erased from memory and not from storage. As a result, reacquisition could hit an error when session-based leasing is enabled. To address this, this patch re-orders operations so that we clear storage first for synchronous lease release, followed by the in-memory copy.\r\n\r\nFixes: #118522, fixes #118523, fixes #118521, fixes https://github.com/cockroachdb/cockroach/issues/118550\r\n\r\nRelease note: None\n\nCo-authored-by: Nathan VanBenschoten \nCo-authored-by: Erik Grinaker ","shortMessageHtmlLink":"Merge cockroachdb#118299 cockroachdb#118543"}},{"before":"6e499de9245d6bc3ac5a5a858f3946bc896134b7","after":"d3c3d4e3497629b9891a047c1f2db780e0f34a3b","ref":"refs/heads/master","pushedAt":"2024-01-31T22:45:00.000Z","pushType":"push","commitsCount":355,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"Merge #118299 #118543\n\n118299: kv/tscache: implement timestamp cache serialization r=nvanbenschoten a=nvanbenschoten\n\nInforms #61986.\r\n\r\nThis commit adds a new `Serialize` function to `tscache.Cache` implementations. This serialization uses the `readsummary/rspb.Segment` representation added in 0950a1e0. Serialization of the production `sklImpl` uses the Segment merging logic added in e4fc6f1e in order to merge together partial serializations of each individual `sklPage` in the data structure.\r\n\r\nTimestamp cache serialization will be used to address #61986.\r\n\r\nRelease note: None\n\n118543: catalog/lease: detect if synchronous lease releases are successful r=fqazi a=fqazi\n\nPreviously, for unit testing, we added support for synchronously releasing leases. If the context was cancelled when releasing a lease synchronously, it was possible for the lease to be erased from memory and not from storage. 
As a result, reacquisition could hit an error when session-based leasing is enabled. To address this, this patch re-orders operations so that we clear storage first for synchronous lease release, followed by the in-memory copy.\r\n\r\nFixes: #118522, fixes #118523, fixes #118521, fixes https://github.com/cockroachdb/cockroach/issues/118550\r\n\r\nRelease note: None\n\nCo-authored-by: Nathan VanBenschoten \nCo-authored-by: Faizan Qazi ","shortMessageHtmlLink":"Merge cockroachdb#118299 cockroachdb#118543"}},{"before":"9c510f9abdcd0d52e04f620ce5fa283c54d6ef46","after":"6e499de9245d6bc3ac5a5a858f3946bc896134b7","ref":"refs/heads/master","pushedAt":"2024-01-18T15:26:23.000Z","pushType":"push","commitsCount":5508,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"Merge #117892\n\n117892: util/admission: remove unused elasticCPUGranter.tbReset field r=petermattis a=petermattis\n\nNoticed in passing while inspecting usages of `util.EveryN`.\r\n\r\nEpic: none\r\nRelease note: none\r\n\n\nCo-authored-by: Peter Mattis ","shortMessageHtmlLink":"Merge cockroachdb#117892"}},{"before":"bdf2a64450b72aaafeffd7d7cbc0478a9a0efa0d","after":"9c510f9abdcd0d52e04f620ce5fa283c54d6ef46","ref":"refs/heads/master","pushedAt":"2023-07-26T16:13:57.000Z","pushType":"push","commitsCount":1191,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"Merge #107360 #107584\n\n107360: backupccl: clean up test helpers r=adityamaru a=stevendanna\n\nPreviously, we had two sets of test helpers, one in backupccl and another in backuptestutils.\r\n\r\nI suspect most people chose a test helper with the signature closest to the one they need and don't look much more closely. 
But as a result, the different functions have grown slightly different behaviours.\r\n\r\nHere, I keep nearly all of the backupccl entry functions but change them to directly call a single implementation in backuptestutils. This isn't a pure refactor since it does mean some tests are seeing slightly different testing hooks.\r\n\r\nI also deleted a few versions of this helper that were only used in a few places and instead call the more general function directly.\r\n\r\nThis still feels like a bit much just for a thin wrapper around testcluster.StartTestCluster. This PR is a baby step in the direction of removing this complexity. I realise the functional options API looks more complex. I'm not sure it will survive the next steps of this cleanup. Also, backuptestutils is a long package name so I don't want to have to type it all the time.\r\n\r\nEpic: none\r\n\r\nRelease note: None\n\n107584: release: update predecessor map for 22.2.12 and 23.1.6 r=renatolabs a=celiala\n\nRelease note: None\r\nEpic: None\n\nCo-authored-by: Steven Danna \nCo-authored-by: Celia La ","shortMessageHtmlLink":"Merge cockroachdb#107360 cockroachdb#107584"}},{"before":"dc2584c5dde5cca03cd18cf2fa40213827d512ac","after":"bdf2a64450b72aaafeffd7d7cbc0478a9a0efa0d","ref":"refs/heads/master","pushedAt":"2023-06-27T20:43:49.470Z","pushType":"push","commitsCount":559,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"Merge #105316 #105589 #105630\n\n105316: obsservice: migrate gRPC ingest to follow new architecture r=knz a=abarganier\n\n**Reviewer note: review this PR commit-wise.**\r\n\r\n----\r\n\r\nIn the original design of the obsservice, exported events were\r\nintended to be written directly to storage. 
The idea was that\nexported events would experience minimal transformation once\ningested, meaning that work done to \"package\" events properly\nwas left up to the exporting client (CRDB). The obsservice\nwould then store the ingested events into a target storage.\nThis concept of target storage has been removed for now as\npart of this patch.\n\nIn the new architecture, exported events are more \"raw\", and\nwe expect the obsservice to heavily transform & aggregate the\ndata externally, where the aggregated results are flushed\nto storage instead.\n\nThis patch takes the pre-existing gRPC events ingester, and\nmodifies it to meet the new architecture.\n\nThe events ingester will now be provided with a consumer with\nwhich it can feed ingested events into the broader pipeline.\nIt is no longer the responsibility of the ingester to write\ningested events to storage.\n\nFor now, we use a simple STDOUT consumer that writes all\ningested events to STDOUT, but in the future, this will\nbe a more legitimate component - part of a chain that\neventually buffers ingested events for aggregation.\n\nRelease note: none\r\n\r\nEpic: CRDB-28526\n\n105589: ccl/sqlproxyccl: fix possible flake in TestProxyProtocol r=pjtatlow a=jaylim-crl\n\nFixes #105585.\r\n\r\nThis commit updates the TestProxyProtocol test to only test the case where RequireProxyProtocol=true. There's no point testing the case where the RequireProxyProtocol field is false since the other tests do not use the proxy protocol (and that case is implicitly covered by them).\r\n\r\nIt's unclear what is causing this test flake (and it is extremely rare, i.e. 1 legit failure out of 1000 runs [1]). 
It may be due to some sort of race within the tests, but given that the case is covered by all other tests, this commit opts to remove the test entirely.\r\n\r\n[1] https://teamcity.cockroachdb.com/test/-1121006080109385641?currentProjectId=Cockroach_Ci_TestsGcpLinuxX8664BigVm&expandTestHistoryChartSection=true\r\n\r\nRelease note: None\r\n\r\nRelease justification: Fixes a test flake.\r\n\r\nEpic: none\n\n105630: roachtest: handle panics in `mixedversion` r=smg260 a=renatolabs\n\nPreviously, a panic in a user function in a roachtest using the `mixedversion` package would crash the entire roachtest process. This is because all steps run in a separate goroutine, so if panics are not captured, the entire process crashes.\r\n\r\nThis commit updates the test runner so that all steps (including those that are part of the test infrastructure) run with panics captured. The panic message is returned as a regular error which should lead to usual GitHub error reports. The stack trace for the panic is also logged so that we can pinpoint the exact offending line in the test.\r\n\r\nEpic: CRDB-19321\r\n\r\nRelease note: None\n\nCo-authored-by: Alex Barganier \nCo-authored-by: Jay \nCo-authored-by: Renato Costa ","shortMessageHtmlLink":"Merge cockroachdb#105316 cockroachdb#105589 cockroachdb#105630"}},{"before":"27a769977cc8f174e5261aa2082c594165be5a47","after":"dc2584c5dde5cca03cd18cf2fa40213827d512ac","ref":"refs/heads/master","pushedAt":"2023-06-13T19:30:25.436Z","pushType":"push","commitsCount":1705,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"Merge #104618\n\n104618: test: add support for `TEST_UNDECLARED_OUTPUTS_DIR` r=rickystewart a=rickystewart\n\nTo date, we have put temporary files from tests in $TMPDIR. 
We have a patch to `rules_go` that copies the value of the $TEST_TMPDIR (the variable that Bazel provides) over to $TMPDIR for use in CI. Some tests (especially those using TestLogScope) have behavior where they leave files behind after the test completes *if the test fails*, thereby allowing people to look at the left-over files for debugging.\r\n\r\nAs we transition to remote execution, this will no longer work, since the $TMPDIR is on some remote machine somewhere, and Bazel will just clean the $TMPDIR up after the test completes regardless of its exit status.\r\n\r\nBazel provides the variable `TEST_UNDECLARED_OUTPUTS_DIR` for the same purpose: it gives us a place to put unstructured output from tests. To prepare for remote execution, we make the following changes:\r\n\r\n1. Update `TestLogScope` to use `TEST_UNDECLARED_OUTPUTS_DIR` where appropriate.\r\n2. Add a new function `datapathutils.DebuggableTempDir()` which returns either `TEST_UNDECLARED_OUTPUTS_DIR` or os.TempDir() as appropriate.\r\n\r\nSince the outputs.zip behavior is kind of awkward, we guard this behind the environment variable `REMOTE_EXEC`. We must be sure to set this variable whenever we run tests remotely.\r\n\r\nEpic: CRDB-17165\r\nRelease note: None\n\nCo-authored-by: Ricky Stewart ","shortMessageHtmlLink":"Merge cockroachdb#104618"}},{"before":"7db9ad2a1cda7bb43f3a140ac8255711d2fdc34d","after":"27a769977cc8f174e5261aa2082c594165be5a47","ref":"refs/heads/master","pushedAt":"2023-04-18T00:40:18.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"Merge #101687\n\n101687: rowcontainer: fix flaky test r=yuzefovich a=yuzefovich\n\nThis commit fixes a flaky test where we incorrectly were reusing one memory monitor by restarting it with a small budget (rather than creating a fresh new monitor). 
This would lead to unexpected memory budget errors in the later subtests.\r\n\r\nAlso adjust a couple of places to use the test rand.\r\n\r\nFixes: #101326.\r\n\r\nRelease note: None\n\nCo-authored-by: Yahor Yuzefovich ","shortMessageHtmlLink":"Merge cockroachdb#101687"}},{"before":"10ce1f5ebddb9dad813d06abdf4e68187e4d0121","after":"10b0f9b83f7ef3a6f20dbe91c83995ad65b3220d","ref":"refs/heads/update-setting-description","pushedAt":"2023-04-11T19:35:47.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"pgwire: update description of server.max_connections_per_gateway\n\nPreviously, the description of the cluster setting\nserver.max_connections_per_gateway was inaccurate and misleading.\n\nIt read \"the maximum number of non-superuser SQL connections per gateway\",\nhowever, the limit still counts superuser SQL connections towards the limit.\nFor example, a cluster may have the limit set to 2, with 2 superuser\nconnections currently open. The current description suggests that a non\nsuperuser connection can be opened, since the limit is 2 but there are 0\nnon superuser connections that are open. However, this is not the case,\nthe connection would be denied due to the limit.\n\nThis commit updates the description to better reflect the behaviour of the\nsetting. 
All connections are counted towards the limit; superuser connections\nare not affected by (but are still counted towards) the limit.\n\nRelease note: none","shortMessageHtmlLink":"pgwire: update description of server.max_connections_per_gateway"}},{"before":"a7479a3961416dcb0a95a0738afde02498da9566","after":"10ce1f5ebddb9dad813d06abdf4e68187e4d0121","ref":"refs/heads/update-setting-description","pushedAt":"2023-04-11T17:46:30.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"}},{"before":"7db9ad2a1cda7bb43f3a140ac8255711d2fdc34d","after":"a7479a3961416dcb0a95a0738afde02498da9566","ref":"refs/heads/update-setting-description","pushedAt":"2023-04-10T22:25:04.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"pgwire: update description of server.max_connections_per_gateway\n\nPreviously, the description of the cluster setting\nserver.max_connections_per_gateway was inaccurate and misleading.\n\nIt read \"the maximum number of non-superuser SQL connections per gateway\",\nhowever, the limit still counts superuser SQL connections towards the limit.\nFor example, a cluster may have the limit set to 2, with 2 superuser\nconnections currently open. The current description suggests that a non\nsuperuser connection can be opened, since the limit is 2 but there are 0\nnon superuser connections that are open. However, this is not the case;\nthe connection would be denied due to the limit.\n\nThis commit updates the description to better reflect the behaviour of the\nsetting. 
All connections are counted towards the limit; superuser connections\nare not affected by (but are still counted towards) the limit.\n\nRelease note (general change): the description of the\nserver.max_connections_per_gateway cluster setting was changed.","shortMessageHtmlLink":"pgwire: update description of server.max_connections_per_gateway"}},{"before":null,"after":"7db9ad2a1cda7bb43f3a140ac8255711d2fdc34d","ref":"refs/heads/update-setting-description","pushedAt":"2023-04-10T22:18:16.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"}},{"before":"6bf00e228410e3cc8f795962f69c129e405ac230","after":"7db9ad2a1cda7bb43f3a140ac8255711d2fdc34d","ref":"refs/heads/master","pushedAt":"2023-04-10T22:17:25.000Z","pushType":"push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"}},{"before":"d3a9bf0d78b8ac6b043c9b114f494e4c30386049","after":"02f420a159620215c281d1f4f8d7cca9fe1779f6","ref":"refs/heads/max-external-conns-cluster-setting","pushedAt":"2023-04-10T22:04:46.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"pgwire: add server.cockroach_cloud.max_client_connections_per_gateway_reason cluster setting\n\nThis setting can be used to customize the error message returned when\nconnections are denied due to a limit specified via\nserver.cockroach_cloud.max_client_connections_per_gateway.\n\nThis functionality is required for serverless. Being able to indicate to the\nuser why a limit is placed on their number of connections will support a better\nUX. 
Specifically, indicating to the user which resource limits they have hit.\nPart of: https://cockroachlabs.atlassian.net/browse/CC-9288\n\nRelease note: None","shortMessageHtmlLink":"pgwire: add server.cockroach_cloud.max_client_connections_per_gateway…"}},{"before":"660a05a9b9fb6e31875506a11dbb7d1b35442962","after":"d3a9bf0d78b8ac6b043c9b114f494e4c30386049","ref":"refs/heads/max-external-conns-cluster-setting","pushedAt":"2023-04-10T17:08:47.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"pgwire: add server.cockroach_cloud.max_client_connections_per_gateway_reason cluster setting\n\nThis setting can be used to customize the error message returned when\nconnections are denied due to a limit specified via\nserver.cockroach_cloud.max_client_connections_per_gateway.\n\nThis functionality is required for serverless. Being able to indicate to the\nuser why a limit is placed on their number of connections will support a better\nUX. 
Specifically, indicating to the user which resource limits they have hit.\nPart of: https://cockroachlabs.atlassian.net/browse/CC-9288\n\nRelease note: None","shortMessageHtmlLink":"pgwire: add server.cockroach_cloud.max_client_connections_per_gateway…"}},{"before":"0e6ff41b880ece5dbe510ca12f80e128aef32378","after":"660a05a9b9fb6e31875506a11dbb7d1b35442962","ref":"refs/heads/max-external-conns-cluster-setting","pushedAt":"2023-04-07T23:58:09.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"}},{"before":"0745cd4e40ba6ef8bbbc3d733f85a84c831fbf06","after":"6bf00e228410e3cc8f795962f69c129e405ac230","ref":"refs/heads/master","pushedAt":"2023-04-07T23:50:11.000Z","pushType":"push","commitsCount":524,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"Merge #100900\n\n100900: kv: expose isolation level through kv.Txn API r=nvanbenschoten a=nvanbenschoten\n\nFixes #100130.\r\n\r\nThis commit exposes isolation levels through the kv.Txn API with the introduction of a new `SetIsoLevel` method. This method behaves similarly to `SetUserPriority`. 
Notably, the isolation must be set before any operations are performed on the transaction.\r\n\r\nRelease note: None\n\nCo-authored-by: Nathan VanBenschoten ","shortMessageHtmlLink":"Merge cockroachdb#100900"}},{"before":"3fb40b63172202410a266249841433afea8e6263","after":"0e6ff41b880ece5dbe510ca12f80e128aef32378","ref":"refs/heads/max-external-conns-cluster-setting","pushedAt":"2023-04-06T20:38:01.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"pgwire: add server.max_non_cockroach_cloud_connections_per_gateway_reason cluster setting\n\nThis setting can be used to customize the error message returned when\nconnections are denied due to a limit specified via\nserver.max_non_cockroach_cloud_connections_per_gateway.\n\nThis functionality is required for serverless. Being able to indicate to the\nuser why a limit is placed on their number of connections will support a better\nUX. 
Specifically, indicating to the user which resource limits they have hit.\nPart of: https://cockroachlabs.atlassian.net/browse/CC-9288","shortMessageHtmlLink":"pgwire: add server.max_non_cockroach_cloud_connections_per_gateway_re…"}},{"before":"5b021bfdf4bb75c53b7275d3d0fa481d34aa79ba","after":"3fb40b63172202410a266249841433afea8e6263","ref":"refs/heads/max-external-conns-cluster-setting","pushedAt":"2023-04-04T20:02:39.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"pgwire: add server.max_non_root_connections_per_gateway_reason cluster setting\n\nThis setting can be used to customize the error message returned when\nconnections are denied due to a limit specified via\nserver.max_non_root_connections_per_gateway.\n\nThis functionality is required for serverless. Being able to indicate to the\nuser why a limit is placed on their number of connections will support a better\nUX. 
Specifically, indicating to the user which resource limits they have hit.\nPart of: https://cockroachlabs.atlassian.net/browse/CC-9288","shortMessageHtmlLink":"pgwire: add server.max_non_root_connections_per_gateway_reason cluste…"}},{"before":"50d374ececa0a52715f231a3c3e8e2a6c5b4a60d","after":"5b021bfdf4bb75c53b7275d3d0fa481d34aa79ba","ref":"refs/heads/max-external-conns-cluster-setting","pushedAt":"2023-04-04T19:19:07.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"}},{"before":"7e712666b532611c4f060f431083500a00ab5e29","after":"50d374ececa0a52715f231a3c3e8e2a6c5b4a60d","ref":"refs/heads/max-external-conns-cluster-setting","pushedAt":"2023-04-04T18:18:11.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"alyshanjahani-crl","name":null,"path":"/alyshanjahani-crl","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/63252420?s=80&v=4"},"commit":{"message":"pgwire: add server.max_external_connections_per_gateway_reason cluster setting\n\nThis setting can be used to customize the error message returned when\nconnections are denied due to a limit specified via\nserver.max_external_connections_per_gateway.\n\nThis functionality is required for serverless. Being able to indicate to the\nuser why a limit is placed on their number of connections will support a better\nUX. Specifically, indicating to the user which resource limits they have hit.\nPart of: https://cockroachlabs.atlassian.net/browse/CC-9288","shortMessageHtmlLink":"pgwire: add server.max_external_connections_per_gateway_reason cluste…"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEHuRDoQA","startCursor":null,"endCursor":null}},"title":"Activity · alyshanjahani-crl/cockroach"}