server: TestAddNewStoresToExistingNodes failed #106706
Comments
Seems related to #106355, which got closed.
@tbg did you have further thoughts on this one? This failure included the fix to 106402 (ffa0a74), but still seemed to fail. However, it only happened once, so I'm inclined to close this out. I ran this on my GCE worker for 6 hours (unintentionally, as I forgot it was running :)) and it never failed for me on the latest master.
cc @cockroachdb/replication
server.TestAddNewStoresToExistingNodes failed with artifacts on master @ b3ae46d39098088ca33844b3cb87d18c60cde0d5: Fatal error:
Stack:
Log preceding fatal error
Yep, looks like the test is hanging during start-up. I'll take a look. `TestCluster.Start` waits for `Node.waitForAdditionalStoreInit`, which is stuck in `DistSender` trying to send the Increment that allocates a StoreID (`Node.initializeAdditionalStores`). There are also a few server-side `redirectOnOrAcquireLease` stacks.
I think I see a problem. In cockroach/pkg/server/server.go (line 1909 at 0b2f1cc), we wait for additional store init before starting liveness, which happens a few lines further down. That seems unwise, since now the following can happen:
I see why this isn't happening frequently, though. I added a 10s sleep before starting liveness, and then it does hang. So unless we hit that check, we usually get away with it. We have to reliably hit the second branch on all (or at least two) nodes for the hang to occur "naturally", since only the second branch below requires node liveness to have been started on the other node (cockroach/pkg/kv/kvserver/replica_range_lease.go, lines 465 to 481 in cd9612b).
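To make the two branches concrete, here is a minimal, self-contained sketch of the distinction being described. The types, function names, and branch conditions are simplified stand-ins for the real logic in `replica_range_lease.go`, not CockroachDB's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// liveness is an illustrative stand-in for NodeLiveness.
type liveness struct{ heartbeatLoopRunning bool }

// syncSelfHeartbeat stands in for the synchronous heartbeat of our own
// liveness record; it works even before the heartbeat loop has started.
func (l *liveness) syncSelfHeartbeat() error { return nil }

// selfIsLive stands in for the "is this node itself live?" check.
func (l *liveness) selfIsLive() bool { return l.heartbeatLoopRunning }

// acquireEpochLease sketches the two branches: heartbeating our own record
// (first branch) vs. incrementing another node's epoch (second branch). Only
// the second branch requires our own liveness loop to already be running.
func acquireEpochLease(prevOwnerIsSelf bool, l *liveness) error {
	if prevOwnerIsSelf {
		return l.syncSelfHeartbeat() // usually saves us, hence the rare hang
	}
	if !l.selfIsLive() {
		return errors.New("cannot increment other epoch: own liveness loop not running")
	}
	return nil // would increment the other node's epoch here
}

func main() {
	l := &liveness{heartbeatLoopRunning: false}
	fmt.Println(acquireEpochLease(true, l))  // <nil>: first branch gets away with it
	fmt.Println(acquireEpochLease(false, l)) // error: this is the branch that hangs
}
```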
Now that I have a deterministic way of introducing the hang, I can see about rearranging the startup sequence so that we wait for additional stores only after starting liveness.
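A minimal sketch of the intended reordering, with made-up names standing in for the real code in `pkg/server/server.go` (this is not the actual CockroachDB startup code, just an illustration of the ordering):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// nodeLiveness and node are illustrative stand-ins for the real server types.
type nodeLiveness struct{ started bool }

func (nl *nodeLiveness) Start(ctx context.Context) { nl.started = true }

type node struct{ nl *nodeLiveness }

// waitForAdditionalStoreInit stands in for the step that blocks until the KV
// Increment allocating StoreIDs has succeeded. Serving that Increment can
// require an epoch lease, which in turn needs some node's heartbeat loop.
func (n *node) waitForAdditionalStoreInit(ctx context.Context) error {
	if !n.nl.started {
		return errors.New("would deadlock: no liveness heartbeat loop running")
	}
	return nil
}

func main() {
	ctx := context.Background()
	nl := &nodeLiveness{}
	n := &node{nl: nl}

	// Proposed ordering: start liveness first, only then block on store init.
	nl.Start(ctx)
	if err := n.waitForAdditionalStoreInit(ctx); err != nil {
		fmt.Println("startup failed:", err)
		return
	}
	fmt.Println("additional stores initialized")
}
```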
I discovered[^1] a deadlock scenario when multiple nodes in the cluster restart with additional stores that need to be bootstrapped. In that case, liveness must be running when the StoreIDs are allocated, but it is not.

Trying to address this problem, I realized that when an auxiliary Store is bootstrapped, it will create a new replicateQueue, which will register a new callback into NodeLiveness. But if liveness must be started at this point to fix cockroachdb#106706, we'll run into the assertion that checks that we don't register callbacks on a started node liveness. Something's got to give: we will allow registering callbacks at any given point in time, and they'll get an initial set of notifications synchronously. I audited the few users of RegisterCallback and this seems OK with all of them.

[^1]: cockroachdb#106706 (comment)

Epic: None
Release note: None
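A rough sketch of what "allow registering callbacks at any given point in time, and they'll get an initial set of notifications synchronously" could look like; the names here are simplified stand-ins, not the real liveness package API:

```go
package main

import (
	"fmt"
	"sync"
)

type isLiveCallback func(nodeID int)

type nodeLiveness struct {
	mu        sync.Mutex
	live      map[int]bool
	callbacks []isLiveCallback
}

func newNodeLiveness() *nodeLiveness {
	return &nodeLiveness{live: map[int]bool{}}
}

// RegisterCallback no longer asserts that liveness hasn't started. Instead, a
// late registrant synchronously receives a notification for every node that
// is already known to be live, so it can't miss events that happened before
// it registered.
func (nl *nodeLiveness) RegisterCallback(cb isLiveCallback) {
	nl.mu.Lock()
	nl.callbacks = append(nl.callbacks, cb)
	var initial []int
	for id, isLive := range nl.live {
		if isLive {
			initial = append(initial, id)
		}
	}
	nl.mu.Unlock()
	for _, id := range initial {
		cb(id) // initial synchronous notifications
	}
}

// markLive records a node as live and notifies all registered callbacks.
func (nl *nodeLiveness) markLive(id int) {
	nl.mu.Lock()
	nl.live[id] = true
	cbs := append([]isLiveCallback(nil), nl.callbacks...)
	nl.mu.Unlock()
	for _, cb := range cbs {
		cb(id)
	}
}

func main() {
	nl := newNodeLiveness()
	nl.markLive(1)
	// Registering "late" still observes n1 synchronously.
	nl.RegisterCallback(func(id int) { fmt.Println("live:", id) })
	nl.markLive(2)
}
```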
Otherwise, we can end up in a situation where each node is sitting on the channel and nobody has started their liveness yet. The sender to the channel will first have to get an Increment through KV, but if nobody acquires the lease (since nobody's heartbeat loop is running), this will never happen.

In practice, *most of the time*, there is no deadlock because the lease acquisition path performs a synchronous heartbeat to the node's own entry in most cases (ignoring the fact that liveness hasn't been started yet). But there is also another path where someone else's epoch needs to be incremented, and this path also checks if the node itself is live - which it won't necessarily be (the liveness loop is not running yet).

Fixes cockroachdb#106706

Epic: None
Release note (bug fix): a rare (!) situation in which nodes would get stuck during start-up was addressed. This is unlikely to have been encountered by production users. If so, it would manifest itself through a stack frame sitting on a select in `waitForAdditionalStoreInit` for extended periods of time (i.e., minutes).
107265: liveness: allow registering callbacks after start r=erikgrinaker a=tbg

I discovered[^1] a deadlock scenario when multiple nodes in the cluster restart with additional stores that need to be bootstrapped. In that case, liveness must be running when the StoreIDs are allocated, but it is not. Trying to address this problem, I realized that when an auxiliary Store is bootstrapped, it will create a new replicateQueue, which will register a new callback into NodeLiveness. But if liveness must be started at this point to fix #106706, we'll run into the assertion that checks that we don't register callbacks on a started node liveness. Something's got to give: we will allow registering callbacks at any given point in time, and they'll get an initial set of notifications synchronously. I audited the few users of RegisterCallback and this seems OK with all of them.

[^1]: #106706 (comment)

Epic: None
Release note: None

107417: kvserver: ignore RPC conn when deciding to campaign/vote r=erikgrinaker a=erikgrinaker

**kvserver: remove stale mayCampaignOnWake comment**

The comment is about a parameter that no longer exists.

**kvserver: revamp shouldCampaign/Forget tests**

**kvserver: ignore RPC conn in `shouldCampaignOnWake`**

Previously, `shouldCampaignOnWake()` used `IsLiveMapEntry.IsLive` to determine whether the leader was dead. However, this depends not only on the node's liveness, but also on its RPC connectivity. This can prevent an unquiescing replica from acquiring Raft leadership if the leader is still alive but unable to heartbeat liveness, and the leader will be unable to acquire epoch leases in this case. This patch ignores the RPC connection state when deciding whether to campaign, using only the liveness state.

**kvserver: ignore RPC conn in `shouldForgetLeaderOnVoteRequest`**

Previously, `shouldForgetLeaderOnVoteRequest()` used `IsLiveMapEntry.IsLive` to determine whether the leader was dead. However, this depends not only on the node's liveness, but also on its RPC connectivity. This can prevent granting votes to a new leader that may be attempting to acquire an epoch lease (which the current leader can't). This patch ignores the RPC connection state when deciding whether to vote, using only the liveness state.

Resolves #107060.

Epic: none
Release note: None

**kvserver: remove `StoreTestingKnobs.DisableLivenessMapConnHealth`**

107424: kvserver: scale Raft entry cache size with system memory r=erikgrinaker a=erikgrinaker

The Raft entry cache size defaulted to 16 MB, which is rather small. This has been seen to cause tail latency and throughput degradation with high write volume on large nodes, correlating with a reduction in the entry cache hit rate. This patch linearly scales the Raft entry cache size as 1/256 of total system/cgroup memory, shared evenly between all stores, with a minimum of 32 MB. For example, a 32 GB 8-vCPU node will have a 128 MB entry cache. This is a conservative default, since this memory is not accounted for in existing memory budgets nor by the `--cache` flag. We rarely see cache misses in production clusters anyway, and have seen significantly improved hit rates with this scaling (e.g. a 64 KB kv0 workload on 8-vCPU nodes increased from 87% to 99% hit rate).

Resolves #98666.

Epic: none
Release note (performance improvement): The default Raft entry cache size has been increased from 16 MB to 1/256 of system memory, with a minimum of 32 MB, divided evenly between all stores. This can be configured via `COCKROACH_RAFT_ENTRY_CACHE_SIZE`.
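For the entry cache change, the sizing rule reduces to a small formula. Below is a sketch under the assumption (taken from the release note) that the 32 MB floor applies to the per-node total before it is split across stores; the function name and exact rounding are made up for illustration:

```go
package main

import "fmt"

const (
	minRaftEntryCacheBytes = 32 << 20 // 32 MB floor
	systemMemoryFraction   = 256      // cache gets 1/256 of system/cgroup memory
)

// raftEntryCacheBytesPerStore sketches the described sizing: 1/256 of total
// system memory, at least 32 MB, divided evenly between the node's stores.
func raftEntryCacheBytesPerStore(systemMemBytes, numStores int64) int64 {
	total := systemMemBytes / systemMemoryFraction
	if total < minRaftEntryCacheBytes {
		total = minRaftEntryCacheBytes
	}
	return total / numStores
}

func main() {
	// The example from the PR: a 32 GB node with a single store gets 128 MB.
	fmt.Printf("%d MB\n", raftEntryCacheBytesPerStore(32<<30, 1)>>20)
}
```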
107442: kvserver: deflake TestRequestsOnFollowerWithNonLiveLeaseholder r=erikgrinaker a=tbg

The test previously relied on aggressive liveness heartbeat expirations to avoid running for too long. As a result, it was flaky, since liveness wasn't reliably pinned in the way the test wanted. The hybrid manual clock allows time to jump forward at an opportune moment. Use it here to avoid running with a tight lease interval. On my gceworker, this previously flaked within a few minutes. As of this commit, I ran it for double-digit minutes without issue.

Fixes #107200.

Epic: None
Release note: None

107526: kvserver: fail gracefully in TestLeaseTransferRejectedIfTargetNeedsSnapshot r=erikgrinaker a=tbg

We saw this test hang in CI. What likely happened (according to the stacks) is that a lease transfer that was supposed to be caught by an interceptor never showed up in the interceptor. The most likely explanation is that it errored out before it got to evaluation. It then signaled a channel the test was only prepared to check later, so the test hung (waiting for a channel that was now never to be touched).

This test is hard to maintain. It would be great (though, for now, out of reach) to write tests like it in a deterministic framework[^1].

[^1]: see #105177.

For now, fix the test so that when the (so far unknown) error rears its head again, it will fail properly, so we get to see the error and can take another pass at fixing the test (separately). Stressing this commit[^2], we get:

> transferErrC unexpectedly signaled: /Table/Max: transfer lease unexpected error: refusing to transfer lease to (n3,s3):3 because target may need a Raft snapshot: replica in StateProbe

This makes sense. The test wants to exercise the below-raft mechanism, but the above-raft mechanism also exists, and while we didn't want to interact with it, we sometimes do[^1]. The second commit introduces a testing knob that disables the above-raft mechanism selectively. I've stressed the test for 15 minutes without issues after this change.

[^1]: somewhat related to #107524
[^2]: `./dev test --filter TestLeaseTransferRejectedIfTargetNeedsSnapshot --stress ./pkg/kv/kvserver/` on gceworker, 285s

Fixes #106383.

Epic: None
Release note: None

107531: kvserver: disable replicate queue and lease transfers in closedts tests r=erikgrinaker a=tbg

For a more holistic suggestion on how to fix this for the likely many other tests susceptible to similar issues, see #107528.

> 1171 runs so far, 0 failures, over 15m55s

Fixes #101824.

Release note: None
Epic: none

Co-authored-by: Tobias Grieger <tobias.b.grieger@gmail.com>
Co-authored-by: Erik Grinaker <grinaker@cockroachlabs.com>
Relying on sync heartbeats is problematic: they were very effective at hiding the problem in cockroachdb#106706, at least in our testing. So allow sync heartbeats only once there are also async heartbeats.

Epic: none
Release note: None
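A tiny illustration of the gating described in that change, with invented names (the real logic lives in the liveness package):

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

// nodeLiveness is an illustrative stand-in; "started" flips when the async
// heartbeat loop begins running.
type nodeLiveness struct {
	started atomic.Bool
}

// syncHeartbeat refuses to run before the async loop has started, so a
// forgotten Start can no longer be papered over by synchronous heartbeats.
func (nl *nodeLiveness) syncHeartbeat() error {
	if !nl.started.Load() {
		return errors.New("liveness not started; refusing synchronous heartbeat")
	}
	return nil // the actual heartbeat would happen here
}

func main() {
	var nl nodeLiveness
	fmt.Println(nl.syncHeartbeat()) // error: async loop not running yet
	nl.started.Store(true)
	fmt.Println(nl.syncHeartbeat()) // <nil>
}
```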
96144: server: honor and validate the service mode for SQL pods r=yuzefovich,stevendanna a=knz

Rebased on top of #105441. Fixes #93145. Fixes #83650.

Epic: CRDB-26691

TLDR: this makes SQL servers start only if their data state is "ready" and their deployment type (e.g. separate process) matches the configured service mode in the tenant record. Additionally, the SQL server spontaneously terminates if its service mode or data state is no longer valid (e.g. as a result of DROP TENANT or ALTER TENANT STOP SERVICE).

----

Prior to this patch, there wasn't a good story about the lifecycle of separate-process SQL servers ("SQL pods"):

- if a SQL server was started against a non-existent tenant, an obscure error would be raised (`database "[1]" does not exist`)
- if a SQL server was started while a tenant was being added (keyspace not yet valid), no check would be performed and data corruption could ensue.
- if the tenant record/keyspace was dropped while the SQL server was running, SQL clients would start encountering obscure errors.

This commit fixes the situation by checking the tenant metadata:

- once, during SQL server startup, at which point server startup is prevented if the service check fails;
- then, asynchronously, whenever the metadata is updated, such that any service check failure results in a graceful shutdown of the SQL service.

The check proper validates:

- that the tenant record exists;
- that the data state is "ready";
- that the configured service mode matches that requested by the SQL server.

Example output upon error:

- non-existent tenant:
  ```
  tenant service check failed: missing record
  ```
- attempting to start a separate-process server while the tenant is running as shared-process:
  ```
  tenant service check failed: service mode check failed: expected external, record says shared
  ```
- after ALTER TENANT STOP SERVICE:
  ```
  tenant service check failed: service mode check failed: expected external, record says none
  ```
- after DROP TENANT:
  ```
  tenant service check failed: service mode check failed: expected external, record says dropping
  ```

Release note: None

107124: server: avoid deadlock when initing additional stores r=erikgrinaker a=tbg

We need to start node liveness before waiting for additional store init. Otherwise, we can end up in a situation where each node is sitting on the channel and nobody has started their liveness yet. The sender to the channel will first have to get an Increment through KV, but if nobody acquires the lease (since nobody's heartbeat loop is running), this will never happen.

In practice, *most of the time*, there is no deadlock because the lease acquisition path performs a synchronous heartbeat to the node's own entry in most cases (ignoring the fact that liveness hasn't been started yet). But there is also another path where someone else's epoch needs to be incremented, and this path also checks if the node itself is live - which it won't necessarily be (the liveness loop is not running yet).

Fixes #106706

Epic: None
Release note (bug fix): a rare (!) situation in which nodes would get stuck during start-up was addressed. This is unlikely to have been encountered by production users. If so, it would manifest itself through a stack frame sitting on a select in `waitForAdditionalStoreInit` for extended periods of time (i.e., minutes).
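To illustrate the tenant service check from #96144 above, here is a self-contained sketch; the types, mode strings, and function names are stand-ins, not the actual CockroachDB implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// tenantRecord is an illustrative stand-in for the tenant metadata consulted
// at SQL server startup and on every metadata update.
type tenantRecord struct {
	exists      bool
	dataState   string // e.g. "add", "ready", "dropping"
	serviceMode string // e.g. "none", "shared", "external"
}

// checkTenantService mirrors the described validation: the record must exist,
// the data state must be "ready", and the recorded service mode must match
// the mode this SQL server was started with ("external" = separate process).
func checkTenantService(rec tenantRecord, wantMode string) error {
	if !rec.exists {
		return errors.New("tenant service check failed: missing record")
	}
	if rec.dataState != "ready" {
		return fmt.Errorf("tenant service check failed: data state is %q, not ready", rec.dataState)
	}
	if rec.serviceMode != wantMode {
		return fmt.Errorf("tenant service check failed: service mode check failed: expected %s, record says %s",
			wantMode, rec.serviceMode)
	}
	return nil
}

func main() {
	// After ALTER TENANT STOP SERVICE, the record's service mode is "none",
	// so a separate-process ("external") SQL server refuses to start.
	rec := tenantRecord{exists: true, dataState: "ready", serviceMode: "none"}
	fmt.Println(checkTenantService(rec, "external"))
}
```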
107216: sql: disallow schema changes for READ COMMITTED transactions r=michae2 a=rafiss

Due to how descriptor leasing works, schema changes are not safe in weaker-isolation transactions. Until they are safe, we disallow them.

Fixes #100143

Release note: None

Co-authored-by: Raphael 'kena' Poss <knz@thaumogen.net>
Co-authored-by: Tobias Grieger <tobias.b.grieger@gmail.com>
Co-authored-by: Rafi Shamim <rafi@cockroachlabs.com>
server.TestAddNewStoresToExistingNodes failed with artifacts on master @ 665084ed379a25fcde92ee8b4c9dc48e192876e5:
Fatal error:
Stack:
Log preceding fatal error
Help
See also: How To Investigate a Go Test Failure (internal)
Jira issue: CRDB-29688