
Introduce fallback ZooKeeper sessions #50424

Merged

Conversation

tonickkozlov
Contributor

@tonickkozlov tonickkozlov commented Jun 1, 2023

Changelog category (leave one):

  • Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Introduce fallback ZooKeeper sessions which are time-bound. Fixed index column in system.zookeeper_connection for DNS addresses.

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)

This change allows users to limit the lifetime of ZooKeeper sessions. It's useful for balancing load between ZooKeeper replicas over long periods of time. A hypothetical situation where this may be useful (please see the integration test in the PR):

  • A cluster of N CH nodes is connected to 3 Keeper replicas.
  • Each Keeper replica sees N/3 connections if random load_balancing is used.
  • One of the Keeper replicas goes down / loses connectivity.
  • ClickHouse nodes detect that and re-connect to the other replicas.
  • The 2 remaining Keeper nodes now receive N/2 connections each.
  • The failed Keeper node comes back up.
  • Without the proposed change, the node that just came up will not get any new connections, while the other two keep seeing N/2 each, unless CH nodes are restarted.
  • With the proposed change, the ZK sessions expire automatically and the load balance is restored (see the worked example below).
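
To make the arithmetic concrete (illustrative numbers only, not from the PR): with N = 12 ClickHouse nodes the steady state is 4/4/4 connections per Keeper replica; after one replica fails the split becomes 6/6/0; once the failed replica returns, it stays at 6/6/0 without this change, whereas with time-bound fallback sessions the expiring sessions reconnect and the distribution drifts back towards 4/4/4.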

I believe this will help with the scenario described in #29617 (comment), as well as #51524.

args.session_lifetime->max_sec,
};
UInt64 client_session_duration_sec = distribution(thread_local_rng);
client_session_deadline = clock::now() + std::chrono::seconds(client_session_duration_sec);
Member

Will this work with system clock changes?

Contributor Author

Thank you for taking a look!
I'm not 100% sure. I'm following an example in src/Dictionaries/SSDCacheDictionaryStorage.h:1366; does that account for system clock changes?
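
For reference, std::chrono::steady_clock is monotonic, so a deadline derived from it cannot be moved around by changes to the wall-clock time, unlike system_clock. A minimal standalone sketch of that idea (illustrative names only, not the PR's code):

#include <chrono>
#include <cstdint>
#include <random>

using Deadline = std::chrono::steady_clock::time_point;

// Pick a random session lifetime in [min_sec, max_sec] and turn it into a
// monotonic deadline; wall-clock adjustments do not affect this deadline.
Deadline makeSessionDeadline(uint64_t min_sec, uint64_t max_sec, std::mt19937_64 & rng)
{
    std::uniform_int_distribution<uint64_t> distribution{min_sec, max_sec};
    return std::chrono::steady_clock::now() + std::chrono::seconds(distribution(rng));
}

bool sessionDeadlineReached(Deadline deadline)
{
    return std::chrono::steady_clock::now() >= deadline;
}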

Member

@tavplubix tavplubix Jun 1, 2023

@tonickkozlov tonickkozlov force-pushed the tonickkozlov/zk-session-lifetime branch from 5e1b0d8 to 2d89eef Compare June 1, 2023 12:12
@robot-ch-test-poll robot-ch-test-poll added the pr-improvement Pull request with some product improvements label Jun 1, 2023
@robot-ch-test-poll
Contributor

robot-ch-test-poll commented Jun 1, 2023

This is an automated comment for commit 5dfc305 with a description of existing statuses. It's updated for the latest CI run.
The full report is available here
The overall status of the commit is 🟢 success

Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parenthesis. If it fails, ask a maintainer for help | 🟢 success
CI running | A meta-check that indicates the running CI. Normally, it's in success or pending state. The failed status indicates some problems with the PR | 🟢 success
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log, grepping for cmake. Use these options and follow the general build process | 🟢 success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | 🟢 success
Docker image for servers | The check to build and optionally push the mentioned image to Docker Hub | 🟢 success
Docs Check | Builds and tests the documentation | 🟢 success
Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | 🟢 success
Flaky tests | Checks if newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs for too long, this check will be red. We don't allow flaky tests; read the doc | 🟢 success
Install packages | Checks that the built packages are installable in a clear environment | 🟢 success
Integration tests | The integration tests report. In parenthesis the package type is given, and in square brackets are the optional part/total tests | 🟢 success
Mergeable Check | Checks if all other necessary checks are successful | 🟢 success
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | 🟢 success
Push to Dockerhub | The check for building and pushing the CI-related docker images to Docker Hub | 🟢 success
SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool | 🟢 success
Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass | 🟢 success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | 🟢 success
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | 🟢 success
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | 🟢 success
Style Check | Runs a set of checks to keep the code style clean. If some of the checks fail, see the related log from the report | 🟢 success
Unit tests | Runs the unit tests for different release types | 🟢 success
Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks if the new server can successfully start up without any errors, crashes or sanitizer asserts | 🟢 success

@tavplubix tavplubix self-assigned this Jun 1, 2023
Comment on lines 94 to 95
# wait for zk sessions to reach deadline
time.sleep(5)
Member

It's unreliable, the test is going to be flaky

Contributor Author

Any suggestions on how to improve it? The test needs to wait for the ZK sessions to expire and re-connect. They are set to expire between 2 and 4 seconds, hence the 5. I understand that sleeping for a magic number of seconds is not ideal.
Is there a way to query ClickHouse itself for which ZK replica it's connected to? Maybe we could use that in query_with_retry.

Member

Maybe system.zookeeper_connection, query_with_retry and zookeeper_load_balancing will help

@tonickkozlov
Contributor Author

I'll need to re-work this PR. I see the exception leaking through to a client that sends queries, and that shouldn't happen.

@tonickkozlov tonickkozlov force-pushed the tonickkozlov/zk-session-lifetime branch 5 times, most recently from 712f391 to dfc3dea Compare June 21, 2023 18:23
@tavplubix
Member

I see the exception leaking through to a client that sends queries and that shouldn't happen.

Do you mean that a "session expired" exception is thrown on each attempt to make a zk request when the session is closed due to lifetime timeout? That's unavoidable. We need this exception.

@devoxel

devoxel commented Jul 17, 2023

I suggested a similar idea in #51524, but in this implementation we do a session refresh even when there is no failover. Does it make sense in the code to have this only kick in after a failover event, to prevent constant needless refreshes, or is it preferred for simplicity to just have a maximum session duration?

@Algunenano
Member

Does it make sense in the code to have this only kick in after a failover event, to prevent constant needless refreshes, or is it preferred for simplicity to just have a maximum session duration?

As a user, right now ZK session refreshes are a PITA. At least now inserts are able to survive them by retrying, but other operations (ALTER, CREATE, etc.) won't. It might help in specific cases, but I don't think having ZK refreshes happening all the time by default is a good idea (which is why it should be behind a setting).

@tonickkozlov
Contributor Author

@devoxel I quite like your idea actually. So only the "failover" session would have a deadline, not regular ones. Do you have any implementation in mind, or an integration test maybe? If not, I think I'm going to change my PR to do what you suggest.

@tonickkozlov tonickkozlov force-pushed the tonickkozlov/zk-session-lifetime branch 2 times, most recently from edb7267 to da93c5c Compare July 24, 2023 16:13
{
    if (args.hosts[i] == address.toString())
Contributor Author

This did not work for cases where the ZooKeeper host is a DNS name (including in the integration tests).

@tonickkozlov tonickkozlov changed the title Introduce Zookeeper session lifetime Introduce fallback ZooKeeper sessions Jul 24, 2023
@tonickkozlov
Contributor Author

@devoxel I've updated my change; does it fit your use case? The ZooKeeper session now only has a limited lifetime if a fallback node is used. @tavplubix, could I get another review please? Thanks.

Comment on lines 885 to 886
bool fallback_session_expired = ((session_lifetime_seconds > 0) && (getSessionUptime() >= session_lifetime_seconds));
return fallback_session_expired || impl->isExpired();
Member

The only correct way to expire the current session and start a new one is to call finalize. Otherwise, some objects will start using a new session while others still use the old one. It will cause consistency issues and will trigger obscure bugs. See #50424 (comment)
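
A minimal standalone sketch of the approach described above (all names are illustrative; this is not the PR's code): the deadline is checked on the request path, and the shared session object is finalized once, so isExpired() and every subsequent request agree that the session is gone.

#include <chrono>
#include <stdexcept>

struct SessionSketch
{
    using Clock = std::chrono::steady_clock;

    Clock::time_point deadline{};   // default-constructed means "no deadline"
    bool finalized = false;

    void finalize() { finalized = true; }       // stands in for closing the real connection
    bool isExpired() const { return finalized; }

    // Called at the start of every request.
    void checkSessionDeadline()
    {
        if (deadline != Clock::time_point{} && Clock::now() >= deadline)
            finalize();
        if (finalized)
            throw std::runtime_error("Session expired");   // callers react as they would to a real session loss
    }
};

Holders of an already-finalized session keep getting the same error until they ask the context for a fresh session, which matches the single-active-session behaviour requested in the comment.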

@tonickkozlov
Contributor Author

tonickkozlov commented Jul 26, 2023

@tavplubix fair enough. Let's discuss follow-up work that needs to be done before I implement anything further.

Here's my understanding of how things are meant to work, but please correct me if I'm wrong:

  • managing ZooKeeper sessions is delegated to ReplicatedMergeTreeRestartingThread. This is done for the exact reason you mentioned: keeping only 1 ZooKeeper session alive at any point.
  • the "global" ZooKeeper object is stored on the server context. It's being set by the ReplicatedMergeTreeRestartingThread when the current ZooKeeper session is found to be expired.
  • ReplicatedMergeTreeRestartingThread under normal conditions wakes up every minute to check if the ZooKeeper session has expired. Some other parts of the code do so as well, and request a new ZooKeeper session from the server context. In either case the old session is meant to be destroyed and a new one created. This is done under a mutex: https://github.com/ClickHouse/ClickHouse/blob/4d03c23166f62e3d38ff6d1e5db4a0e32751a587/src/Interpreters/Context.cpp#L2689
  • In addition to the above, if the ZooKeeper session encounters an exception sending or receiving data, it calls finalize on itself, which triggers a notification that in turn wakes up the restarting thread:
    LOG_TEST(log, "Received event for expired session. Waking up restarting thread");

So what my implementation essentially does is schedule the restarting thread to run close to the "artificial" ZooKeeper session expiry, so that a new session is obtained on time.

Now, my understanding is that you suggest that when the session is "expired" because of the time limit, all its methods should throw a ZSESSIONEXPIRED exception, so that clients have a chance to refresh the session?
In one of my old implementations I had a ZooKeeperSessionTimeoutWrapper; shall I bring it back alongside the rest of my changes?

@tonickkozlov
Contributor Author

tonickkozlov commented Jul 26, 2023

There's also a question I hope you could answer separately. So there's meant to be one ZooKeeper session active at a time, stored on Context.

This is where it's updated:

shared->zookeeper = shared->zookeeper->startNewSession();

And returned to the caller:

return shared->zookeeper;

The startNewSession method creates a new ZK session and returns a shared_ptr to it:

ZooKeeperPtr ZooKeeper::startNewSession() const

Here's the caveat: some callers of getZooKeeper() may store the shared_ptr internally, e.g.

ZooKeeperPtr current_zookeeper TSA_GUARDED_BY(zookeeper_mutex);

The implication is that the old expired session will stay alive until all callers update their references by calling getZooKeeper() on the context to receive the new, non-expired session. Only then will the old session get "finalized", because the destructor will be called. So there may be a situation where multiple active ZooKeeper sessions are present, at least conceptually.

Does this look like a bug to you, or do I misunderstand something? Would it be better to destroy the old session right there in the same call that creates a new one (Context::getZooKeeper)?

@tavplubix
Member

managing ZooKeeper sessions is delegated to ReplicatedMergeTreeRestartingThread

No, ReplicatedMergeTreeRestartingThread is only responsible for [re]initializing ReplicatedMergeTree. ZooKeeper sessions are "managed" by Context::getZooKeeper() and ZooKeeper client.

the "global" ZooKeeper object is stored on the server context

That's true.

It's being set by the ReplicatedMergeTreeRestartingThread

No, it's being set by anyone who had called Context::getZooKeeper() when shared->zookeeper->expired() was true:

zkutil::ZooKeeperPtr Context::getZooKeeper() const
{
    std::lock_guard lock(shared->zookeeper_mutex);

    const auto & config = shared->zookeeper_config ? *shared->zookeeper_config : getConfigRef();
    if (!shared->zookeeper)
        shared->zookeeper = std::make_shared<zkutil::ZooKeeper>(config, zkutil::getZooKeeperConfigName(config), getZooKeeperLog());
    else if (shared->zookeeper->expired())
    {
        Stopwatch watch;
        LOG_DEBUG(shared->log, "Trying to establish a new connection with ZooKeeper");
        shared->zookeeper = shared->zookeeper->startNewSession();
        LOG_DEBUG(shared->log, "Establishing a new connection with ZooKeeper took {} ms", watch.elapsedMilliseconds());
    }

    return shared->zookeeper;
}

It can be ReplicatedMergeTreeRestartingThread, or DDLWorker, or ReadFromSystemZooKeeper, or UserDefinedSQLObjectsLoaderFromZooKeeper, or whatever else.

ReplicatedMergeTreeRestartingThread under normal condition wakes up every minute
In addition to the above, ... triggers a notification which in turn wakes up the restarting thread

(just an observation) Hmm, looks like we don't really need to run the restarting thread every minute anymore (previously we didn't have that notification mechanism)

Now, my understanding is that you suggest that when the session is "expired" because of the time limit, all its methods should throw ZSESSIONEXPIRED exception, so that clients have a chance to refresh the session?

Yes, exactly

In one of my old implementations I had a ZooKeeperSessionTimeoutWrapper , shall I bring it back alongside the rest of my changes?

I'm not sure about ZooKeeperSessionTimeoutWrapper, I don't really understand why we need a wrapper. AFAIR, the very first implementation was simply throwing an exception leading to finalize call (or just calling finalize directly, I don't remember). And it was totally fine. Yes, it triggers a lot of "connection loss"/"session expired" errors from different places that use ZooKeeper client, but it's unavoidable.


Here's the caveat: some callers of getZooKeeper() may store the shared_ptr internally

Yes, but it's okay (see below)

Only then the old session will get "finalized" because the destructor will be called.

A session is finalized in the dtor only when clickhouse-server is shutting down. If an error happens in ZooKeeper client while clickhouse-server is working, then finalize is called explicitly, e.g.

catch (...)
{
    tryLogCurrentException(log);
    finalize(true, false, "Exception in sendThread");
}

if (future_result.wait_for(std::chrono::milliseconds(args.operation_timeout_ms)) != std::future_status::ready)
{
    impl->finalize(fmt::format("Operation timeout on {} {}", Coordination::OpNum::List, path));
    return Coordination::Error::ZOPERATIONTIMEOUT;
}

And if no errors happen, then the session is still active and ZooKeeper::isExpired() is false, so no one will start a new session.

Also, ZooKeeper::isExpired() is false until ZooKeeper::finalize() is called.

The implication is that the old expired session will stay alive until all callers update their references by calling getZooKeeper() on context to receive the new, non-expired session.

Yes, some objects may still use shared_ptr to the old ZooKeeper object, but since that object was finalized it will throw "session expired" exceptions on any attempts to make a request. Having multiple "finalized" ZooKeeper clients is harmless.

So there may be a situation where there are multiple active ZooKeeper sessions present, at least conceptually.

No, it's possible to have multiple ZooKeeper objects, but only one of them can be active. The others are finalized and basically unusable.
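
To illustrate the lifecycle described above, here is a minimal standalone sketch (illustrative names, not ClickHouse code, reusing the SessionSketch type from the earlier sketch) of a caller that caches the session pointer: a finalized session only throws, so the caller fetches the current session from the "context" and the old object is released naturally once its last holder drops it.

#include <functional>
#include <memory>
#include <mutex>

struct CallerSketch
{
    std::mutex mutex;
    std::shared_ptr<SessionSketch> cached;                        // may still point at a finalized session
    std::function<std::shared_ptr<SessionSketch>()> get_current;  // stands in for Context::getZooKeeper()

    std::shared_ptr<SessionSketch> session()
    {
        std::lock_guard lock(mutex);
        if (!cached || cached->isExpired())
            cached = get_current();   // the old object stays alive until its last holder releases it, but it is unusable
        return cached;
    }
};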

@tonickkozlov tonickkozlov force-pushed the tonickkozlov/zk-session-lifetime branch 4 times, most recently from c51e916 to 8f6594f Compare July 27, 2023 17:14
@tonickkozlov tonickkozlov force-pushed the tonickkozlov/zk-session-lifetime branch from 8f6594f to 3f3f4dd Compare July 27, 2023 17:16
@tonickkozlov tonickkozlov force-pushed the tonickkozlov/zk-session-lifetime branch 2 times, most recently from bea6aa1 to 0d8ba7e Compare July 27, 2023 19:05

def test_fallback_session(started_cluster: ClickHouseCluster):
    # only leave connecting to zoo3 possible
    with PartitionManager() as pm:
Member

Please add this test to parallel_skip.json, otherwise PartitionManager may work incorrectly

@tonickkozlov tonickkozlov force-pushed the tonickkozlov/zk-session-lifetime branch from 7a82996 to 5dfc305 Compare July 28, 2023 11:48
@tavplubix tavplubix merged commit 01f05e1 into ClickHouse:master Jul 31, 2023
278 checks passed