Send DDL commands in parallel to worker nodes. #131
Comments
I'm copy/pasting @aamederen's notes to this issue. This logic is in the multi_utility.c file. The main call hierarchy is: ExecuteDistributedDDLCommand -> ExecuteCommandOnWorkerShards -> ExecuteCommandOnShardPlacements -> PQexec. For the change, I think we need to use […]
@aamederen @marcocitus -- I have a question on the DDL propagation changes. Let's say the user has a table with 10K shards, and they then run an ALTER TABLE or similar DDL command. Do we open a connection for every shard placement? If we do, could you open another issue to run a few safety checks before we propagate DDL changes? That way, if we think the cluster doesn't have the resources needed to propagate these changes, we error out early. (What happens today if we don't?)
It would end up opening a connection for each shard placement, and there would need to be enough connections and prepared transaction slots available for that to succeed. Otherwise, it would error out on the worker and roll back, potentially causing some queries to error out as well. A safety check would probably be worth adding.

An alternative approach would be to open a fixed number of connections per worker and run multiple DDL commands per connection. A drawback of this approach is that it significantly complicates transaction and connection management, as it creates a somewhat arbitrary mapping between sessions and the locks held by each session.
We will open a separate PR after PR #855 is merged, as that PR sets up the infrastructure for this change.
DDL commands can be slow, so users with a high number of shards may have to wait a long time for the DDL commands to complete across all of them.
Consider sending these commands in parallel to the worker nodes.