Support bootstrapping spilo cluster from the external PostgreSQL database #13
The particular plan did not work out (*): after experimenting with the standby-mode flag, we decided that its interactions, and the decision on when to clear it (given that the "primary standby" node may simply crash or shut down), were too complex to manage automatically. Instead, we will rely on a client to set keys in DCS so that the existing cluster points elsewhere, and to clear those keys when we are done. The user setting up the external master would also be responsible for creating replication slots or setting up WAL archiving. We still need the client support and a documented process on how to set things up; both will be part of this task.
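For reference, creating a physical replication slot on the external master (one of the manual steps mentioned above) would look roughly like the following; the slot name is an example, not anything mandated by spilo:

```sql
-- Run on the external master as a superuser; 'spilo_standby' is a
-- placeholder name for the slot the spilo replicas would stream from.
SELECT pg_create_physical_replication_slot('spilo_standby');
```

The slot prevents the external master from recycling WAL segments the spilo replicas still need, at the cost of unbounded WAL retention if the replicas fall behind.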
@alexeyklyukin Hi! Any movement on this since Oct 2015? LMK if we can close.
We might still need it in some form or another. Let's keep the ticket open for now. |
I believe #252 solves this issue |
@alexeyklyukin Given @CyberDem0n's last comment, can we close now? |
There is a valid use case for moving already-running databases to spilo without an expensive dump/restore. To support it, we can simply pretend that an external PostgreSQL data service is part of the spilo cluster, acting as the leader, with a TTL that never expires. Once the cluster is started, spilo nodes will take replica roles (as the leader is already taken) and will continue to act as replicas until the leader key is explicitly removed by spilo (since it cannot expire on its own).
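The leader-key mechanics described above can be sketched as a minimal, self-contained simulation. This is not Patroni's or spilo's actual DCS code; all class and function names here are hypothetical, introduced only to illustrate why a key without a TTL keeps every spilo node in the replica role until an operator removes it:

```python
import time

class LeaderKey:
    """Simulated DCS leader key. ttl=None models a key that never expires,
    which is how the external master would be represented."""
    def __init__(self, owner, ttl=None):
        self.owner = owner
        self.ttl = ttl
        self.created = time.monotonic()

    def expired(self):
        if self.ttl is None:      # external master: never expires
            return False
        return time.monotonic() - self.created > self.ttl

def choose_role(node, leader):
    # A node may become leader only if the leader key is absent or expired;
    # while the external master holds a non-expiring key, every spilo node
    # keeps replicating from it.
    if leader is None or leader.expired():
        return "leader"
    return "replica"

external = LeaderKey("external-postgres", ttl=None)
print(choose_role("spilo-node-1", external))   # replica
external = None   # operator explicitly deletes the key when migration is done
print(choose_role("spilo-node-1", external))   # leader
```

The design point is that expiry and explicit deletion are the only two paths to promotion, and disabling expiry leaves deletion as the single, operator-controlled trigger.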
In order to support this use case, the following parameters should be changed:
From the user's point of view, the spilo cluster should be configured with a primary_conninfo pointing to the external PostgreSQL database.
This connection string should also allow normal (non-replication) superuser connections, and the pg_hba.conf on the external database should be configured to permit such connections from the addresses belonging to the spilo cluster.
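Concretely, the two pieces of configuration above might look like the following fragments; every hostname, address range, user, and password here is a placeholder, not a value spilo prescribes:

```
# recovery configuration on each spilo member (all values are examples)
standby_mode = 'on'
primary_conninfo = 'host=external-db.example.com port=5432 user=replicator password=secret'
```

```
# pg_hba.conf on the external master: allow replication and regular
# superuser connections from the spilo nodes' address range (example CIDR)
host  replication  replicator  10.0.0.0/24  md5
host  all          postgres    10.0.0.0/24  md5
```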
The changes required in the spilo workflow are the following:
That command should be built into the spilo command-line client, as it is not possible to determine the promotion point automatically (for instance, we might stream current changes from the external master to spilo until a scheduled maintenance window, at which point the spilo cluster will be promoted to master).
If the active spilo node dies while standby_mode is still set, another node should notice and take its place.