
enable zero-downtime deployments for RPC #82

Closed
mollykarcher opened this issue Apr 10, 2024 · 5 comments · Fixed by #84

@mollykarcher
Contributor

What problem does your feature solve?

In its current form, RPC takes ~30 minutes to deploy new versions to pubnet (thread1, thread2) due to IOPS limits when initializing its in-memory data storage from disk.

What would you like to see?

A new RPC version rolls out with no disruption in service, and no loss of historical transaction/event history upon rollout (that is, the db/history does not reset to nothing).

What alternatives are there?

  • Blue/green deployment model. We'd maintain 2 instances of RPC, with one always kept in "standby" mode and not used for client requests.
  • Horizontally scale RPC to 2 replicas, each using its own independent PVC, and load balance between them. On deployments, we would deploy to one at a time, making sure we always have 1 ready/available. This is the strategy we think is preferable.
@sreuland
Contributor

sreuland commented Apr 10, 2024

I think both options converge to option #2 as a blue/green: two Deployments on the cluster, one for each color. It is not possible to update a single replica (pod) within one Deployment that has replicas set to more than 1; all replicas (pods) inherit the config set on the Deployment (defined as the pod spec in the deployment spec). This is maintained by the Deployment controller, which runs on the cluster and constantly monitors deployment pod state to make sure it matches the deployment spec and replica count.
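As a rough illustration (all names, labels, and the image tag below are placeholders, not the actual soroban-rpc manifests), the relevant Deployment shape looks like this; any change under `template` gets rolled out to every replica by the controller, which is why blue/green would need one Deployment per color:

```yaml
# Hypothetical sketch only; names/labels/image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: soroban-rpc-blue            # a second "green" Deployment would mirror this one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: soroban-rpc
      color: blue
  template:                         # every replica (pod) inherits this pod spec;
    metadata:                       # the Deployment controller reconciles all pods to it
      labels:
        app: soroban-rpc
        color: blue
    spec:
      containers:
        - name: soroban-rpc
          image: stellar/soroban-rpc:example-tag   # placeholder tag
```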

@sreuland
Contributor

sreuland commented Apr 10, 2024

we discussed this more in the platform team meeting, and thanks @mollykarcher for wrangling ideas further on chat. Your summarized 'magic bullet' approach of using the existing StatefulSet with replicas=2 sounds like a viable option to achieve zero downtime during upgrades. It provides for one ordinal pod always being healthy during an upgrade and routable (included as an Endpoint) on the k8s Service associated with the StatefulSet.

[attached diagram: Untitled-2023-02-16-1504]
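As a minimal sketch of the shape being described (resource names, port, and storage size are assumptions on my part, not the actual stellar/kube or helm-chart values):

```yaml
# Illustrative sketch only; not the actual stellar/kube or helm-chart manifests.
apiVersion: v1
kind: Service
metadata:
  name: soroban-rpc
spec:
  # both ordinal pods (soroban-rpc-0, soroban-rpc-1) match this selector,
  # so each remains a Service Endpoint whenever its readiness probe passes
  selector:
    app: soroban-rpc
  ports:
    - port: 8000          # placeholder port
      targetPort: 8000
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: soroban-rpc
spec:
  serviceName: soroban-rpc
  replicas: 2             # one ordinal pod stays healthy while the other is upgraded
  selector:
    matchLabels:
      app: soroban-rpc
  template:
    metadata:
      labels:
        app: soroban-rpc
    spec:
      containers:
        - name: soroban-rpc
          image: stellar/soroban-rpc:example-tag   # placeholder tag
          ports:
            - containerPort: 8000
  volumeClaimTemplates:   # each replica gets its own independent PVC
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi   # placeholder size
```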

So, we should test replicas=2 in dev to determine whether we can land on that to resolve this issue. One potential caveat of this horizontally scaled model: when both replicas are healthy and routed by the service, each instance may be slightly off in its ingested ledger/network state, potentially returning different responses for the same URL request at about the same time. We'd have to see how this looks at run time to judge whether it's significant.

Another interesting option, if we want to explore a blue/green or canary approach further, is StatefulSet RollingUpdate partitioning, which seems to provide a basis for either of those.
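For reference, the partition mechanism is just a field on the StatefulSet's update strategy (values below are placeholders): ordinals greater than or equal to the partition receive the new revision while lower ordinals stay on the old one, which gives a canary-style split.

```yaml
# Sketch of the partition mechanism only; values are placeholders.
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 1   # with replicas=2, only soroban-rpc-1 gets the new revision;
                     # soroban-rpc-0 stays on the old one until partition is lowered to 0
```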

@mollykarcher
Contributor Author

...each instance may be slightly off in its ingested ledger/network state, potentially returning different responses for the same URL request at about the same time

I agree that this possibility exists, but let's not over-optimize before we know we have a problem. For now, we might want to just monitor and/or alert on any persistent diff in the LCL (latest closed ledger) between the two instances. That could give us a sense of how likely this issue is.
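Purely as a sketch of the kind of alert meant here (the metric name below is hypothetical and would need to be swapped for whatever latest-ledger gauge the rpc pods actually export), a Prometheus rule comparing the two instances could look like:

```yaml
# Hypothetical alerting rule; soroban_rpc_latest_ledger is a placeholder metric name.
groups:
  - name: soroban-rpc
    rules:
      - alert: SorobanRpcReplicaLedgerDrift
        expr: (max(soroban_rpc_latest_ledger) - min(soroban_rpc_latest_ledger)) > 5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "soroban-rpc replicas disagree on the latest closed ledger"
```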

We could also probably delay/lessen the effects of this simply by enabling sticky sessions/session affinity on the rpc ingress.
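If we did that, and assuming the rpc ingress is backed by ingress-nginx (an assumption on my part), cookie-based affinity is just a couple of annotations on the Ingress, roughly:

```yaml
# Sketch assuming ingress-nginx; the annotations are the relevant part,
# existing Ingress rules/backends would stay unchanged.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "soroban-rpc-affinity"
```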

@sreuland
Contributor

results with replicas=2 on dev:

  • observed the rollout behavior on k8s after a statefulset with an existing single replica was updated to replicas=2; it followed the sts scaling docs, i.e. the existing '0' ordinal pod is left as-is and a new '1' ordinal pod is created. No service downtime occurs in this scenario.

    • However, the first time rpc is scaled to two replicas, the new 2nd pod does start responding to url requests through the service before it has a full 24-hour data window populated, so it will produce inconsistent results on json-rpc requests until the 24-hour window has passed. This can be fixed by changing the readiness probe to parse and compare the new ledger range info recently included in the getHealth response per Add ledger range to getHealth endpoint soroban-rpc#133; we should include that fix as part of this ticket (see the readiness probe sketch after this list).
  • tested ensuing upgrades when already on 2 healthy replicas and saw the expected rolling upgrade behavior: the statefulset updates the pods serially in reverse ordinal order, during which no http server downtime through the service was observed.
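A rough sketch of what that readiness-probe change could look like (the getHealth field names, port, and ledger-count threshold are assumptions based on soroban-rpc#133, and it assumes curl/jq are available in the image; not the final implementation):

```yaml
# Illustrative readiness probe; field names, port, and thresholds are assumptions.
readinessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - |
        # query getHealth over json-rpc and only report Ready once the
        # reported ledger range covers roughly a 24h retention window
        resp=$(curl -s -X POST -H 'Content-Type: application/json' \
          -d '{"jsonrpc":"2.0","id":1,"method":"getHealth"}' http://localhost:8000)
        latest=$(echo "$resp" | jq -r '.result.latestLedger')
        oldest=$(echo "$resp" | jq -r '.result.oldestLedger')
        [ "$latest" != "null" ] && [ $((latest - oldest)) -ge 17280 ]   # ~24h at ~5s/ledger
  initialDelaySeconds: 30
  periodSeconds: 30
```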

@sreuland
Contributor

two thirds of this effort are complete:
the k8s resource changes are done on dev cluster here:
https://github.com/stellar/kube/pull/2098

the helm-chart update to include the changes:
#84

the last step will be to merge the same change to the dev cluster when soroban rpc 21.0.0 is GA.
