cloud-api-adaptor ds rolling update non-disruptively #1322
@jtumber-ibm @stevenhorsman @bpradipt @liudalibj @surajssd @jensfr This is the proposed solution based on the discussion in the weekly meeting on Wednesday. Could you please share your comments on it?
As far as I understand, the proposal currently requires at least 2 nodes to perform an update. I think it would be beneficial if we could also manage to update single-node clusters. While most production clusters likely consist of multiple nodes, for testing, and especially in CI, it's much easier to run on a single-node cluster. Would it be possible to update the pods on the same node one by one to the new CAA version? Very naive approach:
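One way such a one-by-one update might be sketched (hypothetical; this is not necessarily what the commenter had in mind): with the `OnDelete` update strategy, the DaemonSet controller only creates a replacement pod after the old one is deleted, so an external script can drive the update pod by pod.

```yaml
# Hypothetical sketch: switch the cloud-api-adaptor DaemonSet to
# manual, one-pod-at-a-time updates. With OnDelete, updating the
# pod template does NOT restart existing pods; each old pod keeps
# running until something deletes it.
spec:
  updateStrategy:
    type: OnDelete
```

A script could then delete each cloud-api-adaptor pod in turn with `kubectl delete pod` and wait for its replacement to become Ready before moving on, which also works on a single-node cluster.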
Fixes: confidential-containers#1322 Signed-off-by: Qi Feng Huo <huoqif@cn.ibm.com>
Follow-up to issue #1240.
The proposed solution is to leverage the rolling update feature of the DaemonSet in the CR and add additional probes:
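A minimal sketch of what this could look like in the cloud-api-adaptor DaemonSet spec. Note that the probe endpoint, port, and container name below are assumptions for illustration, not the actual CAA API:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-api-adaptor-daemonset
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # update at most one node at a time
  template:
    spec:
      containers:
      - name: cloud-api-adaptor   # assumed container name
        # Hypothetical readiness probe: reports ready only once all
        # PeerPod instances on this node are running again, so the
        # controller does not move on to the next node too early.
        readinessProbe:
          httpGet:
            path: /healthz   # assumed endpoint
            port: 8000       # assumed port
          periodSeconds: 10
```

With `maxUnavailable: 1`, the DaemonSet controller waits for the updated pod on one node to pass its readiness probe before replacing the pod on the next node.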
This way, we can make sure all PeerPod instances on a given node are running again before updating the next node. Diagram:
The DaemonSet controller rolls out the cloud-api-adaptor DaemonSet update with the steps below:
In this way, we can avoid a complete downtime window across all nodes (where the cloud-api-adaptor pod has been upgraded but the corresponding PeerPod instances have not yet been recreated). So, we might:
`runtimeClass` and `nodename` fields match