Added RollingUpdate partition logic (canary deployments) (#7132)
### Description

Configured the testnet StatefulSet update policy to `RollingUpdate` with partitions, so that only a subset of the validators/proxies/txnodes is updated during a testnet upgrade. More information about StatefulSet update policies: [link](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions).
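The partition semantics can be sketched as follows (an illustrative helper, not part of this repo): a partitioned `RollingUpdate` only replaces pods whose ordinal is greater than or equal to the partition, starting from the highest ordinal.

```typescript
// Illustrative helper (not part of the repo): a partitioned RollingUpdate
// only replaces pods whose ordinal is >= the partition, highest ordinal first.
function podsUpdatedByPartition(replicas: number, partition: number): number[] {
  const updated: number[] = []
  for (let ordinal = replicas - 1; ordinal >= partition; ordinal--) {
    updated.push(ordinal)
  }
  return updated
}

// With 5 validators and VALIDATORS_ROLLING_UPDATE_PARTITION=3, only
// validators 3 and 4 act as canaries on the new pod template:
podsUpdatedByPartition(5, 3) // → [4, 3]
// partition=0 (the default) updates every replica:
podsUpdatedByPartition(5, 0) // → [4, 3, 2, 1, 0]
```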

Applying these changes to the celo-fullnode chart is still pending.

### Tested

Tested only with `--helmdryrun` to validate the rendered output. Further testing (deploying to a testnet and/or Baklava and Alfajores) is pending.

### Related issues

- Fixes celo-org/celo-labs#770

### Backwards compatibility

No backwards-compatibility issues: the default partition of 0 preserves the existing rolling-update behaviour.
jcortejoso committed Feb 25, 2021
1 parent 0a04b98 commit 0eb0938
Showing 13 changed files with 80 additions and 8 deletions.
13 changes: 13 additions & 0 deletions .env
@@ -108,6 +108,19 @@ IN_MEMORY_DISCOVERY_TABLE=false
TX_NODES="3"
# Nodes whose RPC ports are only internally exposed
PRIVATE_TX_NODES=1

# Canary Deployment Variables
# Specify the rolling-update partition for each statefulset. Pods with an ordinal below
# the partition keep the old statefulset definition; pods at or above it are updated.
# Use 0 to roll the latest definition out to all replicas.
# https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions
VALIDATORS_ROLLING_UPDATE_PARTITION=0
PROXY_ROLLING_UPDATE_PARTITION=0
SECONDARIES_ROLLING_UPDATE_PARTITION=0
TX_NODES_ROLLING_UPDATE_PARTITION=0
TX_NODES_PRIVATE_ROLLING_UPDATE_PARTITION=0

# Kubernetes Cluster Creation flags. Only used when a new kubernetes cluster needs to be created
CLUSTER_CREATION_FLAGS="--enable-autoscaling --min-nodes 3 --max-nodes 40 --machine-type=n1-standard-4 --preemptible --workload-metadata=GKE_METADATA --workload-pool=celo-testnet.svc.id.goog"

# Number of faulty/Byzantine validators
3 changes: 3 additions & 0 deletions .env.baklava
@@ -67,6 +67,7 @@ AZURE_ORACLE_WESTUS2_KUBERNETES_CLUSTER_NAME=baklava-oracles-westus2
# <address>:<key vault name>:<resource group (optional)>
AZURE_ORACLE_WESTUS2_CELOUSD_ORACLE_ADDRESS_AZURE_KEY_VAULTS=0xd71fea6b92d3f21f659152589223385a7329bb11:baklava-oracle:baklava-oracles-westus2,0x1e477fc9b6a49a561343cd16b2c541930f5da7d2:baklava-oracle1:baklava-oracles-westus2,0x460b3f8d3c203363bb65b1a18d89d4ffb6b0c981:baklava-oracle2:baklava-oracles-westus2,0x3b522230c454ca9720665d66e6335a72327291e8:baklava-oracle3:baklava-oracles-westus2,0x0AFe167600a5542d10912f4A07DFc4EEe0769672:baklava-oracle4:baklava-oracles-westus2
AZURE_ORACLE_WESTUS2_FULL_NODES_COUNT=2
AZURE_ORACLE_WESTUS2_FULL_NODES_ROLLING_UPDATE_PARTITION=0
AZURE_ORACLE_WESTUS2_FULL_NODES_DISK_SIZE=30

AZURE_ORACLE_CENTRALUS_AZURE_SUBSCRIPTION_ID=7a6f5f20-bd43-4267-8c35-a734efca140c
@@ -78,6 +79,7 @@ AZURE_ORACLE_CENTRALUS_KUBERNETES_CLUSTER_NAME=baklava-oracles-centralus
# <address>:<key vault name>:<resource group (optional)>
AZURE_ORACLE_CENTRALUS_CELOUSD_ORACLE_ADDRESS_AZURE_KEY_VAULTS=0x412ebe7859e9aa71ff5ce4038596f6878c359c96:baklava-oracle5:baklava-oracles-centralus,0xbbfe73df8b346b3261b19ac91235888aba36d68c:baklava-oracle6:baklava-oracles-centralus,0x02b1d1bea682fcab4448c0820f5db409cce4f702:baklava-oracle7:baklava-oracles-centralus,0xe90f891710f625f18ecbf1e02efb4fd1ab236a10:baklava-oracle8:baklava-oracles-centralus,0x28c52c722df87ed11c5d7665e585e84aa93d7964:baklava-oracle9:baklava-oracles-centralus
AZURE_ORACLE_CENTRALUS_FULL_NODES_COUNT=2
AZURE_ORACLE_CENTRALUS_FULL_NODES_ROLLING_UPDATE_PARTITION=0
AZURE_ORACLE_CENTRALUS_FULL_NODES_DISK_SIZE=30

# ---- Forno ----
@@ -97,6 +99,7 @@ GCP_FORNO_EUROPE_WEST1_GCP_PROJECT_NAME=celo-testnet-production
GCP_FORNO_EUROPE_WEST1_GCP_ZONE=europe-west1-b
GCP_FORNO_EUROPE_WEST1_KUBERNETES_CLUSTER_NAME=baklava-europe-west1
GCP_FORNO_EUROPE_WEST1_FULL_NODES_COUNT=2
GCP_FORNO_EUROPE_WEST1_FULL_NODES_ROLLING_UPDATE_PARTITION=0
GCP_FORNO_EUROPE_WEST1_FULL_NODES_DISK_SIZE=100
# NOTE: If these fullnodes are used for static nodes, changing this will result
# in the full nodes having a different nodekey
8 changes: 8 additions & 0 deletions .env.rc1
@@ -86,6 +86,7 @@ AZURE_ORACLE_WESTUS_KUBERNETES_CLUSTER_NAME=mainnet-oracles-westus2v1
# <address>:<key vault name>:<resource group (optional)>
AZURE_ORACLE_WESTUS_CELOUSD_ORACLE_ADDRESS_AZURE_KEY_VAULTS=0x0aee051be85ba9c7c1bc635fb76b52039341ab26:mainnet-oracle0:mainnet-oracles-westus2,0xd3405621f6cdcd95519a79d37f91c78e7c79cefa:mainnet-oracle1:mainnet-oracles-westus2,0xe037f31121f3a96c0cc49d0cf55b2f5d6deff19e:mainnet-oracle2:mainnet-oracles-westus2,0x12bad172b47287a754048f0d294221a499d1690f:mainnet-oracle3:mainnet-oracles-westus2,0xacad5b2913e21ccc073b80e431fec651cd8231c6:mainnet-oracle4:mainnet-oracles-westus2
AZURE_ORACLE_WESTUS_FULL_NODES_COUNT=5
AZURE_ORACLE_WESTUS_FULL_NODES_ROLLING_UPDATE_PARTITION=0
AZURE_ORACLE_WESTUS_FULL_NODES_DISK_SIZE=100

AZURE_ORACLE_WESTEUROPE_AZURE_SUBSCRIPTION_ID=7a6f5f20-bd43-4267-8c35-a734efca140c
@@ -97,6 +98,7 @@ AZURE_ORACLE_WESTEUROPE_KUBERNETES_CLUSTER_NAME=mainnet-oracles-westeurope
# <address>:<key vault name>:<resource group (optional)>
AZURE_ORACLE_WESTEUROPE_CELOUSD_ORACLE_ADDRESS_AZURE_KEY_VAULTS=0xfe9925e6ae9c4cd50ae471b90766aaef37ad307e:mainnet-oracle-eu0:mainnet-oracles-westeurope,0x641c6466dae2c0b1f1f4f9c547bc3f54f4744a1d:mainnet-oracle-eu1:mainnet-oracles-westeurope,0x75becd8e400552bac29cbe0534d8c7d6cba49979:mainnet-oracle-eu2:mainnet-oracles-westeurope,0x223ab67272891dd352194be61597042ecf9c272a:mainnet-oracle-eu3:mainnet-oracles-westeurope,0xca9ae47493f763a7166ab8310686b197984964b4:mainnet-oracle-eu4:mainnet-oracles-westeurope
AZURE_ORACLE_WESTEUROPE_FULL_NODES_COUNT=5
AZURE_ORACLE_WESTEUROPE_FULL_NODES_ROLLING_UPDATE_PARTITION=0
AZURE_ORACLE_WESTEUROPE_FULL_NODES_DISK_SIZE=100

AZURE_ORACLE_EASTUS2_AZURE_SUBSCRIPTION_ID=7a6f5f20-bd43-4267-8c35-a734efca140c
@@ -109,6 +111,7 @@ AZURE_ORACLE_EASTUS2_KUBERNETES_CLUSTER_NAME=mainnet-oracles-eastus2
# Set these when needed
AZURE_ORACLE_EASTUS2_CELOUSD_ORACLE_ADDRESS_AZURE_KEY_VAULTS=
AZURE_ORACLE_EASTUS2_FULL_NODES_COUNT=3
AZURE_ORACLE_EASTUS2_FULL_NODES_ROLLING_UPDATE_PARTITION=0
AZURE_ORACLE_EASTUS2_FULL_NODES_DISK_SIZE=100

# ---- Forno ----
@@ -128,6 +131,7 @@ GCP_FORNO_US_WEST1_GCP_PROJECT_NAME=celo-testnet-production
GCP_FORNO_US_WEST1_GCP_ZONE=us-west1-a
GCP_FORNO_US_WEST1_KUBERNETES_CLUSTER_NAME=rc1-us-west1
GCP_FORNO_US_WEST1_FULL_NODES_COUNT=10
GCP_FORNO_US_WEST1_FULL_NODES_ROLLING_UPDATE_PARTITION=0
GCP_FORNO_US_WEST1_FULL_NODES_DISK_SIZE=100
# NOTE: If these fullnodes are used for static nodes, changing this will result
# in the full nodes having a different nodekey
@@ -140,6 +144,7 @@ GCP_FORNO_US_EAST1_GCP_PROJECT_NAME=celo-testnet-production
GCP_FORNO_US_EAST1_GCP_ZONE=us-east1-b
GCP_FORNO_US_EAST1_KUBERNETES_CLUSTER_NAME=rc1-us-east1
GCP_FORNO_US_EAST1_FULL_NODES_COUNT=5
GCP_FORNO_US_EAST1_FULL_NODES_ROLLING_UPDATE_PARTITION=0
GCP_FORNO_US_EAST1_FULL_NODES_DISK_SIZE=100
# NOTE: If these fullnodes are used for static nodes, changing this will result
# in the full nodes having a different nodekey
@@ -152,6 +157,7 @@ GCP_FORNO_ASIA_EAST1_GCP_PROJECT_NAME=celo-testnet-production
GCP_FORNO_ASIA_EAST1_GCP_ZONE=asia-east1-a
GCP_FORNO_ASIA_EAST1_KUBERNETES_CLUSTER_NAME=rc1-asia-east1
GCP_FORNO_ASIA_EAST1_FULL_NODES_COUNT=10
GCP_FORNO_ASIA_EAST1_FULL_NODES_ROLLING_UPDATE_PARTITION=0
GCP_FORNO_ASIA_EAST1_FULL_NODES_DISK_SIZE=100
# NOTE: If these fullnodes are used for static nodes, changing this will result
# in the full nodes having a different nodekey
@@ -164,6 +170,7 @@ GCP_FORNO_EUROPE_WEST1_GCP_PROJECT_NAME=celo-testnet-production
GCP_FORNO_EUROPE_WEST1_GCP_ZONE=europe-west1-b
GCP_FORNO_EUROPE_WEST1_KUBERNETES_CLUSTER_NAME=rc1-europe-west1
GCP_FORNO_EUROPE_WEST1_FULL_NODES_COUNT=5
GCP_FORNO_EUROPE_WEST1_FULL_NODES_ROLLING_UPDATE_PARTITION=0
GCP_FORNO_EUROPE_WEST1_FULL_NODES_DISK_SIZE=100
# NOTE: If these fullnodes are used for static nodes, changing this will result
# in the full nodes having a different nodekey
@@ -176,6 +183,7 @@ GCP_FORNO_SOUTHAMERICA_EAST1_GCP_PROJECT_NAME=celo-testnet-production
GCP_FORNO_SOUTHAMERICA_EAST1_GCP_ZONE=southamerica-east1-a
GCP_FORNO_SOUTHAMERICA_EAST1_KUBERNETES_CLUSTER_NAME=rc1-southamerica-east1
GCP_FORNO_SOUTHAMERICA_EAST1_FULL_NODES_COUNT=5
GCP_FORNO_SOUTHAMERICA_EAST1_FULL_NODES_ROLLING_UPDATE_PARTITION=0
GCP_FORNO_SOUTHAMERICA_EAST1_FULL_NODES_DISK_SIZE=100
# NOTE: If these fullnodes are used for static nodes, changing this will result
# in the full nodes having a different nodekey
10 changes: 8 additions & 2 deletions packages/celotool/src/lib/env-utils.ts
@@ -79,8 +79,8 @@ export enum envVar {
ISTANBUL_REQUEST_TIMEOUT_MS = 'ISTANBUL_REQUEST_TIMEOUT_MS',
KOMENCI_DOCKER_IMAGE_REPOSITORY = 'KOMENCI_DOCKER_IMAGE_REPOSITORY',
KOMENCI_DOCKER_IMAGE_TAG = 'KOMENCI_DOCKER_IMAGE_TAG',
KOMENCI_UNUSED_KOMENCI_ADDRESSES = 'KOMENCI_UNUSED_KOMENCI_ADDRESSES',
KOMENCI_RULE_CONFIG_CAPTCHA_BYPASS_TOKEN = 'KOMENCI_RULE_CONFIG_CAPTCHA_BYPASS_TOKEN',
KOMENCI_UNUSED_KOMENCI_ADDRESSES = 'KOMENCI_UNUSED_KOMENCI_ADDRESSES',
KUBERNETES_CLUSTER_NAME = 'KUBERNETES_CLUSTER_NAME',
KUBERNETES_CLUSTER_ZONE = 'KUBERNETES_CLUSTER_ZONE',
LEADERBOARD_CREDENTIALS = 'LEADERBOARD_CREDENTIALS',
@@ -105,13 +105,15 @@ export enum envVar {
NEXMO_KEY = 'NEXMO_KEY',
NEXMO_SECRET = 'NEXMO_SECRET',
NODE_DISK_SIZE_GB = 'NODE_DISK_SIZE_GB',
PRIVATE_NODE_DISK_SIZE_GB = 'PRIVATE_NODE_DISK_SIZE_GB',
ORACLE_DOCKER_IMAGE_REPOSITORY = 'ORACLE_DOCKER_IMAGE_REPOSITORY',
ORACLE_DOCKER_IMAGE_TAG = 'ORACLE_DOCKER_IMAGE_TAG',
ORACLE_UNUSED_ORACLE_ADDRESSES = 'ORACLE_UNUSED_ORACLE_ADDRESSES',
PRIVATE_NODE_DISK_SIZE_GB = 'PRIVATE_NODE_DISK_SIZE_GB',
PRIVATE_TX_NODES = 'PRIVATE_TX_NODES',
PROMETHEUS_GCE_SCRAPE_REGIONS = 'PROMETHEUS_GCE_SCRAPE_REGIONS',
PROXIED_VALIDATORS = 'PROXIED_VALIDATORS',
PROXY_ROLLING_UPDATE_PARTITION = 'PROXY_ROLLING_UPDATE_PARTITION',
SECONDARIES_ROLLING_UPDATE_PARTITION = 'SECONDARIES_ROLLING_UPDATE_PARTITION',
STACKDRIVER_MONITORING_DASHBOARD = 'STACKDRIVER_MONITORING_DASHBOARD',
STACKDRIVER_NOTIFICATION_APPLICATIONS_PREFIX = 'STACKDRIVER_NOTIFICATION_APPLICATIONS_PREFIX',
STACKDRIVER_NOTIFICATION_CHANNEL_APPLICATIONS = 'STACKDRIVER_NOTIFICATION_CHANNEL_APPLICATIONS',
@@ -130,11 +132,14 @@ export enum envVar {
TWILIO_ACCOUNT_SID = 'TWILIO_ACCOUNT_SID',
TWILIO_ADDRESS_SID = 'TWILIO_ADDRESS_SID',
TX_NODES = 'TX_NODES',
TX_NODES_PRIVATE_ROLLING_UPDATE_PARTITION = 'TX_NODES_PRIVATE_ROLLING_UPDATE_PARTITION',
TX_NODES_ROLLING_UPDATE_PARTITION = 'TX_NODES_ROLLING_UPDATE_PARTITION',
USE_GSTORAGE_DATA = 'USE_GSTORAGE_DATA',
VALIDATOR_GENESIS_BALANCE = 'VALIDATOR_GENESIS_BALANCE',
VALIDATOR_PROXY_COUNTS = 'VALIDATOR_PROXY_COUNTS',
VALIDATOR_ZERO_GENESIS_BALANCE = 'VALIDATOR_ZERO_GENESIS_BALANCE',
VALIDATORS = 'VALIDATORS',
VALIDATORS_ROLLING_UPDATE_PARTITION = 'VALIDATORS_ROLLING_UPDATE_PARTITION',
VM_BASED = 'VM_BASED',
VOTING_BOT_BALANCE = 'VOTING_BOT_BALANCE',
VOTING_BOT_CHANGE_BASELINE = 'VOTING_BOT_CHANGE_BASELINE',
@@ -157,6 +162,7 @@ export enum DynamicEnvVar {
AZURE_REGION_NAME = '{{ context }}_AZURE_REGION_NAME',
AZURE_TENANT_ID = '{{ context }}_AZURE_TENANT_ID',
FULL_NODES_COUNT = '{{ context }}_FULL_NODES_COUNT',
FULL_NODES_ROLLING_UPDATE_PARTITION = '{{ context }}_FULL_NODES_ROLLING_UPDATE_PARTITION',
FULL_NODES_DISK_SIZE = '{{ context }}_FULL_NODES_DISK_SIZE',
FULL_NODES_NODEKEY_DERIVATION_STRING = '{{ context }}_FULL_NODES_NODEKEY_DERIVATION_STRING',
FULL_NODES_STATIC_NODES_FILE_SUFFIX = '{{ context }}_FULL_NODES_STATIC_NODES_FILE_SUFFIX',
2 changes: 2 additions & 0 deletions packages/celotool/src/lib/fullnodes.ts
@@ -17,6 +17,7 @@ const contextFullNodeDeploymentEnvVars: {
} = {
diskSizeGb: DynamicEnvVar.FULL_NODES_DISK_SIZE,
replicas: DynamicEnvVar.FULL_NODES_COUNT,
rollingUpdatePartition: DynamicEnvVar.FULL_NODES_ROLLING_UPDATE_PARTITION
}

/**
@@ -128,6 +129,7 @@ function getFullNodeDeploymentConfig(context: string) : BaseFullNodeDeploymentCo
const fullNodeDeploymentConfig: BaseFullNodeDeploymentConfig = {
diskSizeGb: parseInt(fullNodeDeploymentEnvVarValues.diskSizeGb, 10),
replicas: parseInt(fullNodeDeploymentEnvVarValues.replicas, 10),
rollingUpdatePartition: parseInt(fullNodeDeploymentEnvVarValues.rollingUpdatePartition, 10),
}
return fullNodeDeploymentConfig
}
11 changes: 11 additions & 0 deletions packages/celotool/src/lib/helm_deploy.ts
@@ -770,6 +770,7 @@ async function helmParameters(celoEnv: string, useExistingGenesis: boolean) {
...setHelmArray('geth.proxiesPerValidator', getProxiesPerValidator()),
...gethMetricsOverrides,
...bootnodeOverwritePkey,
...rollingUpdateHelmVariables(),
...(await helmIPParameters(celoEnv)),
]
}
@@ -1016,3 +1017,13 @@ export async function checkHelmVersion() {
process.exit(1)
}
}

function rollingUpdateHelmVariables() {
return [
`--set updateStrategy.validators.rollingUpdate.partition=${fetchEnvOrFallback(envVar.VALIDATORS_ROLLING_UPDATE_PARTITION, "0")}`,
`--set updateStrategy.secondaries.rollingUpdate.partition=${fetchEnvOrFallback(envVar.SECONDARIES_ROLLING_UPDATE_PARTITION, "0")}`,
`--set updateStrategy.proxy.rollingUpdate.partition=${fetchEnvOrFallback(envVar.PROXY_ROLLING_UPDATE_PARTITION, "0")}`,
`--set updateStrategy.tx_nodes.rollingUpdate.partition=${fetchEnvOrFallback(envVar.TX_NODES_ROLLING_UPDATE_PARTITION, "0")}`,
`--set updateStrategy.tx_nodes_private.rollingUpdate.partition=${fetchEnvOrFallback(envVar.TX_NODES_PRIVATE_ROLLING_UPDATE_PARTITION, "0")}`,
]
}
2 changes: 2 additions & 0 deletions packages/celotool/src/lib/k8s-fullnode/base.ts
@@ -23,6 +23,7 @@ export interface NodeKeyGenerationInfo {
export interface BaseFullNodeDeploymentConfig {
diskSizeGb: number
replicas: number
rollingUpdatePartition: number
// If undefined, node keys will not be predetermined and will be random
nodeKeyGenerationInfo?: NodeKeyGenerationInfo
}
@@ -93,6 +94,7 @@ export abstract class BaseFullNodeDeployer {
return [
`--set namespace=${this.kubeNamespace}`,
`--set replicaCount=${this._deploymentConfig.replicas}`,
`--set geth.updateStrategy.rollingUpdate.partition=${this._deploymentConfig.rollingUpdatePartition}`,
`--set storage.size=${this._deploymentConfig.diskSizeGb}Gi`,
`--set geth.expose_rpc_externally=false`,
`--set geth.image.repository=${fetchEnv(envVar.GETH_NODE_DOCKER_IMAGE_REPOSITORY)}`,
7 changes: 4 additions & 3 deletions packages/helm-charts/celo-fullnode/values.yaml
@@ -84,9 +84,10 @@ geth:
create_network_endpoint_group: false
updateStrategy:
type: RollingUpdate
# rollingUpdate:
# maxUnavailable: 25%
# maxSurge: 25%
rollingUpdate:
partition: 0
# maxUnavailable: 25%
# maxSurge: 25%
maxpeers: 1100
light:
maxpeers: 1000
2 changes: 1 addition & 1 deletion packages/helm-charts/common/values.yaml
@@ -37,4 +37,4 @@ celotool:
image:
repository: gcr.io/celo-testnet/celo-monorepo
tag: celotool-dc5e5dfa07231a4ff4664816a95eae606293eae9
imagePullPolicy: IfNotPresent
imagePullPolicy: IfNotPresent
3 changes: 3 additions & 0 deletions packages/helm-charts/testnet/templates/_helpers.tpl
@@ -103,6 +103,9 @@ metadata:
validator-proxied: "{{ $validatorProxied }}"
{{- end }}
spec:
{{- $updateStrategy := index $.Values.updateStrategy $.component_label }}
updateStrategy:
{{ toYaml $updateStrategy | indent 4 }}
{{- if .Values.geth.ssd_disks }}
volumeClaimTemplates:
- metadata:
@@ -31,7 +31,7 @@ spec:
storage: {{ .Values.geth.diskSizeGB }}Gi
podManagementPolicy: Parallel
updateStrategy:
type: RollingUpdate
{{ toYaml .Values.updateStrategy.secondaries | indent 4 }}
replicas: {{ .Values.geth.secondaries }}
serviceName: secondaries
selector:
@@ -31,7 +31,7 @@ spec:
storage: {{ .Values.geth.diskSizeGB }}Gi
podManagementPolicy: Parallel
updateStrategy:
type: RollingUpdate
{{ toYaml .Values.updateStrategy.validators | indent 4 }}
replicas: {{ .Values.geth.validators }}
serviceName: validators
selector:
23 changes: 23 additions & 0 deletions packages/helm-charts/testnet/values.yaml
@@ -36,6 +36,29 @@ geth:
memory: "4Gi"
cpu: "4"

# updateStrategy for statefulsets only. Partition=0 is the default rollingUpdate behaviour.
updateStrategy:
validators:
type: RollingUpdate
rollingUpdate:
partition: 0
secondaries:
type: RollingUpdate
rollingUpdate:
partition: 0
proxy:
type: RollingUpdate
rollingUpdate:
partition: 0
tx_nodes:
type: RollingUpdate
rollingUpdate:
partition: 0
tx_nodes_private:
type: RollingUpdate
rollingUpdate:
partition: 0

gethexporter:
image:
repository: gcr.io/celo-testnet/geth-exporter