This repository contains Kubernetes (k8s) manifests and Helm charts that help Ethereum 2.0 stakers easily and safely install, upgrade, and roll back Ethereum 2.0 clients. There are many Ethereum 2.0 clients; this project currently supports Prysm, Lighthouse, Teku, and Nimbus.
As of today, this setup has been tested on the testnet only.
We all stake at our own risk. Please always experiment and dry-run on the testnet first, familiarize yourself with all the operations, and harden your systems before running on mainnet. This project serves as a stepping stone for staking with Kubernetes. The maintainers of this repository are not responsible for any financial losses incurred by using this repository.
We've written a blog post detailing the requirements and a walkthrough for running Ethereum 2.0 clients on Prater. You can also find the quick start guide in the following sections.
If the goal is to run Ethereum 2.0 clients on mainnet, we recommend:
- Production-grade k8s distribution to build a k8s cluster.
- NFS as the persistent storage.
- Helm to manage packages and releases.
- Install a k8s distribution and build a cluster.
- Install Helm on the k8s controller node.
- Set up NFS (Example: Guide for NFS installation and configuration on Ubuntu).
- Create the beacon node, validator (and/or validator keys and secrets) data folders on the NFS with the correct ownership (our Helm chart uses uid 1001 and gid 2000 by default). See the example sketched below.
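  As an illustration only, with hypothetical paths (use whatever layout your values.yaml will point to), the folders for a Prysm setup could be prepared on the NFS server like this:

  ```bash
  # Hypothetical paths -- adjust to your own NFS layout and client.
  sudo mkdir -p /data/prysm/beacon /data/prysm/validator-client-1 /data/prysm/wallet-1
  # The charts run the containers as uid 1001 / gid 2000 by default, so the
  # mounted directories must be readable and writable by that user and group.
  sudo chown -R 1001:2000 /data/prysm
  ```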
- Import validator keys.
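  The import command depends on the client you run; the following is only a sketch assuming Prysm and placeholder paths (check your client's documentation for the exact CLI and flags):

  ```bash
  # Sketch only: Prysm validator key import with placeholder paths.
  # --keys-dir points at the keystores generated by the deposit CLI,
  # --wallet-dir at the wallet directory you will mount into the validator pod.
  validator accounts import \
    --keys-dir=/path/to/validator_keys \
    --wallet-dir=/data/prysm/wallet-1
  ```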
- Export the created data folders as described in the NFS configuration guide. An example export entry is sketched below.
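  For instance, an `/etc/exports` entry on the NFS server might look like the following (the exported path, subnet, and options are placeholders; follow the NFS guide above for values that suit your network):

  ```bash
  # /etc/exports (placeholder values)
  /data  192.168.1.0/24(rw,sync,no_subtree_check)

  # Reload the export table after editing /etc/exports.
  sudo exportfs -ra
  ```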
- Clone this repo.

  ```bash
  git clone https://github.com/lumostone/eth2xk8s.git
  ```
- Change values in the target client's values.yaml. For example, change values in `./prysm/helm/values.yaml` for the Prysm client. We recommend checking each field in `values.yaml` to determine the desired configuration. Fields that need to be changed or verified before installing the chart are the following:

For all clients:
- nfs.serverIp: NFS server IP address.
- securityContext.runAsUser: The user ID used to run all processes in the container. The user should have access to the mounted NFS volume.
- securityContext.runAsGroup: The group ID used to run all processes in the container. The group should have access to the mounted NFS volume. We use a dedicated group ID to grant the processes limited file access, so they don't use the root group directly.
- image.versionTag: Client version.
For Prysm:
- beacon.dataDirPath: The path to the data directory on the NFS for the beacon node.
- beacon.eth1Endpoints: Ethereum 1.0 node endpoints.
- validatorClients.validatorClient1
  - .dataDirPath: The path to the data directory on the NFS for the validator client.
  - .walletDirPath: The path to the data directory on the NFS for the wallet.
  - .walletPassword: The wallet password.
For Lighthouse:
- beacon.dataDirPath: The path to the data directory on the NFS for the beacon node.
- beacon.eth1Endpoints: Ethereum 1.0 node endpoints.
- validatorClients.validatorClient1.dataDirPath: The path to the data directory on the NFS for the validator client.
For Teku:
- beacon.dataDirPath: The path to the data directory on the NFS for the beacon node.
- beacon.eth1Endpoints: Ethereum 1.0 node endpoints.
- validatorClients.validatorClient1
  - .dataDirPath: The path to the data directory on the NFS for the validator client.
  - .validatorKeysDirPath: The path to the data directory on the NFS for the validator keys.
  - .validatorKeyPasswordsDirPath: The path to the data directory on the NFS for the validator key passwords.
For Nimbus:
- nimbus.clients.client1
  - .eth1Endpoints: Ethereum 1.0 node endpoints.
  - .dataDirPath: The path to the data directory on the NFS for the beacon node.
  - .validatorsDirPath: The path to the data directory on the NFS for the validator keystores.
  - .secretsDirPath: The path to the data directory on the NFS for the validator keystore passwords.
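To make the shape of these settings concrete, a trimmed Prysm configuration could look roughly like the sketch below. Treat it as an illustration only: the authoritative structure and defaults are in `./prysm/helm/values.yaml`, and every value shown here (server IP, paths, version, password) is a placeholder.

```yaml
# Sketch only -- placeholder values; see ./prysm/helm/values.yaml for the real structure.
nfs:
  serverIp: 192.168.1.10            # your NFS server IP address
securityContext:
  runAsUser: 1001                   # must have access to the mounted NFS volume
  runAsGroup: 2000
image:
  versionTag: <client-version>      # desired Prysm release
beacon:
  dataDirPath: /data/prysm/beacon
  eth1Endpoints: <eth1-node-endpoint(s)>
validatorClients:
  validatorClient1:
    dataDirPath: /data/prysm/validator-client-1
    walletDirPath: /data/prysm/wallet-1
    walletPassword: <wallet-password>
```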
Replace `<release-name>` and `<namespace>` in the following commands with the names you prefer.
- Go to the directory of the target client.
- Install the chart.

  ```bash
  helm install <release-name> ./helm -n <namespace> --create-namespace
  ```
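  After installation, a quick way to see whether the release's pods came up is plain kubectl (not specific to these charts):

  ```bash
  kubectl get pods -n <namespace>
  ```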
- Check installed manifests.

  ```bash
  helm get manifest <release-name> -n <namespace>
  ```
- Upgrade a release.

  ```bash
  helm upgrade <release-name> ./helm -n <namespace>
  ```
- Check release history.

  ```bash
  helm history <release-name> -n <namespace>
  ```
- Roll back a release to the target revision. Retrieve the target revision from the release history and replace `<release-revision>`.

  ```bash
  helm rollback <release-name> <release-revision> -n <namespace>
  ```
- Uninstall a release.

  ```bash
  helm uninstall <release-name> -n <namespace>
  ```
For Nimbus:

- Check the status of the Nimbus client.

  ```bash
  kubectl logs -f -n <namespace> -lapp=nimbus-1
  ```
For other clients:

- Check the status of the beacon node.

  ```bash
  kubectl logs -f -n <namespace> -lapp=beacon
  ```

- Check the status of the first validator (to check other validators, change -lapp to the other validators' names).

  ```bash
  kubectl logs -f -n <namespace> -lapp=validator-client-1
  ```
If you want to develop for this project or verify your configuration quickly without setting up NFS or another storage solution, we recommend the following setup:
- kind as the k8s distribution.
- hostPath as the persistent storage.
- Helm to manage packages and releases.
- Create the data folders for the beacon node and validator (and/or validator keys and secrets).
- Import validator keys.
- Change the directory ownership. Assuming the created data folders are under `/data`:

  ```bash
  chown -R 1001:2000 /data
  ```
- Clone the repo.
- Go to the directory of the target client.
- Update the extraMounts in `cluster-config/kind-single-node.yaml` with the paths to the created data directories, as sketched below.
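  For reference, a kind extraMounts stanza maps a host directory into the kind node container. The snippet below is only a sketch with placeholder paths; the actual file in this repo may differ:

  ```yaml
  # Sketch of cluster-config/kind-single-node.yaml -- placeholder paths only.
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
    - role: control-plane
      extraMounts:
        - hostPath: /data        # data folders created on the host above
          containerPath: /data   # path visible inside the kind node
  ```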
- Install kind and create a kind cluster.

  ```bash
  kind create cluster --config=cluster-config/kind-single-node.yaml
  ```
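  Once the command returns, you can confirm the cluster is reachable with standard kubectl commands (the context name defaults to kind-kind unless the cluster config sets another name):

  ```bash
  kubectl cluster-info --context kind-kind
  kubectl get nodes
  ```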
- Change values in `helm/values.yaml` to match your environment.
  - Set persistentVolumeType to `hostPath`.
  - Follow the values.yaml configuration section for more details.
- Install the Helm chart with `helm`.
Please see Testing manifests with hostPath and Testing manifests with NFS for details.