AWS EKS Deployment (experimental)
This page provides an experimental, not further supported how-to for deploying a Carbyne Stack Virtual Cloud using AWS EKS clusters.
It describes how to install the prerequisite software components and how to deploy Carbyne Stack on two AWS EKS clusters.
⚠ WARNING
Carbyne Stack has been tested using the exact versions of the tools specified below. Deviating from this experimental, but somewhat tested configuration may create all kinds of issues.
ℹ INFO
This part of the tutorial has been developed and tested on Ubuntu 20.04. Please refer to this link for Ubuntu installation steps.
Software to be installed:
- kubectl v1.21.9
- AWS CLI v2.*
- eksctl v0.105.0
- Helm v3.7.1
- Helmfile v0.142.0
- Helm Diff Plugin v3.1.3
- OpenJDK 8
export EKSCTL_VERSION="0.105.0"
export KUBECTL_VERSION="1.21.9"
export HELM_VERSION="3.7.1"
export HELMFILE_VERSION="0.142.0"
export HELMDIFF_VERSION="3.1.3"
export OPENJDK_VERSION="8"
Detailed installation instructions for kubectl can be found here. Alternatively, you can install kubectl by following the instructions below.
-
Download kubectl 1.21.9.
curl -LO https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl
-
Install kubectl with the command below.
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
-
Verify the installation.
kubectl version --client
Detailed installation instructions about the AWS CLI can be found here. Alternatively, you can install the AWS CLI by following the instructions below.
-
Download the installation file.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
-
Extract the installer.
unzip awscliv2.zip
-
Run installation.
sudo ./aws/install
-
Verify the installation.
aws --version
-
Configure AWS CLI
Please follow the instructions here in order to configure the AWS CLI.
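In case you have not configured credentials yet, one non-interactive way is sketched below; the access key values are placeholders that you must replace with your own credentials, and the region matches the one used later in this guide.
aws configure set aws_access_key_id <your-access-key-id>
aws configure set aws_secret_access_key <your-secret-access-key>
aws configure set region eu-west-1
aws configure set output json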
The IAM user used to create and manage the EKS clusters must have permission to access the required resources. The following policy was assigned for testing purposes:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "eks:*", "cloudformation:*", "ec2:*", "iam:*", "elasticloadbalancing:DescribeLoadBalancers" ], "Resource": "*" } ] }
⚠ WARNING
Note that the specified policy grants extensive permissions. Use it with caution or restrict the permissions further.
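If you want to attach the policy from the command line, one possible way is an inline user policy as sketched below; the file name eks-admin-policy.json and the user name cs-admin are placeholders for your own setup.
# Assumes the policy document above has been saved as eks-admin-policy.json
aws iam put-user-policy \
  --user-name cs-admin \
  --policy-name cs-eks-deployment \
  --policy-document file://eks-admin-policy.json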
Detailed installation instructions for eksctl can be found here or here. Alternatively, you can install eksctl by following the instructions below.
-
Download and extract eksctl 0.105.0.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/v${EKSCTL_VERSION}/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
-
Move eksctl to /usr/local/bin.
sudo mv /tmp/eksctl /usr/local/bin
-
Verify the installation.
eksctl version
Detailed installation instructions for the Helm package manager can be found here. Alternatively, you can install Helm by following the instructions below.
-
Download the Helm package manager.
wget https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz
-
Unpack the downloaded compressed file.
tar -zxvf helm-v${HELM_VERSION}-linux-amd64.tar.gz
-
Move the unpacked content (the helm binary) to a directory in your PATH.
sudo mv linux-amd64/helm /usr/local/bin/helm
-
Verify your installation.
helm version
Detailed installation instructions for the Helmfile package can be found here. Alternatively, you can install Helmfile by following the instructions below.
-
Download the Helmfile package.
wget https://github.com/roboll/helmfile/releases/download/v${HELMFILE_VERSION}/helmfile_linux_amd64
-
Give execution permission to Helmfile.
chmod +x helmfile_linux_amd64
-
Move Helmfile to a directory in your PATH.
sudo mv helmfile_linux_amd64 /usr/local/bin/helmfile
-
Verify your installation.
helmfile -v
Detailed installation instructions for the Helm Diff plugin can be found here. Alternatively, you can install Helm Diff by following the instructions below.
-
Download the Helm Diff plugin archive.
wget https://github.com/databus23/helm-diff/releases/download/v${HELMDIFF_VERSION}/helm-diff-linux.tgz
-
Unpack the compressed file.
tar -zxvf helm-diff-linux.tgz
-
Put the unpacked contents into the Helm plugins folder. Your plugins folder may differ; check HELM_PLUGINS with the helm env command. Note that the diff folder must not already exist in the Helm plugins folder before executing the commands below.
mkdir -p ~/.local/share/helm/plugins
mv diff ~/.local/share/helm/plugins/diff
-
Verify your installation.
helm plugin list
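Alternatively, the plugin can usually be installed directly through Helm's plugin manager; note that this fetches the latest release from GitHub rather than the version pinned in HELMDIFF_VERSION above.
helm plugin install https://github.com/databus23/helm-diff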
Detailed installation instructions for OpenJDK can be found here. Alternatively, you can install OpenJDK by following the instructions below.
-
Install OpenJDK 8 with the following command.
sudo apt-get install openjdk-8-jdk
-
Verify your installation.
java -version
This section describes how to create and prepare two AWS EKS clusters for deploying a two-party Carbyne Stack Virtual Cloud.
After completing the steps described below, you should have two EKS clusters called apollo and starbuck deployed.
The following environment variables are used for configuration and can be adjusted if necessary.
export CS_CLUSTER_AWS_REGION="eu-west-1" # Ireland
export CS_CLUSTER_AWS_NODE_TYPE="c5.2xlarge" # 8 vCPU 16GiB Memory 10 GBit/s Bandwidth
Detailed instructions on how to create an AWS EKS cluster can be found here.
-
Create EKS Clusters. Both clusters can be created in parallel as the process will take some time.
eksctl create cluster --name cs-apollo --region ${CS_CLUSTER_AWS_REGION} --version 1.21 --instance-types ${CS_CLUSTER_AWS_NODE_TYPE} --nodes 1 --nodes-min 1 --nodes-max 1
eksctl create cluster --name cs-starbuck --region ${CS_CLUSTER_AWS_REGION} --version 1.21 --instance-types ${CS_CLUSTER_AWS_NODE_TYPE} --nodes 1 --nodes-min 1 --nodes-max 1
-
Export the clusters' kubeconfig contexts. The kubeconfigs for the created clusters are exported by eksctl automatically and can be retrieved as follows:
export CS_APOLLO_KUBE_CONTEXT=$(kubectl config get-contexts | grep cs-apollo.${CS_CLUSTER_AWS_REGION}.eksctl.io | awk '{gsub("\*","",$0);print $1}')
export CS_STARBUCK_KUBE_CONTEXT=$(kubectl config get-contexts | grep cs-starbuck.${CS_CLUSTER_AWS_REGION}.eksctl.io | awk '{gsub("\*","",$0);print $1}')
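To make sure both contexts were resolved correctly, you can, for example, print them and list the nodes of each cluster:
echo "apollo context:   ${CS_APOLLO_KUBE_CONTEXT}"
echo "starbuck context: ${CS_STARBUCK_KUBE_CONTEXT}"
kubectl --context ${CS_APOLLO_KUBE_CONTEXT} get nodes
kubectl --context ${CS_STARBUCK_KUBE_CONTEXT} get nodes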
-
Download the Istio Operator v1.7.3 using:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.7.3 TARGET_ARCH=x86_64 sh -
-
Create an Istio Control Plane definition:
cat <<EOF > istio-control-plane.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
---
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: cs-istiocontrolplane
spec:
  meshConfig:
    accessLogFile: /dev/stdout
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          resources:
            requests:
              cpu: 10m
              memory: 40Mi
          service:
            ports:
              ## You can add custom gateway ports in user values overrides,
              # but it must include those ports since helm replaces.
              # Note that AWS ELB will by default perform health checks on
              # the first port on this list. Setting this to the health
              # check port will ensure that health checks always work.
              # https://github.com/istio/istio/issues/12503
              - port: 15021
                targetPort: 15021
                name: status-port
              - port: 80
                targetPort: 8080
                name: http2
              - port: 443
                targetPort: 8443
                name: https
              - port: 31400
                targetPort: 31400
                name: tcp
              # This is the port where sni routing happens
              - port: 15443
                targetPort: 15443
                name: tls
              - port: 30000
                name: ephemeral-mpc-engine-port-0
              - port: 30001
                name: ephemeral-mpc-engine-port-1
              - port: 30002
                name: ephemeral-mpc-engine-port-2
              - port: 30003
                name: ephemeral-mpc-engine-port-3
              - port: 30004
                name: ephemeral-mpc-engine-port-4
    pilot:
      k8s:
        env:
          - name: PILOT_TRACE_SAMPLING
            value: "100"
        resources:
          requests:
            cpu: 10m
            memory: 100Mi
  values:
    global:
      proxy:
        resources:
          requests:
            cpu: 10m
            memory: 40Mi
    pilot:
      autoscaleEnabled: false
    gateways:
      istio-egressgateway:
        autoscaleEnabled: false
      istio-ingressgateway:
        autoscaleEnabled: false
EOF
-
Install the Istio operator and deploy the control plane on both clusters:
kubectl config use-context ${CS_APOLLO_KUBE_CONTEXT}
helm install istio-operator istio-1.7.3/manifests/charts/istio-operator \
  --set operatorNamespace=istio-operator \
  --set watchedNamespaces="istio-system" \
  --set hub="docker.io/istio" \
  --set tag="1.7.3"
kubectl apply -f istio-control-plane.yaml

kubectl config use-context ${CS_STARBUCK_KUBE_CONTEXT}
helm install istio-operator istio-1.7.3/manifests/charts/istio-operator \
  --set operatorNamespace=istio-operator \
  --set watchedNamespaces="istio-system" \
  --set hub="docker.io/istio" \
  --set tag="1.7.3"
kubectl apply -f istio-control-plane.yaml
-
Switch kubectl context.
kubectl config use-context ${CS_APOLLO_KUBE_CONTEXT}
-
Wait until a hostname has been assigned to the Istio Ingress Gateway:
kubectl get services --namespace istio-system istio-ingressgateway -w
The hostname eventually appears in the EXTERNAL-IP column.
-
Export the hostname for later use:
export CS_APOLLO_HOSTNAME=$(kubectl get services --namespace istio-system istio-ingressgateway --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
-
Switch kubectl context.
kubectl config use-context ${CS_STARBUCK_KUBE_CONTEXT}
-
Wait until a hostname has been assigned to the Istio Ingress Gateway:
kubectl get services --namespace istio-system istio-ingressgateway -w
The hostname eventually appears in the EXTERNAL-IP column.
-
Export the hostname for later use:
export CS_STARBUCK_HOSTNAME=$(kubectl get services --namespace istio-system istio-ingressgateway --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
ℹ INFO
As of now, it does not seem to be easily possible (if at all) to make subdomains resolvable under the hostnames automatically assigned to the clusters by AWS ELB. Since registering an individual domain for the test clusters and making subdomains configurable and accessible would require additional effort, the clusters' IPs are used for direct access in the following.
⚠ WARNING
Accessing a cluster directly by one of the VCP's IPs instead of its hostname is expected to be error-prone and should be done for test and demonstration purposes only.
export CS_APOLLO_PUBLIC_IP=$(host $CS_APOLLO_HOSTNAME | head -n 1 | awk "{print \$NF}")
export CS_STARBUCK_PUBLIC_IP=$(host $CS_STARBUCK_HOSTNAME | head -n 1 | awk "{print \$NF}")
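The host-based lookup above can be brittle, as ELB hostnames may resolve to several changing addresses, so it is worth checking that both variables actually contain an IP before continuing:
echo "apollo:   ${CS_APOLLO_PUBLIC_IP}"
echo "starbuck: ${CS_STARBUCK_PUBLIC_IP}"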
-
Define the patched Knative Serving resource configuration:
cat <<EOF > knative-serving.yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  version: 0.19.0
  manifests:
    - URL: https://github.com/carbynestack/serving/releases/download/v0.19.0_multiport-patch/serving-crds.yaml
    - URL: https://github.com/carbynestack/serving/releases/download/v0.19.0_multiport-patch/serving-core.yaml
    - URL: https://github.com/knative/net-istio/releases/download/v0.19.0/release.yaml
    - URL: https://github.com/knative/net-certmanager/releases/download/v0.19.0/release.yaml
  config:
    domain:
      \${CLUSTER_IP}.sslip.io: ""
EOF
-
Install the Knative Operator v0.19.0 using:
kubectl config use-context ${CS_APOLLO_KUBE_CONTEXT}
kubectl apply -f https://github.com/knative/operator/releases/download/v0.19.0/operator.yaml
kubectl config use-context ${CS_STARBUCK_KUBE_CONTEXT}
kubectl apply -f https://github.com/knative/operator/releases/download/v0.19.0/operator.yaml
-
Create a namespace for Knative Serving using:
kubectl config use-context ${CS_APOLLO_KUBE_CONTEXT}
kubectl create namespace knative-serving
kubectl config use-context ${CS_STARBUCK_KUBE_CONTEXT}
kubectl create namespace knative-serving
-
Install the patched Knative Serving component with a sslip.io custom domain using:
kubectl config use-context ${CS_APOLLO_KUBE_CONTEXT}
cat knative-serving.yaml | CLUSTER_IP=${CS_APOLLO_PUBLIC_IP} envsubst > knative-serving_apollo.yaml
kubectl apply -f knative-serving_apollo.yaml

kubectl config use-context ${CS_STARBUCK_KUBE_CONTEXT}
cat knative-serving.yaml | CLUSTER_IP=${CS_STARBUCK_PUBLIC_IP} envsubst > knative-serving_starbuck.yaml
kubectl apply -f knative-serving_starbuck.yaml
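Applying the KnativeServing resource returns immediately while the Knative Operator reconciles in the background. As a rough check (assuming the resource exposes its usual readiness columns), you can inspect the resource status on both clusters:
kubectl config use-context ${CS_APOLLO_KUBE_CONTEXT}
kubectl get knativeserving knative-serving --namespace knative-serving
kubectl config use-context ${CS_STARBUCK_KUBE_CONTEXT}
kubectl get knativeserving knative-serving --namespace knative-serving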
-
Download the Zalando Postgres Operator v1.5.0.
curl -sL https://github.com/zalando/postgres-operator/archive/refs/tags/v1.5.0.tar.gz | tar -xz
-
Install the operator
kubectl config use-context ${CS_APOLLO_KUBE_CONTEXT}
helm install postgres-operator postgres-operator-1.5.0/charts/postgres-operator

kubectl config use-context ${CS_STARBUCK_KUBE_CONTEXT}
helm install postgres-operator postgres-operator-1.5.0/charts/postgres-operator
The deployed pods on both clusters printed by the following commands should look similar to the list below:
kubectl config use-context ${CS_APOLLO_KUBE_CONTEXT}
kubectl get pods -A
kubectl config use-context ${CS_STARBUCK_KUBE_CONTEXT}
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default knative-operator-5744699cb5-v7vw9 1/1 Running 0 4m45s
default postgres-operator-8f59d6f5d-h5kcg 1/1 Running 0 26s
istio-operator istio-operator-6dcd77b979-cknkk 1/1 Running 0 36m
istio-system istio-ingressgateway-5c5ff8966c-srhgq 1/1 Running 0 31m
istio-system istiod-6f7dd46957-dwrmz 1/1 Running 0 31m
knative-serving activator-85fc89488c-5mlxk 1/1 Running 0 3m44s
knative-serving autoscaler-7c55df5645-x9hhf 1/1 Running 0 3m44s
knative-serving controller-6b69c8f5cc-r8g27 1/1 Running 0 3m44s
knative-serving istio-webhook-7dd7c94c7b-drbnl 1/1 Running 0 3m39s
knative-serving net-certmanager-webhook-84f545d6f-rw8kq 1/1 Running 0 3m38s
knative-serving networking-certmanager-5f9fd5f9bc-bvrwm 1/1 Running 0 3m38s
knative-serving networking-istio-85f6b5c894-jdwpn 1/1 Running 0 3m40s
knative-serving webhook-5c84fd68f4-2g9x5 1/1 Running 0 3m43s
kube-system aws-node-k92mq 1/1 Running 0 81m
kube-system coredns-7cc879f8db-kqkl6 1/1 Running 0 89m
kube-system coredns-7cc879f8db-xntqg 1/1 Running 0 89m
kube-system kube-proxy-r6n5c 1/1 Running 0 81m
This section describes how to set up a Carbyne Stack Virtual Cloud (VC) consisting of two Virtual Cloud Providers (VCP).
-
Check out the carbynestack repository and descend into the repository root directory using:
-
HTTP
git clone https://github.com/carbynestack/carbynestack.git
cd carbynestack
-
SSH
git clone git@github.com:carbynestack/carbynestack.git
cd carbynestack
-
Before deploying the virtual cloud providers, make some common configuration available using:
export RELEASE_NAME=cs
export DISCOVERY_MASTER_HOST=${CS_APOLLO_PUBLIC_IP}.sslip.io
export NO_SSL_VALIDATION=true
-
Launch the starbuck VCP using:
export FRONTEND_URL=${CS_STARBUCK_PUBLIC_IP}.sslip.io
export IS_MASTER=false
export AMPHORA_VC_PARTNER_URI=http://${CS_APOLLO_PUBLIC_IP}.sslip.io/amphora
kubectl config use-context ${CS_STARBUCK_KUBE_CONTEXT}
helmfile apply
-
Launch the apollo VCP using:
export FRONTEND_URL=${CS_APOLLO_PUBLIC_IP}.sslip.io
export IS_MASTER=true
export AMPHORA_VC_PARTNER_URI=http://${CS_STARBUCK_PUBLIC_IP}.sslip.io/amphora
export CASTOR_SLAVE_URI=http://${CS_STARBUCK_PUBLIC_IP}.sslip.io/castor
kubectl config use-context ${CS_APOLLO_KUBE_CONTEXT}
helmfile apply
-
Verify Deployment
The deployed pods on both clusters printed by the following commands should look similar to the list below:
kubectl config use-context ${CS_APOLLO_KUBE_CONTEXT}
kubectl get pods
kubectl config use-context ${CS_STARBUCK_KUBE_CONTEXT}
kubectl get pods
NAME                                                  READY   STATUS    RESTARTS   AGE
cs-amphora-57d5ffd5f7-r978d                           1/1     Running   0          117s
cs-castor-79f756c745-rqdpz                            1/1     Running   0          2m9s
cs-cs-postgres-dbms-0                                 1/1     Running   0          3m8s
cs-ephemeral-discovery-698d9c9f7b-qvgpn               1/1     Running   0          96s
cs-ephemeral-network-controller-7f456f7b89-6rqrw      1/1     Running   0          96s
cs-minio-786c47f4b8-zss8q                             1/1     Running   0          2m56s
cs-redis-776846984c-4pmq4                             1/1     Running   0          2m9s
ephemeral-generic-00001-deployment-56bdc5b9d7-xv627   2/2     Running   0          95s
knative-operator-5744699cb5-v7vw9                     1/1     Running   0          20m
postgres-operator-8f59d6f5d-h5kcg                     1/1     Running   0          16m
-
Carbyne Stack comes with a CLI that can be used to interact with a virtual cloud from the command line. Install the CLI using:
export CLI_VERSION=0.2-SNAPSHOT-2336890983-14-a4260ab
curl -o cs.jar -L https://github.com/carbynestack/cli/releases/download/$CLI_VERSION/cli-$CLI_VERSION-jar-with-dependencies.jar
-
Next configure the CLI to talk to the just deployed virtual cloud by creating a matching CLI configuration file in ~/.cs using:
mkdir -p ~/.cs
cat <<EOF | envsubst > ~/.cs/config
{
  "prime" : 198766463529478683931867765928436695041,
  "r" : 141515903391459779531506841503331516415,
  "noSslValidation" : true,
  "trustedCertificates" : [ ],
  "providers" : [ {
    "amphoraServiceUrl" : "http://${CS_APOLLO_PUBLIC_IP}.sslip.io/amphora",
    "castorServiceUrl" : "http://${CS_APOLLO_PUBLIC_IP}.sslip.io/castor",
    "ephemeralServiceUrl" : "http://${CS_APOLLO_PUBLIC_IP}.sslip.io/",
    "id" : 1,
    "baseUrl" : "http://${CS_APOLLO_PUBLIC_IP}.sslip.io/"
  }, {
    "amphoraServiceUrl" : "http://${CS_STARBUCK_PUBLIC_IP}.sslip.io/amphora",
    "castorServiceUrl" : "http://${CS_STARBUCK_PUBLIC_IP}.sslip.io/castor",
    "ephemeralServiceUrl" : "http://${CS_STARBUCK_PUBLIC_IP}.sslip.io/",
    "id" : 2,
    "baseUrl" : "http://${CS_STARBUCK_PUBLIC_IP}.sslip.io/"
  } ],
  "rinv" : 133854242216446749056083838363708373830
}
EOF
You can verify that the configuration works by fetching telemetry data from castor using:
⚠ ATTENTION
Replace <#> with either 1 for the apollo cluster or 2 for the starbuck cluster.
java -jar cs.jar castor get-telemetry <#>
Before you can actually use the services provided by the Virtual Cloud, you have to upload cryptographic material. As generating offline material is a very time-consuming process, we provide pre-generated material.
⚠ DANGER
Using pre-generated offline material is not secure at all. DO NOT DO THIS IN A PRODUCTION SETTING.
-
Download and decompress the archive containing the material using:
curl -O -L https://github.com/carbynestack/carbynestack/raw/9c0c17599ae08253398a000f2a23b3ded8611499/tuples/fake-crypto-material-0.2.zip
unzip -d crypto-material fake-crypto-material-0.2.zip
rm fake-crypto-material-0.2.zip
-
Upload and activate tuples using:
ℹ TIP
Adapt the NUMBER_OF_CHUNKS variable in the following snippet to tune the number of uploaded tuples. In case NUMBER_OF_CHUNKS > 1, the same tuples are uploaded repeatedly.
cat << 'EOF' > upload-tuples.sh
#!/bin/bash
SCRIPT_PATH="$( cd "$(dirname "$0")" ; pwd -P )"
TUPLE_FOLDER=${SCRIPT_PATH}/crypto-material/2-p-128
CLI_PATH=${SCRIPT_PATH}
NUMBER_OF_CHUNKS=1

function uploadTuples {
  echo ${NUMBER_OF_CHUNKS}
  for type in INPUT_MASK_GFP MULTIPLICATION_TRIPLE_GFP; do
    for (( i=0; i<${NUMBER_OF_CHUNKS}; i++ )); do
      local chunkId=$(uuidgen)
      echo "Uploading ${type} to http://${CS_APOLLO_PUBLIC_IP}.sslip.io/castor (Apollo)"
      java -jar ${CLI_PATH}/cs.jar castor upload-tuple -f ${TUPLE_FOLDER}/Triples-p-P0 -t ${type} -i ${chunkId} 1
      local statusMaster=$?
      echo "Uploading ${type} to http://${CS_STARBUCK_PUBLIC_IP}.sslip.io/castor (Starbuck)"
      java -jar ${CLI_PATH}/cs.jar castor upload-tuple -f ${TUPLE_FOLDER}/Triples-p-P1 -t ${type} -i ${chunkId} 2
      local statusSlave=$?
      if [[ "${statusMaster}" -eq 0 && "${statusSlave}" -eq 0 ]]; then
        java -jar ${CLI_PATH}/cs.jar castor activate-chunk -i ${chunkId} 1
        java -jar ${CLI_PATH}/cs.jar castor activate-chunk -i ${chunkId} 2
      else
        echo "ERROR: Failed to upload one tuple chunk - not activated"
      fi
    done
  done
}

uploadTuples
EOF
chmod 755 upload-tuples.sh
./upload-tuples.sh
-
You can verify that the uploaded tuples are now available for use by the Carbyne Stack services using:
⚠ ATTENTION
Replace <#> with either 1 for the apollo cluster or 2 for the starbuck cluster.
java -jar cs.jar castor get-telemetry <#>
You now have a fully functional Carbyne Stack Virtual Cloud at hand and can run first examples, e.g. by following the Carbyne Stack tutorial on solving the Millionaires Problem.
In order to delete both AWS EKS clusters, run the following commands:
eksctl delete cluster --name cs-apollo --region ${CS_CLUSTER_AWS_REGION}
eksctl delete cluster --name cs-starbuck --region ${CS_CLUSTER_AWS_REGION}
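Cluster deletion can take a while. Since eksctl provisions the clusters via CloudFormation, you can additionally check that the corresponding stacks are gone, or inspect them if deletion gets stuck. The stack-name prefix below is an assumption based on eksctl's default naming scheme (eksctl-<cluster-name>-...).
aws cloudformation list-stacks \
  --region ${CS_CLUSTER_AWS_REGION} \
  --query "StackSummaries[?starts_with(StackName, 'eksctl-cs-')].[StackName,StackStatus]" \
  --output table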