---
id: gke
title: Getting started on GKE
sidebar_label: GKE
---

The following will help you get started running a riff function on GKE.

Create a Google Cloud project

A project is required to consume any Google Cloud services, including GKE clusters. When you log into the console, you can select or create a project from the dropdown at the top.
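
If you prefer the CLI, a project can also be created with gcloud; a minimal sketch, where my-riff-project is a placeholder ID of your choosing (project IDs must be globally unique):

# hypothetical project ID; pick your own
gcloud projects create my-riff-project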

install gcloud

Follow the quickstart instructions to install the Google Cloud SDK which includes the gcloud CLI. You may need to add the google-cloud-sdk/bin directory to your path. Once installed, gcloud init will open a browser to start an oauth flow and configure gcloud to use your project.

gcloud init

install kubectl

Kubectl is the Kubernetes CLI. If you don't already have kubectl on your machine, you can use gcloud to install it.

gcloud components install kubectl
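
To confirm the install, check the client version:

kubectl version --client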

configure gcloud

Create an environment variable, replacing ??? with your project ID (not to be confused with your project name; use gcloud projects list to find your project ID).

GCP_PROJECT_ID=???
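
For example, to list your projects and capture the ID in one step (a sketch assuming the placeholder project name my-riff-project used earlier):

gcloud projects list
GCP_PROJECT_ID=$(gcloud projects list --filter='name:my-riff-project' --format='value(projectId)')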

Check your default project.

gcloud config list

If necessary, change the default project.

gcloud config set project $GCP_PROJECT_ID

List the available compute zones, and the regions along with their quotas.

gcloud compute zones list
gcloud compute regions list

Choose a zone, preferably in a region with higher CPU quota.

export GCP_ZONE=us-central1-b

Enable the necessary APIs for the project. You also need to enable billing for your new project.

gcloud services enable \
  cloudapis.googleapis.com \
  container.googleapis.com \
  containerregistry.googleapis.com
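
Billing can be enabled from the console's Billing page. Alternatively, here is a sketch using the gcloud beta billing commands, where 0X0X0X-0X0X0X-0X0X0X is a placeholder billing account ID:

# placeholder billing account ID; use one from the accounts list output
gcloud beta billing accounts list
gcloud beta billing projects link $GCP_PROJECT_ID --billing-account=0X0X0X-0X0X0X-0X0X0X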

Create a GKE cluster

Choose a new unique lowercase cluster name and create the cluster. For this demo, three nodes should be sufficient.

# replace ??? below with your own cluster name
export CLUSTER_NAME=???
gcloud container clusters create $CLUSTER_NAME \
  --cluster-version=latest \
  --machine-type=n1-standard-2 \
  --enable-autoscaling --min-nodes=1 --max-nodes=3 \
  --enable-autorepair \
  --scopes=cloud-platform \
  --num-nodes=3 \
  --zone=$GCP_ZONE

For additional details see Knative Install on Google Kubernetes Engine.

Confirm that your kubectl context is pointing to the new cluster.

kubectl config current-context

To list contexts:

kubectl config get-contexts

You should also be able to find the cluster in the Kubernetes Engine console.
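
If the current context does not point at the new cluster (for example, when working from a different machine), fetching the cluster credentials sets it:

gcloud container clusters get-credentials $CLUSTER_NAME --zone=$GCP_ZONE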

grant yourself cluster-admin permissions

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user=$(gcloud config get-value core/account)
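
To double-check that the binding took effect, ask the API server whether you can perform any action in any namespace:

kubectl auth can-i '*' '*' --all-namespaces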

Install the Helm CLI

Helm is a popular package manager for Kubernetes. The riff runtime and its dependencies are provided as Helm charts.

Download and install the latest Helm 2.x release for your platform. (Helm 3 is currently in alpha and has not been tested for compatibility with riff.)
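
As one example, the release tarball can be fetched directly; a sketch assuming macOS on amd64 and Helm v2.16.1 (adjust the version and platform for your machine):

# placeholder version/platform; see the Helm releases page for current 2.x builds
curl -fsSL https://get.helm.sh/helm-v2.16.1-darwin-amd64.tar.gz | tar xz
sudo mv darwin-amd64/helm /usr/local/bin/helm
helm version --client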

After installing the Helm CLI, we need to initialize Tiller, Helm's in-cluster component.

kubectl create serviceaccount tiller -n kube-system
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
helm init --wait --service-account tiller
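
Once helm init returns, helm version should report both the client and the server (Tiller) versions:

helm version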

Please see the Helm documentation for additional Helm security configuration.

Install the riff CLI

The riff CLI is available to download from our GitHub releases page. Once installed, check that the riff CLI version is 0.4.0 or later.

riff --version
riff version 0.4.0 (d1b042f4247d8eb01ee0b9e984926028a2844fe8)

At this point it is useful to monitor your cluster using a utility like watch. To install it on a Mac:

brew install watch

Watch pods in a separate terminal.

watch -n 1 kubectl get pod --all-namespaces

Install riff using Helm

Load the projectriff charts

helm repo add projectriff https://projectriff.storage.googleapis.com/charts/releases
helm repo update
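
To confirm the charts are now visible locally, search the repo:

helm search projectriff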

riff can be installed with or without Knative. The riff Core runtime is available in both environments, however, the riff Knative Runtime is only available if Knative is installed.

To install riff with Knative and Istio:

helm install projectriff/istio --name istio --version 0.4.x --namespace istio-system --wait
helm install projectriff/riff --name riff --version 0.4.x --set knative.enabled=true

Alternatively, install riff without Knative or Istio:

helm install projectriff/riff --name riff --version 0.4.x

Verify the riff install.

riff doctor
NAMESPACE     STATUS
riff-system   ok

RESOURCE                              READ      WRITE
configmaps                            allowed   allowed
secrets                               allowed   allowed
pods                                  allowed   n/a
pods/log                              allowed   n/a
applications.build.projectriff.io     allowed   allowed
containers.build.projectriff.io       allowed   allowed
functions.build.projectriff.io        allowed   allowed
deployers.core.projectriff.io         allowed   allowed
processors.streaming.projectriff.io   allowed   allowed
streams.streaming.projectriff.io      allowed   allowed
adapters.knative.projectriff.io       allowed   allowed
deployers.knative.projectriff.io      allowed   allowed

create a Kubernetes secret for pushing images to GCR

Create a GCP service account in the GCP console or using the gcloud CLI.

gcloud iam service-accounts create push-image

Grant the service account a "storage.admin" role using the IAM manager or using gcloud.

gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
    --member serviceAccount:push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com \
    --role roles/storage.admin

Create a new authentication key for the service account and save it in gcr-storage-admin.json.

gcloud iam service-accounts keys create \
  --iam-account "push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
  gcr-storage-admin.json

apply build credentials

Use the riff CLI to apply credentials for a container registry (if you plan on using a namespace other than default, add the --namespace flag).

riff credential apply my-creds --gcr gcr-storage-admin.json --set-default-image-prefix
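
To verify the credential and the default image prefix, list credentials (assuming your CLI version includes the list subcommand):

riff credential list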

Create a function

This step will pull the source code for a function from a GitHub repo, build a container image based on the node function invoker, and push the resulting image to GCR. The function resource represents a build plan that will report the latest built image.

riff function create square \
  --git-repo https://github.com/projectriff-samples/node-square \
  --artifact square.js \
  --tail

After the function is created, you can get the built image by listing functions.

riff function list
NAME     LATEST IMAGE                                                                                         ARTIFACT    HANDLER   INVOKER   STATUS   AGE
square   gcr.io/$GCP_PROJECT_ID/square@sha256:ac089ca183368aa831597f94a2dbb462a157ccf7bbe0f3868294e15a24308f68   square.js   <empty>   <empty>   Ready    1m13s

Create a Knative deployer

The Knative Runtime is only available on clusters with Istio and Knative installed. Knative deployers run riff workloads using Knative resources which provide auto-scaling (including scale-to-zero) based on HTTP request traffic, and routing.

riff knative deployer create knative-square --function-ref square --tail

After the deployer is created, you can see the hostname by listing deployers.

riff knative deployer list
NAME             TYPE       REF      HOST                                 STATUS   AGE
knative-square   function   square   knative-square.default.example.com   Ready    28s

invoke the function

Knative configures HTTP routes on the istio-ingressgateway. Requests are routed by hostname.

Look up the LoadBalancer IP for the istio-ingressgateway; you should see a value like 35.205.114.86.

INGRESS_IP=$(kubectl get svc istio-ingressgateway --namespace istio-system --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')
echo $INGRESS_IP

Invoke the function by POSTing to the ingressgateway, passing the hostname and content-type as headers.

curl http://$INGRESS_IP/ -w '\n' \
-H 'Host: knative-square.default.example.com' \
-H 'Content-Type: application/json' \
-d 7
49

Create a Core deployer

The Core runtime is available on all riff clusters. It deploys riff workloads as "vanilla" Kubernetes deployments and services.

riff core deployer create k8s-square --function-ref square --tail

After the deployer is created, you can see the service name by listing deployers.

riff core deployer list
NAME         TYPE       REF      SERVICE               STATUS   AGE
k8s-square   function   square   k8s-square-deployer   Ready    37s

invoke the function

In a separate terminal, start port-forwarding to the ClusterIP service created by the deployer.

kubectl port-forward service/k8s-square-deployer 8080:80
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

Make a POST request to invoke the function using the port assigned above.

curl http://localhost:8080/ -w '\n' \
-H 'Content-Type: application/json' \
-d 8
64

Note that unlike Knative, the Core runtime will not scale deployments down to zero.
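
Because these are plain Kubernetes resources, you can inspect them directly; the deployer creates a Deployment and a ClusterIP Service like the ones listed above:

kubectl get deployments,services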

Cleanup

riff knative deployer delete knative-square
riff core deployer delete k8s-square
riff function delete square

Upgrading

If you need to upgrade riff, we recommend uninstalling and then reinstalling.
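
Once the old release has been removed (see Uninstalling below), reinstalling amounts to refreshing the chart repository and repeating the install step, for example:

helm repo update
# add --set knative.enabled=true if you were running the Knative runtime
helm install projectriff/riff --name riff --version 0.4.x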

Uninstalling

You can use helm to uninstall riff.

# remove any riff resources
kubectl delete riff --all-namespaces --all

# remove any Knative resources (if Knative runtime is enabled)
kubectl delete knative --all-namespaces --all

# remove riff
helm delete --purge riff
kubectl delete customresourcedefinitions.apiextensions.k8s.io -l app.kubernetes.io/managed-by=Tiller,app.kubernetes.io/instance=riff

# remove istio (if installed)
helm delete --purge istio
kubectl delete namespace istio-system
kubectl get customresourcedefinitions.apiextensions.k8s.io -oname | grep istio.io | xargs -L1 kubectl delete