Use the Google Kubernetes Engine web UI to create a Kubernetes cluster
The following configuration was tested:
- Master Version: 1.11
- Number of nodes: 3
- Machine type: 4 vCPU, 15 GB RAM
- Auto Upgrade: off
- Auto Repair: off
- Enable VPC-native (using alias IP)
- Enable logging and monitoring using Stackdriver Kubernetes monitoring
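The same configuration can also be created from the command line; a sketch using gcloud (the cluster name and zone are placeholders, and flag names may differ between gcloud versions):

```shell
# n1-standard-4 = 4 vCPU, 15 GB RAM
gcloud container clusters create my-ckan-cluster \
    --zone us-central1-a \
    --cluster-version 1.11 \
    --num-nodes 3 \
    --machine-type n1-standard-4 \
    --no-enable-autoupgrade \
    --no-enable-autorepair \
    --enable-ip-alias \
    --enable-stackdriver-kubernetes
```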
Use the Google Cloud SQL web UI to create a DB instance
The following configuration was tested:
- PostgreSQL 9.6
- Same zone as the Kubernetes cluster
- Connect using private IP only
- Machine type: 4 vCPU, 15 GB RAM
- Storage type: SSD
- High availability
- Automatic backups
- Connectivity: Private IP
Optional: if you enabled "Public IP", add your IP/network inside the SQL instance "Connections" tab ("Authorized networks").
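The Cloud SQL configuration above can likewise be sketched as a gcloud invocation (instance name and zone are placeholders; the private-IP flags were beta at the time of this gcloud version, so check your gcloud release notes):

```shell
# db-custom-4-15360 = 4 vCPU, 15360 MB (15 GB) RAM
gcloud sql instances create my-ckan-db \
    --database-version POSTGRES_9_6 \
    --zone us-central1-a \
    --tier db-custom-4-15360 \
    --storage-type SSD \
    --availability-type REGIONAL \
    --backup-start-time 04:00 \
    --network default \
    --no-assign-ip
```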
The management server is an optional but recommended component which provides management services for the cluster.
Follow this guide to create the server and deploy Rancher and Jenkins on it.
Log in to your Rancher deployment on the management server.
Add cluster > Import existing cluster > Follow instructions in the UI
Click on the cluster and then on kubeconfig file.
Download the file locally.
This provides small-volume storage for shared configurations and infrastructure.
- Download the helm client:
curl -LO https://git.io/get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
- Create a service account for Tiller and initialize helm on your cluster:
kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
helm repo update
- Deploy the nfs-server-provisioner helm chart (can use the Rancher catalog app) with the following values:
persistence.enabled = true
persistence.size = 5Gi
storageClass.name = cca-ckan
Or from the command line:
helm install --namespace=ckan-cloud stable/nfs-server-provisioner --name cloud-nfs --set=persistence.enabled=true,persistence.size=5Gi,storageClass.name=cca-ckan
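Workloads can then request storage from this provisioner by referencing the cca-ckan storage class; a minimal sketch (the claim name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-config-pvc
  namespace: ckan-cloud
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: cca-ckan
  resources:
    requests:
      storage: 1Gi
```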
- Need to save the Service Account key (JSON)
- Need to have the gcloud command in PATH
- Need to have a domain and a CloudFlare account
- Need to have a StatusCake account
- Prepare a separate kubeconfig to be used by Deis (can be done after cluster initialization)
- Create a storage bucket in advance (name it ckan-storage-import-bucket, for example)
- Prepare a Gitlab access token (read-only permissions)
- Prepare a CloudFlare access token
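Before continuing, you can sanity-check the local tooling; a minimal POSIX-shell sketch (this helper is illustrative, not part of ckan-cloud-operator, and the command list is an assumption based on the prerequisites above):

```shell
# Check that the required CLI tools are on PATH
for cmd in gcloud kubectl helm; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```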
Follow the ckan-cloud-operator installation and usage guide in the README.md to configure ckan-cloud-operator to use with kubeconfig file.
Then run interactive initialization of the currently connected cluster:
ckan-cloud-operator cluster initialize --interactive
During interactive initialization:
- Set enable-deis-ckan: y
- If the environment is production, set env-id to p on the "routers" step.
- On the "solr" step of interactive initialization, choose self-hosted: y
- On the "ckan" step, when asked for docker server/username/password, enter your Gitlab credentials; the password should be your Gitlab access token.
Give the service account permission to change cluster roles:
kubectl create clusterrolebinding default-sa-binding --clusterrole=cluster-admin --user=<service account email>
Create an admin user:
ckan-cloud-operator users create your.name --role=admin
Get the kubeconfig file for your admin user:
ckan-cloud-operator users get-kubeconfig your.name > /path/to/your.kube-config
Warning: /path/to/your.kube-config should not be the same as your current kubeconfig file; otherwise you will overwrite and lose your existing kubeconfig without receiving the new one.
Replace the kube-config file for your environment with the newly created kube-config.
You should use the ckan-cloud-operator generated kube-config for increased security and audit logs.
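To avoid clobbering the active kubeconfig, a small guard can be run before generating the file; a sketch assuming a POSIX shell (the target path is a placeholder):

```shell
# Refuse to write the admin kubeconfig over the kubeconfig currently in use
CURRENT="${KUBECONFIG:-$HOME/.kube/config}"
TARGET="/tmp/your.kube-config"   # placeholder: set to your real target path
if [ "$TARGET" = "$CURRENT" ]; then
  echo "refusing: target equals the active kubeconfig" >&2
else
  echo "safe to write: $TARGET"
fi
```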
First, read the docs here.
Read help and enable built-in autoscaler if needed:
ckan-cloud-operator cluster setup-autoscaler --help
- Copy or fork from an existing repo (for example viderum/cloud-lithuania)
- Update the parameters in the .env file inside the repo and push to master
- Make sure Gitlab CI ran successfully and pushed the image
Optional: if the datapushers registry is outside the Gitlab organization you configured during cluster setup, create a docker registry secret to retrieve datapusher images:
kubectl -n ckan-cloud create secret docker-registry datapushers-docker-registry --docker-server=registry.gitlab.com --docker-username=<username> --docker-password=<personal access token> --docker-email=<email>
Initialize datapushers:
ckan-cloud-operator datapushers initialize
ckan-cloud-operator db gcloudsql initialize --interactive --db-prefix demo
ckan-cloud-operator db proxy port-forward --db-prefix demo
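While the proxy is running, you can connect to the forwarded port with a local client; a sketch assuming the proxy listens on localhost:5432 (the user name is a placeholder):

```shell
# List databases through the forwarded connection
psql -h 127.0.0.1 -p 5432 -U postgres -l
```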
ckan-cloud-operator deis-instance create from-gitlab <repo> ckan_default ckandemo
Optionally add --use-private-gitlab-repo if the repo you passed is outside the organization you configured during cluster setup (e.g. forked to your private account). You will be asked to provide your Gitlab deploy token.
Follow the steps here to create internal/external instance routes.