Instructions on how to set up an Oracle Kubernetes Engine (OKE) cluster


These are instructions on how to set up an Oracle Kubernetes Engine (OKE) cluster, along with a Terraform module that automates part of the process.


First off, you'll need to do some pre-deploy setup. That's all detailed here.

Clone the Module

Now, you'll want a local copy of this repo. You can make that with the commands:

git clone
cd oke-how-to/terraform

We now need to initialize the directory containing the module. This downloads the OCI provider plugin that the module uses. You can do this by running:

terraform init
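Before the plan will work, the module also needs your OCI credentials and target compartment. As a rough sketch only — the variable names and values below are hypothetical placeholders, and the module's variables.tf defines the actual inputs — a terraform.tfvars might look like:

```hcl
# Hypothetical terraform.tfvars -- names and values are illustrative only;
# check the module's variables.tf for the real inputs it expects.
tenancy_ocid     = "ocid1.tenancy.oc1..example"
user_ocid        = "ocid1.user.oc1..example"
fingerprint      = "aa:bb:cc:dd:..."
private_key_path = "~/.oci/oci_api_key.pem"
region           = "us-phoenix-1"
compartment_ocid = "ocid1.compartment.oc1..example"
```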

This gives the following output:


Now for the main attraction. Let's make sure the plan looks good:

terraform plan

That gives:

If that's good, we can go ahead and apply the deployment:

terraform apply

You'll need to enter yes when prompted. The apply should take about five minutes to run. Once complete, you'll see something like this:

Viewing the Cluster in the Console

We can check out our new cluster in the console by navigating here.

Similarly, the IaaS machines running the cluster are viewable here.

Set Up the Terminal

To interact with our cluster, we need kubectl on our local machine. Instructions for that are here. I'm a big fan of easy, and I'm on a Mac, so I just ran:

brew install kubectl

That gave me this:

We're also probably going to want helm. Once again, brew is our friend. If you're on another platform, take a look here.

brew install kubernetes-helm

That gave me this:

The terraform apply dumped a Kubernetes config file called config. By default, kubectl expects its config file to be at ~/.kube/config, so we can put it there by running:

mkdir -p ~/.kube
mv config ~/.kube
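If you'd rather not move the file (for example, you already have a ~/.kube/config you want to keep), kubectl also honors the KUBECONFIG environment variable. A session-local alternative, run from the terraform directory where config was written:

```shell
# Point kubectl at the generated file for this shell session only,
# leaving any existing ~/.kube/config untouched.
export KUBECONFIG="$PWD/config"
echo "kubectl will now read: $KUBECONFIG"
```

This only lasts for the current shell; moving the file as above is the persistent option.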

We can make sure this all worked by running this command to check out the nodes in our cluster:

kubectl get nodes

That should give something like:

Make Yourself Admin

You probably want your kubectl set up so that you're a cluster admin. Otherwise your access to your new cluster will be limited. There are some instructions on that here. You'll need to grab your user OCID (possibly from the console, here) and then run a command like:

kubectl create clusterrolebinding myadmin --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaa...zutq
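Equivalently, the same binding can be written declaratively and applied with kubectl apply -f. This is a sketch assuming the binding name and user OCID from the command above:

```yaml
# Declarative equivalent of the kubectl create clusterrolebinding command.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: myadmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ocid1.user.oc1..aaaaa...zutq   # your user OCID
```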

That gives this:

Destroy the Deployment

When you no longer need the OKE cluster, you can run this to delete the deployment:

terraform destroy

You'll need to enter yes when prompted. Once complete, you'll see something like this: