Project that provisions a Kubernetes (k8s) cluster using the k8s operator pattern.
The project is built around the Operator SDK framework. Reference its Prerequisites and how to install the Operator SDK CLI if you are going to extend this project.
The base project was created using:
operator-sdk new cluster-operator
cd cluster-operator
operator-sdk add api --api-version=cluster-operator.infobloxopen.github.com/v1alpha1 --kind=Cluster
operator-sdk add controller --api-version=cluster-operator.infobloxopen.github.com/v1alpha1 --kind=Cluster

You can use kind or minikube for development:

kind create cluster

The Operator SDK does a lot of the heavy lifting, so we can focus on the custom type definition for our Cluster object in cluster_types.go and the business logic in cluster_controller.go.
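As a rough orientation, the spec side of the custom type in cluster_types.go might look like the sketch below. The field names are inferred from the example Cluster resource shown later in this README, not copied from the repository, so the actual definitions may differ.

```go
package v1alpha1

// KopsConfig mirrors the kops_config block of the example Cluster resource
// shown later in this README; field names are inferred from that output.
type KopsConfig struct {
	Name        string   `json:"name,omitempty"`
	StateStore  string   `json:"state_store,omitempty"`
	Vpc         string   `json:"vpc,omitempty"`
	MasterCount int      `json:"master_count,omitempty"`
	MasterEc2   string   `json:"master_ec2,omitempty"`
	WorkerCount int      `json:"worker_count,omitempty"`
	WorkerEc2   string   `json:"worker_ec2,omitempty"`
	Zones       []string `json:"zones,omitempty"`
}

// ClusterSpec defines the desired state of a Cluster.
type ClusterSpec struct {
	Name       string     `json:"name,omitempty"`
	KopsConfig KopsConfig `json:"kops_config,omitempty"`
}
```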
After making changes run:

operator-sdk generate k8s

This regenerates code using code-gen; the generated deep-copy code is captured in zz_generated.deepcopy.go.
You can do custom validation using kubebuilder tags.
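For example, kubebuilder markers placed above spec fields are turned into OpenAPI validation in the generated CRD. The sketch below repeats a few of the fields from the spec sketch above; the specific constraints are illustrative, not the project's actual rules.

```go
package v1alpha1

// KopsConfig with illustrative kubebuilder validation markers.
type KopsConfig struct {
	// +kubebuilder:validation:Minimum=1
	MasterCount int `json:"master_count,omitempty"`

	// +kubebuilder:validation:Minimum=0
	WorkerCount int `json:"worker_count,omitempty"`

	// +kubebuilder:validation:Pattern="^s3://"
	StateStore string `json:"state_store,omitempty"`

	// +kubebuilder:validation:MinItems=1
	Zones []string `json:"zones,omitempty"`
}
```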
The following environment variables are required:
The following environment variables are optional:
CLUSTER_OPERATOR_DEVELOPMENT - If set we will do kops dry-run and will not create cloud resources
SSH_KEY - Override the default public key built into the operator
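A minimal sketch of how the operator might consume these variables is shown below; the helper names are hypothetical and the actual handling in the operator may differ.

```go
package main

import "os"

// devMode reports whether CLUSTER_OPERATOR_DEVELOPMENT is set, in which case
// the controller only performs a kops dry run and creates no cloud resources.
func devMode() bool {
	_, set := os.LookupEnv("CLUSTER_OPERATOR_DEVELOPMENT")
	return set
}

// sshPublicKey returns the key from SSH_KEY if provided, otherwise the
// default public key built into the operator (defaultKey is a placeholder).
func sshPublicKey(defaultKey string) string {
	if key := os.Getenv("SSH_KEY"); key != "" {
		return key
	}
	return defaultKey
}
```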
kind create cluster

Build and Run

make deploy-local

Test to see if it is working:
make cluster

Check status:
make status

To delete the cluster:
make delete

If you stop, make changes, and rerun the controller:
make operator-todo

Assuming you have minikube or a cluster with Helm Tiller, you can build and run:
make deploy

You can use the other targets to create, check status, or delete:
make cluster
make status
make delete

If the cluster is at Phase == Done and you want to see more detail about it, you can use the kubeconfig saved in tmp/config.yaml to reach the cluster:
KUBECONFIG=tmp/config.yaml kubectl get nodes

The kubeconfig can also be retrieved from the API by querying the cluster CRD with kubectl:
kubectl get cluster example-cluster -o yaml
apiVersion: cluster-operator.infobloxopen.github.com/v1alpha1
kind: Cluster
metadata:
  annotations:
    ....
spec:
  kops_config:
    master_count: 1
    master_ec2: t2.micro
    name: seizadi.soheil.belamaric.com
    state_store: s3://kops.state.seizadi.infoblox.com
    vpc: vpc-0a75b33895655b46a
    worker_count: 2
    worker_ec2: t2.micro
    zones:
    - us-east-2a
    - us-east-2b
  name: seizadi
status:
  kops_status:
    nodes:
    - hostname: ip-172-17-17-51.us-east-2.compute.internal
      name: ip-172-17-17-51.us-east-2.compute.internal
      role: master
      status: "True"
      zone: us-east-2a
    ....
  kubeconfig:
    apiVersion: v1
    clusters:
    - cluster:
      ....
  phase: Done
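The status block above suggests status types along the following lines. This is a sketch inferred from the output, not copied from the repository; in particular, the kubeconfig is modeled here as a plain string for simplicity.

```go
package v1alpha1

// NodeStatus mirrors one entry of status.kops_status.nodes in the example above.
type NodeStatus struct {
	Name     string `json:"name,omitempty"`
	Hostname string `json:"hostname,omitempty"`
	Role     string `json:"role,omitempty"`
	Status   string `json:"status,omitempty"`
	Zone     string `json:"zone,omitempty"`
}

// KopsStatus mirrors status.kops_status.
type KopsStatus struct {
	Nodes []NodeStatus `json:"nodes,omitempty"`
}

// ClusterStatus defines the observed state of a Cluster, including the
// kubeconfig used to reach the provisioned cluster and the overall phase.
type ClusterStatus struct {
	KopsStatus KopsStatus `json:"kops_status,omitempty"`
	Kubeconfig string     `json:"kubeconfig,omitempty"`
	Phase      string     `json:"phase,omitempty"`
}
```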
sc-l-seizadi:cluster-operator seizadi$ kubectl -n `cat .id` get cluster example-cluster
NAME AGE
example-cluster   30m

Getting debugging to work with Delve is important; get the latest version:
go get -u github.com/go-delve/delve

If you want to run it from the command line you can build the binary:
cd $GOPATH/src/github.com/go-delve/delve
make install

Debugging a client-go operator was easier since you could build the project natively, but with the sdk-operator you have to build with the Delve option:
make operator-debug

Then connect with a remote debugger (even though you are running it locally). The default port 2345 is the one you need; later, if you run the operator in the cluster, you will need to forward the port and configure a higher port number, more on this later.
The cluster-operator uses kops for creating clusters on AWS. The base requirements are:
- AWS Key Pair with proper AWS-IAM for Kops
- AWS S3 store for Kops state
In the following examples we will create Route53 DNS and use the default VPC, but these and other AWS services should be managed by cluster-operator using the AWS Service Broker or the AWS Service Operator. I have not made a decision on which to use yet; the latter is preferred, and AWS has recently committed to supporting the AWS Service Operator as a product.
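As a rough illustration of how the controller's reconcile logic might drive kops, here is a sketch that assumes it shells out to the kops CLI with flags matching the examples below. The helper name and exact flags are illustrative, not the repository's actual code.

```go
package controller

import (
	"fmt"
	"os"
	"os/exec"
)

// createCluster sketches shelling out to kops from the reconcile loop.
// In development mode (CLUSTER_OPERATOR_DEVELOPMENT set) no --yes flag is
// passed, so kops only computes the desired state (a dry run).
func createCluster(name, stateStore, vpc string, dryRun bool) error {
	args := []string{
		"create", "cluster",
		"--name=" + name,
		"--state=" + stateStore,
		"--vpc=" + vpc,
		"--master-count=1",
		"--master-size=t2.micro",
		"--node-count=2",
		"--node-size=t2.micro",
		"--zones=us-east-2a,us-east-2b",
	}
	if !dryRun {
		args = append(args, "--yes")
	}

	cmd := exec.Command("kops", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("kops create cluster failed: %w", err)
	}
	return nil
}
```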
Here is a simple cluster to create; I am creating it across two AZs:
aws ec2 describe-availability-zones --region us-east-2 | grep ZoneName
"ZoneName": "us-east-2a"
"ZoneName": "us-east-2b"
"ZoneName": "us-east-2c"AWS settings
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-2

Kops settings
export KOPS_CLUSTER_NAME=seizadi.soheil.belamaric.com
export KOPS_STATE_STORE=s3://kops.state.seizadi.infoblox.com
export VPC_ID=vpc-0a75b33895655b46a
export INTERNET_GATEWAY_ID=igw-047d4259cab6b99d2

For now we need to create the S3 state storage:
aws s3 mb s3://kops.state.seizadi.infoblox.com

Create VPC
aws ec2 create-vpc --cidr-block 172.10.16.0/16 --region ${AWS_DEFAULT_REGION}

Create IGW
aws ec2 create-internet-gateway --region ${AWS_DEFAULT_REGION}
aws ec2 attach-internet-gateway --internet-gateway-id ${INTERNET_GATEWAY_ID} --vpc-id ${VPC_ID} --region ${AWS_DEFAULT_REGION}

kops create cluster \
--name=seizadi.soheil.belamaric.com \
--state=s3://kops.state.seizadi.infoblox.com \
--ssh-public-key=kops.pub \
--vpc=${VPC_ID} \
--master-count 1 \
--master-size=t2.micro \
--node-count=2 \
--node-size=t2.micro \
--zones=us-east-2a,us-east-2b

This creates the desired state; run the following to actually build it, or you can do both in one step with 'kops create cluster --yes':

kops update cluster --yes

Then to check the status:
kops validate cluster --state=s3://kops.state.seizadi.infoblox.com --name=seizadi.soheil.belamaric.com -o json

{
  "failures": [
    {
      "type": "dns",
      "name": "apiserver",
      "message": "Validation Failed\n\nThe dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start."
    }
  ]
}

When things are OK:
kops validate cluster -o json | jq

{
  "nodes": [
    {
      "name": "ip-172-17-17-143.us-east-2.compute.internal",
      "zone": "us-east-2a",
      "role": "master",
      "hostname": "ip-172-17-17-143.us-east-2.compute.internal",
      "status": "True"
    },
    {
      "name": "ip-172-17-18-77.us-east-2.compute.internal",
      "zone": "us-east-2b",
      "role": "node",
      "hostname": "ip-172-17-18-77.us-east-2.compute.internal",
      "status": "True"
    },
    {
      "name": "ip-172-17-17-247.us-east-2.compute.internal",
      "zone": "us-east-2a",
      "role": "node",
      "hostname": "ip-172-17-17-247.us-east-2.compute.internal",
      "status": "True"
    }
  ]
}
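Here is a hedged sketch of how the operator could consume this output to populate the Cluster status; the struct and helper names are illustrative and not taken from the repository.

```go
package controller

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// validateNode matches one entry in the "nodes" array of
// `kops validate cluster -o json`.
type validateNode struct {
	Name     string `json:"name"`
	Zone     string `json:"zone"`
	Role     string `json:"role"`
	Hostname string `json:"hostname"`
	Status   string `json:"status"`
}

type validateResult struct {
	Nodes    []validateNode    `json:"nodes"`
	Failures []json.RawMessage `json:"failures"`
}

// validateCluster runs kops validate and returns the reported nodes and any
// failures. Note that kops may exit non-zero while validation is still
// failing, which this sketch surfaces as an error.
func validateCluster(name, stateStore string) (*validateResult, error) {
	out, err := exec.Command("kops", "validate", "cluster",
		"--name="+name, "--state="+stateStore, "-o", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("kops validate cluster: %w", err)
	}
	var res validateResult
	if err := json.Unmarshal(out, &res); err != nil {
		return nil, err
	}
	return &res, nil
}
```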
To delete the cluster when you are done:
kops delete cluster --yes

I created a kops container to run the commands:
docker run \
-e AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE \
-e AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
-e KOPS_CLUSTER_NAME=cluster1.soheil.belamaric.com \
-e KOPS_STATE_STORE=s3://kops.state.seizadi.infoblox.com \
soheileizadi/kops:v1.0 create cluster \
--vpc=vpc-0a75b33895655b46a \
--node-count=2 \
--master-size=t2.micro \
--node-size=t2.micro \
--ssh-key-name=seizadi_aws \
--zones=us-east-2a,us-east-2b \
--master-count 1

Then validate it with:

docker run \
-e AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE \
-e AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
-e KOPS_CLUSTER_NAME=cluster1.soheil.belamaric.com \
-e KOPS_STATE_STORE=s3://kops.state.seizadi.infoblox.com \
soheileizadi/kops:v1.0 validate cluster -o json

You need a public DNS address to make any progress, since you need to be able to reach the cluster. You might be able to get around this by running kops on an EC2 instance inside the boundary, but that makes development difficult. I started with the private DNS name cluster1.seizadi-kops.local using the kops option '--dns private', and moved to cluster1.soheil.belamaric.com for a public interface.
There is also a gossip-based discovery option instead of DNS. The only requirement to enable it is a cluster name ending in k8s.local.
The k8s nodes are EC2 instances, and kops needs SSH keys to set up access to them. It will normally find a key under ~/.ssh/id_rsa.pub, or you can point it at a specific location with the --ssh-public-key option. There is also a secret command; in the example below we create a new SSH public key called admin.
kops create secret sshpublickey admin -i ~/.ssh/id_rsa.pub \
--name k8s-cluster.example.com --state s3://example.com

There is a better option, --ssh-key-name, that was added recently; it allows you to use AWS SSH keys instead, so that the keys are managed outside of kops and are more secure. The downside is that it is not a command line parameter; it can only be set in the cluster spec, so it requires a YAML file.
You should also consider AWS Systems Manager: with it there are no SSH keys and no SSH port open on the EC2 instances, which gives a better security profile. There is an option to specify no SSH key.
We found that when developers had different versions of kops or other binaries we got unexpected behavior. To solve this we created a container with the critical elements that we could all run: soheileizadi/kops.