The mha repo contains two services, `coin-check` and `ok`, which are deployed to AWS using Terraform. It also supports local development with Kind.
The coin-check service exposes two endpoints:

- `/average`: outputs a moving average of the bitcoin price over the last `MINUTES_POLL` period (default 10 min).
- `/current`: outputs the price of bitcoin, updated every `SECONDS_POLL` period (default 10 sec).
For the average calculation, we are using a pre-allocated slice whose capacity is derived from the env vars that are set. For the current calculation, we write to the same slice. It uses FIFO behaviour in order to keep up with the polling period and maintain the same capacity throughout the life of the service.
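The fixed-capacity FIFO described above can be sketched in Go roughly as follows. The type and function names are illustrative, not the actual service code; the capacity formula assumes the window is `MINUTES_POLL` minutes sampled every `SECONDS_POLL` seconds:

```go
package main

import "fmt"

// priceBuffer is a fixed-capacity FIFO over recent price samples.
// With the defaults MINUTES_POLL=10 and SECONDS_POLL=10 the window
// holds 10*60/10 = 60 samples. (Illustrative sketch, not repo code.)
type priceBuffer struct {
	samples []float64 // pre-allocated backing slice
	next    int       // index of the slot to overwrite next
	filled  int       // number of valid samples so far
}

func newPriceBuffer(minutesPoll, secondsPoll int) *priceBuffer {
	capacity := minutesPoll * 60 / secondsPoll
	return &priceBuffer{samples: make([]float64, capacity)}
}

// add overwrites the oldest sample once the buffer is full (FIFO),
// so the capacity stays constant for the life of the service.
func (b *priceBuffer) add(price float64) {
	b.samples[b.next] = price
	b.next = (b.next + 1) % len(b.samples)
	if b.filled < len(b.samples) {
		b.filled++
	}
}

// average is the moving average over the samples seen so far.
func (b *priceBuffer) average() float64 {
	if b.filled == 0 {
		return 0
	}
	var sum float64
	for _, p := range b.samples[:b.filled] {
		sum += p
	}
	return sum / float64(b.filled)
}

func main() {
	buf := newPriceBuffer(10, 10) // default env values
	for _, p := range []float64{100, 200, 300} {
		buf.add(p)
	}
	fmt.Println(buf.average()) // 200
}
```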
| Env var | Description | Default |
|---|---|---|
| `PORT` | The port of the application. | `8080` |
| `PRICE_API` | The API used for fetching coin price information. | `https://min-api.cryptocompare.com/data/price?fsym=BTC&tsyms=USD` |
| `SECONDS_POLL` | Interval, in seconds, of polling for the current coin price. | `10` |
| `MINUTES_POLL` | Window, in minutes, of polling for the average coin price. | `10` |
The ok service exposes one endpoint:

- `/ok`: responds with a 200 status code when pinged.
| Env var | Description | Default |
|---|---|---|
| `PORT` | The port of the application. | `8081` |
There is no additional setup needed besides the AWS CLI configuration with the correct access/secret keys and installing the prerequisites.
- Initiate the cluster: `cd ./deployment/terraform/mha-cluster` and run `terraform init`, `terraform plan` and `terraform apply`.
- Deploy the k8s objects: `cd ./deployment/terraform/mha` and run `terraform init`, `terraform plan` and `terraform apply`.
Querying the service can be done at the endpoint which is output after the `terraform apply` step, shown in the output variable `k8s_service_ingress_elb`.
We are storing the `terraform.tfstate` in an AWS S3 bucket. This way, we can query the necessary information about the state of the cluster when we deploy the k8s objects. This is done with a DynamoDB entry which holds the state lock and an S3 entry with the actual file.
We are provisioning the cluster with two add-ons: `VPC-CNI`, for enabling `NetworkPolicy` rules inside our cluster, and `EBS-CSI`, for enabling volume support.
RBAC is enabled in the cluster.
We are using a private AWS ECR registry for our docker images. We are fetching the `registry_password` secret from AWS Secrets Manager in order to configure the `imagePullSecret`. This assumes that we have a secret created beforehand with the necessary credentials.
We are querying the S3 bucket that holds the `terraform.tfstate` to fetch cluster information for the deployment.
Fully automated deployment process.
On PR, we are running the `terraform plan` and showcasing its output in the PR comments.
On merge, we are pushing the docker containers to our private registry in AWS ECR and running the `terraform apply` automatically. Therefore, the deployment will be pushed to AWS on merge.
The `terraform apply` is run sequentially on merge. First, it checks that the workflow that deployed the services to ECR has completed successfully. If it has, it deploys the cluster. After the cluster is deployed, the k8s deployment is triggered and applied as well.
We have a Terraform Cloud workspace set up with the AWS Access/Secret Keys. We provisioned the repo with the necessary secrets in order to interact with Terraform Cloud.
- Kubernetes
- Docker
- Golang
- Kind
- AWS CLI (optional) - if you want to pull the images from the private ECR repo
- Terraform (optional) - if you want to apply part of the production setting to the local kind cluster
- tfk8s (optional) - if you want to generate HCL files from the yaml charts
The docker configuration is found in `./deployment/docker/<service_name>`.
After making changes there, you can test the service by running `make docker-coin-check`, which will expose the services on `localhost:8000` and `localhost:8001` respectively.
The k8s/kind configuration is found in `./deployment/k8s/<service_name>` and the kind cluster configuration in `./deployment/k8s/kind-cluster.yaml`.
The terraform configuration is found in `./deployment/terraform/<service_name>-local`.
If you configured the cloud cluster, everything should work with the usual `init`, `plan` and `apply` commands.
The cluster is configured to support an nginx ingress and hostname resolution.
RBAC is enabled by default in Kind. The cluster configuration is found at `./deployment/k8s/kind-cluster.yaml`.
- Run `make kind-local-up`, which will create a local custom k8s cluster, build the docker service containers, load them into kind and create an nginx-ingress-controller.
  The `coin-check` service will have its port exposed on `localhost:30000` and the `ok` service will have its port exposed on `localhost:30001`. If you set up the `hostname` in the ingress, the services can be pinged at `hello.coin-check.com` and `hello.ok.com`.
OPTIONAL STEPS:

- If you are pulling the docker images from the AWS ECR Registry, you will need to run `make aws-log-in` and afterwards `make aws-imagePull`. The `~/.aws/credentials` file needs to be set beforehand.
- If you are using a `hostname` in the ingress, you will need to run `make kind-local-ingress-host` and add the IP to your `/etc/hosts`.
- If you are deploying locally using Terraform, you will need to set the `host`, `client_certificate`, `cluster_ca_certificate` and `client_key` in a `secrets.tfvars` file. You provide the `.tfvars` file to the tf commands with the flag `-var-file=<file>.tfvars`. You can see the variables by running `make kind-cred-info`.
- If you pull the images from ECR in the Terraform configuration, you will need to add the `registry_server`, `registry_username`, `registry_password` and `registry_email` to the `secrets.tfvars` file.
*NetworkPolicy support for Kind is not in scope: kubernetes-sigs/kind#842
TODOs
- Fix gh workflow that deals with terraform apply. The gh workflow for terraform plan and pushing to ECR works. The gh workflow succeeds when applying the cluster configuration but fails on the k8s one because it cannot read the region from the tf state. Locally there is no issue in building/deploying everything.
- Less hardcoded variables in the terraform configurations.
- Less hardcoded variables in the k8s configuration. Maybe opt for helm charts.
- Clearer k8s deployment with proper namespaces.