This was a hard course for me, as I don't have much experience with Go, Kubernetes, gRPC, or unit testing. This file keeps track of the materials, keywords, comments, and frameworks that I learned through the course.
- migrate folder -> up schema: SQL queries that build up the database, generated from dbdiagram.io
- migrate folder -> down schema: drops all the existing tables, written by the developer
- the migrate folder was generated by the migrate tool, which can be installed for the shell or as a Go package
Migrate
migrate create -ext sql -dir db/migrate -seq init_schema
-ext extension of the file - sql
-dir directory
-seq generate sequential version number
migrate -path db/migrate -database "postgresql://root:qwer1234@localhost:5432/simple_bank?sslmode=disable" -verbose up
-verbose print the logs
up: migrate up
- create the database using the postgres:12-alpine image
- interact using the system command line, starting with docker exec
- or interact via the CLI provided by the postgres image (psql)
docker run --name postgres12 -p 5432:5432 -e POSTGRES_USER=root -e POSTGRES_PASSWORD=qwer1234 -d postgres:12-alpine
docker exec -it postgres12 /bin/sh
docker exec -it postgres12 psql -U root simple_bank
- the Make tool can be installed for the system CLI
- shorten commands by defining them in a Makefile
- call with: make <command>
migrate the database down and up with one command -> create/drop the database with one command -> create the database container with one command -> automation
makes later scalability and deployment easy.
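The shortened commands might live in a Makefile roughly like this (the target names are my own guesses; the real file may differ, but the commands are the ones noted above):

```makefile
postgres:
	docker run --name postgres12 -p 5432:5432 -e POSTGRES_USER=root -e POSTGRES_PASSWORD=qwer1234 -d postgres:12-alpine

createdb:
	docker exec -it postgres12 createdb --username=root --owner=root simple_bank

dropdb:
	docker exec -it postgres12 dropdb simple_bank

migrateup:
	migrate -path db/migrate -database "postgresql://root:qwer1234@localhost:5432/simple_bank?sslmode=disable" -verbose up

migratedown:
	migrate -path db/migrate -database "postgresql://root:qwer1234@localhost:5432/simple_bank?sslmode=disable" -verbose down

.PHONY: postgres createdb dropdb migrateup migratedown
```

Then e.g. `make migrateup` replaces the full migrate invocation.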
- converting schema to table entity (read on demand: if a table is selected by a query, sqlc looks for the schema that builds this table under the schema path, then converts it to an entity object)
- db.go is a dependency object of the database object; it's capable of running the queries generated into account.sql.go
- account.sql.go contains Go query functions converted from the SQL file under the query folder
- sqlc generate is driven by the sqlc.yaml file defined in the working directory
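A rough sketch of what the sqlc-generated code looks like (names and fields are simplified; the real generated file contains many more queries):

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
)

// DBTX is the interface sqlc generates so the same Queries type can run on
// either a *sql.DB or a *sql.Tx.
type DBTX interface {
	QueryRowContext(ctx context.Context, query string, args ...interface{}) *sql.Row
}

// Queries wraps a connection (or transaction) and exposes one method per SQL query.
type Queries struct{ db DBTX }

func New(db DBTX) *Queries { return &Queries{db: db} }

// Account is the entity generated from the accounts table schema.
type Account struct {
	ID       int64
	Owner    string
	Balance  int64
	Currency string
}

// createAccount mirrors the query sqlc reads from the query folder.
const createAccount = `INSERT INTO accounts (owner, balance, currency)
VALUES ($1, $2, $3)
RETURNING id, owner, balance, currency`

func (q *Queries) CreateAccount(ctx context.Context, owner string, balance int64, currency string) (Account, error) {
	row := q.db.QueryRowContext(ctx, createAccount, owner, balance, currency)
	var a Account
	err := row.Scan(&a.ID, &a.Owner, &a.Balance, &a.Currency)
	return a, err
}

func main() {
	fmt.Println(createAccount) // the generated method simply wraps this SQL
}
```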
_ "github.com/lib/pq" -> blank import registers the Postgres driver for database/sql
go mod tidy -> cleans up go.mod (e.g., drops the // indirect marker once the package is actually imported)
- write test cases
- each test case should be independent of the others
- use math/rand and a Makefile go test target to automate the process
- fastify (in the JS world) can validate certain arguments similarly
go test -v -cover ./...
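A minimal sketch of the random helpers that keep test cases independent of each other (the helper names RandomInt/RandomOwner follow the course's style but are written from memory):

```go
package main

import (
	"fmt"
	"math/rand"
	"strings"
)

const alphabet = "abcdefghijklmnopqrstuvwxyz"

// RandomInt returns a random int64 in the inclusive range [min, max].
func RandomInt(min, max int64) int64 {
	return min + rand.Int63n(max-min+1)
}

// RandomOwner returns a random 6-letter owner name, so every test
// creates its own account instead of sharing fixtures.
func RandomOwner() string {
	var sb strings.Builder
	for i := 0; i < 6; i++ {
		sb.WriteByte(alphabet[rand.Intn(len(alphabet))])
	}
	return sb.String()
}

func main() {
	fmt.Println(RandomInt(0, 1000), RandomOwner())
}
```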
- create a transfer record with amount = 10
- create an account entry for account 1 with amount = -10
- create an account entry for account 2 with amount = +10
- subtract 10 from account 1's balance
- add 10 to account 2's balance
a transaction should:
- provide a reliable and consistent unit of work even in case of system failure
- provide isolation between programs that access the database concurrently
- atomicity
- consistency
- isolation
- durability
- deadlock can result from operations in the database that reference the same row, even without changing it
- a keyword like FOR NO KEY UPDATE can prevent deadlocks caused by such shared references
- transactions should be wrapped
- deadlock debugging should follow a test-driven development (TDD) approach
- deadlocks can also be caused by concurrent interleaved exchanges/updates of data, which can be prevented in application logic (e.g., by always updating rows in a consistent order)
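A minimal illustration of the consistent-ordering idea: always apply the two balance updates smaller-ID-first, so two concurrent transfers between the same pair of accounts can never wait on each other. The function name and shape are my own; real code would run UPDATE queries at each step:

```go
package main

import "fmt"

// orderedUpdates returns the (accountID, delta) pairs in a fixed order,
// smaller account ID first, regardless of transfer direction.
func orderedUpdates(id1, amount1, id2, amount2 int64) [][2]int64 {
	if id1 < id2 {
		return [][2]int64{{id1, amount1}, {id2, amount2}}
	}
	return [][2]int64{{id2, amount2}, {id1, amount1}}
}

func main() {
	// a transfer of 10 from account 2 to account 1 still touches account 1 first
	fmt.Println(orderedUpdates(2, -10, 1, +10)) // [[1 10] [2 -10]]
}
```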
Read phenomena
- dirty read
- non-repeatable read
- phantom read
- serialization anomaly
4 standard isolation levels
- read uncommitted
- read committed
- repeatable read
- serializable
- set up the Go environment
- re-run tests on changes to main
- use a postgres container as a service
- add the Migrate package to the GitHub Action
- run tests after running the migrations
Takeaway? GitHub Actions is a very good deployment and CI tool that separates workflows into steps.
- the Server struct takes the db object and the gin engine (as pointers); this is more efficient and reduces memory usage, especially when dealing with large data structures
- the server calls handler methods whose main arguments are the gin.Context and the URL arguments
- var req createAccountRequest receives the arguments
- ShouldBindJSON(&req), ShouldBindUri(&req), and ShouldBindQuery(&req) destructure the arguments from the request
- http.Status* constants indicate the server status
SERVER_ADDRESS=0.0.0.0:8081 make server
- basically, it's the dotenv of JavaScript
- it has features like live watching and remote variable changing
- independent tests
- faster tests
- 100% coverage
- achieved by using a fake in-memory DB
- or DB stubs: gomock
Gin and Viper added; gomock added: go install github.com/golang/mock/mockgen@v1.6.0
go get github.com/go-playground/validator/v10
mockgen db/mock -> generates the struct and functions for testing against all existing db entities
use a TestCases struct array to run the tests one by one
basically, mock creates a controller, store, and a local recorder for sending requests and receiving responses without actually hosting a server; it can fake a database error or server error for the handler under test, so the actual response can be compared with the expected one.
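The TestCases-array style can be sketched with a plain function standing in for the mocked store and recorder (everything here is illustrative; the real tests build a gomock store and an httptest recorder per case):

```go
package main

import "fmt"

// validCurrency is a stand-in for the handler logic under test.
func validCurrency(c string) bool {
	switch c {
	case "USD", "EUR", "CAD":
		return true
	}
	return false
}

func main() {
	// Table-driven style: each case carries a name, an input, and an
	// expectation; the loop runs them one by one.
	testCases := []struct {
		name     string
		currency string
		want     bool
	}{
		{name: "OK", currency: "USD", want: true},
		{name: "Unsupported", currency: "XYZ", want: false},
	}
	for _, tc := range testCases {
		got := validCurrency(tc.currency)
		fmt.Printf("%s: got=%v want=%v\n", tc.name, got, tc.want)
	}
}
```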
migrate one version up or down with the migrate module. A constraint is better than a unique key that consists of two elements.
when a handler can receive an object containing a more complicated column (think of an enum with possibly 100 values), the validator package can modularize this check by defining the validation function outside the data entity struct
nearly the same procedure as I implemented with ExpressJS. Unit testing is a bit funky when verifying the token for login; look it up for more detail.
users should only have access to their own information: middleware narrows the user's information into the payload, and the payload is then used as a query parameter in the handler function.
new branch -> merge into master after being reviewed and tested
git checkout -b <newBranchName>
git push origin ft/docker
FROM: base image
WORKDIR: working directory inside the image
COPY . . (copy everything from the current folder into the current working directory inside the image)
RUN go build -o main main.go (build the project into a binary file called main)
EXPOSE 8080 (inform Docker that the container listens on this port)
CMD ["/app/main"]: run the executable file
docker build -t simplebank:latest .
the trailing . specifies the build context directory. At runtime all we need is the binary, so the final image only needs to contain the binary (multi-stage build):
FROM golang:1.20-alpine AS builder
WORKDIR /app
COPY . .
RUN go build -o main main.go
# Run stage
FROM alpine:3.13 # basic Linux image for the run stage
WORKDIR /app
COPY --from=builder /app/main .
# copy the binary from the builder stage (/app/main) into this image's working directory
EXPOSE 8080
CMD ["/app/main"]
docker run --name simplebank -p 8080:8080 simplebank:latest
docker run --name simplebank -p 8080:8080 -e GIN_MODE=release simplebank:latest
docker container inspect
docker container inspect simplebank
docker run --name simplebank -p 8080:8080 -e GIN_MODE=release -e DB_SOURCE="postgresql://root:qwer1234@172.17.0.3:5432/simple_bank?sslmode=disable" simplebank:latest
docker network ls # list networks
docker network inspect bridge
docker network --help
create our own network
docker network create bank-network
docker network connect --help
docker network connect bank-network postgres12
docker run --name simplebank --network bank-network -p 8080:8080 -e GIN_MODE=release simplebank:latest
a container can connect to multiple networks at the same time; in this case, the bank-network we created and the default bridge network
In the Dockerfile, we need to download the necessary dependencies, install them in the builder stage, and copy them over to the run stage; in this case, the Migrate tool.
This also raises a timing issue, as migrate might run before the database is up; use the depends_on parameter to specify the service order in the docker-compose file.
However, it might still fail since the API service might start before migrate up takes place; hence tools like wait-for for alpine-based images.
after redefining the entrypoint in the docker-compose file, the CMD and ENTRYPOINT in the Dockerfile are ignored.
❯ git checkout -b ft/ci-build-image
Switched to a new branch 'ft/ci-build-image'
simple-bank on ft/ci-build-image [$] via 🐹 v1.20.3
❯ git push origin ft/ci-build-image
The checkout-code step is necessary for setting up the Go environment
after successfully extracting the response from Secrets Manager with jq, I put the command into the GitHub Action as a run step; therefore, every time before the Action builds the image, the password is replaced by the response from AWS Secrets Manager
github making new branch guide
git checkout . -> discard changes in the working directory
git checkout -b ft/secrets_manager -> new branch created named *ft/secrets_manager* , and switched to this new branch
git status -> check difference
git add .
git commit -m "df"
git push origin ft/secrets-manager -> origin specifies the remote repo; git push sends local changes to the remote repo
# origin is the default name of the remote repo
aws ecr get-login-password
aws ecr get-login-password | docker login --username AWS --password-stdin 927683992336.dkr.ecr.us-east-1.amazonaws.com
docker pull 927683992336.dkr.ecr.us-east-1.amazonaws.com/simple-bank:b951295ddc1d1d7e31a6b3ea8b91e685ae008852
the link here stands for the image URI; note that in the first link the 'simple-bank' image part is removed, because that form refers to the registry
It wouldn't run, since start.sh uses $DB_SOURCE, which can only be obtained from a GitHub Action run command that extracted the credential with jq
jq-ing the credential is permitted in the GitHub cloud because the login step was executed beforehand
source app.env
echo $DB_SOURCE
git checkout main -> switch to main branch
git pull !!! -> update local main first
git checkout -b ft/loadenv
git status
git add .
git commit -m "-"
git push origin ft/load_env
- app.env is either needed in the repo
- or it's generated on the fly after the GitHub Action takes care of it (how? AWS login first, then AWS Secrets Manager retrieves the credential -> jq'd into app.env)
- we can change the credential as much as we want; a deploy re-run of the GitHub Action will pick up the new value
- Worker Node:
- kubelet agent: make sure containers run inside pods
- container runtimes: docker
- kube-proxy: maintain network rules, allow communication with pods
- Control plane - master node
- manage the worker nodes and pods of the cluster
- API server, the frontend of the control plane, exposes the k8s API to interact with all other components of the cluster
- persistence store etcd -> backing store for all cluster data
- scheduler -> watches newly created pods with no assigned node and selects nodes for them to run on
- controller manager, a combination of several controllers, such as:
- node controller -> notices and responds when nodes go down
- job controller -> watches for jobs and creates pods to run them
- endpoint controller -> populates the Endpoints object, i.e., joins services and pods
- service accounts and token controller -> creates default accounts and API access tokens for new namespaces
- cloud controller manager -> link cluster to cloud provider's api
- node controller -> check with cloud
- route controller -> setting up routes
- service controller -> creating, updating, deleting cloud load balancer
the master node is created by AWS (EKS)
default config file at ~/.kube/config
aws eks update-kubeconfig --name simple-bank --region us-east-2
failed because I hadn't granted the user permission for this operation -> go to IAM and create an inline policy for EKSFullAccess
aws configure
aws eks update-kubeconfig --name simple-bank --region us-east-2
kubectl config use-context arn:aws:eks:us-east-2:927683992336:cluster/simple-bank
use kubectl config use-context to select the context I want to connect to
kubectl cluster-info
only the Amazon cluster creator can view it; otherwise we have to add permissions
aws sts get-caller-identity
get current aws user status
cat \.aws\credentials
kubectl get pods
kubectl cluster-info
export AWS_PROFILE=github
# set in windows
after declaring an aws-auth.yaml file under the working directory, we then need to run kubectl apply -f eks/aws-auth.yaml
so that the user permissions can be mapped onto the github-ci user
setx /m AWS_PROFILE github
echo $Env:AWS_PROFILE
kubectl cluster-info
kubectl get service
kubectl get pods
choco install k9s
type : to enter command mode; ns + Enter shows all available namespaces of the cluster
:service :pod :cj (cronjob) :node :configmap :deployments; press d to describe the selected target
kubectl apply -f eks/deployment.yaml
use kubectl to deploy the images onto the EKS cluster by defining a deployment.yaml file
Still can't deploy after 1 pod was hosted under the node group. How?
Since each node supports only 4 pods and all 4 are already occupied by system pods, no pod slot is available for our project image.
the number of pods allowed to run on an EC2 instance is based on the number of elastic network interfaces (ENIs) and the number of IPs per ENI allowed on that instance
the number of pods can be derived from a calculation (inputs: # of ENIs and # of IPs per ENI)
ENIs differ by instance type -> 2 ENIs for t3.micro
delete the old node group -> create a new one with t3.small
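The calculation can be sketched directly; the formula is ENIs × (IPs per ENI − 1) + 2, from the AWS EKS documentation, shown here with the t3 instance values:

```go
package main

import "fmt"

// maxPods computes the EKS pod limit for an instance type:
// ENIs * (IPv4 addresses per ENI - 1) + 2
func maxPods(enis, ipsPerENI int) int {
	return enis*(ipsPerENI-1) + 2
}

func main() {
	fmt.Println("t3.micro:", maxPods(2, 2)) // 4 pods, all eaten by system pods
	fmt.Println("t3.small:", maxPods(3, 4)) // 11 pods, room for our deployment
}
```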
kubectl get pods
aws secretsmanager get-secret-value --secret-id simple_bank --query SecretString --output text
aws secretsmanager get-secret-value --secret-id simple_bank --query SecretString --output text | jq -r 'to_entries|map("\(.key)=\(.value)")|.[]' > app.env
kubectl logs simple-bank-api-deployment-788894c9bd-l8tqq
kubectl apply -f eks/deployment.yaml
The service is now hosted inside a pod's container; the question now is how we send API requests to it.
in order to route traffic from the outside world to the pod, we need to deploy a Service kubernetes object
a Service is an abstraction object; load balancing is handled automatically, and pods of the same deployment share the same internal DNS
kubectl apply -f eks/service.yaml
this gives us the internal service IP; if we don't specify anything, the default type in service.yaml is ClusterIP
nslookup ab2ec9f88acf64693bdf81257ace1726-2146272261.us-east-2.elb.amazonaws.com # look up the external IP from the service
docker run -d -p 8080:8080 36ee2e1cddcb
kubectl describe pod simple-bank-api-deployment-564cb478dc-v7hl7
right now, I can only send requests to a81f24223a09c480d8ac182a022fef91-141979027.us-east-2.elb.amazonaws.com:80 instead of http://a81f24223a09c480d8ac182a022fef91-141979027.us-east-2.elb.amazonaws.com
missing an inbound rule: an HTTP rule that allows all sources on port 80, the load balancer port.
Again, streaming logs failed: Get "https://172.31.45.68:10250/containerLogs/default/simple-bank-api-deployment-564cb478dc-v7hl7/simple-bank-api?follow=true&sinceSeconds=300&tailLines=100&timestamps=true": dial tcp 172.31.45.68:10250: i/o timeout
can't get a response from the pods? why? the TCP inbound rule isn't set for the security group. Need to set an All TCP rule for EC2 over the full port range.
:nodes :services
Deploy the service onto Kubernetes, where each copy of the service (replica) is wrapped in a pod, and multiple pods can be placed inside one node; one node is one underlying compute instance. For t3.small, the maximum capacity is 11 pods, 4 of which are system-side. The cluster can read a pod's logs via TCP port 10250, which must be allowed by an inbound rule.
The Service also maps the container port to the service port; the load balancer then distributes incoming requests onto the different pods that share the same selector.
aws-auth.yaml maps the permissions from the default user (the cluster creator) onto the github-ci user, since the cluster creator has the rights to deploy services onto the cluster it created.