thua101/simple-bank

why this README file?

This was a hard course for me, as I don't have much experience with Go, Kubernetes, gRPC, or unit testing. This file keeps track of the materials, keywords, comments, and frameworks that I learned through the course.

Migrate

  1. migrate folder -> up schema: SQL queries that build up the database, generated from dbdiagram.io
  2. migrate folder -> down schema: drops all existing tables, written by the developer.
  3. the migrate folder was generated by the migrate tool, which can be installed as a shell CLI or as a Go package
Migrate
migrate create -ext sql -dir db/migrate -seq init_schema
-ext: extension of the generated files (sql)
-dir: target directory
-seq: generate a sequential version number

migrate -path db/migrate -database "postgresql://root:qwer1234@localhost:5432/simple_bank?sslmode=disable" -verbose up

-verbose: print the logs
up: apply all up migrations

Docker

  1. create the database using the postgres:12-alpine image
  2. interact using the system command line, starting with docker exec
  3. or interact via the CLI (psql) provided by the postgres image
docker run --name postgres12 -p 5432:5432 -e POSTGRES_USER=root -e POSTGRES_PASSWORD=qwer1234 -d postgres:12-alpine

docker exec -it postgres12 /bin/sh
docker exec -it postgres12 psql -U root simple_bank

Makefile

  1. the make tool can be installed as a system CLI
  2. shorten commands by defining them in a Makefile
  3. call them with make <command>

what are we trying to achieve so far?

migrate the database up and down with one command -> create/drop the database with one command -> create the database container with one command -> automate everything

This makes later scalability and deployment easier.

SQLC

  1. converts the schema into table entities (read on demand: if a table is selected by a query, sqlc looks for the schema that builds that table under the schema path and converts it into an entity struct)
  2. db.go is the database dependency object; it is capable of running the queries generated into account.sql.go
  3. account.sql.go contains the Go versions of the queries written in the SQL files under the query folder
  4. sqlc generate is driven by the sqlc.yaml file defined in the working directory

if we don't call a module's functions directly, the Go tooling might remove it; keep it with a blank import

_ "github.com/lib/pg"
go mod tidy -> remove the /indirect tag
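
To tie the points above together, here is a minimal sketch of how the sqlc-generated Queries object might be wired up in main. The module path and the CreateAccountParams fields are assumptions about how the project is structured, not copied from it.

package main

import (
    "context"
    "database/sql"
    "log"

    _ "github.com/lib/pq" // blank import keeps the Postgres driver registered

    db "github.com/thua101/simple-bank/db/sqlc" // assumed path of the sqlc-generated package
)

func main() {
    // connect to the same database the migrations were run against
    conn, err := sql.Open("postgres", "postgresql://root:qwer1234@localhost:5432/simple_bank?sslmode=disable")
    if err != nil {
        log.Fatal("cannot connect to db:", err)
    }

    queries := db.New(conn) // Queries wraps the DBTX interface defined in db.go

    // CreateAccount / CreateAccountParams are generated from the SQL under the query folder
    account, err := queries.CreateAccount(context.Background(), db.CreateAccountParams{
        Owner:    "tom",
        Balance:  100,
        Currency: "USD",
    })
    if err != nil {
        log.Fatal("cannot create account:", err)
    }
    log.Println("created account", account.ID)
}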

Go Testify

  1. write test cases
  2. each test case should be independent of the others
  3. use math/rand and a Makefile go test target to automate the process
  4. testify can validate certain arguments (require/assert); see the sketch below
go test -v -cover ./...
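
A minimal sketch of what a testify-based test looks like. RandomInt is a hypothetical helper, similar in spirit to the util functions used in the course.

package util

import (
    "math/rand"
    "testing"

    "github.com/stretchr/testify/require"
)

// RandomInt is a hypothetical helper; the course's util package has similar functions.
func RandomInt(min, max int64) int64 {
    return min + rand.Int63n(max-min+1)
}

func TestRandomInt(t *testing.T) {
    n := RandomInt(0, 1000)

    // require fails the test immediately if a check does not hold
    require.GreaterOrEqual(t, n, int64(0))
    require.LessOrEqual(t, n, int64(1000))
}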

Transaction

  1. create a transfer record with amount = 10

  2. create an account entry for account 1 with amount = -10

  3. create an account entry for account 2 with amount = +10

  4. subtract 10 from account 1's balance

  5. add 10 to account 2's balance

  6. a transaction provides a reliable and consistent unit of work even in case of system failure

  7. it provides isolation between programs accessing the database concurrently (see the execTx sketch below)
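
Steps 1-5 only stay consistent if they run as one atomic unit, so they are wrapped in a single database transaction. Below is a minimal sketch of such a wrapper, written from memory of the course pattern; Store and Queries assume the sqlc-generated code from the previous section.

package db

import (
    "context"
    "database/sql"
    "fmt"
)

// Store wraps the sqlc Queries with a *sql.DB so it can start transactions.
type Store struct {
    *Queries
    db *sql.DB
}

// execTx runs fn inside one database transaction and rolls back on any error,
// giving the five transfer steps above their all-or-nothing behaviour.
func (store *Store) execTx(ctx context.Context, fn func(*Queries) error) error {
    tx, err := store.db.BeginTx(ctx, nil)
    if err != nil {
        return err
    }

    q := New(tx) // the generated New accepts any DBTX, so a *sql.Tx works here
    if err := fn(q); err != nil {
        if rbErr := tx.Rollback(); rbErr != nil {
            return fmt.Errorf("tx err: %v, rollback err: %v", err, rbErr)
        }
        return err
    }
    return tx.Commit()
}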

ACID properties

  1. atomicity
  2. consistency
  3. isolation
  4. durability

Deadlock

  1. deadlocks can result from operations in the database that reference the same rows, even without changing them
  2. keywords like FOR NO KEY UPDATE can prevent deadlocks that come from such shared references
  3. the operations should be wrapped in a transaction
  4. deadlock debugging should follow a test-driven development (TDD) approach
  5. deadlocks can also be caused by concurrent, interleaved exchanges/updates of data, which can be prevented in application logic (see the sketch below)
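
A minimal sketch of the "prevent it in logic" idea from point 5: always update the two accounts in the same order so that concurrent transfers acquire row locks consistently. This continues in the same package db as the execTx sketch above; AddAccountBalance and Account are assumed to be sqlc-generated, and the helper name addMoney is illustrative.

package db

import "context"

// addMoney applies two balance updates inside the same transaction (q).
// Callers always pass the account with the smaller ID first, so two
// concurrent transfers between the same accounts lock rows in the same
// order and cannot deadlock on each other.
func addMoney(
    ctx context.Context,
    q *Queries,
    accountID1, amount1 int64,
    accountID2, amount2 int64,
) (account1, account2 Account, err error) {
    account1, err = q.AddAccountBalance(ctx, AddAccountBalanceParams{ID: accountID1, Amount: amount1})
    if err != nil {
        return
    }
    account2, err = q.AddAccountBalance(ctx, AddAccountBalanceParams{ID: accountID2, Amount: amount2})
    return
}

// inside the transfer transaction, pick the call order based on the IDs:
// if fromID < toID {
//     fromAcc, toAcc, err = addMoney(ctx, q, fromID, -amount, toID, +amount)
// } else {
//     toAcc, fromAcc, err = addMoney(ctx, q, toID, +amount, fromID, -amount)
// }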

Consistency

  • dirty read
  • non-repeatable read
  • phantom read
  • serialization anomaly

4 standard isolation levels

  • read uncommitted
  • read committed
  • repeatable read
  • serializable

GitHub Actions

  1. set up the Go environment
  2. re-run tests on changes to main
  3. use a postgres container as a service
  4. add the migrate package to the GitHub Action
  5. run the tests after running the migrations

Takeaway? GitHub Actions is a very good deployment and CI tool that separates workflows into steps.

Gin Get Request

  1. the server struct takes the db object and the gin engine (as pointers). This is more efficient and reduces memory usage, especially when dealing with large data structures.
  2. the server registers handler methods; the main arguments of the methods are the gin.Context and the URL arguments
  3. var req createAccountRequest takes in the arguments
  4. ShouldBindJSON(&req), ShouldBindUri(&req), ShouldBindQuery(&req) destructure the arguments from the request (see the sketch below)
  5. the http.Status* constants indicate the response status
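
A minimal sketch of points 1-5 in one handler, assuming the sqlc Store from earlier and the same assumed module path; the request fields and currency list are illustrative.

package api

import (
    "net/http"

    "github.com/gin-gonic/gin"

    db "github.com/thua101/simple-bank/db/sqlc" // assumed module path
)

// Server holds shared dependencies as pointers (point 1).
type Server struct {
    store  *db.Store
    router *gin.Engine
}

type createAccountRequest struct {
    Owner    string `json:"owner" binding:"required"`
    Currency string `json:"currency" binding:"required,oneof=USD EUR"`
}

func (server *Server) createAccount(ctx *gin.Context) {
    var req createAccountRequest
    // ShouldBindJSON destructures the JSON body into req and runs the binding tags (points 3-4)
    if err := ctx.ShouldBindJSON(&req); err != nil {
        ctx.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
        return
    }

    account, err := server.store.CreateAccount(ctx, db.CreateAccountParams{
        Owner:    req.Owner,
        Currency: req.Currency,
        Balance:  0,
    })
    if err != nil {
        ctx.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }

    // http.Status* constants report the server status (point 5)
    ctx.JSON(http.StatusOK, account)
}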

SERVER_ADDRESS=0.0.0.0:8081 make server

Viper

  1. basically, it's Go's equivalent of JavaScript's dotenv
  2. it has extra features like live config watching and remote variable changes
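
A minimal sketch of loading app.env with Viper, following the usual pattern; the Config fields (DB_SOURCE, SERVER_ADDRESS) mirror the variables used elsewhere in these notes and are otherwise assumptions.

package util

import "github.com/spf13/viper"

// Config holds the values read from app.env or from real environment variables.
type Config struct {
    DBSource      string `mapstructure:"DB_SOURCE"`
    ServerAddress string `mapstructure:"SERVER_ADDRESS"`
}

// LoadConfig reads configuration from the file at path; real environment
// variables (e.g. SERVER_ADDRESS=0.0.0.0:8081 make server) take precedence.
func LoadConfig(path string) (config Config, err error) {
    viper.AddConfigPath(path)
    viper.SetConfigName("app") // looks for app.env
    viper.SetConfigType("env")

    viper.AutomaticEnv()

    if err = viper.ReadInConfig(); err != nil {
        return
    }
    err = viper.Unmarshal(&config)
    return
}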

mock database testing

  1. independent tests

  2. faster tests

  3. 100% coverage

  4. by using a fake DB in memory

  5. or DB stubs: GoMock

Gin and Viper added. gomock added with: go install github.com/golang/mock/mockgen@v1.6.0

go get github.com/go-playground/validator/v10

mock testing with mockgen

mockgen db/mock -> generates the mock structs and functions for testing against all the existing db entities

use a TestCases struct array (table-driven tests) to run the cases one by one

Basically, mock testing creates a gomock controller, a mock store, and a local recorder for sending requests and receiving responses without actually hosting a server. It can fake database errors or server errors for the real handler to receive, and then the actual response of the real handler is compared with the expected response.
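
A minimal sketch of that flow for a hypothetical GET /accounts/:id endpoint. The mockdb package is what mockgen would generate into db/mock; NewServer and the router field are assumptions about the server wiring (this also assumes the store dependency has been turned into an interface so the mock can satisfy it).

package api

import (
    "fmt"
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/golang/mock/gomock"
    "github.com/stretchr/testify/require"

    db "github.com/thua101/simple-bank/db/sqlc"     // assumed sqlc package path
    mockdb "github.com/thua101/simple-bank/db/mock" // assumed mockgen output path
)

func TestGetAccountAPI(t *testing.T) {
    account := db.Account{ID: 1, Owner: "tom", Balance: 100, Currency: "USD"}

    ctrl := gomock.NewController(t)
    defer ctrl.Finish()

    // the mock store stands in for the database: no Postgres needed, and errors can be injected
    store := mockdb.NewMockStore(ctrl)
    store.EXPECT().
        GetAccount(gomock.Any(), gomock.Eq(account.ID)).
        Times(1).
        Return(account, nil)

    server := NewServer(store)         // assumed constructor
    recorder := httptest.NewRecorder() // records the response without hosting a real server

    request, err := http.NewRequest(http.MethodGet, fmt.Sprintf("/accounts/%d", account.ID), nil)
    require.NoError(t, err)

    // drive the real handler and compare the actual response with the expected one
    server.router.ServeHTTP(recorder, request)
    require.Equal(t, http.StatusOK, recorder.Code)
}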

migrate down/up 1

migrate one version up or down with the migrate tool. A constraint is better than a unique key composed of two columns

validator

when a controller can potentially receive an object that contains a more complicated column (think of an enum with possibly 100 values), the validator package modularizes this checking by defining the validation function outside the data entity struct (see the sketch below)
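
A minimal sketch of such a custom validator for a currency field, using the validator/v10 + Gin binding pattern; the supported-currency list and names are illustrative.

package api

import (
    "github.com/gin-gonic/gin/binding"
    "github.com/go-playground/validator/v10"
)

// isSupportedCurrency is the reusable check that lives outside the request structs.
func isSupportedCurrency(currency string) bool {
    switch currency {
    case "USD", "EUR", "CAD":
        return true
    }
    return false
}

// validCurrency adapts the check to the validator.Func signature.
var validCurrency validator.Func = func(fl validator.FieldLevel) bool {
    if currency, ok := fl.Field().Interface().(string); ok {
        return isSupportedCurrency(currency)
    }
    return false
}

// registering it once lets any request struct use `binding:"currency"` in its tags
func registerValidators() {
    if v, ok := binding.Validator.Engine().(*validator.Validate); ok {
        v.RegisterValidation("currency", validCurrency)
    }
}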

bcrypt hashing + JWT

nearly the same procedure as I implemented with ExpressJS. Unit testing is a bit funky when verifying the token for login; look it up for more detail.
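
A minimal sketch of the bcrypt half, using golang.org/x/crypto/bcrypt; the helper names are illustrative and the JWT part is left out.

package util

import (
    "fmt"

    "golang.org/x/crypto/bcrypt"
)

// HashPassword hashes the plaintext password with a per-password random salt.
func HashPassword(password string) (string, error) {
    hashed, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
    if err != nil {
        return "", fmt.Errorf("failed to hash password: %w", err)
    }
    return string(hashed), nil
}

// CheckPassword compares a login attempt against the stored hash; it returns
// an error when they do not match, which is what the login handler checks.
func CheckPassword(password, hashedPassword string) error {
    return bcrypt.CompareHashAndPassword([]byte(hashedPassword), []byte(password))
}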

Authentication + user privileges

users should only have access to their own information. A middleware narrows the user's identity into the request payload, and the payload is then used as a query parameter inside the handler function (see the sketch below).
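
A minimal sketch of such a middleware: it verifies the Bearer token and stores the payload in the Gin context so handlers only query the authenticated user's rows. The TokenMaker interface, Payload struct, and key names are assumptions standing in for the course's token maker.

package api

import (
    "errors"
    "net/http"
    "strings"

    "github.com/gin-gonic/gin"
)

// Payload is the data carried inside a verified token (assumed shape).
type Payload struct {
    Username string
}

// TokenMaker is a minimal stand-in for the JWT/PASETO maker used in the course.
type TokenMaker interface {
    VerifyToken(token string) (*Payload, error)
}

const authorizationPayloadKey = "authorization_payload"

// authMiddleware narrows the request down to an authenticated user: handlers
// later read the payload from the context and use e.g. payload.Username in their queries.
func authMiddleware(tokenMaker TokenMaker) gin.HandlerFunc {
    return func(ctx *gin.Context) {
        fields := strings.Fields(ctx.GetHeader("Authorization"))
        if len(fields) < 2 || strings.ToLower(fields[0]) != "bearer" {
            err := errors.New("invalid authorization header")
            ctx.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{"error": err.Error()})
            return
        }

        payload, err := tokenMaker.VerifyToken(fields[1])
        if err != nil {
            ctx.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{"error": err.Error()})
            return
        }

        ctx.Set(authorizationPayloadKey, payload)
        ctx.Next()
    }
}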

23/63: it's production-capable now -> dockerize

new branch -> merge into master after being reviewed and tested

git checkout -b <newBranchName>

git push origin ft/docker
FROM: base image
WORKDIR: working directory inside the image
COPY . . (copy everything from the current folder into the current working directory inside the image)
RUN go build -o main main.go (build the project into a binary file called main)
EXPOSE 8080 (inform Docker that the container listens on this port)
CMD ["/app/main"] (call the executable file)
docker build -t simplebank:latest .

the trailing . tells Docker which directory to build. All we need at runtime is the binary, so the final image only needs to contain the binary -> multi-stage build:

FROM golang:1.20-alpine AS builder
WORKDIR /app
COPY . .
RUN go build -o main main.go

# Run stage: small base Linux image
FROM alpine:3.13
WORKDIR /app
COPY --from=builder /app/main .

# --from=builder: copy /app/main from the builder stage into this image's working directory

EXPOSE 8080
CMD ["/app/main"]
docker run --name simplebank -p 8080:8080 simplebank:latest
docker run --name simplebank -p 8080:8080 -e GIN_MODE=release simplebank:latest
docker container inspect
docker container inspect simplebank
docker run --name simplebank -p 8080:8080 -e GIN_MODE=release -e DB_SOURCE="postgresql://root:qwer1234@172.17.0.3:5432/simple_bank?sslmode=disable" simplebank:latest

docker network ls -> see networks
docker network inspect bridge
docker network --help

create our own network

docker network create bank-network

docker network connect --help

docker network connect bank-network postgres12

docker run --name simplebank --network bank-network -p 8080:8080 -e GIN_MODE=release simplebank:latest

a container can connect to multiple networks at the same time; in this case, the bank-network we created and the default bridge network

docker compose up

In the Dockerfile, we need to download the necessary dependencies, install them in the builder stage, and copy them over to the run stage; in this case it's the migrate tool.

This also raises a timing issue, as the migrations might run before the database is built; use the depends_on parameter to specify the order of services in the docker compose file.

However, it might still fail, since the API service might start before the migrate up takes place. Therefore, use a tool like wait-for for alpine-based images.

wait-for tool

after re-defining the entrypoint in the docker compose file, the CMD and ENTRYPOINT in the Dockerfile will be ignored.

create and push new branch

❯ git checkout -b ft/ci-build-image     
Switched to a new branch 'ft/ci-build-image'

simple-bank on  ft/ci-build-image [$] via 🐹 v1.20.3 
❯ git push origin ft/ci-build-image

The checkout-code step is necessary to help set up the Go environment.

JQ + ECR

after successfully JQ-ing the response from secrets-manager, I put the command into the GitHub Action as a run step; therefore, every time before the GitHub Action builds the image, the password will be replaced by the response from AWS Secrets Manager

github guide for making a new branch

git checkout . -> discard changes in the working directory
git checkout -b ft/secrets_manager -> new branch created named *ft/secrets_manager* , and switched to this new branch
git status -> check difference
git add .
git commit -m "df"
git push origin ft/secrets-manager -> origin specifies the remote repo; git push sends local changes to the remote repo
# origin is the default name of the remote repo

docker pull an image from the private AWS ECR registry

aws ecr get-login-password
aws ecr get-login-password | docker login --username AWS --password-stdin 927683992336.dkr.ecr.us-east-1.amazonaws.com

docker pull 927683992336.dkr.ecr.us-east-1.amazonaws.com/simple-bank:b951295ddc1d1d7e31a6b3ea8b91e685ae008852

the link here stands for the image URI; note that in the first link the 'simple-bank' image part is left out, because it refers to the registry only

It wouldn't be able to run, since start.sh uses $DB_SOURCE, which can only be obtained from a GitHub Action run command that JQ-ed the credentials

JQ-ing the credentials is permitted in the GitHub runner because the AWS login step was executed before it

source app.env
echo $DB_SOURCE

git checkout main -> switch to main branch
git pull !!! -> update local main first
git checkout -b ft/loadenv
git status
git add .
git commit -m "-"
git push origin ft/load_env

Why is it safer this way?

  1. app.env is no longer needed in the repo
  2. it's only generated on the fly after the GitHub Action takes care of it (how? AWS login first, then retrieve the credentials from AWS Secrets Manager -> JQ them into app.env)
  3. we can change the credentials as much as we want; a re-run of the deploy action will replace them in the next deployment

Kubernetes components

  1. Worker Node:
    1. kubelet agent: makes sure containers run inside pods
    2. container runtime: docker
    3. kube-proxy: maintains network rules, allows communication with pods
  2. Control plane - master node
    1. manages the worker nodes and pods of the cluster
      1. API server: frontend of the control plane, exposes the k8s API to interact with all other components of the cluster
      2. persistent store etcd -> backing store for all cluster data
      3. scheduler -> watches newly created pods with no assigned node and selects a node for them to run on
      4. controller manager: a combination of several controllers, such as:
        1. node controller -> notices and responds when nodes go down
        2. job controller -> watches for jobs and creates pods to run them
        3. endpoint controller -> populates the Endpoints object, i.e. joins services and pods
        4. service account and token controller -> creates default accounts and API access tokens for new namespaces
      5. cloud controller manager -> links the cluster to the cloud provider's API
        1. node controller -> checks with the cloud provider
        2. route controller -> sets up routes
        3. service controller -> creates, updates, and deletes cloud load balancers

the master node is created by AWS

Kubectl

default config file at .kube/config

aws eks update-kubeconfig --name simple-bank --region us-east-2

failed because the user hasn't been granted permission for such an operation -> go to IAM and create an inline policy for EKS full access

aws configure
aws eks update-kubeconfig --name simple-bank --region us-east-2
kubectl config use-context arn:aws:eks:us-east-2:927683992336:cluster/simple-bank

use kubectl config use-context to select the context I want to connect to

kubectl cluster-info

only the Amazon cluster creator can view it; otherwise we have to add permissions

aws sts get-caller-identity

get the current AWS user identity

cat \.aws\credentials

kubectl get pods
kubectl cluster-info
export AWS_PROFILE=github
# set in windows

after declaring an aws-auth.yaml file under the working directory, we then need to kubectl apply -f eks/aws-auth.yaml so that the cluster permissions can be mapped onto the github-ci user

setx /m AWS_PROFILE github
echo $Env:AWS_PROFILE
kubectl cluster-info
kubectl get service
kubectl get pods
choco install k9s

type : to enter command mode, then ns + enter shows all available namespaces of the cluster

:service :pod :cj (cronjob) :node :configmap :deployments; press d to describe the selected target

kubernetes deployment

kubectl apply -f eks/deployment.yaml

use kubectl to deploy the images onto the EKS cluster by defining a deployment.yaml file

Still can't deploy the pod under the node group. Why?

Since each node only allows 4 pods, and all 4 are already occupied by system pods, no more pods are available for our project image

the number of pods allowed to run on an EC2 instance is based on the number of elastic network interfaces (ENIs) and the number of IPs per ENI allowed on that instance

the number of pods can be derived from a calculation (inputs are the # of ENIs and # of IPs per ENI); see the worked example below

ENIs differ by instance type -> 2 ENIs for t3.micro
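
If I recall the standard EKS formula correctly, it is: max pods = (# of ENIs) × (# of IPv4 addresses per ENI − 1) + 2. Worked out for the two instance types in these notes:

t3.micro: 2 × (2 − 1) + 2 = 4 pods, all taken by the system pods mentioned above
t3.small: 3 × (4 − 1) + 2 = 11 pods, matching the capacity noted at the end of this section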

delete the old node group -> create a new one with t3.small

kubectl get pods
aws secretsmanager get-secret-value --secret-id simple_bank --query SecretString --output text
aws secretsmanager get-secret-value --secret-id simple_bank --query SecretString --output text | jq 'to_entries|map("\(.key)=\(.value)")'
kubectl logs simple-bank-api-deployment-788894c9bd-l8tqq
kubectl apply -f eks/deployment.yaml

The service is now hosted inside a pod's container; the question now is how do we send API requests to it

in order to route traffic from the outside world to the pods, we need to deploy a Service kubernetes object

a Service is an abstraction object; load balancing is handled automatically, and pods of the same deployment share the same internal DNS

kubectl apply -f eks/service.yaml

this gives us the internal service IP. If we don't specify anything, the default type in service.yaml will be ClusterIP

nslookup ab2ec9f88acf64693bdf81257ace1726-2146272261.us-east-2.elb.amazonaws.com # look up the external IP from the service
docker run -d -p 8080:8080 36ee2e1cddcb
kubectl describe pod  simple-bank-api-deployment-564cb478dc-v7hl7

right now, I can only send requests to a81f24223a09c480d8ac182a022fef91-141979027.us-east-2.elb.amazonaws.com:80 instead of http://a81f24223a09c480d8ac182a022fef91-141979027.us-east-2.elb.amazonaws.com

why?

Missing an inbound rule: an HTTP rule that allows all requests to port 80; port 80 is the load balancer port.

Again, streaming logs failed: Get "https://172.31.45.68:10250/containerLogs/default/simple-bank-api-deployment-564cb478dc-v7hl7/simple-bank-api?follow=true&sinceSeconds=300&tailLines=100&timestamps=true": dial tcp 172.31.45.68:10250: i/o timeout

can't get a response from the pods? why? the TCP inbound rule isn't set for the security group. Need to set that on the EC2 security group: All TCP type, open source range.

:nodes :services

what did we achieve in this section?

Deploy the service onto Kubernetes, where each copy of the service (replica) is wrapped into a pod, and multiple pods can be put inside one node, where one node is one underlying compute instance. For t3.small, the maximum capacity is 11 pods, 4 of which are system-side. The cluster can read pod logs once TCP port 10250 is allowed by an inbound rule.

The Service is also set up to map the container port to the service port; the load balancer then distributes the incoming requests onto the different pods that share the same selector.

aws-auth.yaml maps the permissions from the default user (cluster creator) to the github-ci user, since the cluster creator has the rights to deploy services onto the cluster it created.
