go-backend-service
- when to use uppercase and when to use lowercase for a func name?
Stack Overflow answer:
In Golang:
1. any variable (or function) whose identifier starts with an upper-case letter (e.g. `CamelCase`) is exported, i.e. accessible from all other packages in your program,
2. whereas those starting with a lower-case letter (e.g. `camelCase`) are only accessible within the package they are declared in.
3. Use `CamelCase` if you intend to use the variable (or function) in another package; otherwise you can safely stick with `camelCase`.
// exported: callable from other packages
func IsSupportedCurrency(currency string) bool {}

// unexported: only callable within its own package
func errorResponse(err error) gin.H {}
- Difference between `*` and `&`
- what is a BTREE?
- how to use a struct?
- what is 32-bit and 64-bit?
- difference between int64 and uint64
    - int64: signed 64-bit integer type
    - uint64: unsigned 64-bit integer type
- https://stackoverflow.com/questions/50815512/when-casting-an-int64-to-uint64-is-the-sign-retained
- two's complement: a mathematical operation to reversibly convert a positive binary number into a negative binary number with equivalent (but negative) value
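A quick Go illustration of the cast behavior from the linked answer: converting between int64 and uint64 preserves the two's-complement bit pattern, so the conversion is reversible.

```go
package main

import "fmt"

func main() {
	var i int64 = -1
	u := uint64(i)        // bit pattern preserved: all 64 bits set
	fmt.Println(u)        // 18446744073709551615 (two's complement of -1)
	fmt.Println(int64(u)) // -1: converting back recovers the original value
}
```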
- what does 0x stand for? (e.g. `0x140000162f8`)
    - a prefix for hexadecimal numeric constants in computing

Base | Prefix | Suffix | Example |
---|---|---|---|
Binary | 0b/0B | b/B | 0b1111 = 15 |
Octal | 0 | o/O | |
Decimal | (none) | d/D | |
Hexadecimal | 0x/0X | h/H | |
Key | Value |
---|---|
`$ history \| grep "docker run"` | display all matching past commands |
offset and limit | https://dataschool.com/learn-sql/limit/ |
`*` and `&` signs in golang | https://go.dev/tour/moretypes/1 |
what are goroutines | https://go.dev/tour/concurrency/1 |
`:=` and `=` | `:=` declares and initializes (overwriting any previous value); `=` only assigns. https://blog.csdn.net/Ivan45007/article/details/121978869 |
Short variable declarations | a short variable declaration may redeclare variables provided they were originally declared earlier in the same block |
curl man page | https://curl.se/docs/manpage.html |
dbdiagram | https://dbdiagram.io |
`$ go test -v --cover -count=1 ./api/...` | run all tests under the api directory |
fmt.Printf() reference | https://programming.guide/go/fmt-printf-reference-cheat-sheet.html |
Paragon Initiative Enterprises | https://paragonie.com/ |
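The `:=` vs `=` row above, as a runnable sketch:

```go
package main

import "fmt"

func main() {
	x := 1 // := declares and initializes a new variable
	x = 2  // = assigns to an already-declared variable

	// A short variable declaration may redeclare a variable (err here)
	// as long as at least one variable on the left side is new.
	y, err := 10, error(nil)
	z, err := 20, err
	fmt.Println(x, y, z, err)
}
```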
Postgres command
Key | Value |
---|---|
`SELECT datname FROM pg_database;` | show all databases |
`SELECT now();` | show the current time |
`\q` | quit psql |
`\c simple_bank` | connect to the simple_bank database |
`\dt` | show all tables |
`INSERT INTO accounts (owner, balance, currency) VALUES ('rick', 100, 'USD');` | insert a row |
LIMIT and OFFSET | https://www.postgresql.org/docs/current/queries-limit.html |
LIMIT and OFFSET | `LIMIT 5`: at most 5 records per page |
LIMIT and OFFSET | `OFFSET 10`: skip the first 10 records |
TablePlus command
Key | Value |
---|---|
refresh data | Command + R |
delete data | 1. delete the row 2. Command + S |
modify data | 1. make the change 2. Command + S |
# 1. install postgres
$ docker pull postgres:15-alpine
# 2. display images
$ docker images
# 3. start postgres container
$ docker run -d -e POSTGRES_USER=root -e POSTGRES_PASSWORD=secret -p 5432:5432 --name postgres15 postgres:15-alpine
# 4. connect to container
# -it: run command as an interactive TTY session
# $ docker exec -it postgres15 psql -U postgres
$ docker exec -it postgres15 psql -U root
# show the current time
>> select now();
# quit
>> \q
# 5. display logs
$ docker logs postgres15
# 6. connect to db
# 7. tableplus
# (1) execute all queries
>> Command + Enter
# (2) refresh
>> command + R
# 8. stop/start
$ docker stop postgres15
$ docker start postgres15
# 9. run a shell in the container
$ docker exec -it postgres15 /bin/sh
# 1. install golang-migrate
# https://github.com/golang-migrate/migrate/tree/master/cmd/migrate
$ brew install golang-migrate
# 2. check version
$ migrate -version
# 3. create db migration files
$ migrate create -ext sql -dir db/migration -seq init_schema
# 4. create db
$ createdb --username=root --owner=root simple_bank
# 5. access db
$ psql simple_bank
# 6. delete db
$ dropdb simple_bank
# 7. exit
$ exit
# 8. create db from external
$ docker exec -it postgres15 createdb --username=root --owner=root simple_bank
# 9. access db directly
$ docker exec -it postgres15 psql -U root simple_bank
# 10. migrate db up
$ migrate -path db/migration -database "postgresql://root:secret@localhost:5432/simple_bank?sslmode=disable" -verbose up
# 11. migrate db down
$ migrate -path db/migration -database "postgresql://root:secret@localhost:5432/simple_bank?sslmode=disable" -verbose down
- gorm: high-level abstraction, but can be very slow under high load
- sqlx: fast, but field-mapping errors only surface at runtime
- sqlc: fast; generates idiomatic Go code from plain SQL and catches query errors at generation time
# (1) install sqlc
$ brew install sqlc
# (2) initialize sqlc config (creates sqlc.yaml)
$ sqlc init
# (3) generate CRUD code
$ sqlc generate
# (1) go mod init
$ go mod init github.com/RickZhou666/go-backend-service
# (2) install go dependencies
$ go mod tidy
# (3) don't modify the auto-generated sqlc go files
# (4) :exec: execute only, no row returned
-- name: updateAccount :exec
UPDATE accounts
SET balance = $2
WHERE id = $1;
# (5) :one: execute and return the updated row
-- name: updateAccount :one
UPDATE accounts
SET balance = $2
WHERE id = $1
RETURNING *;
# -v :for verbose log
# -cover :to measure code coverage
# ./... :to run all unit tests
# -count=1 :to disable cache. otherwise it will read from cache directly
$ go test -v --cover -count=1 ./...
functions with a `TestXxx` name prefix run as unit tests
func TestCreateAccount(t *testing.T) {}
- `run package tests`: run all unit tests in the package
- `run test`: run a single unit test
- create a transfer record with amount = 10;
- create an account entry for account 1 with amount = -10
- create an account entry for account 2 with amount = +10
- subtract 10 from the balance of account 1
- add 10 to the balance of account 2
- to provide a reliable and consistent unit of work, even in case of system failure
- to provide isolation between programs that access the database concurrently
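A minimal, hypothetical sketch of the five transfer steps above wrapped in a single database/sql transaction (table and column names follow the schema used in this repo; `transferTx` itself is an illustration, not the project's actual code):

```go
import (
	"context"
	"database/sql"
)

// transferTx runs the five steps above atomically: if any step fails,
// the deferred Rollback undoes all previous ones.
func transferTx(ctx context.Context, db *sql.DB, fromID, toID, amount int64) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op after a successful Commit

	stmts := []struct {
		query string
		args  []any
	}{
		{`INSERT INTO transfers (from_account_id, to_account_id, amount) VALUES ($1, $2, $3)`, []any{fromID, toID, amount}},
		{`INSERT INTO entries (account_id, amount) VALUES ($1, $2)`, []any{fromID, -amount}},
		{`INSERT INTO entries (account_id, amount) VALUES ($1, $2)`, []any{toID, amount}},
		{`UPDATE accounts SET balance = balance - $1 WHERE id = $2`, []any{amount, fromID}},
		{`UPDATE accounts SET balance = balance + $1 WHERE id = $2`, []any{amount, toID}},
	}
	for _, s := range stmts {
		if _, err := tx.ExecContext(ctx, s.query, s.args...); err != nil {
			return err
		}
	}
	return tx.Commit()
}
```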
- Atomicity (A): either all operations complete successfully, or the transaction fails and the db is unchanged
- Consistency (C): the db state must be valid after the transaction; all constraints must be satisfied
- Isolation (I): concurrent transactions must not affect each other
- Durability (D): data written by a successful transaction must be recorded in persistent storage
-- 1. commit a transaction
BEGIN;
...
COMMIT;
-- 2. roll back on failure
BEGIN;
...
ROLLBACK;
facing a GitHub link issue: remove the local repo and clone it again from remote
# 1. start txn for 1st
>> BEGIN;
>> select * from accounts where id = 126 for update;
# 2. start txn for 2nd
>> BEGIN;
>> select * from accounts where id = 126 for update;
# the 2nd will get blocked
# 3. update for the 1st
>> update accounts set balance = 500 where id = 126;
>> COMMIT;
# 4. after we commit the 1st, the 2nd unblocks and sees the latest result
-- (1) add get for update query
-- name: getAccountForUpdate :one
SELECT * FROM accounts
WHERE id = $1 LIMIT 1
FOR UPDATE;
-- (2) update query
$ make sqlc
# (1) comment out the foreign key lines in schema_up
# ALTER TABLE "entries" ADD FOREIGN KEY ("account_id") REFERENCES "accounts" ("id");
# ALTER TABLE "transfers" ADD FOREIGN KEY ("from_account_id") REFERENCES "accounts" ("id");
# ALTER TABLE "transfers" ADD FOREIGN KEY ("to_account_id") REFERENCES "accounts" ("id");
# (2) delete db
$ make migratedown
# (3) regenerate db
$ make migrateup
# (1) the foreign keys reference id, the primary key, which we never change,
# so we can add NO KEY to the get query
-- name: getAccountForUpdate :one
SELECT * FROM accounts
WHERE id = $1 LIMIT 1
FOR NO KEY UPDATE;
# (2) regenerate
$ make sqlc
-- 1. run below in 1st
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1 RETURNING *;
-- 2. run below in 2nd
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 2 RETURNING *;
-- 3. then run below in 1st
UPDATE accounts SET balance = balance + 10 WHERE id = 2 RETURNING *;
-- txn 1 is now stuck: it waits for a ShareLock on txn 2's transaction ID (e.g. 908 in pg_locks),
-- because txn 2 holds an exclusive lock on the row txn 1 needs
-- 4. if we keeping running below in 2nd
UPDATE accounts SET balance = balance + 10 WHERE id = 1 RETURNING *;
-- we'll observe a deadlock error
fix: execute updates on the accounts in the same order
-- 1. run below in 1st
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1 RETURNING *;
-- 2. run below in 2nd
BEGIN;
UPDATE accounts SET balance = balance + 10 WHERE id = 1 RETURNING *;
-- the 2nd will be blocked
-- 3. run below in 1st
UPDATE accounts SET balance = balance + 10 WHERE id = 2 RETURNING *;
COMMIT;
-- 2nd will return value
best practice: always update the account with the smaller id first
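A sketch of that rule in Go: both directions of a transfer touch the rows in ascending id order, so two concurrent transfers between the same pair of accounts can never lock them in opposite order (the helper here is illustrative, not the project's code):

```go
import (
	"context"
	"database/sql"
)

// transferInOrder applies both balance updates in ascending id order,
// which prevents the lock cycle that caused the deadlock above.
func transferInOrder(ctx context.Context, tx *sql.Tx, fromID, toID, amount int64) error {
	addMoney := func(id, delta int64) error {
		_, err := tx.ExecContext(ctx,
			`UPDATE accounts SET balance = balance + $1 WHERE id = $2`, delta, id)
		return err
	}
	// always lock the row with the smaller id first
	if fromID < toID {
		if err := addMoney(fromID, -amount); err != nil {
			return err
		}
		return addMoney(toID, amount)
	}
	if err := addMoney(toID, amount); err != nil {
		return err
	}
	return addMoney(fromID, -amount)
}
```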
- Dirty read: a txn reads data written by another concurrent uncommitted txn
- Non-repeatable read: a txn reads the same row twice and sees different values, because it has been modified by another committed txn
- Phantom read: a txn re-executes a query to find rows that satisfy a condition and sees a different set of rows, due to changes by another committed txn
- Serialization anomaly: the result of a group of concurrent committed txns is impossible to achieve if we try to run them sequentially in any order without overlapping
American National Standards Institute (ANSI)
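For reference, Go's database/sql lets you request one of these ANSI levels per transaction; the driver maps it to the engine's SET TRANSACTION command:

```go
import (
	"context"
	"database/sql"
)

// beginSerializable starts a transaction at an explicit isolation level.
func beginSerializable(ctx context.Context, db *sql.DB) (*sql.Tx, error) {
	return db.BeginTx(ctx, &sql.TxOptions{
		Isolation: sql.LevelSerializable, // or LevelReadCommitted, LevelRepeatableRead
	})
}
```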
# (1) pull mysql
$ docker pull mysql:8
# (2) run the image in a container
$ docker run -e MYSQL_ROOT_PASSWORD=secret -p 3310:3306 -d --name mysql8 mysql:8
# (3) exec into the container
$ docker exec -it mysql8 mysql -uroot -psecret
# (4) create the database
>> show databases;
>> create database simple_bank;
>> use simple_bank;
# once it exists, you can connect to it directly:
# $ docker exec -it mysql8 mysql -uroot -psecret simple_bank
# (5) check txn isolation level
>> select @@transaction_isolation;
>> select @@global.transaction_isolation;
- reproducing a dirty read
-- (1) change txn isolation level to read uncommitted in both txns
>> set session transaction isolation level read uncommitted;
>> select @@transaction_isolation;
-- (2) start txn
>> begin;
-- or
>> start transaction;
-- (3) select all from txn 1
>> select * from accounts;
-- (4) select id 1 from txn 2
>> select * from accounts where id = 1;
-- (5) decrease $10 from txn 1 for id 1
>> update accounts set balance = balance - 10 where id = 1;
>> select * from accounts where id = 1;
-- result in $90 for id 1
-- (6) select id 1 from txn 2
>> select * from accounts where id = 1;
-- it displays the modified but uncommitted balance for id 1: a dirty read
- avoids dirty reads
- still faces non-repeatable reads
- still faces phantom reads
-- (1) change txn isolation level to read committed in both txns
>> set session transaction isolation level read committed;
>> select @@transaction_isolation;
-- (2) start txn
>> begin;
-- or
>> start transaction;
-- (3) select all from txn 1
>> select * from accounts;
-- (4) select id 1 from txn 2
>> select * from accounts where id = 1;
-- (5) decrease $10 from txn 1 for id 1
>> update accounts set balance = balance - 10 where id = 1;
>> select * from accounts where id = 1;
-- result in $80 for id 1
-- (6) select id 1 from txn 2
>> select * from accounts where id = 1;
-- result still in $90: dirty reads of uncommitted txns are avoided
-- (7) select all from txn 2
>> select * from accounts where balance >= 90;
-- 2 records
-- (8) commit txn 1
>> commit;
-- (9) check id 1 in txn 2
>> select * from accounts where id = 1;
-- result in $80 for id 1. non-repeatable read
-- (10) select all from txn 2
>> select * from accounts where balance >= 90;
-- only 1 record: facing a phantom read
-- (11) commit
>> commit;
- avoids dirty reads
- avoids non-repeatable reads
- avoids phantom reads
- but the result is still not consistent: the txn rereads the same row and keeps seeing 100 even after another txn commits an increase to 110; yet if the current txn then adds 10 itself, the update is applied to the latest committed value, so the balance becomes 120 and, from this txn's point of view, jumps from 100 straight to 120
-- (1) change txn isolation level to repeatable read in both txns
>> set session transaction isolation level repeatable read;
>> select @@transaction_isolation;
-- (2) start txn
>> begin;
-- or
>> start transaction;
-- (3) select all from txn 1
>> select * from accounts;
-- (4) select id 1 and all records from txn 2
>> select * from accounts where id = 1;
>> select * from accounts where balance >= 80;
-- (5) decrease $10 from txn 1 for id 1
>> update accounts set balance = balance - 10 where id = 1;
>> select * from accounts where id = 1;
-- result in $70 for id 1
>> select * from accounts;
-- (6) commit txn 1
>> commit;
-- (7) select id 1 from txn 2
>> select * from accounts where id = 1;
-- result still in $80. still the old version of id 1, avoid non-repeatable read
-- (8) select all from txn 2
>> select * from accounts where balance >= 80;
-- still return 2 records. the query is repeatable
-- (9) decrease $10 from txn 2 for id 1
>> update accounts set balance = balance - 10 where id = 1;
>> select * from accounts where id = 1;
-- result in $60, which is correct value
-- (10) rollback in txn 2
>> rollback;
- avoids serialization anomaly
- a reading txn blocks any update or delete of the same rows by other txns
- if both txns update, a deadlock occurs
- if both txns only run selects, that's fine; but when txn 1 then inserts, it blocks because txn 2 holds a share lock; once txn 2 also inserts, txn 2 fails with a deadlock and releases its lock, and txn 1 can proceed
-- (1) change txn isolation level to serializable in both txns
>> set session transaction isolation level serializable;
>> select @@transaction_isolation;
-- (2) start txn
>> begin;
-- or
>> start transaction;
-- (3) select all from txn 1
>> select * from accounts;
-- (4) select id 1 and all records from txn 2
>> select * from accounts where id = 1;
-- (5) decrease $10 from txn 1 for id 1
>> update accounts set balance = balance - 10 where id = 1;
-- the update in txn 1 is blocked: at the serializable level, mysql converts every plain SELECT into SELECT ... FOR SHARE, and a txn holding a share lock only lets other txns READ, not UPDATE or DELETE
-- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
-- make sure you use a retry mechanism when using the serializable isolation level
-- (6) restart txn in txn 1
>> select @@transaction_isolation;
>> begin;
>> select * from accounts where id = 1;
>> update accounts set balance = balance - 10 where id = 1;
-- txn 1 blocked
-- (7) update in txn 2
>> update accounts set balance = balance - 10 where id = 1;
-- now txn 2 is also waiting for share lock from txn 1
-- ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
-- txn 1 successful
-- (8) rollback both txns
>> rollback;
-- (9) restart txns
>> select @@transaction_isolation;
>> begin;
-- (10) select id 1 from txn 1
>> select * from accounts where id = 1;
-- (11) select id 1 from txn 2
>> select * from accounts where id = 1;
-- (12) update in txn 1
>> update accounts set balance = balance - 10 where id = 1;
-- txn 1 blocked
-- (13) commit txn 2
>> commit;
-- share lock released, txn 1 updated
-- account balance updated in txn 1
-- (14) commit txn 1
>> commit;
- prevent
serialization anomaly
-- (1) change txn isolation level to serializable in both txns
>> set session transaction isolation level serializable;
>> select @@transaction_isolation;
-- (2) start txn
>> begin;
-- or
>> start transaction;
-- (3) select all from txn 1
>> select * from accounts;
-- (4) compute sum in txn 1
>> select sum(balance) from accounts;
-- (5) insert sum in txn 1
>> insert into accounts (owner, balance, currency) values ('sum', '160', 'usd');
>> select * from accounts;
-- (6) select all in txn 2
>> select * from accounts;
-- txn 2 is blocked, need txn 1 finish the txn
-- (7) commit in txn 1
>> commit;
-- txn 2 display the select result
-- (8) insert sum from txn 2
>> select sum(balance) from accounts;
>> insert into accounts (owner, balance, currency) values ('sum', '320', 'usd');
>> select * from accounts;
>> commit;
-- prevent serialization anomaly
try a different ordering of the two txns
-- (1) change txn isolation level to serializable in both txns
>> set session transaction isolation level serializable;
>> select @@transaction_isolation;
-- (2) start txn
>> begin;
-- or
>> start transaction;
-- (3) select all and compute sum from txn 1
>> select * from accounts;
>> select sum(balance) from accounts;
-- (4) select all and compute sum from txn 2
>> select * from accounts;
>> select sum(balance) from accounts;
-- same sum as txn 1
-- (5) insert sum in txn 1
>> insert into accounts (owner, balance, currency) values ('sum', '640', 'usd');
-- txn 1 is blocked
-- (6) insert sum in txn 2
>> insert into accounts (owner, balance, currency) values ('sum', '640', 'usd');
-- ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
-- txn 2 deadlock and txn 1 is successful
-- (7) rollback txn 2
>> rollback;
-- (8) select all in txn 1
>> select * from accounts;
-- new sum inserted successfully
-- (1) login to psql
$ docker exec -it postgres15 psql -U root
-- (2) check txn isolation level
>> show transaction isolation level;
-- default as read committed
-- (1) postgres can only set txn isolation level during txn
>> BEGIN;
>> set transaction isolation level read uncommitted;
>> show transaction isolation level;
-- (2) select all accounts
>> SELECT * FROM accounts;
-- (3) checking id 1 in txn 2
>> SELECT * FROM accounts where id = 1;
-- result in $100
-- (4) update balance in txn 1
>> update accounts set balance = balance - 10 where id = 1 returning *;
-- result in $90
-- (5) checking id 1 in txn 2
>> SELECT * FROM accounts where id = 1;
-- result still in $100
-- (6) commit in txn 1
>> COMMIT;
-- (7) checking id 1 in txn 2
>> SELECT * FROM accounts where id = 1;
-- result in $90
PostgreSQL's Read Uncommitted mode behaves like Read Committed
phantom read occurred
-- (1) postgres can only set txn isolation level during txn
>> BEGIN;
>> set transaction isolation level read committed;
>> show transaction isolation level;
-- (2) select all accounts in txn 1
>> SELECT * FROM accounts;
-- (3) checking id 1 in txn 2
>> SELECT * FROM accounts where id = 1;
-- result in $90
-- (4) checking all balance >= 90 in txn 2
>> SELECT * FROM accounts where balance >= 90;
-- result in 2 records
-- (5) update balance in txn 1
>> update accounts set balance = balance - 10 where id = 1 returning *;
-- result in $80
-- (6) commit in txn 1
>> COMMIT;
-- (7) checking id 1 in txn 2
>> SELECT * FROM accounts where id = 1;
-- result will be in $80
-- (8) checking all balance >= 90 in txn 2
>> SELECT * FROM accounts where balance >= 90;
-- we will only see 1 record, so a phantom read occurred
-- (9) commit txn 2
>> commit;
- phantom read is prevented
- updates to a row changed by another txn are prevented
- serialization anomaly occurred
-- (1) postgres can only set txn isolation level during txn
>> BEGIN;
>> set transaction isolation level repeatable read;
>> show transaction isolation level;
-- (2) select all accounts in txn 1
>> SELECT * FROM accounts;
-- (3) checking id 1 in txn 2
>> SELECT * FROM accounts where id = 1;
-- result in $90
-- (4) checking all balance >= 80 in txn 2
>> SELECT * FROM accounts where balance >= 80;
-- result in 2 records
-- (5) update balance in txn 1
>> update accounts set balance = balance - 10 where id = 1 returning *;
-- result in $70
-- (6) commit in txn 1
>> COMMIT;
-- (7) checking id 1 in txn 2
>> SELECT * FROM accounts where id = 1;
-- result will still be in $80
-- (8) checking all balance >= 80 in txn 2
>> SELECT * FROM accounts where balance >= 80;
-- we will only see 2 records, so Phantom read is prevented
-- (9) update id 1 in txn 2
>> update accounts set balance = balance - 10 where id = 1 returning *;
-- ERROR: current transaction is aborted, commands ignored until end of transaction block
-- (10) commit txn 2
>> commit;
replicate serialization anomaly
-- (1) postgres can only set txn isolation level during txn
>> BEGIN;
>> set transaction isolation level repeatable read;
>> show transaction isolation level;
-- (2) select all accounts in txn 1
>> SELECT * FROM accounts;
-- (3) sum balance and assign to a new account in txn 1
>> SELECT sum(balance) FROM accounts;
>> INSERT INTO accounts(owner, balance, currency) VALUES ('sum', 170, 'USD');
>> SELECT * FROM accounts;
-- result in 3 records
-- (4) checking all in txn 2
>> SELECT * FROM accounts;
-- result still in 2 records. as we use repeatable read level
-- (5) sum balance and assign to a new account in txn 2
>> SELECT sum(balance) FROM accounts;
>> INSERT INTO accounts(owner, balance, currency) VALUES ('sum', 170, 'USD');
>> SELECT * FROM accounts;
-- result in 3 records
-- (6) commit both txn 1 and txn 2
>> COMMIT;
-- (7) check all in txn 2
>> SELECT * FROM accounts;
-- there are two sum records: this is a serialization anomaly
serialization anomaly
is prevented
-- (1) postgres can only set txn isolation level during txn
>> BEGIN;
>> set transaction isolation level serializable;
>> show transaction isolation level;
-- (2) select all accounts in txn 1
>> SELECT * FROM accounts;
-- (3) sum balance and assign to a new account in txn 1
>> SELECT sum(balance) FROM accounts;
>> INSERT INTO accounts(owner, balance, currency) VALUES ('sum', 510, 'USD');
>> SELECT * FROM accounts;
-- result in 5 records with new added record
-- (4) checking all in txn 2
>> SELECT * FROM accounts;
-- result still shows only the old records, since serializable uses the same snapshot behavior as repeatable read
-- (5) sum balance and assign to a new account in txn 2
>> SELECT sum(balance) FROM accounts;
>> INSERT INTO accounts(owner, balance, currency) VALUES ('sum', 510, 'USD');
>> SELECT * FROM accounts;
-- result is identical with txn 1
-- (6) commit both txn 1 and txn 2
>> COMMIT;
-- txn 1 is ok
>> COMMIT;
-- txn 2 failed
-- ERROR: could not serialize access due to read/write dependencies among transactions
-- DETAIL: Reason code: Canceled on identification as a pivot, during commit attempt.
-- HINT: The transaction might succeed if retried.
-- (7) check all in txn 2
>> SELECT * FROM accounts;
-- records stay the same. serialization anomaly is prevented
item | MySQL | PostgreSQL |
---|---|---|
isolation levels | 4 | 3 |
mechanism | locking mechanism | dependency detection |
default level | repeatable read | read committed |
Keep in mind
- retry mechanism: there might be errors, timeouts, or deadlocks (see the sketch below)
- read the documentation: each database engine might implement isolation levels differently
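A hedged sketch of such a retry loop; the retry-on-any-failure policy here is a simplification, and real code should inspect the driver's error code (e.g. SQLSTATE 40001 serialization_failure or 40P01 deadlock_detected) before retrying:

```go
import (
	"context"
	"database/sql"
)

// execWithRetry reruns the whole transaction up to maxRetries times,
// since serialization failures and deadlocks are safe to retry.
func execWithRetry(ctx context.Context, db *sql.DB, maxRetries int, fn func(*sql.Tx) error) error {
	var err error
	for attempt := 0; attempt < maxRetries; attempt++ {
		var tx *sql.Tx
		if tx, err = db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable}); err != nil {
			return err
		}
		if err = fn(tx); err == nil {
			if err = tx.Commit(); err == nil {
				return nil // success
			}
		} else {
			tx.Rollback()
		}
		// on failure, fall through and retry the whole transaction
	}
	return err
}
```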
GitHub Actions is similar to:
- Jenkins
- Travis
- CircleCI
- go to github repo go-backend-service
- Actions
- Go
- create local file first
$ mkdir -p .github/workflows
$ cd .github/workflows
$ touch ci.yml
- after setting up ci.yml, push it to GitHub
- check the Actions tab and the ci-test workflow
- click the "init CI workflow" run
- click the Test job to check its status
# add one more step in yml
- name: Run migrations
run: make migrateup
- the migrate cmd is not installed on the runner; see the golang-migrate doc
# 1. download (pick one release)
$ curl -L https://github.com/golang-migrate/migrate/releases/download/v4.15.2/migrate.linux-amd64.tar.gz | tar xvz
# or an older release:
$ curl -L https://github.com/golang-migrate/migrate/releases/download/v4.12.2/migrate.linux-amd64.tar.gz | tar xvz
# 2. move to below folder
$ mv migrate /usr/bin/
Key | Value |
---|---|
Gin Model binding and validation | https://gin-gonic.com/docs/examples/binding-and-validation/ |
sqlc config | https://docs.sqlc.dev/en/latest/reference/config.html |
viper | https://github.com/spf13/viper |
Popular web frameworks | Popular HTTP routers |
---|---|
Gin | FastHttp |
Beego | Gorilla Mux |
Echo | HttpRouter |
Revel | Chi |
Martini | |
Fiber | |
Buffalo |
# (1) install Gin
# https://github.com/gin-gonic/gin
$ go get -u github.com/gin-gonic/gin
$ go mod tidy
# (2) declare handler functions with a *gin.Context parameter
# (3) validate requests
https://pkg.go.dev/github.com/go-playground/validator/v10#hdr-Baked_In_Validators_and_Tags
# e.g. the oneof validator tag
# (4) get dependencies
$ go get github.com/go-delve/delve/service/api
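A minimal sketch of Gin model binding with validator tags from steps (2) and (3); the struct fields are illustrative, and the Server type and errorResponse helper are assumed to exist as in this repo:

```go
import (
	"net/http"

	"github.com/gin-gonic/gin"
)

type createAccountRequest struct {
	Owner    string `json:"owner" binding:"required"`
	Currency string `json:"currency" binding:"required,oneof=USD EUR"`
}

func (server *Server) createAccount(ctx *gin.Context) {
	var req createAccountRequest
	// ShouldBindJSON runs the validator tags and returns the first violation
	if err := ctx.ShouldBindJSON(&req); err != nil {
		ctx.JSON(http.StatusBadRequest, errorResponse(err))
		return
	}
	// ... call the store with the validated request
}
```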
# (1) test in postman
# (2) displayed the response result
# (1) test in postman
pagination
emit_empty_slices
If true, slices returned by :many queries will be empty instead of nil. Defaults to false.
# (1) upgrade sqlc to latest version
$ brew upgrade sqlc
# (2) check sqlc version
$ sqlc version
# (3) regenerate code
$ make sqlc
# (4) now items are initialized as empty slice
# (5) now we returned as empty array instead of null
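Why this matters for the JSON response: in Go a nil slice marshals to JSON null, while an empty slice marshals to []. A small sketch (with a hypothetical Account type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	type Account struct{ ID int64 }

	var nilAccounts []Account    // zero value: nil
	emptyAccounts := []Account{} // empty but non-nil

	b1, _ := json.Marshal(nilAccounts)   // nil slice -> null
	b2, _ := json.Marshal(emptyAccounts) // empty slice -> []
	fmt.Println(string(b1), string(b2))  // prints: null []
}
```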
- why a file? development: easily specify default configuration for local development and testing
- why env vars? deployment: easily override the default configuration when deploying with docker containers
- why viper?
    - find, load, and unmarshal a config file: JSON, TOML, YAML, ENV, INI
    - read config from environment variables or flags: override existing values, set default values
    - read config from remote systems: Etcd, Consul
    - live watch and write the config file: reread a changed file, save any modifications
# (1) install viper
$ go get github.com/spf13/viper
# (2) create app.env file
# (3) create config.go
# (4) replace variables inside main.go
# (5) test in postman
# (6) override config before call make server
$ SERVER_ADDRESS=0.0.0.0:8081 make server
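A minimal sketch of what config.go from steps (2)-(4) might contain; the field names are assumptions, but the viper calls are the library's real API:

```go
import "github.com/spf13/viper"

// Config holds values read by viper from app.env or the environment.
type Config struct {
	DBSource      string `mapstructure:"DB_SOURCE"`
	ServerAddress string `mapstructure:"SERVER_ADDRESS"`
}

// LoadConfig reads app.env from the given path and lets real environment
// variables (e.g. SERVER_ADDRESS=...) override the file's values.
func LoadConfig(path string) (config Config, err error) {
	viper.AddConfigPath(path)
	viper.SetConfigName("app")
	viper.SetConfigType("env")
	viper.AutomaticEnv() // env vars take precedence over the file

	if err = viper.ReadInConfig(); err != nil {
		return
	}
	err = viper.Unmarshal(&config)
	return
}
```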
- Independent tests: isolate test data to avoid conflicts
- Faster tests: cut the time spent talking to the database
- 100% coverage: easily set up edge cases such as unexpected errors

Two approaches:
- use a fake db (in memory): implement a fake version of the DB that stores data in memory
- use db stubs (gomock): generate and build stubs that return hard-coded values
# (1) install gomock
# https://github.com/golang/mock
# $ go get github.com/golang/mock/mockgen@v1.6.0 # for version < 1.16
$ go install github.com/golang/mock/mockgen@v1.6.0
# (2) validate mockgen
$ which mockgen
# (3) add to the very beginning of ~/.zshrc
export PATH=$PATH:~/go/bin
# (4) source
$ source ~/.zshrc
# (5) check again
$ which mockgen
# (1) add store interface
# (2) turn on emit_interface -> true in sqlc.yaml
# (3) regenerate
$ make sqlc
# (4) a new file named querier.go created
# (5) remove * from server.go, as the store is no longer a struct pointer but an interface
# (6) as we have db.Store interface, now we can use it for mock
# specify 1, the destination of the generated file 2, the package path 3, the interface name
$ mockgen -destination db/mock/store.go github.com/RickZhou666/go-backend-service/db/sqlc Store
# 2. file generated to db/mock/store.go
type MockStore struct {}
type MockStoreMockRecorder struct {}
# 3. change the package name
$ mockgen -package mockdb -destination db/mock/store.go github.com/RickZhou666/go-backend-service/db/sqlc Store
# 4. add above cmd to MakeFile
# (1) create account_test.go under api
# (2) create random account func
# (3) create new controller
# (4) create new mock store
# (5) setup store mock data
# (6) start test server and send request
# (7) execute unit test for test under api folder
$ go test -v --cover -count=1 ./api/...
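A condensed sketch of such a test using the generated mock store; helpers like randomAccount and the NewServer signature are assumptions here:

```go
func TestGetAccountAPI(t *testing.T) {
	account := randomAccount() // hypothetical helper returning a db.Account

	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	// build the mock store and stub exactly one GetAccount call
	store := mockdb.NewMockStore(ctrl)
	store.EXPECT().
		GetAccount(gomock.Any(), gomock.Eq(account.ID)).
		Times(1).
		Return(account, nil)

	// start a test server and record the response
	server := NewServer(store) // hypothetical constructor
	recorder := httptest.NewRecorder()
	request, err := http.NewRequest(http.MethodGet, fmt.Sprintf("/accounts/%d", account.ID), nil)
	require.NoError(t, err)

	server.router.ServeHTTP(recorder, request)
	require.Equal(t, http.StatusOK, recorder.Code)
}
```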
testing missing call
# (1) comment store.GetAccount call in account.go
# (2) rerun test
# (3) as we defined store.GetAccount to be called exactly 1 time, the test fails
testing response body
# (1) define body matcher
- gin.DebugMode
gin.SetMode(gin.DebugMode)
- gin.TestMode
gin.SetMode(gin.TestMode)
- gin.ReleaseMode
gin.SetMode(gin.ReleaseMode)
- gin.EnvGinMode
gin.SetMode(gin.EnvGinMode)
# (1) create struct
# (2) create func
# (3) add endpoint in server.go
# (1) create currency validator
# (2) define currency util
# (3) setup currency check in server.go and replace in transfer.go handler
# (1) create new users table
# Z - zero timezone or 00:00:00+00
password_changed_at timestamptz [not null, default: '0001-01-01 00:00:00Z']
# (2) one user can have multiple accounts in different currencies, such as a USD account and an RMB account,
# but cannot have multiple USD accounts; only one per currency
- replace init schema
- reset db
- run migrate up cmd
this is not applicable in the real world
# (1) create new migration
$ migrate -help
$ migrate create -ext sql -dir db/migration -seq add_users
# (2) due to the dirty version, migrateup or migratedown will fail
# (3) overwrite dirty to FALSE
# (4) migrate down
$ make migratedown
# (5) migrate up
$ make migrateup
# (6) check the users table
# (1) drop unique constraint for accounts table
# (2) drop foreign key constraint in similar way,
# check the constraint name by clicking the `info` button
# (3) drop user table
# (4) setup makefile
# (4.1) migrate up to next 1 version
migrateup1:
migrate -path db/migration -database "postgresql://root:secret@localhost:5432/simple_bank?sslmode=disable" -verbose up 1
# (4.2) migrate down to next 1 version
migratedown1:
migrate -path db/migration -database "postgresql://root:secret@localhost:5432/simple_bank?sslmode=disable" -verbose down 1
# (5) after running migratedown1
# the users table is gone
# the constraints on the accounts table are gone
# schema_migrations version is 1
# (6) after running migrateup1
# the users table is back
# the constraints on the accounts table are back
# schema_migrations version is 2
# (1) create user.sql under query folder
# (2) run sqlc to generate user.sql.go and model in models.go
$ make sqlc
# (3) write user unit tests
# (4) require.True(t, user.PasswordChangedAt.IsZero()) checks that password_changed_at defaults to the zero timestamp
# (5) if we run the package tests, there will be errors due to the foreign key constraint
# (6) bind users to accounts
# (1) run make test: api/account_test.go fails
# as new func added into querier.go
# but not added into mock/store.go
$ make test
# (2) rerun mock
$ make mock
# CreateUser and GetUser func added
# (3) rerun test
$ make test
# all passed
# (1) create new account
# (2) change in api/accounts.go
if pqErr, ok := err.(*pq.Error); ok {
// used to check error name
// log.Println(pqErr.Code.Name())
switch pqErr.Code.Name() {
case "foreign_key_violation", "unique_violation":
ctx.JSON(http.StatusForbidden, errorResponse(err))
return
}
}
# (3) if we send the same account twice
# 2023/01/06 00:46:48 unique_violation
# (4) test in API
now the response code is 403 Forbidden
# (1) change to EUR then make call
# (2) check db
# (1) create password.go
# (2) create password_test.go
# (3) replace to user_test.go
# (4) run all test
$ go test -v --cover -count=1 ./...
# all passed
a random salt is embedded in each hash, so hashing the same password twice produces different values
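A minimal sketch of what password.go might contain using golang.org/x/crypto/bcrypt; GenerateFromPassword embeds a fresh random salt in every hash, which is why two hashes of the same password differ:

```go
import (
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

// HashPassword returns the bcrypt hash of the password (salt included).
func HashPassword(password string) (string, error) {
	hashed, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
	if err != nil {
		return "", fmt.Errorf("failed to hash password: %w", err)
	}
	return string(hashed), nil
}

// CheckPassword reports whether the password matches the stored hash.
func CheckPassword(password, hashedPassword string) error {
	return bcrypt.CompareHashAndPassword([]byte(hashedPassword), []byte(password))
}
```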
# (1) create user api handler
# (2) register in server.go
# (3) start server
$ make server
# (4) send request
# successful
# (5) send 2nd time
# got 403 forbidden error
# "error": "pq: duplicate key value violates unique constraint \"users_pkey\""
# (6) create with same email address
# got 403 forbidden error
# "error": "pq: duplicate key value violates unique constraint \"users_email_key\""
# (7) these errors point to the users table's two constraints
# (7.1) users_pkey
# (7.2) users_email_key
# (8) invalid username
# 400 bad request
# "error": "Key: 'createUserRequest.Username' Error:Field validation for 'Username' failed on the 'alphanum' tag"
# (9) invalid email
# 400 bad request
# "error": "Key: 'createUserRequest.Email' Error:Field validation for 'Email' failed on the 'email' tag"
# (10) invalid password too short
# 400 bad request
# "error": "Key: 'createUserRequest.Password' Error:Field validation for 'Password' failed on the 'min' tag"
# (11) we should not return hashed password
# (12) add createUserResponse struct
# (13) restart server
# there is no hashed password returned
(1) gomock.Any() cannot catch a buggy handler that passes empty params:
arg = db.CreateUserParams{}
(2) nor one that hashes the wrong password:
hashedPassword, err := util.HashPassword("xyz")
(3) so create a customized matcher to check the params, as sketched below
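A hedged sketch of such a matcher; it implements gomock's Matcher interface (Matches and String), and the names here are assumptions:

```go
import (
	"fmt"
	"reflect"
)

// eqCreateUserParamsMatcher matches CreateUserParams whose hashed password
// actually verifies against the expected plaintext password.
type eqCreateUserParamsMatcher struct {
	arg      db.CreateUserParams
	password string
}

func (e eqCreateUserParamsMatcher) Matches(x interface{}) bool {
	arg, ok := x.(db.CreateUserParams)
	if !ok {
		return false
	}
	// the hash is salted, so compare via CheckPassword instead of ==
	if err := util.CheckPassword(e.password, arg.HashedPassword); err != nil {
		return false
	}
	e.arg.HashedPassword = arg.HashedPassword
	return reflect.DeepEqual(e.arg, arg)
}

func (e eqCreateUserParamsMatcher) String() string {
	return fmt.Sprintf("matches arg %v and password %v", e.arg, e.password)
}
```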
- Symmetric digital signature algorithm
- The same secret key is used to sign & verify token
- for local use: internal services, where the secret key can be shared
- HS256, HS384, HS512
- HS256 = HMAC + SHA256
- HMAC: Hash-based Message Authentication Code
- SHA: Secure Hash Algorithm
- 256/384/512: number of output bits
- Asymmetric digital signature algorithm
- The private key is used to sign token
- The public key is used to verify token
- For public use: internal service signs token, but external service needs to verify it
- RS256, RS384, RS512 || PS256, PS384, PS512 || ES256, ES384, ES512
- RS256 = RSA PKCSv1.5 + SHA256 [PKCS: Public-Key Cryptography Standards]
- PS256 = RSA PSS + SHA256 [PSS: Probabilistic Signature Scheme]
- ES256 = ECDSA + SHA256 [ECDSA: Elliptic Curve Digital Signature Algorithm]
- Weak Algorithms
- Give developers too many algorithms to choose
- Some algorithms are known to be vulnerable:
- RSA PKCSv1.5: Padding oracle attack
- ECDSA: invalid-curve attack
- Trivial Forgery
- Set "alg" header to "none"
- Set "alg" header to "HS256" while the server normally verifies token with a RSA public key
- Stronger algorithms
- Developers don't have to choose the algorithm
- Only need to select the version of PASETO
- Each version has 1 strong cipher suite
- Only 2 most recent PASETO versions are accepted
- Non-trivial Forgery
- No more "alg" header or "none" algorithm
- Everything is authenticated
- Encrypted payload for local use
# (1) download uuid pkg
$ go get github.com/google/uuid
# (2) install JWT
$ go get github.com/dgrijalva/jwt-go
# (3) implement the missing func of the jwt.Claims interface (see the sketch below)
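A sketch of the token payload and the missing interface func: dgrijalva/jwt-go's Claims interface requires Valid() error, and the field names here are assumptions:

```go
import (
	"errors"
	"time"

	"github.com/google/uuid"
)

// Payload contains the payload data of the token.
type Payload struct {
	ID        uuid.UUID `json:"id"`
	Username  string    `json:"username"`
	IssuedAt  time.Time `json:"issued_at"`
	ExpiredAt time.Time `json:"expired_at"`
}

// Valid satisfies jwt-go's Claims interface, which is the "missing func"
// the compiler complains about; it rejects expired tokens.
func (payload *Payload) Valid() error {
	if time.Now().After(payload.ExpiredAt) {
		return errors.New("token has expired")
	}
	return nil
}
```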
# (1) download PASETO
$ go get github.com/o1egl/paseto
# (1) add token config in config.go
# (2) add config var to NewServer func in server.go, so it will support token creation
# (3) add loginUser handler in user.go
# (4) add new endpoint in server.go
# (5) start api call to test
the json variable is different
tokenMaker, err := token.NewPasetoMaker(config.TokenSymmetricKey)
if err != nil {
return nil, fmt.Errorf("cannot create token maker: %w", err)
}
# (1) add middleware
# (2) add middleware unit test
# (3) Account and Transfer call will call middleware first
# (4) change the Account and Transfer UTs, as requests now go through the middleware first
# (5) createAccount via authorizationPayload
A logged-in user can only create an account for him/herself
a logged-in user can only get accounts that he/she owns
a logged-in user can only list accounts that belong to him/her
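A condensed sketch of how such a middleware enforces these rules; the constant names and the token.Maker interface with VerifyToken follow this repo's token package and are assumptions here:

```go
import (
	"errors"
	"net/http"
	"strings"

	"github.com/gin-gonic/gin"
)

const (
	authorizationHeaderKey  = "authorization"
	authorizationTypeBearer = "bearer"
	authorizationPayloadKey = "authorization_payload"
)

// authMiddleware verifies the bearer token and stores its payload in the
// context so handlers can check account ownership.
func authMiddleware(tokenMaker token.Maker) gin.HandlerFunc {
	return func(ctx *gin.Context) {
		authHeader := ctx.GetHeader(authorizationHeaderKey)
		if len(authHeader) == 0 {
			err := errors.New("authorization header is not provided")
			ctx.AbortWithStatusJSON(http.StatusUnauthorized, errorResponse(err))
			return
		}

		fields := strings.Fields(authHeader)
		if len(fields) < 2 || strings.ToLower(fields[0]) != authorizationTypeBearer {
			err := errors.New("invalid authorization header format")
			ctx.AbortWithStatusJSON(http.StatusUnauthorized, errorResponse(err))
			return
		}

		payload, err := tokenMaker.VerifyToken(fields[1])
		if err != nil {
			ctx.AbortWithStatusJSON(http.StatusUnauthorized, errorResponse(err))
			return
		}

		ctx.Set(authorizationPayloadKey, payload)
		ctx.Next()
	}
}
```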
# (1) update list accounts query add owner condition
# (2) update query
$ make sqlc
# (3) regenerate mockstore for our api unit test
$ make mock
A logged-in user can only send money from his/her own account
- the token expires after 15m, as defined in app.env
- list accounts with active token
- transfer money
- unauthorized user cannot make txn