
BTC Service


CI Workflow

BTC Service handles BTC transactions and User balance data.


Project Summary

Item                          Description
Golang Version                1.19
Database                      TimescaleDB and pgx
Database Documentation        SchemaSpy
Cache                         Redis and go-redis
Migration                     migrate
Mock                          mockgen
Linter                        GolangCI-Lint
Testing                       testing and testify/assert
Load Testing                  ghz
API                           gRPC and gRPC-Gateway
Application Architecture      Clean Architecture
Directory Structure           Standard Go Project Layout
CI (Lint, Test, Generate)     GitHub Actions
Visualize Code Diagram        go-callvis
Sequence Diagram              Mermaid
Protobuf Operations           buf
Instrumentation               OpenTelemetry and Jaeger
Logger                        zap
Messaging                     RabbitMQ and amqp091-go

Architecture Diagram


Excalidraw link


Installation

1. Set Up Golang Development Environment

See the following page to download and install Golang.

https://go.dev/doc/install

2. Install Development Utility Tools

You can install all the tools needed for developing and deploying this service by running:

$ go mod download
$ make install

Development workflow and guidelines

1. API

This project uses gRPC and Protocol Buffers, so everything the API needs (the service definition, the list of RPCs, and the entities) is stored in the api/proto directory.

If you are unfamiliar with Protocol Buffers, please visit this link for details:

To generate the proto files, make sure these libraries are installed on your system; please refer to this link:

Validation for this API uses protoc-gen-validate; for details, please refer to this library:

This service also implements gRPC-Gateway with this library:

Generating the gRPC-Gateway and OpenAPI files requires these additional packages:

  • github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway
  • github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2

Then the Protobuf files can be generated with this command:

$ make protoc

NOTE:

If you have any difficulties installing the dependencies needed to generate the proto files, you can simply build the Docker image and use that instead.

Here are the two commands for building the image and generating the files:

make build-protoc
make docker-protoc

2. TimescaleDB + GUI

pgAdmin

NOTE

This project supports database replication, which requires two databases: a Master and a Slave. If only one database exists, you can disable replication on the app side by setting the env variable IS_REPLICA from true to false.
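
As an illustration, read routing on the app side based on IS_REPLICA might look like this (a minimal sketch assuming pgx v5, not the service's actual wiring):

package db

import (
	"os"

	"github.com/jackc/pgx/v5/pgxpool"
)

// ReadPool picks the pool used for read queries: the Slave when
// IS_REPLICA=true, otherwise the Master. Writes always go to the Master.
// Illustrative only; the real service may resolve this differently.
func ReadPool(master, slave *pgxpool.Pool) *pgxpool.Pool {
	if os.Getenv("IS_REPLICA") == "true" {
		return slave
	}
	return master
}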

TimescaleDB and its GUI (pgAdmin) can be run locally with the following docker-compose commands:

$ docker-compose -f ./development/docker-compose.yml up timescaledb-master
$ docker-compose -f ./development/docker-compose.yml up timescaledb-slave
$ docker-compose -f ./development/docker-compose.yml up pgadmin

NOTE: TimescaleDB will use ports 5432 (Master) and 5433 (Slave), and pgAdmin will use 5050; please make sure those ports are unused on your system. If a port conflicts, you can change it in the development/docker-compose.yml file.

The default email & password for pgAdmin are:

  • email: admin@admin.com
  • password: admin123

Use the following TimescaleDB Master connection info:

  • host: timescaledb-master -> change this to localhost if you try to connect from outside Docker
  • port: 5432
  • username: test
  • password: test
  • db: test

And this is the connection info for the TimescaleDB Slave:

  • host: timescaledb-slave -> change this to localhost if you try to connect from outside Docker
  • port: 5433
  • username: test
  • password: test
  • db: test

If you don't have docker-compose installed, please refer to this page: https://docs.docker.com/compose/
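
For reference, connecting to the Master from Go with pgx (the driver listed in the Project Summary) might look like this minimal sketch, assuming pgx v5 and the credentials above:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	// Master DSN from the info above; use localhost when connecting
	// from outside Docker (the Slave is the same on port 5433).
	dsn := "postgres://test:test@localhost:5432/test"

	pool, err := pgxpool.New(context.Background(), dsn)
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer pool.Close()

	var now time.Time
	if err := pool.QueryRow(context.Background(), "SELECT now()").Scan(&now); err != nil {
		log.Fatalf("query: %v", err)
	}
	fmt.Println("connected, server time:", now)
}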

3. Migration

Make sure the database is already running. After that, we need some tables and dummy data in order to test the application; please run this command to do the migration:

$ docker-compose -f ./development/docker-compose.yml up migration

This migration also seeds some test data, because creating a transaction requires an existing User ID. With these seeds, we have 5 test users, with IDs 1 to 5.

4. Database Schema

SchemaSpy

If you want to check the overall database schema, you can use SchemaSpy, a browser-based UI tool.

The docker-compose service for SchemaSpy already exists; just make sure timescaledb-master is running and the migration is done, then you can run this command:

$ docker-compose -f ./development/docker-compose.yml up schemaspy

The HTML and asset files will be generated under the development/schemaspy/output directory.

5. Cache

When getting the transaction list and the user balance, a cache implemented with Redis acts as a middle layer to avoid calling the main DB too frequently.

To start running Redis, there's a docker-compose command available:

$ docker-compose -f ./development/docker-compose.yml up redis
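
The cache-aside pattern involved generally looks like the sketch below (illustrative only; the key format, TTL, and go-redis major version are assumptions, not the service's actual code):

package cache

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// GetUserBalance reads the balance from Redis first, falls back to the
// main DB on a miss, then backfills the cache.
func GetUserBalance(ctx context.Context, rdb *redis.Client, userID int64,
	fromDB func(context.Context, int64) (float64, error)) (float64, error) {

	// Key format is an illustrative assumption.
	key := fmt.Sprintf("user_balance:%d", userID)

	// Cache hit: serve from Redis without touching the DB.
	if val, err := rdb.Get(ctx, key).Float64(); err == nil {
		return val, nil
	}

	// Cache miss: read from the main DB and backfill with a short TTL.
	balance, err := fromDB(ctx, userID)
	if err != nil {
		return 0, err
	}
	rdb.Set(ctx, key, balance, time.Minute)
	return balance, nil
}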

6. Instrumentation

Jaeger

This service implements OpenTelemetry (https://opentelemetry.io/) for instrumentation in order to measure performance. The data is exported to Jaeger and can be viewed in the Jaeger UI at http://localhost:16686

To run Jaeger, use this docker-compose command:

$ docker-compose -f ./development/docker-compose.yml up jaeger
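
Tracer setup along these lines is typical (a minimal sketch assuming the go.opentelemetry.io/otel/exporters/jaeger exporter from this Go 1.19 era; the service's actual setup may differ):

package main

import (
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/jaeger"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// initTracer wires a TracerProvider that exports spans to the Jaeger
// collector endpoint, e.g. the OTEL_AGENT value
// http://localhost:14268/api/traces used in the run config below.
func initTracer(endpoint string) (*sdktrace.TracerProvider, error) {
	exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(endpoint)))
	if err != nil {
		return nil, err
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	otel.SetTracerProvider(tp)
	return tp, nil
}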

7. Unit Test

Make sure the database is already running, then you can simply execute the following command to run all test cases in this service:

$ make test

8. Linter

To run the linter, make sure these libraries are already installed on your system:

Then the Go and Proto code style can be checked with this command:

$ make lint

9. Mock

This service uses mocks in several places, such as the repository, usecase, and pkg layers. To automatically update a mock when its interface changes, run the go generate command:

$ make mock
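
Under the hood, make mock typically relies on go:generate directives placed next to each interface; a hypothetical example (the file and interface names here are assumptions):

package repository

import "context"

//go:generate mockgen -source=repository.go -destination=repository_mock.go -package=repository

// BTCRepository is a hypothetical interface; running `make mock`
// (which runs go generate) regenerates its mock whenever it changes.
type BTCRepository interface {
	GetUserBalance(ctx context.Context, userID int64) (float64, error)
}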

10. Run the service

To run the service, you need the database running and some env variables set:

# app config
export APP_ENV=dev
export SERVER_PORT=8080

# master db config
export POSTGRES_USER_MASTER=test
export POSTGRES_PASSWORD_MASTER=test
export POSTGRES_HOST_MASTER=localhost
export POSTGRES_PORT_MASTER=5432
export POSTGRES_DB_MASTER=test

# slave db config
export POSTGRES_USER_SLAVE=test
export POSTGRES_PASSWORD_SLAVE=test
export POSTGRES_HOST_SLAVE=localhost
export POSTGRES_PORT_SLAVE=5433
export POSTGRES_DB_SLAVE=test

# use replica config
export IS_REPLICA=true

# tracing config
export OTEL_AGENT=http://localhost:14268/api/traces

# cache config
export REDIS_HOST=localhost:6379

Or you can just execute the shell script:

$ ./scripts/run.sh
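
Inside the service, loading this config could look like the following sketch (the names are illustrative assumptions, not the actual config package):

package config

import "os"

// Config mirrors the environment variables listed above.
type Config struct {
	AppEnv     string
	ServerPort string
	RedisHost  string
	OtelAgent  string
	IsReplica  bool
}

// getenv returns the env value, or a fallback default when unset.
func getenv(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

// Load reads the env variables, defaulting to the local values above.
func Load() Config {
	return Config{
		AppEnv:     getenv("APP_ENV", "dev"),
		ServerPort: getenv("SERVER_PORT", "8080"),
		RedisHost:  getenv("REDIS_HOST", "localhost:6379"),
		OtelAgent:  getenv("OTEL_AGENT", "http://localhost:14268/api/traces"),
		IsReplica:  getenv("IS_REPLICA", "true") == "true",
	}
}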

11. Test the service

An example of how to call the gRPC service from Go can be seen in the example-client file.

NOTE: Testing this service requires the migration to be done. After that, you can use User IDs from 1 to 5. A minimal client is sketched below.
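
A minimal client along these lines should work (the generated package import path is an assumption; the RPC and fields match the proto shown later in this README):

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/protobuf/types/known/timestamppb"

	// Assumed generated package path; check the repo for the real one.
	pb "github.com/moemoe89/btc/api/go/grpc"
)

func main() {
	conn, err := grpc.Dial("localhost:8080",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := pb.NewBTCServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// User IDs 1 to 5 exist after the migration seed.
	res, err := client.CreateTransaction(ctx, &pb.CreateTransactionRequest{
		UserId:   1,
		Datetime: timestamppb.Now(),
		Amount:   100,
	})
	if err != nil {
		log.Fatalf("create transaction: %v", err)
	}
	log.Printf("created: %v", res)
}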

If you want to test with a GUI client, you can use either BloomRPC (though it is no longer maintained) or Postman. For details, please visit these links:

Basically, you just need to import the api/proto/service.proto file to test via BloomRPC / Postman.

NOTE: You may run into issues when importing the proto file into BloomRPC or Postman. They are caused by import path resolution and the use of the gRPC-Gateway and protoc-gen-validate libraries. To solve them, the proto file needs a few modifications.

BloomRPC


BloomRPC will have these issues when trying to import the proto file:

Error while importing protos
illegal name ';' (/path/btc/api/proto/service.proto, line 20)

Error while importing protos
no such type: e.Transaction

To fix this, change the import path from:

import "proto/entity.proto";

To this:

import "../proto/entity.proto";

and remove the gRPC-Gateway-related annotations:

option (grpc.gateway.protoc_gen_openapiv2.options.openapiv2_swagger) = {
  ...
};

Postman


There are similar issues when importing into Postman. Basically, we need the same changes as for BloomRPC, plus disabling the validate and openapiv2 annotation imports. Change this:

import "proto/entity.proto";
import "validate/validate.proto";
import "protoc-gen-openapiv2/options/annotations.proto";

To this:

import "../proto/entity.proto";
// import "validate/validate.proto";
// import "protoc-gen-openapiv2/options/annotations.proto";

Also, don't forget to set the import path, e.g. {YOUR-DIR}/btc/api/proto.

gRPC-Gateway

Swagger

This service has an HTTP server built with gRPC-Gateway. If you prefer to test over HTTP instead of the HTTP/2 protocol, you can take the Swagger file at api/openapiv2/proto/service.swagger.json and paste it into https://editor.swagger.io/

By default, the HTTP server runs on the gRPC port + 1: if the gRPC port is 8080, the HTTP server will run on 8081.

12. Load Testing

ghz

To make sure the service is ready to handle heavy traffic, it is best to do load testing to check its performance.

Since the service runs on gRPC, we need a tool that supports HTTP/2 requests. Here we use https://ghz.sh/ because it is very simple and can generate various report output formats.

NOTE: As with importing the proto file into BloomRPC / Postman, running ghz will produce errors because the tool can't resolve the import paths and the validate library.

Here are some errors you may see when running the ghz commands:

  • ./api/proto/service.proto:5:8: open api/proto/proto/entity.proto: no such file or directory
  • ./api/proto/service.proto:7:8: open api/proto/validate/validate.proto: no such file or directory
  • ./api/proto/service.proto:29:22: field CreateTransactionRequest.user_id: unknown extension validate.rules

To fix these, change the imports in the proto file from:

import "proto/entity.proto";
import "validate/validate.proto";
import "google/api/annotations.proto";
import "protoc-gen-openapiv2/options/annotations.proto";

To this:

import "../proto/entity.proto";
// import "validate/validate.proto";
// import "google/api/annotations.proto";
// import "protoc-gen-openapiv2/options/annotations.proto";

And remove the validation rules on each field, changing this:

// CreateTransactionRequest
message CreateTransactionRequest {
  // (Required) The ID of User.
  int64 user_id = 1 [(validate.rules).int64.gte = 1];
  // (Required) The date and time of the created transaction.
  google.protobuf.Timestamp datetime = 2 [(validate.rules).timestamp.required = true];
  // (Required) The amount of the transaction, should not be 0.
  float amount = 3 [(validate.rules).float = {gte: 0.1, lte: -0.1}];
}

To this:

// CreateTransactionRequest
message CreateTransactionRequest {
  // (Required) The ID of User.
  int64 user_id = 1;
  // (Required) The date and time of the created transaction.
  google.protobuf.Timestamp datetime = 2;
  // (Required) The amount of the transaction, should not be 0.
  float amount = 3;
}

and remove the gRPC-Gateway-related annotations:

option (grpc.gateway.protoc_gen_openapiv2.options.openapiv2_swagger) = {
  ...
};

Then you can run a ghz command to load test a specific RPC, for example:

1. CreateTransaction RPC:

ghz --insecure --proto ./api/proto/service.proto --call BTCService.CreateTransaction -d '{ "user_id": 1, "datetime": { "seconds": 1676339196, "nanos": 0 }, "amount": 100 }' 0.0.0.0:8080 -O html -o load_testing_create_transaction.html

2. ListTransaction RPC:

ghz --insecure --proto ./api/proto/service.proto --call BTCService.ListTransaction -d '{ "user_id": 1, "start_datetime": { "seconds": 1676339196, "nanos": 0 }, "end_datetime": { "seconds": 1676339196, "nanos": 0 } }' 0.0.0.0:8080 -O html -o load_testing_list_transaction.html

3. GetUserBalance RPC:

ghz --insecure --proto ./api/proto/service.proto --call BTCService.GetUserBalance -d '{ "user_id": 1 }' 0.0.0.0:8080 -O html -o load_testing_get_user_balance.html

13. Messaging

To avoid failures when creating a transaction, and to support easy retries, there is a simple event-based system using RabbitMQ.

To test the event-based flow, you need to run RabbitMQ, the server, and the consumer:

$ docker-compose -f ./development/docker-compose.yml up timescaledb-master timescaledb-slave pgadmin jaeger rabbitmq
$ ./scripts/run.sh
$ ./scripts/run-consumer.sh

After that, you can try sending a message by running the example publisher:

go run ./scripts/example-publish
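
Publishing with amqp091-go generally looks like the sketch below (the queue name and payload are illustrative assumptions, not the script's actual values):

package main

import (
	"context"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// Default local RabbitMQ credentials; adjust if yours differ.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatalf("channel: %v", err)
	}
	defer ch.Close()

	// Queue name is an assumption; declare it durable so messages
	// survive a broker restart.
	q, err := ch.QueueDeclare("btc.create_transaction", true, false, false, false, nil)
	if err != nil {
		log.Fatalf("queue: %v", err)
	}

	// Payload mirrors the CreateTransaction fields; values are examples.
	body := `{"user_id":1,"datetime":"2023-02-14T00:00:00Z","amount":100}`
	err = ch.PublishWithContext(context.Background(), "", q.Name, false, false,
		amqp.Publishing{ContentType: "application/json", Body: []byte(body)})
	if err != nil {
		log.Fatalf("publish: %v", err)
	}
	log.Println("published")
}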

NOTE

If you have any difficulty running the service, you can simply start all dependencies with docker-compose, for example:

docker-compose -f ./development/docker-compose.yml up

Then all services will be running: timescaledb-master, timescaledb-slave, pgadmin, jaeger, rabbitmq, and redis, along with the migration and the btc-server + btc-consumer.

Project Structure

This project follows https://github.com/golang-standards/project-layout

However, for clearer direction when working in this project, here is a short guide to each directory:

  • api: Protobuf files, generated protobuf code, swagger, etc.
  • build: Dockerfiles for the service, migration, etc.
  • cmd: main Go files for running the service, producer, consumer, etc.
  • development: files that support development, like docker-compose.
  • docs: project documentation such as diagrams, sequence diagrams, etc.
  • internal: internal code that can't be shared.
  • migrations: database migration files.
  • pkg: package code that can be shared.
  • scripts: shell and Go scripts that help build or test things.
  • tools: tool dependencies tracked in go.mod so they are easy to install.

GitHub Actions CI


This project has GitHub Actions CI for automation such as lint, test, and generate checks.

Documentation

Visualize Code Diagram


To give a better understanding of the code, such as the relations between packages and types, here are some diagrams generated automatically using https://github.com/ofabry/go-callvis:

  1. main diagram
  2. di diagram
  3. handler diagram
  4. usecases diagram
  5. datastore diagram

RPC Sequence Diagram


To give a better understanding of the RPC flow, such as the relations between usecases and repositories, here are some automatically generated sequence diagrams, listed in Markdown files and written in Mermaid JS (https://mermaid-js.github.io/mermaid/) format.

To generate an RPC sequence diagram, there is a Makefile command that can be used:

  1. Run this command to generate a specific RPC: make sequence-diagram RPC=GetData.
  2. To generate multiple RPCs, add the others separated by commas: make sequence-diagram RPC=GetData,GetList.
  3. To generate all RPCs, use the wildcard * as the parameter: make sequence-diagram RPC=*.

The generated diagrams:

  1. CreateTransaction RPC - Sequence Diagram
  2. GetUserBalance RPC - Sequence Diagram
  3. ListTransaction RPC - Sequence Diagram

TODO


In the future, here is the architecture we want to achieve to improve performance:

  • Basically, we need to separate the service into a Create service and a Search service, following the CQRS pattern and using event-based communication between them.

  • The Create service will be responsible only for inserting data and triggering messages to the Search service. To create a transaction, the client needs to publish an event.

  • The Search service will be responsible for searching data, such as searching transactions and getting the user balance. Clients can call this service directly using gRPC or REST (gRPC-Gateway). The Search service also indexes transaction data into a search engine such as Elasticsearch, triggered by events from the Create service. The user balance is still read from the replica database, and both transactions and the user balance have a Redis cache as a middle layer. In case the connection to Elasticsearch fails, the Search service can still get the data directly from the replica database.

Excalidraw
