ivanspasov99/golang-api

Task Definition

Table of Contents

  1. API Docs
  2. Full Software Lifecycle
  3. Job Handler
  4. Testing
  5. Logging Package
  6. Graph Algorithm
  7. Security

API Docs

Prerequisites

  1. Installed go CLI

Execute the following steps to run the application:

  1. git clone https://github.com/ivanspasov99/golang-api
  2. Run go run main.go in the root of the repository
POST /job?mode={mode} accepts a job with tasks and returns the ordered commands in a format that depends on the `mode` query parameter. `mode=[bash, json]`

Query

| name | type     | data type | description                                            | default |
|------|----------|-----------|--------------------------------------------------------|---------|
| mode | optional | string    | required response format; JSON and Bash are supported  | JSON    |

Responses

| http code | Content-Type     | Request                                              | Response                     |
|-----------|------------------|------------------------------------------------------|------------------------------|
| 200       | application/json | Example Request                                      | Example Response             |
| 200       | text             | Example Request                                      | Example Response             |
| 400       | application/json | Request which contains a cycle between tasks         | Cycle not allowed            |
| 400       | application/json | Request which references a task that does not exist  | Vertex (Task) does not exist |
Example JSON Request

curl -d @testing/input.json http://localhost:8080

Example JSON Response
[
  {
    "name":"task-1",
    "command":"touch /tmp/file1"
  },
  {
    "name":"task-3",
    "command":"echo 'Hello World!' > /tmp/file1"
  },
  {
    "name":"task-2",
    "command":"cat /tmp/file1"
  },
  {
    "name":"task-4",
    "command":"rm /tmp/file1"
  }
]
Example Bash Request

curl -d @testing/input.json "http://localhost:8080?mode=bash" | bash

Example Bash Response
#!/usr/bin/env bash
touch /tmp/file1
echo "Hello World!" > /tmp/file1
cat /tmp/file1
rm /tmp/file1

Full Software Lifecycle

What should be added to make the service production-ready.

API docs could be added using Swagger (OpenAPI).

Architecture

Diagrams should be added

  • Block Diagram
  • Flow Diagrams

Package

  • All configuration will be passed as environment variables through the Helm charts. There is an example of how they should be handled in Config, as well as an example client for k8s cluster communication (a sketch of reading such configuration follows below)
  • Using Helm & Kubernetes
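
A minimal sketch, with illustrative variable names rather than the ones actually used in the repository, of how chart-provided environment variables could be collected into a config struct:

```go
package config

import (
	"fmt"
	"os"
)

// Config holds the values injected by the Helm chart as environment variables.
// The field and variable names here are illustrative only.
type Config struct {
	Port     string
	LogLevel string
}

// FromEnv reads the configuration from the environment and fails fast
// when a required variable is missing.
func FromEnv() (Config, error) {
	port, ok := os.LookupEnv("PORT")
	if !ok {
		return Config{}, fmt.Errorf("required environment variable PORT is not set")
	}
	return Config{
		Port:     port,
		LogLevel: os.Getenv("LOG_LEVEL"), // optional; the logger applies a default
	}, nil
}
```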

CI

Executed on Every PR/Merge

  • Build - Using Docker
  • Integration - Jenkins or Azure. Execution of unit and performance tests
  • Generate Version and Release

Executed on Merge

  • Push Image to Container Registry (GCP)
  • Push Helm Chart to Artifact Registry (GCP)
  • Security and compliance checks on the built image

CD

Depends on whether Continuous Delivery, Continuous Deployment, or Progressive Delivery is set up and what Service Mesh is used

  • Configure Environment Configuration Repository (keeps all microservice versions; used for release/promotion)
  • Deploy with Argo CD
  • Configure Cluster Bootstrapping if multiple clusters are used for different environments
  • Secrets Management - External Secrets Operator
  • Centralized Secret Management Platform - HashiCorp Vault

Monitor

The tools are those specified in the Job Handler section.

Job Handler

The Job Handler handles every job in a separate goroutine. A real big-data scenario is quite possible, so performance is crucial; it has been taken into account and the time complexity is linear - Graph Implementation O(n + e).

  • The handler could be built with the Gin HTTP framework, as it gives us greater flexibility and ready-made features. Some boilerplate would be removed (HTTP verb management)
  • Encoding/decoding of special symbols is not taken into account
  • Job processing is separated into two middlewares using the chain of responsibility pattern - job.Handle and job.HandleError. Both will grow in the future, so they should be separate abstractions (see the sketch after this list)
  • More middlewares could be added with the same technique (e.g. an Authorization module)
  • Tests for job.Handle and job.HandleError are skipped. They are required, but would be much the same as most of the existing ones: Writer and Request would be mocked and all scenarios would be tested.
  • Monitoring/Alerting is out of scope. It could be done with different tools depending on requirements
    • Sentry - error alerting; could alert the DoD (developer on duty) about errors which should be processed immediately
    • Kibana - log analysis tool
    • Prometheus - resource/performance analysis tool
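
A minimal sketch of the chain-of-responsibility middleware idea, using hypothetical signatures (the repository's job.Handle and job.HandleError may differ):

```go
package main

import (
	"log"
	"net/http"
)

// handlerFunc is a handler that can return an error, so the error-handling
// middleware decides how to translate it into an HTTP response.
// (Hypothetical signature for illustration only.)
type handlerFunc func(w http.ResponseWriter, r *http.Request) error

// handleError is the outer middleware in the chain: it calls the next handler
// and converts any returned error into an HTTP error response.
func handleError(next handlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if err := next(w, r); err != nil {
			log.Printf("job failed: %v", err)
			http.Error(w, err.Error(), http.StatusBadRequest)
		}
	}
}

// handleJob is the inner handler: it would decode the tasks, order them and
// write the response, returning an error instead of writing it itself.
func handleJob(w http.ResponseWriter, r *http.Request) error {
	// ... decode tasks, topologically sort, encode response ...
	return nil
}

func main() {
	http.HandleFunc("/", handleError(handleJob))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```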

Testing

Tests are written using table-driven testing rather than BDT (Behaviour Driven Testing). The output when a test fails could be improved, as that would bring big value in debugging faster.
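
A minimal sketch of the table-driven style, using a hypothetical toBashScript helper so the example is self-contained:

```go
package job

import (
	"strings"
	"testing"
)

// toBashScript is a small helper defined here only to make the example
// self-contained; in the repository the real job/graph functions are tested.
func toBashScript(commands []string) string {
	return "#!/usr/bin/env bash\n" + strings.Join(commands, "\n")
}

// TestToBashScript shows the table-driven style: each case is a row in the
// table and t.Run gives it a descriptive name, which also helps failure output.
func TestToBashScript(t *testing.T) {
	tests := []struct {
		name     string
		commands []string
		want     string
	}{
		{
			name:     "single command",
			commands: []string{"touch /tmp/file1"},
			want:     "#!/usr/bin/env bash\ntouch /tmp/file1",
		},
		{
			name:     "multiple commands keep order",
			commands: []string{"touch /tmp/file1", "rm /tmp/file1"},
			want:     "#!/usr/bin/env bash\ntouch /tmp/file1\nrm /tmp/file1",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := toBashScript(tt.commands); got != tt.want {
				t.Errorf("toBashScript() = %q, want %q", got, tt.want)
			}
		})
	}
}
```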

Logging Package

The package encapsulates production JSON logging, which is required by many log analysis tools.

The logging package could be extended with dynamic logging and a log level state, i.e. the option to change the level of logging (debug, warn, info, error) at runtime. This helps generate fewer logs when they are not needed and more logs when a problem arises, for debugging purposes. It could be implemented through a ConfigMap deployed in the k8s cluster separately from the application, which is then consumed/read as an environment variable in the code.
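
A minimal sketch, assuming Go 1.21+ (log/slog) and a hypothetical LOG_LEVEL environment variable populated from such a ConfigMap:

```go
package main

import (
	"log/slog"
	"os"
	"strings"
)

// levelFromEnv maps the hypothetical LOG_LEVEL env variable to an slog level,
// defaulting to info when the variable is missing or unknown.
func levelFromEnv() slog.Level {
	switch strings.ToLower(os.Getenv("LOG_LEVEL")) {
	case "debug":
		return slog.LevelDebug
	case "warn":
		return slog.LevelWarn
	case "error":
		return slog.LevelError
	default:
		return slog.LevelInfo
	}
}

func main() {
	// LevelVar allows the level to be changed at runtime, e.g. after re-reading
	// the ConfigMap, without rebuilding the logger.
	var level slog.LevelVar
	level.Set(levelFromEnv())

	logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{Level: &level}))
	logger.Info("server started", "port", 8080)
	logger.Debug("emitted only when LOG_LEVEL=debug")
}
```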

Graph Algorithm

It is better to use already-implemented packages which are community adopted and tested, but I decided to refresh my skills a little bit.

It would be best implemented using generics, as it is currently limited to a single type/struct.

Time Complexity - O(n + e), where n is the number of vertices (tasks) and e the number of edges (dependencies)

The implementation uses maps, which do not guarantee the order of the topological sorting. It could be implemented with slices, or improved with the following custom map key logic:

// key will be used as map key
type key struct {
	name string
}

// graph will look like
type graph struct {
	vertices map[key]*Vertex
	edges    map[key]*Edge
}

// Less reports whether k should be sorted before other,
// giving the keys a deterministic order
func (k key) Less(other key) bool {
	return k.name < other.name
}
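
A sketch of how the vertices could then be visited deterministically; Vertex is the repository's own type and the standard sort package is assumed to be imported, so this helper is illustrative only:

```go
// sortedKeys returns the vertex keys ordered by key.Less, so the
// topological sort always visits vertices in the same sequence.
func sortedKeys(vertices map[key]*Vertex) []key {
	keys := make([]key, 0, len(vertices))
	for k := range vertices {
		keys = append(keys, k)
	}
	sort.Slice(keys, func(i, j int) bool { return keys[i].Less(keys[j]) })
	return keys
}
```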

Security

Security is not part of the task.
