KubeStock is a production-grade, cloud-native inventory stock management system built on a microservices architecture with comprehensive DevOps practices. The project demonstrates an enterprise-level implementation of Infrastructure as Code (IaC), GitOps-based continuous deployment, containerization, Kubernetes orchestration, and a zero-trust security model across all layers.
- ✅ 100% Infrastructure Automation via Terraform
- ✅ Zero-Touch Deployments with ArgoCD GitOps
- ✅ Multi-AZ High Availability deployment
- ✅ Zero-Trust Security Model across all layers
- ✅ Full-Stack Observability with Prometheus, Grafana, and Loki
- ✅ Automated CI/CD Pipeline with security scanning
- ✅ Auto-Scaling at both Pod and Cluster levels
KubeStock manages warehouse operations through five independent backend microservices plus a React web frontend:
| Service | Port | Database | Purpose |
|---|---|---|---|
| ms-product | 3002 | product_catalog_db | Product catalog management |
| ms-inventory | 3003 | inventory_db | Stock tracking and adjustments |
| ms-supplier | 3004 | supplier_db | Supplier and purchase orders |
| ms-order-management | 3005 | order_db | Order processing and fulfillment |
| ms-identity | 3006 | identity_db | User identity management |
| web (frontend) | 80 | N/A | React-based UI |
| Component | Technology | Version |
|---|---|---|
| Frontend | React + Vite | 19.1.1 / 7.1.7 |
| Backend | Node.js + Express | 18 / 4.18.2 |
| Database | PostgreSQL | 15 |
| API Gateway | Kong (DB-less) | 3.0+ |
| Identity Provider | WSO2 Asgardeo | OAuth 2.0/OIDC |
| Component | Technology | Version |
|---|---|---|
| Cloud Provider | AWS | ap-south-1 |
| IaC | Terraform | >= 1.5 |
| Container Runtime | Docker | Latest |
| Orchestration | Kubernetes | 1.28+ |
| CI/CD | GitHub Actions | Latest |
| GitOps | ArgoCD | 2.9+ |
| Monitoring | Prometheus + Grafana | 10.2+ |
| Logging | Loki + Promtail | Latest |
| Secrets Management | AWS Secrets Manager + ESO | Latest |
```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#326ce5', 'primaryTextColor': '#fff', 'primaryBorderColor': '#326ce5', 'lineColor': '#5D6D7E', 'secondaryColor': '#f0f0f0', 'tertiaryColor': '#fff'}}}%%
graph TB
    subgraph Internet["Internet"]
        Users[("Users / Clients")]
    end
    subgraph AWS["AWS Cloud (ap-south-1)"]
        subgraph VPC["VPC (10.0.0.0/16)"]
            subgraph PublicSubnets["Public Subnets"]
                Bastion["Bastion Host<br/>t3.micro"]
                DevServer["Dev Server<br/>t3.medium"]
                NAT["NAT Gateway"]
            end
            subgraph PrivateSubnets["Private Subnets (3 AZs)"]
                subgraph K8sCluster["Kubernetes Cluster"]
                    ControlPlane["Control Plane<br/>t3.medium"]
                    subgraph WorkerNodes["Worker Nodes"]
                        Worker1["Worker 1<br/>t3.medium"]
                        Worker2["Worker 2<br/>t3.medium"]
                        Worker3["Worker 3<br/>t3.medium"]
                        Worker4["Worker 4<br/>t3.medium"]
                    end
                end
                RDS[("RDS PostgreSQL<br/>db.t3.micro")]
            end
            NLB["Network Load Balancer<br/>(Internal)"]
        end
        subgraph AWSServices["AWS Services"]
            ECR["ECR<br/>Container Registry"]
            SecretsManager["AWS Secrets Manager"]
        end
    end
    subgraph External["External Services"]
        Asgardeo["WSO2 Asgardeo<br/>Identity Provider"]
        GitHub["GitHub<br/>Source Code"]
    end
    Users --> NLB
    NLB --> K8sCluster
    DevServer --> ControlPlane
    DevServer --> WorkerNodes
    Bastion --> NLB
    K8sCluster --> RDS
    K8sCluster --> ECR
    K8sCluster --> SecretsManager
    K8sCluster --> Asgardeo
    GitHub --> ECR
    classDef aws fill:#FF9900,stroke:#232F3E,color:#232F3E
    classDef k8s fill:#326ce5,stroke:#fff,color:#fff
    classDef external fill:#4CAF50,stroke:#2E7D32,color:#fff
    classDef network fill:#9C27B0,stroke:#6A1B9A,color:#fff
    class ECR,SecretsManager,RDS aws
    class ControlPlane,Worker1,Worker2,Worker3,Worker4 k8s
    class Asgardeo,GitHub external
    class NLB,NAT network
```
| Component | Count | Instance Type | Location |
|---|---|---|---|
| Control Plane | 1 | t3.medium | Private Subnet (ap-south-1a) |
| Worker Nodes | 4 | t3.medium | Private Subnets (ap-south-1b, ap-south-1c) |
| Bastion Host | 1 | t3.micro | Public Subnet |
| Dev Server | 1 | t3.medium | Public Subnet |
- VPC CIDR: 10.0.0.0/16
- Multi-AZ Deployment: 3 Availability Zones
- Public Subnets: 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24
- Private Subnets: 10.0.10.0/24, 10.0.11.0/24, 10.0.12.0/24
- NAT Gateway: Single NAT (cost-optimized)
- Internal NLB: Load balancer for K8s API and app traffic
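The subnet layout above could be sketched in Terraform roughly as follows. This is an illustrative sketch only; resource names, variables, and module boundaries are assumptions, not the project's actual code:

```hcl
# Illustrative sketch -- names and structure are assumptions.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_subnet" "public" {
  count             = 3
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.${count.index + 1}.0/24"  # 10.0.1.0/24 .. 10.0.3.0/24
  availability_zone = var.azs[count.index]
}

resource "aws_subnet" "private" {
  count             = 3
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.${count.index + 10}.0/24" # 10.0.10.0/24 .. 10.0.12.0/24
  availability_zone = var.azs[count.index]
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

# Single NAT Gateway in the first public subnet (cost-optimized).
resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id
}
```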
```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#326ce5', 'primaryTextColor': '#fff', 'primaryBorderColor': '#326ce5', 'lineColor': '#5D6D7E'}}}%%
graph TB
    subgraph External["External Traffic"]
        Client[("Users / API Clients")]
    end
    subgraph K8sCluster["Kubernetes Cluster"]
        subgraph KongNS["Namespace: kong"]
            Kong["Kong API Gateway<br/>Rate Limiting | CORS | Prometheus"]
        end
        subgraph AppNS["Namespace: kubestock-staging / production"]
            subgraph Frontend["Frontend"]
                Web["Web Application<br/>(React + Vite)<br/>Port: 80"]
            end
            subgraph Microservices["Backend Microservices"]
                MSProduct["ms-product<br/>Product Catalog<br/>Port: 3002"]
                MSInventory["ms-inventory<br/>Inventory Management<br/>Port: 3003"]
                MSSupplier["ms-supplier<br/>Supplier Management<br/>Port: 3004"]
                MSOrder["ms-order-management<br/>Order Processing<br/>Port: 3005"]
                MSIdentity["ms-identity<br/>Identity Service<br/>Port: 3006"]
            end
        end
        subgraph ObsNS["Namespace: observability"]
            Prometheus["Prometheus<br/>Metrics"]
            Grafana["Grafana<br/>Dashboards"]
            Loki["Loki<br/>Log Aggregation"]
            Promtail["Promtail<br/>Log Collection"]
        end
    end
    subgraph Database["RDS PostgreSQL"]
        ProductDB[("product_catalog_db")]
        InventoryDB[("inventory_db")]
        SupplierDB[("supplier_db")]
        OrderDB[("order_db")]
        IdentityDB[("identity_db")]
    end
    Client --> Kong
    Kong --> Web
    Kong --> MSProduct
    Kong --> MSInventory
    Kong --> MSSupplier
    Kong --> MSOrder
    Kong --> MSIdentity
    MSProduct --> ProductDB
    MSInventory --> InventoryDB
    MSSupplier --> SupplierDB
    MSOrder --> OrderDB
    MSIdentity --> IdentityDB
    MSProduct --> MSInventory
    MSOrder --> MSProduct
    MSOrder --> MSInventory
    MSSupplier --> MSInventory
    Prometheus --> MSProduct
    Prometheus --> MSInventory
    Prometheus --> MSSupplier
    Prometheus --> MSOrder
    Prometheus --> MSIdentity
    Prometheus --> Kong
    Grafana --> Prometheus
    Grafana --> Loki
    Promtail --> Loki
    classDef frontend fill:#61DAFB,stroke:#21a1c4,color:#000
    classDef backend fill:#68A063,stroke:#3E7B27,color:#fff
    classDef db fill:#336791,stroke:#1d4f6e,color:#fff
    classDef gateway fill:#003459,stroke:#00171F,color:#fff
    classDef monitoring fill:#E6522C,stroke:#B33D1C,color:#fff
    class Web frontend
    class MSProduct,MSInventory,MSSupplier,MSOrder,MSIdentity backend
    class ProductDB,InventoryDB,SupplierDB,OrderDB,IdentityDB db
    class Kong gateway
    class Prometheus,Grafana,Loki,Promtail monitoring
```
- Modular Terraform structure with reusable components
- Modules: networking, security, compute, kubernetes, rds, ecr, cicd
- Environment separation: demo and production configurations
- Remote state management in S3 with locking
- Automated apply via GitHub Actions
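A root configuration wiring these modules together might look roughly like the sketch below; the bucket, table, and input names are illustrative assumptions, not the project's actual values:

```hcl
terraform {
  required_version = ">= 1.5"
  backend "s3" {
    bucket         = "kubestock-terraform-state" # assumed bucket name
    key            = "demo/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraform-locks"           # state locking
    encrypt        = true
  }
}

module "networking" {
  source   = "./modules/networking"
  vpc_cidr = "10.0.0.0/16"
}

module "compute" {
  source     = "./modules/compute"
  subnet_ids = module.networking.private_subnet_ids
}
```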
Continuous Integration:
- Detect changed services via Git diff
- Build Docker images with multi-stage builds
- Run security scans (Trivy, npm audit)
- Push to Amazon ECR
- Tag with commit SHA for traceability
Continuous Deployment:
- Update GitOps repository with new image tags
- ArgoCD auto-syncs to staging environment
- Manual approval gate for production
- ArgoCD deploys to production on approval
Pipeline Stages:
- ✅ Lint & Test
- ✅ Security Scanning
- ✅ Docker Build & Push
- ✅ GitOps Update
- ✅ Auto-deploy to Staging
- ⚠️ Manual Production Approval
- ✅ Deploy to Production
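A main-branch job covering the build, scan, and push stages might look roughly like this GitHub Actions sketch. The job layout, service path, and secret name are assumptions; the real pipeline also detects changed services and updates the GitOps repository in separate steps:

```yaml
name: ci
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # OIDC-based AWS auth, no long-lived keys
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}  # assumed secret name
          aws-region: ap-south-1
      - uses: aws-actions/amazon-ecr-login@v2
        id: ecr
      - run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/ms-product:${{ github.sha }} services/ms-product
          docker push ${{ steps.ecr.outputs.registry }}/ms-product:${{ github.sha }}
      - uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ steps.ecr.outputs.registry }}/ms-product:${{ github.sha }}
          exit-code: "1"          # fail the pipeline on findings
          severity: CRITICAL,HIGH
```

Tagging with `${{ github.sha }}` is what gives each image the commit-level traceability noted above.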
```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2196F3', 'lineColor': '#5D6D7E'}}}%%
flowchart LR
    subgraph Developer["Developer Workflow"]
        Dev["Developer"]
        Feature["Feature Branch"]
        PR["Pull Request"]
    end
    subgraph CICD["CI/CD Pipeline (GitHub Actions)"]
        subgraph PRChecks["PR Checks"]
            Detect["Detect Changes"]
            Lint["Lint"]
            Test["Unit Tests"]
            Security["Security Scan<br/>(npm audit)"]
            Build["Docker Build"]
            Trivy["Trivy Scan"]
        end
        subgraph MainPipeline["Main Branch Pipeline"]
            DetectMain["Detect Changes"]
            BuildPush["Build & Push<br/>to ECR"]
            StagingDeploy["Update GitOps<br/>(Staging)"]
            Approval["Manual Approval"]
            ProdDeploy["Update GitOps<br/>(Production)"]
        end
    end
    subgraph GitOps["GitOps Repository"]
        GitOpsRepo[("kubestock-gitops")]
        Staging["overlays/staging"]
        Production["overlays/production"]
    end
    subgraph ArgoCD["ArgoCD"]
        StagingApp["kubestock-staging<br/>(Auto-Sync)"]
        ProdApp["kubestock-production<br/>(Manual Sync)"]
    end
    subgraph K8s["Kubernetes Cluster"]
        StagingNS["Staging"]
        ProdNS["Production"]
    end
    subgraph Registry["ECR"]
        ECR[("Container Images")]
    end
    Dev -->|push| Feature
    Feature -->|create| PR
    PR --> Detect
    Detect --> Lint & Test & Security
    Lint --> Build
    Test --> Build
    Security --> Build
    Build --> Trivy
    PR -->|merge| DetectMain
    DetectMain --> BuildPush
    BuildPush --> ECR
    BuildPush --> StagingDeploy
    StagingDeploy --> Staging
    Staging --> StagingApp
    StagingApp --> StagingNS
    StagingDeploy --> Approval
    Approval --> ProdDeploy
    ProdDeploy --> Production
    Production --> ProdApp
    ProdApp --> ProdNS
    ECR --> StagingNS
    ECR --> ProdNS
    classDef dev fill:#E8F5E9,stroke:#4CAF50,color:#000
    classDef ci fill:#E3F2FD,stroke:#2196F3,color:#000
    classDef gitops fill:#FFF3E0,stroke:#FF9800,color:#000
    classDef argo fill:#EF7B4D,stroke:#B85C3B,color:#fff
    classDef k8s fill:#326ce5,stroke:#fff,color:#fff
    class Dev,Feature,PR dev
    class Detect,Lint,Test,Security,Build,Trivy,DetectMain,BuildPush,StagingDeploy,Approval,ProdDeploy ci
    class GitOpsRepo,Staging,Production gitops
    class StagingApp,ProdApp argo
    class StagingNS,ProdNS k8s
```
```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#EF7B4D', 'lineColor': '#5D6D7E'}}}%%
flowchart TB
    subgraph Repositories["Git Repositories"]
        AppRepo[("Application<br/>Source Code")]
        GitOpsRepo[("kubestock-gitops<br/>K8s Manifests")]
    end
    subgraph ArgoCD["ArgoCD Controller"]
        Controller["Application<br/>Controller"]
        subgraph Projects["AppProjects"]
            InfraProject["Infrastructure<br/>Cluster-wide"]
            StagingProject["Staging<br/>Environment"]
            ProdProject["Production<br/>Environment"]
        end
        subgraph Applications["Applications"]
            subgraph InfraApps["Infrastructure"]
                ESO["external-secrets"]
                Metrics["metrics-server"]
                Autoscaler["cluster-autoscaler"]
            end
            subgraph EnvApps["Environments"]
                KongStaging["kong-staging"]
                KubestockStaging["kubestock-staging"]
                KongProd["kong-production"]
                KubestockProd["kubestock-production"]
            end
        end
    end
    subgraph SyncPolicies["Sync Policies"]
        StagingPolicy["Staging: Auto-Sync ON<br/>Prune ON | Self-Heal ON"]
        ProdPolicy["Production: Manual Sync<br/>Explicit Control"]
    end
    subgraph K8sCluster["Kubernetes Cluster"]
        StagingNS["kubestock-staging<br/>namespace"]
        ProdNS["kubestock-production<br/>namespace"]
    end
    AppRepo -->|triggers| CICD["GitHub Actions"]
    CICD -->|updates| GitOpsRepo
    GitOpsRepo --> Controller
    Controller --> InfraProject & StagingProject & ProdProject
    StagingPolicy --> StagingProject
    ProdPolicy --> ProdProject
    KubestockStaging --> StagingNS
    KubestockProd --> ProdNS
    classDef repo fill:#F5F5F5,stroke:#616161,color:#000
    classDef argo fill:#EF7B4D,stroke:#B85C3B,color:#fff
    classDef policy fill:#E8F5E9,stroke:#4CAF50,color:#000
    classDef ns fill:#326ce5,stroke:#fff,color:#fff
    class AppRepo,GitOpsRepo repo
    class Controller,InfraProject,StagingProject,ProdProject,ESO,Metrics,Autoscaler,KongStaging,KubestockStaging,KongProd,KubestockProd argo
    class StagingPolicy,ProdPolicy policy
    class StagingNS,ProdNS ns
```
Application Code → CI Build → ECR Image → GitOps Update → ArgoCD Sync → K8s Cluster
- Declarative deployments with Kustomize overlays
- Automatic staging sync for rapid iteration
- Manual production sync for controlled releases
- Self-healing and automatic rollback on failures
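The staging sync policy above maps onto an ArgoCD `Application` manifest roughly like this sketch; the repo URL and project name are placeholders, not the real values:

```yaml
# Illustrative Application manifest -- repoURL and project are assumptions.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kubestock-staging
  namespace: argocd
spec:
  project: staging
  source:
    repoURL: https://github.com/example/kubestock-gitops.git  # placeholder
    targetRevision: main
    path: overlays/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: kubestock-staging
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual drift in the cluster
```

The production application would omit the `automated` block, leaving syncs to an explicit manual action.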
Infrastructure Layer:
- Private subnets for all Kubernetes nodes
- Security groups with least-privilege rules
- No direct internet exposure for worker nodes
- NAT Gateway for outbound-only access
Platform Layer:
- Kubernetes RBAC for access control
- Pod Security Standards (non-root, read-only filesystem)
- Network Policies for micro-segmentation
- Secrets management via AWS Secrets Manager + External Secrets Operator
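The Secrets Manager integration above is typically expressed as an `ExternalSecret` that the External Secrets Operator reconciles into a Kubernetes `Secret`; the names and key paths in this sketch are assumptions:

```yaml
# Illustrative ExternalSecret -- store, secret names, and keys are assumptions.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: ms-product-db
  namespace: kubestock-staging
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: ms-product-db            # resulting Kubernetes Secret
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: kubestock/staging/ms-product
        property: database_url
```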
Application Layer:
- OAuth 2.0/OIDC authentication via WSO2 Asgardeo
- Kong API Gateway with rate limiting
- Input validation with Joi schemas
- Security headers via Helmet middleware
- SQL injection prevention via parameterized queries
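In DB-less mode, Kong's rate limiting and other plugins are wired up through a declarative config file. The routes, limits, and service URL below are illustrative assumptions, not the project's actual gateway configuration:

```yaml
# Illustrative Kong DB-less config -- routes and limits are assumptions.
_format_version: "3.0"
services:
  - name: ms-product
    url: http://ms-product.kubestock-staging.svc.cluster.local:3002
    routes:
      - name: product-route
        paths: ["/api/products"]
plugins:
  - name: rate-limiting
    config:
      minute: 100     # assumed limit
      policy: local
  - name: cors
  - name: prometheus
```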
CI/CD Security:
- Branch protection and required PR reviews
- Dependency vulnerability scanning (npm audit)
- Container image scanning (Trivy)
- ECR image scanning on push
- OIDC-based AWS authentication (no long-lived credentials)
- Prometheus with 30-second scrape interval
- Node Exporter for infrastructure metrics
- kube-state-metrics for Kubernetes object metrics
- Application metrics exposed at a `/metrics` endpoint
- Grafana dashboards for microservices health
- Real-time monitoring of CPU, memory, and network
- Request rate and error tracking
- Custom business metrics per service
- Promtail DaemonSet collects container logs
- Loki for centralized log storage
- Kubernetes metadata enrichment (pod, namespace, node)
- Grafana Explore for log querying
- Configuration: 1-8 replicas per service
- Target: 70% CPU utilization
- Scale-up stabilization: 30 seconds
- Scale-down stabilization: 300 seconds
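The HPA settings above correspond to an `autoscaling/v2` manifest along these lines (a sketch for one service; the target Deployment name is an assumption):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ms-product
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ms-product
  minReplicas: 1
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 30
    scaleDown:
      stabilizationWindowSeconds: 300
```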
- Node range: 1-8 worker nodes
- Scale-up time: ~2-3 minutes
- Scale-down delay: 10 minutes
- Instance type: t3.medium
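On AWS, these node-level settings are usually passed to the Cluster Autoscaler as container arguments; the Auto Scaling group name and image tag below are assumptions:

```yaml
# Illustrative Cluster Autoscaler container spec -- the ASG name is an assumption.
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.2
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --nodes=1:8:kubestock-worker-asg   # min:max:ASG name
      - --scale-down-unneeded-time=10m
      - --balance-similar-node-groups
```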
| Repository | Purpose | URL |
|---|---|---|
| Application Code | Microservices and frontend | Application Repo |
| GitOps Manifests | Kubernetes configurations | GitOps Repo |
| Infrastructure | Terraform IaC | Infrastructure Repo |
```text
Developer Push → PR Checks → Merge to Main
        ↓
   Build & Test
        ↓
Security Scan (Trivy)
        ↓
   Push to ECR
        ↓
Update GitOps Repo
        ↓
ArgoCD Auto-Sync (Staging)
        ↓
 Manual Approval
        ↓
ArgoCD Deploy (Production)
```
| Metric | Value |
|---|---|
| Build Time (per service) | ~2-3 minutes |
| Total Deployment Time | ~5-8 minutes |
| Rollback Time | ~1-2 minutes |
| ArgoCD Sync Time | ~1-2 minutes |
Monthly Estimate: ~$233
Key optimizations:
- Single NAT Gateway (saves ~$64/month)
- Dev server can be stopped when idle (saves ~$28/month)
- t3.medium instances with burstable performance
- ECR lifecycle policies (keep last 5 images)
- Cluster Autoscaler for dynamic provisioning
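The "keep last 5 images" rule maps onto an ECR lifecycle policy along these lines (a sketch; the actual policy may use a different priority or tag filter):

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Keep only the 5 most recent images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 5
      },
      "action": { "type": "expire" }
    }
  ]
}
```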
- Multi-AZ deployment across 3 availability zones
- Rolling update strategy for zero-downtime releases
- Automatic pod rescheduling on node failure
- Horizontal Pod Autoscaling (1-8 replicas)
- Cluster Autoscaling (1-8 nodes)
- Independent service scaling
- Zero-trust network model
- Multi-layer security controls
- Automated vulnerability scanning
- Secrets rotation and management
- Full-stack metrics collection
- Centralized log aggregation
- Custom dashboards and alerts
- Request tracing capabilities
✅ Infrastructure as Code - 100% Terraform automation
✅ GitOps - Git as single source of truth
✅ Immutable Infrastructure - Container-based deployments
✅ Automated Testing - Unit, integration, and security tests
✅ Continuous Monitoring - Prometheus + Grafana stack
✅ Security Scanning - Multi-stage vulnerability detection
✅ Environment Parity - Consistent staging and production
✅ Declarative Configuration - Kubernetes manifests via Kustomize
| Namespace | Purpose | Components |
|---|---|---|
| `kubestock-staging` | Staging environment | Microservices, frontend |
| `kubestock-production` | Production environment | Microservices, frontend |
| `kong` | API Gateway | Kong deployment |
| `observability` | Monitoring/Logging | Prometheus, Grafana, Loki, Promtail |
| `argocd` | GitOps | ArgoCD application controller |
| `external-secrets` | Secrets management | External Secrets Operator |
| `cluster-autoscaler` | Auto-scaling | Cluster Autoscaler deployment |
Status: ✅ Complete
Deployment: Production-Ready
Last Updated: January 2026
This project represents a complete, production-grade DevOps implementation suitable for enterprise-level cloud-native applications.
