Polyglot micro-lending platform built on event-driven microservices. Java handles the loan lifecycle and SOAP/CBS integration. Clojure handles credit scoring through pure functional pipelines. React/TypeScript provides the analytics dashboard. The whole thing runs locally on Docker Compose (15 containers with full observability) or in the cloud on GCP Cloud Run for $0-3/month.
Live: nanolend-app.vercel.app ・ API Docs: Swagger UI
```mermaid
graph TB
    DASH["Dashboard · React/TS"] --> GW["API Gateway · Spring Cloud Gateway"]
    GW --> LS["Loan Service · Java 21 + Spring Boot"]
    GW --> SE["Scoring Engine · Clojure + Ring/Reitit"]
    GW --> RMB["Read Model Builder · CQRS"]
    LS --> CBS["Mock CBS · SOAP/WSDL"]
    LS --> SE
    LS -->|outbox| BROKER
    BROKER -->|events| EP["Event Processor"]
    BROKER -->|events| RMB
    LS --- PG1[("loan_db")]
    SE --- PG2[("scoring_db")]
    RMB --- PG3[("analytics_db")]
    EP --- REDIS[("Redis")]
    subgraph BROKER["Event Broker"]
        K["Kafka · local"]
        Q["QStash · cloud"]
    end
```
CQRS separates the write path (loan-service → loan_db → outbox → Kafka/QStash) from the read path (read-model-builder → analytics_db → dashboard). The event processor handles audit trails and aggregate stats in Redis. All event consumers are idempotent — deduplication via an event_log table means redelivery is always safe.
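The dedup guard can be sketched in plain Java. This is a minimal sketch, not the actual service code: a `HashSet` stands in for the `event_log` table, and the class and method names are illustrative. The real consumers persist processed IDs in the same transaction as their side effects, so a redelivered event is detected even across restarts.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of an idempotent event consumer: every event carries a unique ID,
// and IDs already seen are skipped, so redelivery is always safe.
public class IdempotentConsumer {
    private final Set<String> eventLog = new HashSet<>(); // stand-in for the event_log table
    private int applied = 0;                              // side effects actually performed

    /** Returns true if the event was applied, false if it was a duplicate. */
    public boolean handle(String eventId) {
        if (!eventLog.add(eventId)) {
            return false; // already processed: skip the side effect
        }
        applied++;        // perform the real side effect here
        return true;
    }

    public int appliedCount() { return applied; }
}
```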

| Layer | Choice | Rationale |
|---|---|---|
| Loan lifecycle | Java 21 + Spring Boot 3.3 | Transactional correctness, JPA, mature ecosystem for SOAP/WSDL integration with core banking |
| Credit scoring | Clojure 1.11 | Scoring is a data transformation pipeline. Threading macros (->) make it read like a spec. Immutable data eliminates concurrency bugs. Same JVM, zero friction |
| Frontend | React 18 + TypeScript | Recharts for portfolio analytics, Tailwind for layout, Vite for fast iteration |
| Event broker (local) | Kafka (KRaft) | Durable event log, partitioned consumers, replay capability. No ZooKeeper |
| Event broker (cloud) | QStash | HTTP push — wakes Cloud Run on-demand. Kafka consumers poll, which requires always-on processes ($7/mo each). QStash delivers the same semantics at $0 |
| Database | PostgreSQL 16 × 3 | One per bounded context: loan_db (transactional), scoring_db (scoring history), analytics_db (materialized views) |

| Service | Stack | Purpose |
|---|---|---|
| loan-service | Java 21, Spring Boot, JPA, Flyway | Loan applications, repayments, top-ups, transactional outbox, CBS/SOAP integration |
| scoring-engine | Clojure, Ring, Reitit, next.jdbc | Behavioral credit scoring — pure functional pipeline with atom-based caching |
| event-processor | Java, Spring Kafka | Kafka/QStash consumer, audit trail (event_log), Redis aggregate stats, DLQ |
| read-model-builder | Java, Spring Kafka | CQRS read side — projects events into denormalized views (loan_summary, daily_stats, customer_360) |
| api-gateway | Spring Cloud Gateway | Routing, Redis-backed rate limiting (token bucket), correlation IDs, CORS |
| mock-cbs | Java, Spring WS | Core Banking System simulator — WSDL-first SOAP endpoint for KYC verification |
| dashboard | React 18, TypeScript, Vite, Tailwind | Portfolio analytics, repayment tracking, customer 360 views |
Loan state changes and events are written in a single ACID transaction. A scheduled publisher polls the outbox and forwards to Kafka (local) or QStash (cloud). No dual-write problem. At-least-once delivery guaranteed.
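The write path above can be sketched in plain Java. This is an illustrative simulation only: a `synchronized` block stands in for the database transaction, in-memory lists stand in for the tables, and all names are hypothetical (the real services use Spring/JPA).

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch of the transactional outbox: the loan row and its event are written
// in one atomic step, and a separate publisher drains the outbox to the broker.
public class OutboxSketch {
    record OutboxEvent(String aggregateId, String type, String payload) {}

    private final List<String> loanTable = new ArrayList<>();     // stand-in for loan_db
    private final Deque<OutboxEvent> outbox = new ArrayDeque<>(); // stand-in for the outbox table

    /** Write path: state change + event in one "transaction". */
    public synchronized void applyForLoan(String loanId, String payload) {
        loanTable.add(loanId);
        outbox.add(new OutboxEvent(loanId, "LoanApplied", payload));
        // If either write failed, the whole transaction would roll back,
        // so a state change is never committed without its event (no dual-write problem).
    }

    /** Publisher: polls the outbox and forwards to Kafka/QStash (at-least-once). */
    public synchronized List<OutboxEvent> drain() {
        List<OutboxEvent> batch = new ArrayList<>(outbox);
        outbox.clear(); // the real publisher clears rows only after the broker acks
        return batch;
    }
}
```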
Both brokers are abstracted behind @ConditionalOnProperty. Local mode uses @KafkaListener (poll-based). Cloud mode uses EventPushController (HTTP push). Same handler logic, different transport — switching costs nothing:
```
Local/Docker: OutboxPublisher → KafkaTemplate → @KafkaListener (poll)
Cloud Run:    QStashOutboxPublisher → QStash API → POST /internal/events (push)
```
This isn't just a cost optimization. Kafka consumers require always-on processes. Cloud Run kills idle containers. QStash pushes events via HTTP, triggering cold starts on-demand — genuine scale-to-zero.
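The transport split can be sketched as one interface with two implementations. This is a plain-Java sketch with illustrative class names; the real services select beans with Spring's `@ConditionalOnProperty` rather than a hand-rolled factory.

```java
// Sketch of the broker abstraction: same handler logic either way,
// only the transport differs, chosen by configuration.
public class TransportSketch {
    interface EventTransport { String deliver(String event); }

    // Local mode: publish via KafkaTemplate, a @KafkaListener polls the topic.
    static class KafkaTransport implements EventTransport {
        public String deliver(String event) { return "kafka:poll:" + event; }
    }

    // Cloud mode: QStash pushes the event to POST /internal/events over HTTP.
    static class QStashTransport implements EventTransport {
        public String deliver(String event) { return "qstash:push:" + event; }
    }

    /** Stand-in for @ConditionalOnProperty bean selection. */
    static EventTransport forMode(String mode) {
        return "cloud".equals(mode) ? new QStashTransport() : new KafkaTransport();
    }
}
```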
Resilience4j wraps all calls to the scoring engine. When the failure rate reaches 50% over a sliding window of 10 calls, the circuit opens and scoring requests fall back to a Kafka scoring-fallback topic for later processing. The loan stays in SCORING state until recovery.
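The policy can be sketched in plain Java. This is not Resilience4j itself, just a minimal model of the same rule: the window size and threshold mirror its `slidingWindowSize`/`failureRateThreshold` settings, and the routing targets are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the circuit-breaker policy: over the last 10 calls, if at least
// 50% failed, the circuit opens and calls are routed to the fallback topic.
public class BreakerSketch {
    private static final int WINDOW = 10;
    private static final double THRESHOLD = 0.5;
    private final Deque<Boolean> window = new ArrayDeque<>(); // true = failure

    public void record(boolean failed) {
        window.add(failed);
        if (window.size() > WINDOW) window.removeFirst(); // keep a sliding window
    }

    public boolean isOpen() {
        if (window.size() < WINDOW) return false; // not enough data yet
        long failures = window.stream().filter(f -> f).count();
        return failures >= WINDOW * THRESHOLD;
    }

    /** Open circuit: skip the call, queue to scoring-fallback for later processing. */
    public String route() { return isOpen() ? "scoring-fallback" : "scoring-engine"; }
}
```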
The dashboard never queries loan_db directly. Events flow through the read-model-builder into analytics_db as denormalized projections — loan_summary_view, daily_stats, customer_360. Write path optimizes for consistency. Read path optimizes for query speed.
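The projection step can be sketched as an event fold. Field and event names here are assumptions for illustration; the real read-model-builder writes rows of this shape into analytics_db.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the CQRS read side: events are folded into a denormalized
// loan summary keyed by loan ID, optimized for dashboard queries.
public class ProjectionSketch {
    record LoanSummary(String loanId, double principal, double repaid, String status) {}

    private final Map<String, LoanSummary> loanSummary = new HashMap<>();

    public void on(String type, String loanId, double amount) {
        switch (type) {
            case "LoanApplied" ->
                loanSummary.put(loanId, new LoanSummary(loanId, amount, 0, "ACTIVE"));
            case "RepaymentReceived" -> loanSummary.computeIfPresent(loanId, (id, s) -> {
                double repaid = s.repaid() + amount;
                String status = repaid >= s.principal() ? "COMPLETED" : s.status();
                return new LoanSummary(id, s.principal(), repaid, status);
            });
            default -> { } // other event types not modeled in this sketch
        }
    }

    public LoanSummary get(String loanId) { return loanSummary.get(loanId); }
}
```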
Full observability stack included. This is the complete platform:
```shell
cd infrastructure && docker compose up --build -d
```

| Component | Port | Notes |
|---|---|---|
| PostgreSQL 16 | 5432 | 3 databases (loan_db, scoring_db, analytics_db) |
| Kafka (KRaft) | 9092 | Event streaming, no ZooKeeper dependency |
| Redis 7 | 6379 | Rate limiting, aggregate stats cache |
| Prometheus | 9090 | Scrapes all services every 15s |
| Grafana | 3001 | 4 pre-provisioned dashboards (loan ops, Kafka health, JVM metrics, circuit breaker) |
| Elasticsearch | 9200 | Centralized log aggregation |
| Logstash | 5044 | Log pipeline — structured JSON from all services |
| Kibana | 5601 | Log search and visualization |
The cloud deployment strips the observability stack intentionally. Running Prometheus, Grafana, and ELK on Cloud Run would cost more than the services themselves — a dedicated Grafana instance alone defeats the purpose of scale-to-zero. Cloud Run's built-in logging and the Actuator /prometheus endpoints remain available if needed.
```shell
cd infrastructure/terraform && terraform init && terraform apply
```
```shell
cd frontend/dashboard && vercel deploy --prod
```

| Component | Provider | Why |
|---|---|---|
| Backend (6 services) | GCP Cloud Run (africa-south1) | Scale-to-zero, pay-per-request |
| Frontend | Vercel | Edge-cached SPA, API rewrites to Cloud Run |
| Database (3x) | Neon Postgres | Auto-suspend after 5 min idle, built-in connection pooling |
| Event Broker | QStash (Upstash) | HTTP push, no always-on consumers needed |
| Cache | Upstash Redis | TLS, 10K cmd/day free tier |
| Secrets | GCP Secret Manager | Injected as env vars via Terraform |
| IaC | Terraform | 6 Cloud Run services, Secret Manager, Artifact Registry, IAM |
Cost rationale: Managed GCP equivalents (Cloud SQL + Memorystore + VPC Connector) would run ~$64/month. Neon + Upstash + QStash deliver the same functionality at $0 idle. The trade-off is cold start latency (~2-3s for Neon, ~5-15s for JVM on Cloud Run) — acceptable for a platform that isn't serving production traffic 24/7.

| Tool | URL | What it shows |
|---|---|---|
| Swagger UI | http://localhost:8080/swagger-ui.html | Full OpenAPI spec |
| Grafana | http://localhost:3001 | Loan operations, Kafka lag, JVM heap, circuit breaker state |
| Prometheus | http://localhost:9090 | Raw metrics, PromQL |
| Kibana | http://localhost:5601 | Aggregated structured logs across all services |
Full OpenAPI spec at `/swagger-ui.html`.

| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/subscriptions` | POST | Customer onboarding — triggers SOAP KYC via CBS |
| `/api/v1/loans` | POST | Loan application — scoring, interest rate calculation, outbox event |
| `/api/v1/loans/{id}` | GET | Loan by ID |
| `/api/v1/loans/{id}/repayments` | POST | Repayment — updates balance, triggers behavioral score adjustment |
| `/api/v1/loans/topup` | POST | Top-up against partially repaid loan (≥25% repaid required) |
| `/api/v1/score/initiate` | POST | Credit scoring — pure functional pipeline, cached 5 min |
| `/api/v1/score/adjust` | POST | Behavioral adjustment (on-time: +5, late: -10, completed: +25) |
| `/api/v1/analytics/loans` | GET | Denormalized loan view (CQRS read side) |
| `/api/v1/analytics/customers/{num}` | GET | Customer 360 — loans, repayment history, credit evolution |
| `/api/v1/analytics/stats/summary` | GET | Aggregate portfolio stats |
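The behavioral adjustment rule for /api/v1/score/adjust (on-time: +5, late: -10, completed: +25) can be sketched as a pure function. The real scoring-engine implements this in Clojure; the score bounds below are an assumption for illustration.

```java
// Sketch of the behavioral score adjustment: a pure function of the
// current score and the repayment event type.
public class ScoreAdjust {
    public static int adjust(int score, String event) {
        int delta = switch (event) {
            case "ON_TIME"   -> 5;   // repayment made on time
            case "LATE"      -> -10; // repayment made late
            case "COMPLETED" -> 25;  // loan fully repaid
            default          -> 0;   // unknown events leave the score unchanged
        };
        return Math.max(0, Math.min(1000, score + delta)); // assumed score bounds
    }
}
```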
```
lendstream/
├── services/
│   ├── loan-service/         # Java 21 — loan lifecycle, outbox, SOAP client
│   ├── scoring-engine/       # Clojure — functional credit scoring pipeline
│   ├── event-processor/      # Java — Kafka/QStash consumer, audit, stats
│   ├── read-model-builder/   # Java — CQRS projections into analytics_db
│   ├── api-gateway/          # Spring Cloud Gateway — routing, rate limiting
│   └── mock-cbs/             # SOAP/WSDL core banking simulator
├── frontend/
│   └── dashboard/            # React 18 + TypeScript + Vite
├── infrastructure/
│   ├── docker-compose.yml    # 15 containers (app + infra + observability)
│   ├── terraform/            # GCP Cloud Run, Secret Manager, Artifact Registry
│   ├── k8s/                  # Kubernetes manifests
│   ├── grafana/              # 4 pre-provisioned dashboards
│   ├── prometheus.yml        # Scrape config
│   └── logstash/             # ELK pipeline
└── scripts/                  # setup, dev, test, build, deploy
```