A self-hosted macroeconomic intelligence platform powered by a local LLM (Ollama). It continuously monitors global macro conditions — regime, liquidity, stress, and inflation cycles — and answers free-text questions about the current economic environment.
- Regime detection — 6-regime scorecard (z-score based) + unsupervised HMM regime detector
- Inflation regime — unsupervised HMM on 65 years of FRED inflation data (CPI/PCE/M2/real rate)
- Recession probability — walk-forward calibrated logistic model (6m and 12m horizons)
- US Liquidity Index — Fed balance sheet, RRP, TGA, M2
- Stress monitor — composite 0–100 index (VIX, HY spreads, Fear & Greed, DXY)
- Correlation analysis — Pearson/Spearman with bootstrap CI, OLS, and regime-conditional breakdowns
- Extended macro — real rates, breakevens, copper/gold ratio, financial conditions, housing starts, credit impulse (FRED)
- Cross-asset signals — momentum, z-scores, and risk-on/risk-off composite across equities, bonds, crypto, commodities, FX (yfinance)
- Global markets — China (SSE, CNY, 10Y CGB), India (NIFTY, INR, 10Y), Europe peripheral spreads (IT/ES vs DE)
- Regime backtests — return distributions by macro regime for any asset
- Ask Macro — autonomous tool-calling agent: routes any macro question to 21 data tools
- Agent traceability — every /ask call persists its full agentic loop trace to SQLite (tools called, arguments, raw results, latency per iteration)
- Alerts — webhook subscriptions, threshold triggers, scheduler-driven snapshots
- React dashboard — live gauges, yield curve, HMM panel, recession radar, cross-asset panel, and conversational interface
| Layer | Technology |
|---|---|
| Backend | FastAPI + Uvicorn, Python 3.12 |
| LLM | Ollama (local, model configurable) |
| Data | FRED API, yfinance, ECB SDW, BoJ |
| Database | SQLite (aiosqlite) — cache, history, alerts |
| ML | scikit-learn, hmmlearn, statsmodels, numpy |
| Frontend | React 18 + Vite + Tailwind CSS v3 + Recharts |
| Config | pydantic-settings, env prefix MACRO_ |
- Python 3.12+
- Ollama installed and running locally
- Free FRED API key: fred.stlouisfed.org
- Node.js 18+ (frontend only)
```bash
git clone <repo-url>
cd macro-analyst

# Install Python dependencies
pip install -r requirements.txt

# Copy and fill in environment variables
cp .env.example .env
# Edit .env — required: MACRO_FRED_API_KEY and MACRO_OLLAMA_URL
```

```bash
# Backend
uvicorn app.api:app --host 0.0.0.0 --port 8000 --reload

# Frontend (dev)
cd ui && npm install && npm run dev
# → http://localhost:5173

# Or build the frontend for production (served by FastAPI at /ui)
cd ui && npm run build
```

```bash
docker compose up
```

| Variable | Required | Default | Description |
|---|---|---|---|
| `MACRO_FRED_API_KEY` | Yes | — | FRED API key (free) |
| `MACRO_OLLAMA_URL` | Yes | `http://localhost:11434` | Ollama base URL |
| `MACRO_API_KEY` | No | (empty = open) | Bearer auth for all endpoints |
| `MACRO_SNAPSHOT_INTERVAL_HOURS` | No | 1 | Scheduler snapshot frequency |
| `MACRO_ALERT_WEBHOOK_URL` | No | — | Webhook URL for alert delivery |
See .env.example for the full list.
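Configuration is loaded by pydantic-settings with the `MACRO_` env prefix (see app/config.py). The stdlib sketch below is an illustrative stand-in, not the repo's actual `Settings` class — it only mirrors how the prefix maps environment variables onto fields, using the variables from the table above:

```python
import os
from dataclasses import dataclass, field


@dataclass
class Settings:
    """Illustrative stand-in for the pydantic-settings class in app/config.py.

    Each field is read from the environment with the MACRO_ prefix, mirroring
    pydantic-settings' env_prefix behaviour. Field names are assumptions.
    """
    fred_api_key: str = field(
        default_factory=lambda: os.environ["MACRO_FRED_API_KEY"])  # required
    ollama_url: str = field(
        default_factory=lambda: os.environ.get("MACRO_OLLAMA_URL", "http://localhost:11434"))
    api_key: str = field(
        default_factory=lambda: os.environ.get("MACRO_API_KEY", ""))  # empty = open
    snapshot_interval_hours: int = field(
        default_factory=lambda: int(os.environ.get("MACRO_SNAPSHOT_INTERVAL_HOURS", "1")))


# Demo: the required key must be present; the rest fall back to defaults
os.environ["MACRO_FRED_API_KEY"] = "demo-key"
os.environ.pop("MACRO_OLLAMA_URL", None)
settings = Settings()
print(settings.ollama_url)  # → http://localhost:11434
```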
| Method | Path | Description |
|---|---|---|
| GET | `/health` | Monitoring endpoint — Ollama, scheduler, DB, ML models |
| GET | `/brief` | Ultra-compact macro brief (no LLM, near-instant) |
| GET | `/snapshot` | Full macro snapshot (regime + liquidity + stress + brief) |
| GET | `/macro-context` | Compact snapshot for peer agent injection (< 500 tokens) |
| POST | `/ask` | Free-text macro question → bullets + conclusion |
| POST | `/ask/structured` | Same as `/ask` but returns machine-readable MacroAnswer |
| GET | `/regime` | Regime scorecard (6 regimes, z-scores) |
| GET | `/liquidity` | US Liquidity Index |
| GET | `/stress` | Stress monitor 0–100 |
| GET | `/correlations` | Asset–driver correlation analysis |
| GET | `/indicators` | Raw indicator values — includes extended_macro and cross_asset |
| GET | `/history` | Snapshot history |
| GET | `/regime/timeline` | Regime timeline |
| POST | `/alerts/subscribe` | Register a webhook |
| GET | `/v1/tools` | OpenAI-compatible tool catalog (21 tools) |
| POST | `/v1/tool_call` | Invoke a tool by name |
| GET | `/ask/traces` | Agentic loop traces for recent `/ask` calls |
| DELETE | `/cache` | Flush the SQLite cache |
Swagger UI: http://localhost:8000/docs
Full reference: docs/api.md
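From Python, the endpoints can be called with the stdlib alone. A hedged sketch of a `POST /ask` request — the `{"question": ...}` body field and the bearer-auth header are assumptions to verify against the Swagger UI at `/docs`:

```python
import json
import os
import urllib.request

BASE_URL = "http://localhost:8000"


def build_ask_request(question: str) -> urllib.request.Request:
    """Build a POST /ask request.

    The {"question": ...} body shape is an assumption — confirm the exact
    schema in the Swagger UI (/docs) or docs/api.md.
    """
    headers = {"Content-Type": "application/json"}
    # If MACRO_API_KEY is set on the server, the same key must be sent as a bearer token
    api_key = os.environ.get("MACRO_API_KEY", "")
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({"question": question}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/ask", data=body, headers=headers, method="POST")


req = build_ask_request("What is the current macro regime?")
print(req.full_url, req.get_method())  # → http://localhost:8000/ask POST

# Against a running backend:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```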
The /ask agent has access to 21 tools exposed via /v1/tools (OpenAI-compatible catalog):
| Tool | Description |
|---|---|
| `get_macro_snapshot` | Full snapshot — regime, liquidity, stress, brief |
| `ask_macro` | Nested macro question (sub-agent) |
| `get_market_prices` | Equity indices, crypto, futures (yfinance) |
| `get_yield_curve` | US Treasury curve 1M–30Y + spreads (FRED) |
| `get_forex` | Major FX pairs (yfinance) |
| `get_commodities` | Gold, oil, copper, wheat, natural gas (yfinance) |
| `get_credit_spreads` | IG/HY OAS, BAA–AAA, TED spread (FRED) |
| `get_macro_indicators` | CPI, PCE, unemployment, INDPRO, NFCI (FRED) |
| `get_upcoming_events` | Forward calendar — FOMC, CPI, NFP (FRED releases) |
| `get_regime_history` | Regime timeline from snapshot history |
| `run_backtest_by_regime` | Return distribution by macro regime for any ticker |
| `recession_probability` | Calibrated recession probability at 6m and 12m (ML) |
| `get_extended_macro` | Real rates, breakevens, copper/gold, financial conditions, housing, credit impulse (FRED) |
| `get_cross_asset_signals` | Momentum + z-scores across equities, bonds, crypto, commodities, FX (yfinance) |
| `get_equity_sectors` | SPDR sector ETF performance (yfinance) |
| `get_ecb_rates` | ECB deposit rate, HICP, EUR/USD (ECB SDW + FRED) |
| `get_boj_rates` | BoJ policy rate, CPI, JPY/USD, 10Y JGB (BoJ + FRED) |
| `get_europe_spreads` | Italy/Spain vs Germany 10Y spreads (FRED) |
| `get_china_markets` | SSE Composite, CNY/USD, 10Y CGB yield (FRED + yfinance) |
| `get_india_markets` | NIFTY 50, INR/USD, 10Y India bond (FRED + yfinance) |
| `detect_regime_hmm` | Unsupervised HMM regime detection (ML inference) |
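Any tool in the catalog can also be invoked directly via `POST /v1/tool_call`. A sketch of building such a request — the `{"tool": ..., "args": ...}` body shape is an assumption; check the `/v1/tools` catalog and docs/api.md for the exact contract:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"


def build_tool_call(tool: str, args: dict) -> urllib.request.Request:
    """Build a POST /v1/tool_call request.

    The {"tool": ..., "args": ...} body shape is an assumption — verify
    against the OpenAI-compatible catalog returned by GET /v1/tools.
    """
    body = json.dumps({"tool": tool, "args": args}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/v1/tool_call",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# e.g. request the calibrated 6m/12m recession probabilities
req = build_tool_call("recession_probability", {})
print(req.full_url)  # → http://localhost:8000/v1/tool_call
```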
Real traces from the ask_traces table — questions sent to POST /ask, answers generated by the agentic pipeline.
Q: What is the current macro regime and main risks?
The current macro regime is Goldilocks (score 0.526, 33% agreement). Stress is elevated at 71 / 100 (70th percentile over 5 years), driven by a VIX of 25.5 and real rates at +1.25% that are compressing the 2Y/10Y spread to just +0.54 bps.
Key points:
- Stress is high at 71.0 — at the 70.99th percentile vs. the last 5 years
- Real rates remain elevated at +1.25%, suppressing duration appetite
- Risk bias score is 0.452 — mild risk-off tilt within the goldilocks regime
Conclusion: The economy faces a latent regime-shift risk — sticky inflation and high stress could tip the balance despite the current goldilocks classification.
Latency: 29s — 1 agentic iteration
Q: What is the BTC price and current crypto outlook?
BTC is at $69,201 (+4.9% today, +1.56% on the week). The macro backdrop is goldilocks with a bullish crypto bias, but stress at 71 is a headwind worth monitoring.
Key points:
- BTC 1-day change: +4.9%, 1-week: +1.56%
- Regime: goldilocks with expanding liquidity and risk-on sentiment
- Stress at 71 / 100 represents a non-trivial tail risk for risk assets
Conclusion: Bitcoin shows strong bullish momentum within a stable goldilocks regime — but elevated stress warrants position discipline.
Latency: 18s — 1 agentic iteration
Every call to POST /ask runs an autonomous agentic loop: the LLM selects tools, executes them in parallel, receives their results, and iterates until it can answer. Each run is fully traced and persisted to SQLite.
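The loop's shape — pick tools, run them concurrently, feed results back, repeat — can be sketched with asyncio. The tool bodies and the `plan_step` stand-in below are toy illustrations of the pattern, not the repo's implementation:

```python
import asyncio


# Toy stand-ins for the real async data tools in app/tools.py
async def get_macro_snapshot() -> dict:
    return {"regime": "goldilocks", "stress": 71.0}


async def detect_regime_hmm() -> dict:
    return {"current_state": 2}


TOOLS = {"get_macro_snapshot": get_macro_snapshot,
         "detect_regime_hmm": detect_regime_hmm}


async def agentic_loop(plan_step, max_iterations: int = 5) -> list:
    """Run tool-calling rounds until plan_step returns no tool names.

    plan_step plays the LLM's role here: given the trace so far, it returns
    the names of the tools to call next (empty list = ready to answer).
    """
    trace = []
    for iteration in range(1, max_iterations + 1):
        names = plan_step(trace)
        if not names:
            break
        # Execute every tool selected in this round in parallel
        results = await asyncio.gather(*(TOOLS[n]() for n in names))
        trace.append({
            "iteration": iteration,
            "tools": [{"tool": n, "result": r} for n, r in zip(names, results)],
        })
    return trace


# One round that calls both tools, then stop
plan = lambda trace: [] if trace else ["get_macro_snapshot", "detect_regime_hmm"]
trace = asyncio.run(agentic_loop(plan))
print(len(trace), [t["tool"] for t in trace[0]["tools"]])
# → 1 ['get_macro_snapshot', 'detect_regime_hmm']
```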
For each /ask call, one row is written to the ask_traces table with:
| Field | Description |
|---|---|
| `question` | The user's question verbatim |
| `latency_s` | Total wall-clock time for the full pipeline |
| `total_iterations` | Number of tool-calling rounds |
| `iterations` | JSON array — one entry per round (see below) |
| `final_response` | Raw LLM output before parsing |
Each iteration entry contains:
- `iteration` — round number (1-based)
- `assistant_content` — any text the LLM emitted before the tool calls
- `tools` — list of `{tool, args, result}` for every tool called in that round
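Given that structure, a stored trace can be flattened into an audit summary of which tools each answer consulted. The helper below is illustrative, not part of the repo; the dict shape mirrors the fields above:

```python
def summarize_trace(trace: dict) -> dict:
    """Flatten one ask_traces row (shape as documented above) into an
    audit summary: which tools ran, over how many rounds, and how long."""
    tools = [call["tool"]
             for it in trace["iterations"]
             for call in it["tools"]]
    return {
        "question": trace["question"],
        "latency_s": trace["latency_s"],
        "rounds": trace["total_iterations"],
        "tools_called": tools,
    }


# Sample row shaped like an ask_traces entry
trace = {
    "question": "What is the current macro regime?",
    "latency_s": 28.9,
    "total_iterations": 1,
    "iterations": [{"iteration": 1, "assistant_content": "", "tools": [
        {"tool": "get_macro_snapshot", "args": {}, "result": {}},
        {"tool": "detect_regime_hmm", "args": {}, "result": {}},
    ]}],
}
print(summarize_trace(trace)["tools_called"])
# → ['get_macro_snapshot', 'detect_regime_hmm']
```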
```bash
# Most recent 20 traces (last 7 days)
curl http://localhost:8000/ask/traces

# Last 5 traces over 30 days
curl "http://localhost:8000/ask/traces?limit=5&days=30"
```

Example response (one trace):
```json
{
  "id": 42,
  "timestamp": "2026-03-10T14:32:01Z",
  "question": "What is the current macro regime?",
  "latency_s": 28.9,
  "total_iterations": 1,
  "final_response": "{\"direct_answer\": \"Goldilocks...\"}",
  "iterations": [
    {
      "iteration": 1,
      "assistant_content": "",
      "tools": [
        {
          "tool": "get_macro_snapshot",
          "args": {},
          "result": { "regime": "goldilocks", "stress": 71.0, "vix": 25.5 }
        },
        {
          "tool": "detect_regime_hmm",
          "args": {},
          "result": { "current_state": 2, "state_proba": [0.02, 0.11, 0.87] }
        }
      ]
    }
  ]
}
```

Traceability is fundamental to production AI systems:
- Auditability — know exactly which data sources the LLM consulted before answering
- Debugging — if an answer is wrong, inspect whether the right tools were called with the right arguments and whether the results were correct
- Trust — answers backed by traceable tool calls are verifiable, not hallucinated
- Iteration — tool selection patterns reveal which questions are answered well and which need better routing rules
Four production models, all trained offline and served via app/ml/api/:
| Model | Task | Approach | History |
|---|---|---|---|
| `hmm_regime` | Unsupervised macro regime | GaussianHMM + PCA + BIC selection | 1987+ |
| `hmm_inflation` | Unsupervised inflation regime | GaussianHMM + PCA + BIC selection | 1961+ |
| `recession_6m` | Recession probability at 6 months | Walk-forward logistic + Platt | 1968+ |
| `recession_12m` | Recession probability at 12 months | Walk-forward logistic + Platt | 1968+ |
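The "walk-forward" part means the recession models are refit on an expanding window and only ever predict periods they have not seen, which keeps the calibrated probabilities out-of-sample. A pure-Python sketch of that split logic (window sizes are illustrative, not the repo's settings):

```python
def walk_forward_splits(n: int, min_train: int, test_size: int):
    """Yield (train_indices, test_indices) pairs with an expanding train window.

    Each model refit sees only data strictly before the block it predicts —
    the scheme behind the walk-forward recession models (sizes illustrative).
    """
    start = min_train
    while start + test_size <= n:
        yield list(range(start)), list(range(start, start + test_size))
        start += test_size


splits = list(walk_forward_splits(n=10, min_train=4, test_size=2))
print(splits[0])   # → ([0, 1, 2, 3], [4, 5])
print(len(splits)) # → 3
```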
```bash
# Train all models
python scripts/train_models.py

# Train a specific model
python scripts/train_models.py --model hmm
python scripts/train_models.py --model hmm-inflation
python scripts/train_models.py --model recession

# Rebuild FRED/yfinance panels from scratch, then train
python scripts/train_models.py --rebuild-panels

# Dry-run — build panels only, no training
python scripts/train_models.py --dry-run
```

Artifacts are saved to storage/models/artifacts/ (.joblib) and model cards to storage/models/metadata/ (.json).
Model documentation: docs/ml/hmm_regime.md, docs/ml/hmm_inflation.md, docs/ml/recession_probit.md
```
macro-analyst/
├── app/
│   ├── agent/           # Snapshot + Ask autonomous pipelines
│   ├── compute/         # Regime, stress, liquidity, correlations
│   ├── ml/              # HMM models, RecessionProbit, features, validation
│   │   ├── api/         # Async inference entry points
│   │   ├── models/      # GaussianHMM, RecessionProbit
│   │   ├── features/    # Feature sets, transforms, spreads
│   │   ├── labels/      # USREC recession labels
│   │   ├── datasets/    # FRED + market panel builders
│   │   ├── backtests/   # Regime return analysis
│   │   ├── validation/  # Walk-forward CV, metrics
│   │   └── registry/    # Model store (joblib) + metadata
│   ├── providers/       # FRED, ECB, BoJ, yfinance
│   ├── llm/             # Ollama client
│   ├── api.py           # FastAPI — all endpoints
│   ├── tools.py         # 21 async data tools
│   ├── tool_spec.py     # OpenAI function-calling catalog (21 tools)
│   ├── config.py        # pydantic-settings
│   ├── cache.py         # SQLite cache
│   ├── history.py       # Snapshot history persistence
│   ├── alerts.py        # Webhook + alert management
│   └── scheduler.py     # APScheduler
├── docs/                # Full documentation
├── scripts/             # Offline training and backfill utilities
├── storage/             # Parquet panels + model artifacts
├── tests/               # pytest test suite (≥80% coverage)
├── ui/                  # React + Vite dashboard (14 components)
└── docker-compose.yml
```
Full architecture: docs/architecture.md
```bash
# Install dev dependencies (pytest + pytest-cov + pytest-asyncio)
pip install -r requirements-dev.txt

# Run all tests with coverage report
pytest

# Run a specific test file
pytest tests/test_api_v3.py -v

# Coverage only
pytest --cov=app --cov-report=html
```

Coverage target: 80% (enforced by pytest.ini).
Two requirements files:
- `requirements.txt` — runtime dependencies only (FastAPI, ML libs, etc.). Use this in production and Docker.
- `requirements-dev.txt` — includes everything in `requirements.txt` plus test tooling (`pytest`, `pytest-asyncio`, `pytest-cov`). Use this locally and in CI.
GitHub Actions workflows are defined in .github/workflows/:
| Workflow | Trigger | Jobs |
|---|---|---|
| `ci.yml` | Push / PR on `master` | lint (ruff) → tests (pytest ≥80% coverage) → UI build (tsc + vite) → Docker dry-run |
| `release.yml` | Push of a `v*.*.*` tag | Build UI → build & push Docker image to GitHub Container Registry |
To publish a new release:
```bash
git tag v1.2.0
git push origin v1.2.0
```

The image is pushed to ghcr.io/<owner>/macro-analyst with tags 1.2.0, 1.2, and latest.
Tests that require a live Ollama instance (test_tools_full, test_agent_ask, test_snapshot, test_llm_client) are excluded from CI.
| Document | Content |
|---|---|
| docs/setup.md | Installation, configuration, Docker |
| docs/architecture.md | System layers, data flow, design decisions |
| docs/api.md | Full API endpoint reference |
| docs/ml/hmm_regime.md | HMM Regime Detector |
| docs/ml/hmm_inflation.md | HMM Inflation Detector |
| docs/ml/recession_probit.md | Recession Probit Model |
MIT