Understand your infrastructure, not just your code.
Infrawise gives AI coding assistants deterministic infrastructure awareness.
It statically analyzes your codebase, cloud infrastructure, and database schemas, then exposes that context through MCP so tools like Claude Code can understand your actual tables, indexes, query patterns, and service relationships instead of guessing from source files alone.
AI coding assistants can read your source files but have no deterministic knowledge of your infrastructure. They do not know which GSIs exist, how tables are partitioned, which functions already trigger scans, or where indexes are missing. So they guess.
Infrawise replaces guessing with infrastructure-aware context.
Without Infrawise, an AI assistant might:
- Suggest a `.scan()` on your Orders table that has 50M rows
- Recommend adding a GSI on `status` that you already have
- Write a `SELECT *` when you need to keep query cost low
- Not notice that 5 functions are already hammering the same partition key
With Infrawise, it knows:
- Your exact table schemas, partition keys, sort keys, and GSIs
- Which functions query which tables and how
- Which patterns are already flagged as high severity
- The exact `CREATE INDEX` SQL or GSI config for your tables — not generic advice
Infrawise is not an AI agent framework, an infrastructure provisioning tool, an observability platform, or a cloud management dashboard.
It is a deterministic infrastructure intelligence layer for AI-assisted development.
```
npm install -g infrawise
```

or use without installing:

```
npx infrawise init
```

1. Initialize in your repo
```
cd your-project
infrawise init
```

Detects your AWS profile and region, asks a few questions, and writes `infrawise.yaml`. That's the only file it creates in your project.
2. Validate everything is connected
```
infrawise doctor
```

3. Run analysis

```
infrawise analyze
```

```
Findings (3 total)

1. [HIGH] Full table scan detected on DynamoDB table "Orders"
   listAllOrders() scans without any filter — reads every item in the table.
   Recommendation: Replace Scan with Query using a partition key or add a GSI.

2. [MEDIUM] PostgreSQL table "users" has no index on column "email"
   Filtering on "email" causes sequential scans.
   Recommendation: CREATE INDEX CONCURRENTLY idx_users_email ON users(email);

3. [MEDIUM] DynamoDB table "Sessions" accessed by 6 distinct code paths
   High access concentration may create hot partition issues at scale.
```
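The first finding's recommendation — replace a Scan with a key-conditioned Query — can be sketched in TypeScript. This is a hypothetical illustration: the partition key name `customerId` is an assumption, not read from your schema (Infrawise's `suggest_gsi` and `analyze_function` tools return the exact shapes for your actual tables).

```typescript
// Plain-object sketch mirroring DynamoDB's Query input shape (no SDK import
// needed to illustrate the difference). Assumes — hypothetically — that
// Orders is partitioned on `customerId`.
interface QueryInput {
  TableName: string;
  KeyConditionExpression: string;
  ExpressionAttributeValues: Record<string, { S: string }>;
}

// Instead of a full table scan ({ TableName: 'Orders' } passed to Scan),
// a Query touches only the one partition it needs:
function buildOrdersQuery(customerId: string): QueryInput {
  return {
    TableName: 'Orders',
    KeyConditionExpression: 'customerId = :cid',
    ExpressionAttributeValues: { ':cid': { S: customerId } },
  };
}

const input = buildOrdersQuery('cust-123');
console.log(input.KeyConditionExpression); // prints "customerId = :cid"
```

A Scan reads every item and filters afterwards; a Query with a key condition reads only the matching partition, which is why the finding is flagged as high severity on a 50M-row table.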
```
infrawise dev
```

```
✔ Tool server running
✔ Context engine initialized

MCP endpoint:    http://localhost:3000/mcp
Available tools: http://localhost:3000/mcp/tools
```
Claude Code — edit .claude/settings.json in your repo (project-level) or ~/.claude/settings.json (global):
```json
{
  "mcpServers": {
    "infrawise": {
      "url": "http://localhost:3000/mcp"
    }
  }
}
```

To let Claude Code manage the server lifecycle automatically:
```json
{
  "mcpServers": {
    "infrawise": {
      "command": "infrawise",
      "args": ["dev"]
    }
  }
}
```

Cursor and Windsurf — add http://localhost:3000/mcp as an MCP server in editor settings.
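Under the hood, these integrations speak plain JSON-RPC over HTTP. As an illustration, a `tools/call` request body for the local endpoint can be built like this — note that the argument key (`functionName`) is a hypothetical example, not a documented parameter; list each tool's real schema via http://localhost:3000/mcp/tools.

```typescript
// Build an MCP-style JSON-RPC 2.0 request body for a tool call.
// The tool name comes from the tool table above; the `functionName`
// argument key is an assumption for illustration only.
function buildToolCall(id: number, tool: string, args: Record<string, unknown>) {
  return {
    jsonrpc: '2.0' as const,
    id,
    method: 'tools/call',
    params: { name: tool, arguments: args },
  };
}

const body = buildToolCall(1, 'analyze_function', { functionName: 'listAllOrders' });

// With `infrawise dev` running, this could be POSTed to the endpoint:
//   fetch('http://localhost:3000/mcp', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify(body),
//   });
console.log(body.method); // prints "tools/call"
```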
| Tool | What it provides |
|---|---|
| `get_infra_overview` | Complete snapshot — all services, counts, and high-severity findings |
| `get_graph_summary` | Full infrastructure graph — all nodes, edges, and findings |
| `analyze_function` | Issues in a specific function — scans, missing indexes, N+1 |
| `suggest_gsi` | Exact GSI config for a DynamoDB table + attribute |
| `postgres_index_suggestions` | Exact CREATE INDEX SQL for your actual table |
| `suggest_mongo_index` | Exact createIndex command for a MongoDB collection + field |
| `mysql_index_suggestions` | Exact ALTER TABLE ADD INDEX SQL for your MySQL table |
| `get_queue_details` | SQS queues — DLQ status, encryption, message counts |
| `get_topic_details` | SNS topics — subscription counts and protocols |
| `get_secrets_overview` | Secrets Manager — names and rotation status (values never included) |
| `get_parameter_overview` | SSM Parameter Store — names, types, tiers (values never included) |
| `get_lambda_overview` | Lambda functions — runtime, memory, timeout, env var key names |
| `get_log_errors` | CloudWatch error patterns and counts (no raw log messages) |
| Command | What it does |
|---|---|
| `infrawise init` | Detect AWS + repo, generate infrawise.yaml |
| `infrawise auth` | Select or switch AWS profile |
| `infrawise analyze` | Scan repo + AWS, build graph, print findings |
| `infrawise dev` | Start MCP server at http://localhost:3000/mcp |
| `infrawise doctor` | Validate AWS access, DB connectivity, and config |
`infrawise.yaml` is generated by `infrawise init` and lives in your repo root. Every service must be explicitly set to `enabled: true` — Infrawise never connects to anything not listed in the config.
```yaml
project: payments-service

aws:
  profile: default        # AWS profile from ~/.aws/credentials
  region: ap-south-1

dynamodb:
  enabled: true
  includeTables:          # omit to include all tables
    - Orders
    - Users

postgres:
  enabled: true
  connectionString: postgresql://infrawise_ro:password@host:5432/mydb

mysql:
  enabled: false
  connectionString: ""

mongodb:
  enabled: false
  connectionString: ""

sqs:
  enabled: true

sns:
  enabled: true

ssm:
  enabled: true
  paths: []               # filter by prefix, e.g. ["/myapp/prod"]

secretsManager:
  enabled: true

lambda:
  enabled: true

rds:
  enabled: false

cloudwatchLogs:
  enabled: false
  logGroupPrefixes: []
  windowHours: 24

analysis:
  sampleSize: 100
```

Infrawise is read-only. Minimum IAM policy required:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:ListTables",
        "dynamodb:DescribeTable"
      ],
      "Resource": "*"
    }
  ]
}
```

For SSO profiles, log in before running infrawise:

```
aws sso login --profile myprofile
```

Create a read-only user for infrawise:
```sql
CREATE USER infrawise_ro WITH PASSWORD 'yourpassword';
GRANT CONNECT ON DATABASE yourdb TO infrawise_ro;
GRANT USAGE ON SCHEMA public TO infrawise_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO infrawise_ro;
```

For Amazon RDS: allow inbound on port 5432 from your machine's IP in the security group.
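The read-only role created above plugs into the `connectionString` field of `infrawise.yaml`. A small sketch of assembling it from parts (host and database names are placeholders — substitute your own), with the password URL-encoded so special characters do not break the URI:

```typescript
// Assemble a postgres connection string for infrawise.yaml.
// Host and database names here are placeholders, not real defaults.
function pgConnectionString(user: string, pass: string, host: string, db: string): string {
  // encodeURIComponent keeps characters like '@' or ':' in the password
  // from being misread as URI delimiters.
  return `postgresql://${user}:${encodeURIComponent(pass)}@${host}:5432/${db}`;
}

const conn = pgConnectionString('infrawise_ro', 'yourpassword', 'host', 'mydb');
console.log(conn); // prints "postgresql://infrawise_ro:yourpassword@host:5432/mydb"
```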
Infrawise has two analysis layers:
The first layer works from AWS APIs, database schema introspection, and IaC files — no dependency on application code:
| Service | What it checks |
|---|---|
| DynamoDB schema | Tables, GSIs, partition keys |
| PostgreSQL / MySQL schema | Tables, indexes, column types |
| MongoDB schema | Collections, indexes |
| SQS | Missing DLQs, unencrypted queues, large backlogs |
| Secrets Manager | Missing secret rotation |
| Lambda | Default memory (128 MB), high timeouts |
| RDS | Publicly accessible, no backups, unencrypted, no deletion protection, single-AZ |
| CloudWatch Logs | Log groups with no retention policy |
| Terraform / CloudFormation / CDK | IaC drift vs deployed state |
The second layer uses ts-morph AST analysis to detect which functions call which tables and how:
| Analyzer | Severity | What it detects |
|---|---|---|
| Full Table Scan (DynamoDB) | High | .scan() calls without filters |
| Missing GSI | Medium | Queries on attributes without a matching GSI |
| Hot Partition | Medium | 5+ distinct code paths hitting the same table |
| Missing Index (PostgreSQL) | Medium | Tables queried without indexes |
| N+1 Query | Medium | Repeated query patterns from ORM loops |
| Large SELECT | Low | SELECT * usage |
| Missing MySQL Index | Medium | MySQL tables queried without indexes |
| MySQL Full Table Scan | High | Full table scan patterns in MySQL queries |
| Missing Mongo Index | Medium | Collections queried without secondary indexes |
| Collection Scan | High | find() calls without filter predicates |
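The N+1 analyzer in the table above targets the pattern sketched below. This is an illustrative example with an in-memory stand-in for a database client — the counter makes the cost difference between a query per loop iteration and one batched query concrete.

```typescript
// In-memory stand-in for a DB client so round trips are countable.
let queries = 0;
const ordersByUser: Record<string, string[]> = {
  u1: ['o1', 'o2'],
  u2: ['o3'],
  u3: [],
};
const db = {
  ordersFor(userId: string): string[] {        // one round trip per call
    queries++;
    return ordersByUser[userId] ?? [];
  },
  ordersForMany(userIds: string[]): string[] { // one round trip total
    queries++;
    return userIds.flatMap((u) => ordersByUser[u] ?? []);
  },
};

const users = ['u1', 'u2', 'u3'];

// N+1 pattern: one query per user — the repeated pattern the analyzer flags.
queries = 0;
for (const u of users) db.ordersFor(u);
console.log(queries); // prints 3

// Batched alternative: a single query covering all users.
queries = 0;
db.ordersForMany(users);
console.log(queries); // prints 1
```

With real network latency per round trip, the loop version scales linearly with result count while the batched version stays constant — which is why the pattern is worth flagging even at medium severity.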
Non-TypeScript/JavaScript projects still get full value from infrastructure-level analyzers — code correlation (function-to-table mapping, N+1 patterns) is skipped.
The scanner supports: AWS SDK v3/v2 for DynamoDB, pg/Prisma/Knex for PostgreSQL, mysql2/Knex for MySQL, driver/Mongoose for MongoDB, and AWS SDK v3 for SQS/SNS/SSM/Secrets/Lambda.
- Infrawise scans your repository and infrastructure metadata
- A graph engine maps services, schemas, indexes, and query patterns
- Rule-based analyzers detect infrastructure and query anti-patterns
- The resulting context is exposed through MCP
- AI coding assistants query this context while generating code
Infrawise does not use an LLM to analyze your infrastructure. All extraction and analysis are deterministic: AST parsing, schema introspection, rule-based analyzers, and graph correlation. LLMs are only consumers of the generated context through MCP.
- Read-only — never writes to AWS or your database, never executes DDL
- Local-first — everything runs on your machine, nothing sent to external servers
- No telemetry — zero data collection
- Credentials — uses your existing AWS credential chain, never stored by infrawise
```
Your repo (any language)          Your repo (TS/JS only)
          │                                 │
          │                  Repository Scanner (ts-morph AST)
          │                    which functions → which tables
          │                                 │
┌─────────┴─────────────────────────────────┴───────────┐
│                   infrawise analyze                    │
│                                                        │
│  AWS APIs / DB schema / IaC files   + Code ops (opt)   │
│  (works for any project)              (TS/JS only)     │
│                          │                             │
│                    Graph Engine                        │
│                   (nodes + edges)                      │
│                          │                             │
│                   Analyzer Engine                      │
│              (rule-based, deterministic)               │
└──────────────────────────┬─────────────────────────────┘
                           │
                 ┌──────────────────┐
                 │    MCP Server    │ ◄── Claude Code
                 │  localhost:3000  │ ◄── Cursor
                 └──────────────────┘ ◄── Windsurf
```
```
src/
  types.ts     Shared type definitions
  core/        Config (Zod + YAML), logger (Pino), local cache
  graph/       Graph engine — nodes, edges, builder
  adapters/    Flat extractors: dynamodb.ts, postgres.ts, mysql.ts,
               mongodb.ts, aws.ts, logs.ts, terraform.ts
  analyzers/   23 rule-based analyzers
  context/     Repository scanner (ts-morph AST)
  server/      Fastify MCP HTTP server (plain JSON-RPC, no SDK)
  cli/         CLI commands (Commander.js)
```
- Code-level correlation supports TypeScript and JavaScript only
- Dynamically constructed queries may not always be resolved statically
- Runtime tracing is not yet implemented
- Large monorepos may require future incremental analysis optimization
- Runtime tracing integration
- Incremental analysis for large monorepos
- Kubernetes workload graphing
- VS Code extension
- Infrastructure drift detection
- OpenTelemetry integration
- CI/CD reporting mode
- Multi-repository graph correlation
Node.js 24+, pnpm 9+, AWS CLI (for integration testing).
```
git clone https://github.com/Sidd27/infrawise
cd infrawise
pnpm install
pnpm build
```

```
pnpm build      # compile
pnpm test       # run all tests
pnpm typecheck  # TypeScript strict check
pnpm lint       # ESLint
```

- Create your analyzer in `src/analyzers/`
- Implement the `Analyzer` interface:
```ts
export class MyAnalyzer implements Analyzer {
  name = 'MyAnalyzer';
  async analyze(graph: SystemGraph): Promise<Finding[]> { ... }
}
```

- Export it from `src/analyzers/index.ts`
- Add tests in `src/analyzers/__tests__/`
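A runnable sketch of what such an analyzer and its mock-graph test might look like. The `SystemGraph` and `Finding` shapes below are simplified stand-ins invented for this example — check `src/types.ts` for the real definitions before wiring anything in.

```typescript
// Simplified stand-ins for Infrawise's real types (see src/types.ts).
interface Finding { severity: 'high' | 'medium' | 'low'; message: string }
interface SystemGraph { nodes: { id: string; kind: string; indexed?: boolean }[] }

// A toy analyzer: flag every table node that carries no index.
class UnindexedTableAnalyzer {
  name = 'UnindexedTableAnalyzer';
  async analyze(graph: SystemGraph): Promise<Finding[]> {
    return graph.nodes
      .filter((n) => n.kind === 'table' && !n.indexed)
      .map((n) => ({
        severity: 'medium' as const,
        message: `Table ${n.id} has no index`,
      }));
  }
}

// Mock graph data, as the contribution checklist asks for in tests:
// one indexed table, one unindexed table.
const mockGraph: SystemGraph = {
  nodes: [
    { id: 'users', kind: 'table', indexed: true },
    { id: 'orders', kind: 'table', indexed: false },
  ],
};

new UnindexedTableAnalyzer().analyze(mockGraph).then((findings) => {
  console.log(findings.length); // prints 1
});
```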
- Create your extractor as `src/adapters/yourdb.ts`
- Export a function returning `Promise<YourTableMetadata[]>`
- Add the metadata type to `src/types.ts` if needed
- Wire it into `src/cli/commands/analyze.ts`
The git tag is the source of truth: the version in the root `package.json` and the tag must always match.
```
pnpm release patch   # 0.1.2 → 0.1.3 (bug fixes)
pnpm release minor   # 0.1.2 → 0.2.0 (new features, backwards compatible)
pnpm release major   # 0.1.2 → 1.0.0 (breaking changes)
pnpm release 1.5.0   # explicit version

git push origin main --tags
```

Pushing a `v*.*.*` tag creates a draft GitHub release. Publishing that release triggers `npm publish` automatically.
- `pnpm lint` passes
- `pnpm typecheck` passes
- `pnpm test` passes
- New analyzers have unit tests with mock graph data
- No hardcoded AWS regions or credentials
MIT