Your AI agent just called the same tool 47 times with identical parameters. Your logs look fine. You're silently burning $200 in tokens.
Syrin catches these failures before production.
Syrin is a development toolkit for MCP (Model Context Protocol) servers — the standard way AI agents call external tools.
Without Syrin:

- Tool called 47x → No visibility
- $200 burned → Silent failure
- Logs look "fine" → Debug for hours

With Syrin:

- Loop detected at call #3 → Execution halted
- Full event trace → See exactly what happened
- Contract validated → Catch issues before runtime

Syrin targets the common MCP failure modes:

- **Tool Loops** — Model proposes the same tool repeatedly with no progress
- **Wrong Tool Selection** — Similar names, overlapping schemas, and ambiguous descriptions cause silent misbehavior
- **Silent Failures** — A tool throws an error but execution continues with broken state
- **Contract Mismatches** — Input/output schemas don't align between chained tools
- **Hidden Dependencies** — Tools assume state that doesn't exist
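To make the first failure mode concrete, here is a minimal sketch of loop detection, the idea of halting an agent that keeps re-proposing an identical tool call. All names here (`LoopGuard`, `check`) are illustrative, not Syrin's actual API:

```python
from collections import Counter

class LoopGuard:
    """Illustrative guard: block a tool call once the same (tool, params)
    pair has been proposed too many times with no progress."""

    def __init__(self, max_repeats: int = 3):
        self.max_repeats = max_repeats
        self.seen = Counter()

    def check(self, tool: str, params: dict) -> bool:
        """Return False when this exact proposal should be halted."""
        key = (tool, tuple(sorted(params.items())))
        self.seen[key] += 1
        return self.seen[key] < self.max_repeats

guard = LoopGuard(max_repeats=3)
results = [guard.check("fetch_user", {"id": 42}) for _ in range(5)]
print(results)  # [True, True, False, False, False] — halted at call #3
```

The key point is that the guard keys on the full parameter set: 47 identical calls trip it immediately, while 47 calls with genuinely different parameters do not.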
Documentation: https://docs.syrin.dev
| Command | Purpose |
|---|---|
| `syrin analyse` | Catch contract issues |
| `syrin dev` | Interactive development |
| `syrin test` | Sandboxed tool testing |
| `syrin init` | Project setup |
| `syrin list` | Inspect tools |
| `syrin test --connection` | Connection test |
```bash
# No install needed — run directly
npx @syrin/cli analyse --transport http --url http://localhost:8000/mcp
```

Or install globally:

```bash
npm install -g @syrin/cli
syrin init --global
syrin doctor                                                      # Check your environment
syrin analyse --transport http --url http://localhost:8000/mcp    # Analyze an MCP server
syrin dev --exec --transport http --url http://localhost:8000/mcp # Interactive dev mode
```

Or initialize a project with local config:
```bash
npx @syrin/cli init
syrin doctor
syrin analyse
```

Want to try with a sample server? Clone the repo and use the included example MCP server:
```bash
git clone https://github.com/Syrin-Labs/cli.git && cd cli/examples/demo-mcp-py
python3 -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt
python server.py --mode http --port 8000 &
npx @syrin/cli analyse --transport http --url http://localhost:8000/mcp
```

Requirements: Node.js >= 20.12, npm >= 9
| Command | What It Does |
|---|---|
| `syrin analyse` | Static analysis — catches contract issues before runtime |
| `syrin dev` | Interactive mode — see exactly what your LLM proposes |
| `syrin test` | Contract testing — validate tools in sandboxed execution |
| `syrin doctor` | Environment check — validate config and connections |
| `syrin list` | Inspect tools, resources, and prompts from your server |
Syrin supports both local (project-specific) and global (user-wide) configurations. This allows you to:
- Use Syrin from any directory without initializing a project
- Share LLM API keys across multiple projects
- Set default agent names and LLM providers globally
```bash
# Set up global configuration
syrin config setup --global

# Set API keys in global .env
syrin config edit-env --global

# Use Syrin from any directory
syrin dev --exec --transport http --url http://localhost:8000/mcp
```

```bash
# View global config
syrin config list --global

# Set global LLM provider
syrin config set openai.model "gpt-4-turbo" --global

# Set default provider
syrin config set-default claude --global
```

See the Configuration Guide for more details.
The Problem: Your LLM picks the wrong tool, or calls tools with missing parameters. You only find out after deployment when users report broken behavior.
The Solution: Static analysis of your tool contracts catches issues before any code runs.
```bash
syrin analyse          # Check all tool contracts
syrin analyse --ci     # Exit code 1 on errors (for CI pipelines)
syrin analyse --strict # Treat warnings as errors
```

What it catches:
- Vague or missing tool descriptions
- Parameters without descriptions (LLMs guess wrong)
- Overlapping tools that confuse model selection
- Schema mismatches between chained tools
- Circular dependencies
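The "parameters without descriptions" check is the easiest to picture. The tool definitions below are illustrative, not taken from Syrin; they show the kind of schema a static pass can flag before an LLM ever has to guess:

```python
# Two hypothetical MCP tool definitions. The first gives the model
# nothing to go on; the second is what analysis pushes you toward.
vague_tool = {
    "name": "get_data",                         # ambiguous name
    "description": "Gets data",                 # vague description
    "parameters": {"id": {"type": "string"}},   # parameter has no description
}

clear_tool = {
    "name": "fetch_user_profile",
    "description": "Fetch a user's profile by their numeric account ID.",
    "parameters": {
        "id": {
            "type": "string",
            "description": "Numeric account ID, e.g. '42'.",
        },
    },
}

def undescribed_params(tool: dict) -> list[str]:
    """List parameters missing a description (one static check sketched here)."""
    return [name for name, spec in tool["parameters"].items()
            if "description" not in spec]

print(undescribed_params(vague_tool))  # ['id']
print(undescribed_params(clear_tool))  # []
```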
The Problem: Your LLM calls tools, but you can't see why it chose that tool, what parameters it's sending, or what happens between steps. You're debugging blind.
The Solution: An interactive environment where you see every tool proposal before it executes.
```bash
syrin dev        # Preview mode (no execution)
syrin dev --exec # Enable execution when ready
```

What you get:
- See exactly which tool the LLM wants to call and why
- Inspect parameters before they're sent
- Step through tool chains one call at a time
- Full event trace of every decision
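The preview-before-execute idea can be sketched in a few lines. The names below (`run_with_preview`, `approve`) are hypothetical, not Syrin's API; the point is that every proposal passes through a gate before anything runs:

```python
def run_with_preview(proposal: dict, execute, approve) -> dict:
    """Show the tool proposal; only call the tool if the gate approves."""
    print(f"LLM proposes: {proposal['tool']}({proposal['params']})")
    if not approve(proposal):
        return {"status": "skipped", "proposal": proposal}
    return {"status": "executed", "result": execute(proposal)}

# Preview mode: the proposal is visible, but nothing executes
# (analogous to running dev mode without the --exec flag).
result = run_with_preview(
    {"tool": "delete_file", "params": {"path": "/tmp/x"}},
    execute=lambda p: "done",
    approve=lambda p: False,
)
print(result["status"])  # skipped
```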
The Problem: A tool works fine in manual testing, but in production it has side effects you didn't expect, returns massive outputs that blow your context window, or behaves differently on repeated calls.
The Solution: Sandboxed execution that validates each tool against its behavioral contract.
```bash
syrin test                   # Test all tools
syrin test --tool fetch_user # Test specific tool
syrin test --strict          # Warnings become errors
syrin test --json            # JSON output for CI
```

What it catches:
- Unexpected side effects (file writes, network calls)
- Non-deterministic outputs
- Output size explosions
- Hidden dependencies on external state
- Contract violations
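One of these checks, non-deterministic outputs, reduces to a simple probe: call the tool more than once with identical input and compare the results. This is an illustrative sketch, not Syrin's implementation:

```python
import json
import random

def is_deterministic(tool, params: dict, runs: int = 2) -> bool:
    """Call the tool repeatedly with the same input; compare serialized outputs."""
    outputs = [json.dumps(tool(**params), sort_keys=True) for _ in range(runs)]
    return len(set(outputs)) == 1

def stable_tool(x):
    return {"value": x * 2}

def flaky_tool(x):
    # Hidden nondeterminism: a timestamp-like field changes every call.
    return {"value": x * 2, "nonce": random.random()}

print(is_deterministic(stable_tool, {"x": 3}))  # True
print(is_deterministic(flaky_tool, {"x": 3}))   # False
```

A real harness would also snapshot the filesystem and network to surface the side effects listed above, but the same call-twice-and-diff idea underlies the determinism check.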
The Problem: Something's misconfigured, but you're not sure what. API keys? Transport settings? MCP connection?
The Solution: A single command that checks everything.
```bash
syrin doctor            # Check config, env, connections
syrin test --connection # Test MCP connection only
```

Define behavioral guarantees for your tools in `tools/<tool-name>.yaml` files:
```yaml
version: 1
tool: fetch_user
contract:
  input_schema: FetchUserRequest
  output_schema: User
  guarantees:
    side_effects: none
    max_output_size: 10kb
```

See Tool Contracts Documentation for details.
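As an example of what enforcing such a contract involves, here is a minimal sketch of checking the `max_output_size` guarantee. The helper names and supported units are assumptions for illustration; Syrin's actual enforcement may differ:

```python
def parse_size(limit: str) -> int:
    """Convert a contract size limit like '10kb' to bytes."""
    # Check longer suffixes first so 'kb' is not mistaken for 'b'.
    units = [("kb", 1024), ("mb", 1024 ** 2), ("b", 1)]
    s = limit.lower().strip()
    for suffix, mult in units:
        if s.endswith(suffix):
            return int(s[: -len(suffix)]) * mult
    raise ValueError(f"unrecognised size limit: {limit!r}")

def within_limit(output: str, limit: str) -> bool:
    """True if the tool output fits inside the contract's size budget."""
    return len(output.encode("utf-8")) <= parse_size(limit)

print(parse_size("10kb"))                 # 10240
print(within_limit("x" * 100, "10kb"))    # True
print(within_limit("x" * 20000, "10kb"))  # False — output size explosion
```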
Syrin supports two configuration layers:

- **Local** (`syrin.yaml` in project root) — transport, MCP connection, LLM providers
- **Global** (`~/.syrin/syrin.yaml`) — shared LLM API keys and defaults across projects
Local config overrides global config. CLI flags override both.
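That precedence rule can be sketched as a simple layered merge. The config keys below are illustrative, not Syrin's actual schema:

```python
def resolve_config(global_cfg: dict, local_cfg: dict, cli_flags: dict) -> dict:
    """Merge config layers: local overrides global, CLI flags override both."""
    merged = dict(global_cfg)
    merged.update(local_cfg)
    # Only flags the user actually passed override the file layers.
    merged.update({k: v for k, v in cli_flags.items() if v is not None})
    return merged

cfg = resolve_config(
    {"provider": "openai", "transport": "stdio"},            # ~/.syrin/syrin.yaml
    {"transport": "http"},                                   # ./syrin.yaml
    {"url": "http://localhost:8000/mcp", "transport": None}, # CLI flags
)
print(cfg["transport"])  # http   — local file beat the global default
print(cfg["provider"])   # openai — inherited from the global layer
```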
Configuration reference: https://docs.syrin.dev/configuration
- `stdio` — Syrin manages the MCP server process (recommended for development)
- `http` — Syrin connects to an existing server (common in production)
Transport documentation: https://docs.syrin.dev/configuration/transport
Supported providers:
- OpenAI
- Claude (Anthropic)
- Ollama (local models)
LLMs propose actions.
Syrin governs execution.
Provider configuration: https://docs.syrin.dev/configuration/llm
- Discord — Ask questions, share feedback
- GitHub Discussions — Feature ideas, show & tell
- Documentation — Full guides and API reference
Found a bug or have a feature request? Open an issue — we read every one.
If Syrin helped you catch something your logs missed, a star on GitHub helps others find it too.
- Documentation: https://docs.syrin.dev
- GitHub: https://github.com/Syrin-Labs/cli
- Issues: https://github.com/Syrin-Labs/cli/issues
- npm: https://www.npmjs.com/package/@syrin/cli
Contributions are welcome! Please read our Contributing Guide and Code of Conduct before submitting PRs.
For security issues, please see our Security Policy.
ISC License. See LICENSE for details.
Made with care by Syrin Labs.