FTL is an open-source framework for building and running polyglot Model Context Protocol (MCP) servers. It's designed from the ground up to be fast, secure, and portable, using a modern stack of open standards.
We believe the future of AI tooling shouldn't be locked into proprietary ecosystems. FTL is our commitment to that vision, built entirely on:
- WebAssembly (WASM): For secure, sandboxed execution with sub-millisecond cold starts.
- The Component Model: To compose tools written in different languages (Rust, Python, Go, TS) into a single application.
- Spin: The CNCF-hosted developer tool and runtime for building and running WASM applications.
This foundation ensures that what you build with FTL today will be compatible with the open, interoperable ecosystem of tomorrow.
This monorepo contains everything you need to build and deploy AI tools:
- ftl: CLI for managing FTL applications and deployments (Go)
- MCP Components: Pre-built gateway and authorizer for secure MCP servers (Rust/WASM)
- SDKs: Multi-language support for building AI tools (Python, Rust, TypeScript, Go)
- Templates: Quick-start patterns for common use cases
- Examples: Real-world applications demonstrating best practices
- ftl (This Repo): The open-source framework and CLI for building MCP servers that can run anywhere Spin apps are supported.
- FTL Engine: Our optional, managed platform for deploying ftl applications to a globally distributed edge network, offering the simplest path to production.
- Polyglot by Design: SDKs for Python, Rust, TypeScript, and Go let you write tools in the best language for the job.
- Seamless Composition: Mix and match tools written in different languages within a single MCP server.
- Secure & Sandboxed: Each tool runs in an isolated WASM sandbox, with no access to the host system unless explicitly granted.
- Run Anywhere: Deploy to any host compatible with Spin/Wasmtime.
- MCP Compliant: Out-of-the-box support for Streamable HTTP and spec-compliant Authorization.
- Blazing Fast: Sub-millisecond cold starts and near-native performance, powered by Wasmtime.
To build tools in different languages, you'll need their corresponding toolchains:
- Rust: `cargo` (via rustup)
- TypeScript/JavaScript: `node` and `npm` (via Node.js)
- Python: `python3` and `componentize-py` (install with `pip install componentize-py`)
- Go: `go` and `tinygo` (via Go and TinyGo)
To install the `ftl` CLI, download and run the install script manually, or pipe it with curl or wget:

```sh
curl -o- https://raw.githubusercontent.com/fastertools/ftl/main/install.sh | bash
# or
wget -qO- https://raw.githubusercontent.com/fastertools/ftl/main/install.sh | bash
```
```sh
ftl init fast-project
cd fast-project
ftl add fast-tool --language rust
ftl up --watch
```
```
✓ Starting development server with auto-rebuild...
👀 Watching for file changes

Serving http://127.0.0.1:3000
Available Routes:
  mcp: http://127.0.0.1:3000 (wildcard)
```
Example mcp.json config:

```json
{
  "mcpServers": {
    "fasttools": {
      "url": "http://127.0.0.1:3000",
      "transport": "http"
    }
  }
}
```
Or register it with Claude Code:

```sh
claude mcp add -t http fasttools http://127.0.0.1:3000
```
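Programmatic clients work the same way over Streamable HTTP. Here's a minimal sketch using the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`); the client name is arbitrary:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Point the transport at the dev server that `ftl up` is serving.
const transport = new StreamableHTTPClientTransport(new URL("http://127.0.0.1:3000"));

const client = new Client({ name: "ftl-demo-client", version: "0.1.0" });
await client.connect(transport);

// Discover the tools exposed through the FTL gateway.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```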
For the simplest path to a production-grade, globally distributed deployment, you can use FTL Engine. It handles scaling, security, and distribution for you on Akamai's edge network.
First, join the Discord to request early access.
```sh
ftl eng login
ftl eng deploy
```
```
▶ Deploying project to FTL Engine
✓ Configuring MCP authorization settings...
✓ MCP authorization set to: public
✓ Deployed!

MCP URL: https://8e264fc0-xxxx-aaaa-9999-9f5ab760092a.fwf.app
```
FTL composes your individual tool components with our gateway and authorizer components into a single Spin application. All calls between components happen securely in-memory, eliminating network latency between your tools.
```mermaid
graph TB
    subgraph "MCP Clients"
        Desktops["Cursor, Claude, ChatGPT"]
        Agents["LangGraph, Mastra, ADK, OpenAI Agents SDK"]
        Realtime["11.ai, LiveKit, Pipecat"]
    end

    MCP["Model Context Protocol<br/>(Streamable HTTP)"]

    subgraph "Host"
        subgraph "Spin/Wasmtime Runtime"
            subgraph "FTL Application"
                subgraph "FTL Components"
                    MCPAuth["MCP Authorizer"]
                    MCPGateway["MCP Gateway<br/>(Protocol, Routing, Validation)"]
                end
                subgraph "User Tool Components"
                    Weather["Weather Tools<br/>(TS/JS)"]
                    Physics["Physics Tools<br/>(Rust)"]
                    Data["Data Tools<br/>(Python)"]
                    Custom["Fun Tools<br/>(Go)"]
                end
            end
        end
    end

    Desktops -.-> MCP
    Agents -.-> MCP
    Realtime -.-> MCP
    MCP -.-> MCPAuth
    MCPAuth -.->|"Authorized requests (in-memory call)"| MCPGateway
    MCPGateway -.->|"In-memory call"| Weather
    MCPGateway -.->|"In-memory call"| Physics
    MCPGateway -.->|"In-memory call"| Data
    MCPGateway -.->|"In-memory call"| Custom
```
Internal isolation and MCP-compliant authorization.
Each WebAssembly module executes within a sandboxed environment separated from the host runtime using fault isolation techniques.
A component is a WebAssembly binary (which may or may not contain core modules) that is restricted to interacting with the outside world only through its imported and exported functions.
Allowed outbound hosts and accessible variables can be configured per individual tool component within a server.
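Under the hood this maps to Spin's standard per-component manifest fields. A sketch of the raw Spin form, with an illustrative component name and host (FTL's own configuration surface may expose these differently):

```toml
# Illustrative excerpt of a Spin v2 manifest (spin.toml).
[component.weather-tool]
source = "weather-tool/app.wasm"
# Only the hosts listed here are reachable; all other outbound traffic is denied.
allowed_outbound_hosts = ["https://api.weather.example"]

[component.weather-tool.variables]
# Variables must be granted explicitly per component; nothing is inherited.
api_key = "{{ weather_api_key }}"
```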
Out-of-the-box support for configurable MCP-compliant authorization, including:
- A spec-compliant OAuth 2.1 implementation
- OAuth 2.0 Dynamic Client Registration Protocol (RFC 7591)
- OAuth 2.0 Protected Resource Metadata (RFC 9728)
- OAuth 2.0 Authorization Server Metadata (RFC 8414)
Plug in your own JWT issuer with simple configuration.
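From the client's perspective nothing FTL-specific is needed; a bearer token from your issuer simply rides along on each request. A sketch with the TypeScript MCP SDK, passing standard fetch options (the URL and environment variable are placeholders):

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// A JWT minted by your own issuer (hypothetical environment variable).
const token = process.env.MY_ISSUER_JWT;

// The MCP Authorizer validates the bearer token before the gateway routes the call.
const transport = new StreamableHTTPClientTransport(
  new URL("https://your-server.example"),
  { requestInit: { headers: { Authorization: `Bearer ${token}` } } }
);

const client = new Client({ name: "authed-client", version: "0.1.0" });
await client.connect(transport);
```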
FTL Engine is an end-to-end platform for running remote tools called by AI agents.
Tools cold start in under half a millisecond, instantly scale up to meet demand, and scale down to zero.
Engines run on Fermyon Wasm Functions on Akamai, one of the most globally distributed edge compute networks.
Cost scales predictably with usage. There are no idle costs and no price variables like execution duration, region, memory, provisioned concurrency, reserved concurrency, etc. Cold starts and init phases are architected out. Engine specs are fixed and scaling is completely horizontal and automatic.
Tools are automatically deployed across the global network edge. Tool calls are routed to an Engine running on the most optimal Akamai edge PoP, enabling consistently low latency across geographic regions.
The FTL components handle MCP implementation, auth, tool call routing, and tool call argument validation.
Bring your own JWT issuer or OAuth provider via simple configuration, or use FTL's by default.
We welcome contributions and discussion. Please see the Contributing Guide for details.
Apache-2.0 - see LICENSE for details.
FTL is built on top of these excellent projects: