A Claude Skills compatible runtime with dual sandboxing: macOS seatbelt for native Python and shell scripts (primary), plus experimental WASM-based sandboxing for cross-platform security. OpenSkills implements the Claude Code Agent Skills specification, providing a secure, flexible runtime for executing skills in any agent framework.
OpenSkills is syntactically 100% compatible with Claude Skills, meaning any skill that follows the Claude Skills format (SKILL.md with YAML frontmatter) will work with OpenSkills. What makes OpenSkills unique is its dual sandboxing approach:
- macOS seatbelt sandboxing (primary) for native Python and shell scripts - production-ready, fully supported
- WASM/WASI sandboxing (experimental) for cross-platform security and consistency - available for early adopters
Primary execution model: Native Python and shell scripts via macOS seatbelt (with Linux seccomp support planned). This is the recommended, production-ready approach that works with the full Python ecosystem and native tools.
Experimental WASM support: WASM sandboxing is available for developers who want to explore cross-platform deterministic execution, but it is not required for using OpenSkills. Most skills work perfectly fine with native scripts.
OpenSkills can be integrated into any agent framework (LangChain, Vercel AI SDK, custom frameworks) to give agents access to Claude-compatible skills.
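For readers new to the format, here is a minimal SKILL.md sketch. The skill name, description, and steps are invented for illustration; the required frontmatter fields are name and description, and the allowed-tools field shown is optional (see the Claude Skills specification for the complete field list):

```markdown
---
name: csv-summarizer
description: Summarizes a CSV file, reporting row counts and per-column statistics. Use when the user asks for a quick overview of tabular data.
allowed-tools: Read, Bash
---

# CSV Summarizer

1. Read the CSV file the user points to.
2. Run scripts/summarize.py to compute row counts and per-column statistics.
3. Return the summary as a Markdown table.
```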
- 100% Syntactic Compatibility: OpenSkills reads and executes skills using the exact same SKILL.md format as Claude Skills. Skills can be shared between Claude Code and OpenSkills without modification.
- Dual Sandbox Architecture: OpenSkills combines macOS seatbelt (primary) with experimental WASM/WASI 0.3 sandboxing:
- macOS Seatbelt (primary): Native Python and shell script execution with OS-level sandboxing - production-ready, full ecosystem support
- WASM/WASI (experimental): Cross-platform security, capability-based permissions, memory safety, deterministic execution - available for early adopters
- Automatic Detection: Runtime automatically chooses the appropriate sandbox based on skill type
- Native-first: Most skills use native scripts; WASM is optional for specific use cases
- Native Scripts First: OpenSkills prioritizes native Python and shell script execution, which provides full access to the Python ecosystem and native tools. WASM compilation is available as an experimental option for specific use cases requiring cross-platform determinism.
OpenSkills is designed for any agent framework that needs Claude-compatible skills:
- Agent Framework Integration: Works with LangChain, Vercel AI SDK, custom frameworks, or any system that needs tool-like capabilities
- Enterprise Agents: Internal skills developed by trusted developers
- Native Scripts: Primary execution model using Python and shell scripts with OS-level sandboxing
- Cross-Platform Native: macOS seatbelt (production), Linux seccomp (planned)
- Experimental WASM: Optional WASM execution for specific use cases requiring determinism
- Security & Auditability: Both sandboxing methods provide strong isolation and audit logging
Recommended approach: Use native Python and shell scripts for most skills. WASM is available for experimental use cases but is not required.
- Native Scripts on Non-macOS:
- Native Python and shell scripts are supported only on macOS (seatbelt)
- Linux seccomp support is planned
- WASM Support (Experimental):
- WASM sandboxing is experimental and not the primary execution method
- Build workflow required: JavaScript/TypeScript skills must be compiled to WASM components before execution
- Limited native library support: Native Python packages, shell tools, etc. don't work in WASM
- WASI compatibility required: Code must use WASI APIs, not native OS APIs
Recommendation: Use native Python and shell scripts for production skills. WASM is available for experimental use cases but is not required.
OpenSkills will evolve to address limitations while maintaining its native-first approach:
- Linux Native Scripting: Linux seccomp support is planned to complete cross-platform native sandboxing (macOS seatbelt is already production-ready).
- WASM Improvements (experimental): Continued development of WASM support for specific use cases requiring determinism and cross-platform consistency.
- Enhanced Tooling: Better development tools and templates for both native scripts and WASM compilation.
- 100% Claude Skills Compatible: Full SKILL.md format support
- Dual Sandbox Architecture: macOS seatbelt (primary) + experimental WASM/WASI 0.3
- Native Script Support: Execute Python and shell scripts on macOS via seatbelt (production-ready)
- Any Agent Framework: Integrate with LangChain, Vercel AI SDK, or custom frameworks
- Pre-built Tools: Ready-to-use tool definitions for TS/Python (~200 lines less code)
- Progressive Disclosure: Efficient tiered loading (metadata → instructions → resources)
- Multi-Language Bindings: Rust core with TypeScript and Python bindings
- Capability-Based Security: Fine-grained permissions via seatbelt profiles (and WASI for experimental WASM)
- Build Tool: openskills build for compiling TS/JS to WASM components (experimental)
- Cross-Platform Native: macOS seatbelt (production), Linux seccomp (planned)
- Workspace Management: Built-in sandboxed workspace for file I/O operations
# Rust (from source)
git clone https://github.com/Geeksfino/openskills.git
cd openskills
# Initialize submodules (required for tests and examples)
git submodule update --init --recursive
cd runtime
cargo build --release
# TypeScript
npm install @finogeek/openskills
# Python
pip install finclip-openskills
# Note: Pre-built wheels are available for macOS and Linux only.
# Windows users need to build from source: git clone https://github.com/Geeksfino/openskills.git && cd openskills/bindings/python && pip install maturin && maturin develop

OpenSkills uses a plugin-based build system for compiling JavaScript/TypeScript → WASM. The system supports multiple build backends (plugins), allowing you to choose the compiler that best fits your needs.
Plugin System Architecture:
- Plugins: Modular build backends that handle compilation (e.g., javy, quickjs, assemblyscript)
- Auto-detection: When no plugin is specified, the system tries available plugins in order until one works
- Plugin selection: Choose explicitly via the --plugin flag or an .openskills.toml config file
Recommended for new users: The quickjs plugin (easiest setup - just run the setup script below)
First-time setup (required before building skills):
Run the setup script to install build tools and download dependencies:
# This will:
# - Download the WASI adapter
# - Install javy CLI (downloads pre-built binary when available)
# - Install wasm-tools
# - Check for optional tools (AssemblyScript)
./scripts/setup_build_tools.sh

Build a skill:
# Build a skill from TypeScript/JavaScript
cd my-skill
openskills build
# Auto-detection: tries plugins in order (javy → quickjs → assemblyscript)
# until it finds one that's available and has all dependencies

Choose a plugin explicitly:
openskills build --plugin quickjs # Recommended: easiest setup
openskills build --plugin javy # Requires javy plugin.wasm file
openskills build --plugin assemblyscript # Requires asc compiler
openskills build --list-plugins # Show all available plugins and their status

Plugin comparison:
- quickjs (recommended): Easiest setup - just run the setup script. Uses javy CLI + wasm-tools. Supports WASI 0.3.
- javy: Requires building the javy plugin.wasm file. Uses the javy-codegen library. Legacy support.
- assemblyscript: High-performance TypeScript-like language. Requires the asc compiler.
Alternative: javy plugin setup (if you prefer the default javy plugin):
If you want to use the javy plugin instead of quickjs, you need to build the javy plugin:
# Build the javy plugin (one-time setup)
./scripts/build_javy_plugin.sh
# Export the plugin path (or add to your shell profile)
export JAVY_PLUGIN_PATH=/tmp/javy/target/wasm32-wasip1/release/plugin_wizened.wasm

Config file (optional): place .openskills.toml or openskills.toml in the skill directory.
[build]
plugin = "quickjs" # or "assemblyscript"
# Plugin options are usually auto-detected
# [build.plugin_options]
# adapter_path = "~/.cache/openskills/wasi_preview1_adapter.wasm"

How the plugin system works:
- Plugin selection: You can specify a plugin via the --plugin flag, a config file, or let the system auto-detect
- Auto-detection: When no plugin is specified, the system tries registered plugins in order until it finds one that:
  - Is available (has all required dependencies)
  - Supports the source file extension (.ts, .js, etc.)
- Plugin execution: Each plugin handles the full compilation pipeline:
  - TypeScript transpilation (if needed)
  - JavaScript/TypeScript → WASM core module
  - WASM core → WASI 0.3 component (for quickjs/assemblyscript)
- Automatic setup: QuickJS/AssemblyScript plugins auto-download the WASI adapter if needed
- Configuration: Plugins can be configured via .openskills.toml or --plugin-option flags
See Build Tool Guide for detailed information about the build process and plugin mechanism.
use openskills_runtime::{OpenSkillRuntime, ExecutionOptions};
use serde_json::json;
// Discover skills from standard locations
let mut runtime = OpenSkillRuntime::new();
runtime.discover_skills()?;
// Execute a skill
let result = runtime.execute_skill(
"my-skill",
ExecutionOptions {
timeout_ms: Some(5000),
input: Some(json!({"input": "data"})),
..Default::default()
}
)?;

See Developer Guide for detailed usage examples.
OpenSkills works with any agent framework to give agents access to Claude-compatible skills. The runtime provides pre-built tools that eliminate boilerplate code and simplify agent setup.
Vercel AI SDK (TypeScript) - ~120 lines total:
import { OpenSkillRuntime } from "@finogeek/openskills";
import { createSkillTools, getAgentSystemPrompt } from "@finogeek/openskills/tools";
import { generateText } from "ai";
// Initialize runtime
const runtime = OpenSkillRuntime.fromDirectory("./skills");
runtime.discoverSkills();
// Create pre-built tools (replaces ~200 lines of manual tool definitions)
const tools = createSkillTools(runtime, {
workspaceDir: "./output" // Sandboxed workspace for file I/O
});
// Get skill-agnostic system prompt (teaches agent HOW to use skills)
const systemPrompt = getAgentSystemPrompt(runtime);
// Use with any LLM
const result = await generateText({
model: yourModel,
system: systemPrompt,
prompt: userQuery,
tools,
});

LangChain (Python) - Pre-built tools available:
from openskills import OpenSkillRuntime
from openskills_tools import create_langchain_tools, get_agent_system_prompt
# Initialize runtime
runtime = OpenSkillRuntime.from_directory("./skills")
runtime.discover_skills()
# Create pre-built LangChain tools
tools = create_langchain_tools(runtime, workspace_dir="./output")
# Get system prompt
system_prompt = get_agent_system_prompt(runtime)
# Use with LangChain agent
agent = create_agent(model, tools, system_prompt=system_prompt)

Benefits of Pre-built Tools:
- ✅ ~200 lines less code: No need to manually define tools
- ✅ Workspace management: Automatic sandboxed file I/O
- ✅ Skill-agnostic prompts: Runtime generates system prompts
- ✅ Security built-in: Path validation, permission checks
- ✅ Works with any skill: No code changes needed
If you need custom tool definitions, you can still integrate manually:
Vercel AI SDK (Manual)
import { OpenSkillRuntime } from "@finogeek/openskills";
import { tool } from "ai";
import { z } from "zod";

const runtime = OpenSkillRuntime.fromDirectory("./skills");

const runSkill = tool({
  inputSchema: z.object({ skill_id: z.string(), input: z.string() }),
  execute: async ({ skill_id, input }) => {
    return runtime.executeSkill(skill_id, { input }).outputJson;
  },
});

See examples/agents/simple for a complete example using pre-built tools, or examples/agents for other integration patterns.
OpenSkills uses a Rust core runtime with language bindings:
┌──────────────────────┐
│   Your Application   │
│   (TS/Python/Rust)   │
└──────────┬───────────┘
           │
    ┌──────▼──────┐
    │   Bindings  │   (napi-rs / PyO3)
    └──────┬──────┘
           │
    ┌──────▼──────┐
    │  Rust Core  │   (openskills-runtime)
    └──────┬──────┘
           │
    ┌──────▼──────┐
    │  Execution  │   (WASM/WASI 0.3 + seatbelt on macOS)
    └─────────────┘
- Skill Discovery: Scans directories for SKILL.md files
- Progressive Loading: Loads metadata → instructions → resources on demand
- Execution: Runs wasm/skill.wasm in Wasmtime or native .py/.sh via seatbelt on macOS
- Permission Enforcement: Capabilities mapped from allowed-tools for WASM or seatbelt
- Audit Logging: All executions logged with input/output hashes
OpenSkills is the only runtime that combines:
- WASM/WASI Sandboxing: Cross-platform security with capability-based permissions
- macOS Seatbelt Sandboxing: Native Python and shell script execution with OS-level isolation
- Automatic Detection: Runtime automatically chooses the right sandbox for each skill
- Agent Framework Agnostic: Works with any agent framework (LangChain, Vercel AI SDK, custom)
This dual approach means you get:
- Native Flexibility: Full Python ecosystem and native tools via seatbelt (primary)
- Experimental WASM: Cross-platform determinism for specific use cases (optional)
- Security: Both sandboxing methods provide strong isolation
- Compatibility: 100% compatible with Claude Skills specification
Status: WASM sandboxing is experimental and not the primary execution method. Most skills work perfectly with native Python and shell scripts.
Developer Note on WASI Versions: The documentation refers to "WASI 0.3" as our target, but the current build toolchain (using the wasi_snapshot_preview1 adapter) produces WASI 0.2 components. The runtime supports both WASI 0.2 and 0.3 - it attempts 0.3 instantiation first, then falls back to 0.2. Native WASI 0.3 toolchains (e.g., Rust's wasm32-wasip3 target) are expected to mature in 2026, at which point components can be built natively for WASI 0.3 without the adapter.
While native scripts are our primary execution model, we're investing in WASM support for specific use cases where it provides unique value. Here's our perspective on WASM's role:
✅ Determinism: Same input → same output, critical for audit, replay, and compliance
✅ Fast Startup: Millisecond-level startup times, great for frequently-invoked agent skills
✅ Strong Isolation by Design: No syscalls unless explicitly exposed, capability-based access via WASI
✅ Portability: Identical execution on macOS, Linux, Windows
✅ Narrow Attack Surface: No shell, no fork bombs, no ptrace exploits
Best for: Policy logic, orchestration, validation, scoring, reasoning glue, and deterministic workflows.
❌ Full Python Ecosystem: NumPy, SciPy, pandas, PyTorch rely on native extensions, BLAS, CUDA
❌ GPU & Hardware Acceleration: Experimental, fragile, not regulator-friendly
❌ OS-Native Behaviors: File watchers, shared memory tricks, complex IPC
❌ Legacy Skills: Many assume Python + OS capabilities
You cannot wish these away. This is why we prioritize native scripts for production use.
Docker is an OS boundary. WASM is a language boundary.
They are complementary, not competing:
┌───────────────────────────────┐
│         Agent Runtime         │
│                               │
│  ┌─────────────────────────┐  │
│  │   WASM Skill Sandbox    │  │  ← Experimental: logic, policy, orchestration
│  │   - deterministic       │  │
│  │   - auditable           │  │
│  │   - fast startup        │  │
│  └─────────────────────────┘  │
│              │                │
│        delegate call          │
│              ▼                │
│  ┌─────────────────────────┐  │
│  │   Native Skill Sandbox  │  │  ← Primary: Python, ML, quant, native tools
│  │   - Python              │  │
│  │   - ML / Quant          │  │
│  │   - Seatbelt/seccomp    │  │
│  └─────────────────────────┘  │
└───────────────────────────────┘
WASM is:
- Always available (experimental)
- Default for specific use cases requiring determinism
- Trusted for logic and policy enforcement
Native is:
- Primary execution model
- Required for full ecosystem access
- Heavily controlled via OS sandboxes
Does WASM replace Docker? No. And it shouldn't.
- Docker = process isolation, filesystem virtualization, networking namespaces, cgroups
- WASM = instruction sandbox, capability runtime
They solve different problems. Trying to replace Docker with WASM leads to complexity, disappointment, and hacks.
WASM is a strong sandbox, but not a complete one.
What WASM isolates well:
- ✅ Memory safety (no arbitrary memory access)
- ✅ CPU instructions (no privileged ops)
- ✅ No syscalls unless exposed
- ✅ Deterministic execution
What WASM cannot fully control alone:
- ❌ Resource exhaustion (CPU time, memory growth, infinite loops) - needs host-enforced limits (see the sketch after this list)
- ❌ Host bugs - if the WASM runtime has a vulnerability, no second line of defense
- ❌ Native escapes via host functions - filesystem, networking, crypto functions run natively
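To make "host-enforced limits" concrete, here is a minimal Wasmtime sketch. It uses a plain core module rather than the component model OpenSkills targets, and it is not OpenSkills' actual runner code; it only illustrates how a host can cap fuel and memory so a runaway guest traps instead of hanging the agent:

```rust
use wasmtime::{Config, Engine, Linker, Module, Store, StoreLimits, StoreLimitsBuilder};

struct HostState {
    limits: StoreLimits,
}

fn main() -> wasmtime::Result<()> {
    // Enable fuel metering so guest execution is bounded by the host.
    let mut config = Config::new();
    config.consume_fuel(true);
    let engine = Engine::new(&config)?;

    // A deliberately misbehaving guest: an exported function that loops forever.
    let module = Module::new(&engine, r#"(module (func (export "run") (loop br 0)))"#)?;

    // Cap linear memory growth and instance count via a ResourceLimiter.
    let limits = StoreLimitsBuilder::new()
        .memory_size(64 << 20) // at most 64 MiB of linear memory
        .instances(1)
        .build();
    let mut store = Store::new(&engine, HostState { limits });
    store.limiter(|state| &mut state.limits);
    store.set_fuel(5_000_000)?; // execution budget (recent Wasmtime API; older versions use add_fuel)

    let linker = Linker::new(&engine);
    let instance = linker.instantiate(&mut store, &module)?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;

    // The infinite loop is stopped by the host once the fuel budget is spent.
    let err = run.call(&mut store, ()).unwrap_err();
    println!("guest trapped as expected: {err}");
    Ok(())
}
```

Wasmtime's epoch interruption offers a similar wall-clock-based cutoff; either way, the point is that these limits come from the host, not from WASM itself.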
Industry reality: Even serious systems layer sandboxes:
- Cloudflare Workers: WASM + OS isolation
- Fastly Compute@Edge: WASM + VM
- Wasmtime in production: WASM + seccomp
- Deno: V8 + OS sandbox
Nobody serious runs WASM naked at high trust boundaries.
For finance agents, you care about:
| Requirement | Native Scripts | WASM (Experimental) |
|---|---|---|
| Auditability | ★★★★ | ★★★★★ |
| Determinism | ★★★ | ★★★★★ |
| Policy enforcement | ★★★★ | ★★★★★ |
| Legacy quant code | ★★★★★ | ❌ |
| ML ecosystem | ★★★★★ | ❌ |
The answer is not "WASM or not."
The answer is: Native scripts first, WASM when necessary.
WASM will:
- Get better WASI support
- Get better language support
- Become a standard control layer
WASM will not:
- Replace Python ML stacks
- Replace OS-level sandboxes
- Become "run anything"
Betting on it as a universal runtime is risky.
Betting on it as a core logic sandbox is smart.
✅ Yes, support WASM/WASI long-term - for specific use cases
❌ No, do not rely on WASM alone - native scripts are primary
✅ Treat WASM as the control plane - logic, policy, orchestration
✅ Layer OS sandbox for native code - full ecosystem access
❌ Do not promise "Docker replacement" - they solve different problems
One sentence to anchor our architecture:
"WASM gives us deterministic control; OS sandboxes give us practical power."
This gives us:
- Credibility: Honest about limitations
- Safety: Defense in depth
- Flexibility: Right tool for the job
- Future optionality: Can evolve as WASM matures
| Aspect | Claude Code | OpenSkills |
|---|---|---|
| SKILL.md Format | ✅ Full support | ✅ 100% compatible |
| Sandbox | seatbelt/seccomp | seatbelt (macOS, primary) + WASM/WASI 0.3 (experimental) ✅ |
| Cross-platform | OS-specific | Native macOS (production), Linux planned; WASM identical (experimental) |
| Script Execution | Native (Python, shell) | Native (macOS, primary) + WASM components (experimental) |
| Build Required | No | No for native scripts. Yes for WASM (experimental, TS/JS → WASM) |
| Native Python | ✅ Supported | ✅ macOS (seatbelt) |
| Shell Scripts | ✅ Supported | ✅ macOS (seatbelt) |
| Agent Framework | Claude Desktop & Claude Agent SDK | Any framework ✅ |
| Use Case | Desktop users, arbitrary skills | Enterprise agents, any agent framework |
openskills/
├── runtime/                     # Rust core runtime
│   ├── src/
│   │   ├── build.rs             # Build tool for TS/JS → WASM (uses javy-codegen)
│   │   ├── wasm_runner.rs       # WASI 0.3 execution
│   │   ├── native_runner.rs     # Seatbelt execution (macOS)
│   │   └── ...
│   └── BUILD.md                 # Build tool documentation
├── scripts/
│   └── build_javy_plugin.sh     # Helper script to build javy plugin
├── bindings/                    # Language bindings
│   ├── ts/                      # TypeScript (napi-rs)
│   └── python/                  # Python (PyO3)
├── docs/                        # Documentation
│   ├── developers.md            # Developer guide
│   ├── contributing.md          # Contributing guide
│   ├── architecture.md          # Architecture details
│   └── spec.md                  # Specification
├── examples/                    # Example skills
└── scripts/                     # Build scripts
- Developer Guide: Using OpenSkills in your applications
- Build Tool Guide: Compiling TypeScript/JavaScript skills
- Contributing Guide: How to contribute to OpenSkills
- Architecture: Internal architecture and design
- Specification: Complete runtime specification
# Clone with submodules (for tests and examples)
git clone https://github.com/Geeksfino/openskills.git
cd openskills
git submodule update --init --recursive
# Build everything
./scripts/build_all.sh
# Build runtime only
cd runtime
cargo build --release
# Build bindings
./scripts/build_bindings.sh

The examples/claude-official-skills directory is a git submodule pointing to anthropics/skills. This provides access to official Claude Skills for testing and reference.
- Initial clone: Use git clone --recursive <url> or run git submodule update --init --recursive after cloning
- Updating: cd examples/claude-official-skills && git pull && cd ../.. && git add examples/claude-official-skills && git commit
- Tests: The test suite gracefully skips tests if the submodule is not initialized
- ✅ Rust Runtime: Fully functional
- ✅ TypeScript Bindings: Working
- ✅ Python Bindings: Working (requires Python ≤ 3.13)
- ✅ Native Scripting: Seatbelt sandbox (macOS, production-ready)
- 🧪 WASM Execution: WASI 0.3 component model (experimental)
- 🧪 Build Tool: openskills build for TS/JS → WASM compilation (experimental)
- 🚧 Native Scripting (Linux): Seccomp support planned
- FinClip ChatKit: A mobile-friendly SDK for building AI-powered chat experiences. Provides production-ready chat UI components for iOS and Android, with support for AG-UI, MCP-UI and OpenAI Apps SDK integration. Perfect for developers building mobile agent applications that need both the runtime capabilities of OpenSkills and polished chat interfaces.
MIT