
Lambda CLI

Caution

UNOFFICIAL PROJECT — This is a community-built tool, not affiliated with or endorsed by Lambda.


A fast CLI and MCP server for managing Lambda cloud GPU instances.

Two ways to use it:

  • CLI (lambda) - Direct terminal commands for managing GPU instances
  • MCP Server (lambda-mcp) - Let AI assistants like Claude manage your GPU infrastructure

Installation

Homebrew (macOS/Linux)

brew install strand-ai/tap/lambda-cli

From Source

cargo install --git https://github.com/Strand-AI/lambda-cli

Pre-built Binaries

Download from GitHub Releases.

Authentication

Get your API key from the Lambda dashboard.

Option 1: Environment Variable

export LAMBDA_API_KEY=<your-key>

Option 2: Command (1Password, etc.)

export LAMBDA_API_KEY_COMMAND="op read op://Personal/Lambda/api-key"

The command is executed at startup and its output is used as the API key. This works with any secret manager.
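For example, any secret manager whose CLI prints the key to stdout will do. With the pass password store (the entry name lambda/api-key is illustrative):

export LAMBDA_API_KEY_COMMAND="pass show lambda/api-key"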

Notifications (Optional)

Get notified on Slack, Discord, or Telegram when your instance is ready and SSH-able.

Configuration

Set one or more of these environment variables:

# Slack (incoming webhook)
export LAMBDA_NOTIFY_SLACK_WEBHOOK="https://hooks.slack.com/services/T00/B00/XXX"

# Discord (webhook URL)
export LAMBDA_NOTIFY_DISCORD_WEBHOOK="https://discord.com/api/webhooks/123/abc"

# Telegram (bot token + chat ID)
export LAMBDA_NOTIFY_TELEGRAM_BOT_TOKEN="123456:ABC-DEF..."
export LAMBDA_NOTIFY_TELEGRAM_CHAT_ID="123456789"

Setup Guides

Slack: Create an Incoming Webhook in your workspace.

Discord: In channel settings → Integrations → Webhooks → New Webhook → Copy Webhook URL.

Telegram:

  1. Message @BotFather and send /newbot → copy the token
  2. Message your bot, then visit https://api.telegram.org/bot<TOKEN>/getUpdates to find your chat ID
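For example, after messaging the bot you can pull the chat ID out of the getUpdates response with curl and jq (a sketch; it assumes your message is the most recent update):

curl -s "https://api.telegram.org/bot<TOKEN>/getUpdates" | jq '.result[-1].message.chat.id'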

CLI Usage

Commands

Command          Description
lambda list      Show available GPU types with pricing and availability
lambda running   Show your running instances
lambda start     Launch a new instance
lambda stop      Terminate an instance
lambda find      Poll until a GPU type is available, then launch

Examples

List available GPUs:

lambda list

Start an instance:

lambda start --gpu gpu_1x_a10 --ssh my-key --name "dev-box"

Stop an instance:

lambda stop --instance-id <id>

Wait for availability and auto-launch:

lambda find --gpu gpu_8x_h100 --ssh my-key --interval 30

CLI Options

start

Flag              Description
-g, --gpu         Instance type (required)
-s, --ssh         SSH key name (required)
-n, --name        Instance name
-r, --region      Region (auto-selects if omitted)
-f, --filesystem  Filesystem to attach (must be in same region)
--no-notify       Disable notifications even if env vars are set
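For example, pinning a region and attaching a filesystem (the region and filesystem names are illustrative):

lambda start --gpu gpu_1x_a10 --ssh my-key --region us-east-1 --filesystem my-data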

find

Flag              Description
-g, --gpu         Instance type to wait for (required)
-s, --ssh         SSH key name (required)
--interval        Poll interval in seconds (default: 10)
-n, --name        Instance name when launched
-f, --filesystem  Filesystem to attach when launched
--no-notify       Disable notifications even if env vars are set
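For example, naming the instance and attaching a filesystem once it launches (the names here are illustrative):

lambda find --gpu gpu_8x_h100 --ssh my-key --name train-box --filesystem datasets --interval 60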

Notifications are automatic when env vars are configured. Use --no-notify to disable:

lambda start --gpu gpu_1x_a10 --ssh my-key --no-notify

MCP Server

The lambda-mcp binary is an MCP (Model Context Protocol) server that lets AI assistants manage your Lambda infrastructure.

Quick Start with npx

The easiest way to use lambda-mcp is via npx—no installation required:

npx @strand-ai/lambda-mcp

Options

Flag      Description
--eager   Execute API key command at startup instead of on first use

API Key Loading

When using LAMBDA_API_KEY_COMMAND, the MCP server defers command execution until the first API request by default. This avoids unnecessary delays when starting Claude Code if you don't use Lambda tools in every session.

Use --eager to execute the command at startup instead:

npx @strand-ai/lambda-mcp --eager

Note: The CLI (lambda) always executes the API key command at startup since it's used for immediate operations.
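Conceptually, the deferred loading behaves like the following Rust sketch (illustrative only, not the project's actual code): the key command is only run the first time a request needs the key, and the result is cached for the rest of the session.

use std::process::Command;
use std::sync::OnceLock;

// Cached API key, resolved at most once per process (illustrative sketch).
static API_KEY: OnceLock<String> = OnceLock::new();

fn api_key() -> &'static str {
    API_KEY.get_or_init(|| {
        // A literal key takes precedence if it is set.
        if let Ok(key) = std::env::var("LAMBDA_API_KEY") {
            return key;
        }
        // Otherwise run the configured command and use its trimmed stdout.
        let cmd = std::env::var("LAMBDA_API_KEY_COMMAND")
            .expect("set LAMBDA_API_KEY or LAMBDA_API_KEY_COMMAND");
        let out = Command::new("sh").arg("-c").arg(&cmd).output()
            .expect("failed to run LAMBDA_API_KEY_COMMAND");
        String::from_utf8_lossy(&out.stdout).trim().to_string()
    }).as_str()
}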

Available Tools

Tool                    Description
list_gpu_types          List all available GPU instance types with pricing, specs, and current availability
start_instance          Launch a new GPU instance (auto-notifies if configured)
stop_instance           Terminate a running instance
list_running_instances  Show all running instances with status and connection details
check_availability      Check if a specific GPU type is available

Auto-Notifications

When notification environment variables are configured, the MCP server automatically sends notifications when instances become SSH-able. No additional flags needed—just set the LAMBDA_NOTIFY_* env vars and launch instances as usual.

Claude Code Setup

claude mcp add lambda -s user -e LAMBDA_API_KEY=your-api-key -- npx -y @strand-ai/lambda-mcp

With 1Password CLI:

claude mcp add lambda -s user -e LAMBDA_API_KEY_COMMAND="op read op://Personal/Lambda/api-key" -- npx -y @strand-ai/lambda-mcp
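The notification variables can be passed the same way so the MCP server sends ready alerts (the webhook URL below is a placeholder):

claude mcp add lambda -s user -e LAMBDA_API_KEY=your-api-key -e LAMBDA_NOTIFY_SLACK_WEBHOOK="https://hooks.slack.com/services/T00/B00/XXX" -- npx -y @strand-ai/lambda-mcp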

Then restart Claude Code.

Example Prompts

Once configured, you can ask Claude things like:

  • "What GPUs are currently available on Lambda?"
  • "Launch an H100 instance with my ssh key 'macbook'"
  • "Show me my running instances"
  • "Check if any A100s are available"
  • "Terminate instance i-abc123"

Development

# Build
cargo build

# Run tests
cargo test

# Run CLI
cargo run --bin lambda -- list

# Run MCP server
cargo run --bin lambda-mcp

Releasing

To create a release:

  1. Update the version in Cargo.toml
  2. Merge to main — this automatically:
    • Creates a git tag
    • Builds binaries for all platforms
    • Publishes to npm
    • Updates the Homebrew formula
