MCP server that enables LLMs to execute commands on remote servers via SSH.
This project provides a Model Context Protocol (MCP) server that allows Large Language Models to:
- Connect to remote SSH servers
- Execute commands on those servers
- Retrieve the results
Create a configuration file at `~/.ssh-mcp-config.yaml` with your SSH connection details:
```yaml
connections:
  server1:
    hostname: example.com
    port: 22
    username: user
    auth_method: key          # or password
    key_path: ~/.ssh/id_rsa   # only needed for key auth
  server2:
    hostname: another-server.com
    port: 2222
    username: admin
    auth_method: password
    password: ${SSH_PASSWORD} # Use environment variable

defaults:
  timeout: 30                 # seconds
  max_output_size: 1048576    # 1 MB
  allowed_commands:
    - ls
    - cat
    - grep
    # Add more allowed commands as needed
```
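The `${SSH_PASSWORD}` value above is resolved from the environment rather than stored in the file. As a rough illustration of that behaviour (not SSH-MCP's actual implementation), a loader can expand `${VAR}` placeholders before parsing the YAML; the helper name `load_config` below is purely illustrative:

```python
import os
import yaml

def load_config(path: str = "~/.ssh-mcp-config.yaml") -> dict:
    """Illustrative loader: read the YAML config and expand ${VAR} placeholders."""
    with open(os.path.expanduser(path)) as f:
        raw = f.read()
    # os.path.expandvars substitutes ${SSH_PASSWORD}-style references
    # with values from the current environment.
    return yaml.safe_load(os.path.expandvars(raw))

config = load_config()
print(config["connections"]["server1"]["hostname"])  # example.com
```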
```bash
# Clone the repository
git clone https://github.com/yourusername/ssh-mcp.git
cd ssh-mcp

# Install dependencies with uv
uv add "mcp[cli]" paramiko pyyaml

# Install the package
uv add --editable .
```
Run the MCP server:

```bash
mcp dev ssh_mcp/server.py
```
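Once the server is running, any MCP client can connect to it, list its tools, and invoke them. Below is a minimal sketch using the MCP Python SDK over stdio; it assumes the server script can be launched directly with `uv run`, and the tool name `execute_command` and its arguments are illustrative guesses, so check the output of `list_tools()` for the names this server actually exposes:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the server as a subprocess and talk to it over stdio
    params = StdioServerParameters(command="uv", args=["run", "ssh_mcp/server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])
            # Hypothetical tool name and arguments -- adjust to what list_tools() reports
            result = await session.call_tool(
                "execute_command",
                {"connection": "server1", "command": "ls -la /var/log"},
            )
            print(result.content)

asyncio.run(main())
```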
For containerized deployment, see the comprehensive Docker Guide.

Quick start:

```bash
# Build and run
docker build -t ssh-mcp .
docker run -it --rm \
  -v ~/.ssh:/home/app/.ssh:ro \
  -v ~/.ssh-mcp-config.yaml:/home/app/.ssh-mcp-config.yaml:ro \
  ssh-mcp
```
- SSH-MCP implements command allowlisting to prevent arbitrary command execution (a sketch of the idea follows this list)
- Sensitive information like passwords should be provided via environment variables
- The configuration file should have appropriate permissions (chmod 600)
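To make the allowlisting point concrete, here is a rough sketch of the pattern: the first token of a requested command is checked against `allowed_commands` before anything is executed over SSH. Function and variable names are illustrative only and do not reflect SSH-MCP's actual internals:

```python
import os
import shlex
import paramiko

ALLOWED_COMMANDS = {"ls", "cat", "grep"}  # mirrors allowed_commands in the config

def run_allowed(client: paramiko.SSHClient, command: str, timeout: int = 30) -> str:
    """Execute a command only if its executable is on the allowlist."""
    executable = shlex.split(command)[0]
    if executable not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not in allowlist: {executable}")
    _, stdout, _ = client.exec_command(command, timeout=timeout)
    return stdout.read().decode()

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "example.com",
    port=22,
    username="user",
    key_filename=os.path.expanduser("~/.ssh/id_rsa"),
)
print(run_allowed(client, "grep -i error /var/log/syslog"))
client.close()
```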