This project implements a Model Context Protocol (MCP) integration between AI coding assistants (Cursor, GitHub Copilot, Windsurf) and Figma, allowing AI assistants to communicate with Figma for reading designs and modifying them programmatically.
This MCP server works with all major AI-powered coding assistants:
- ✅ Cursor - Full MCP support with Agent mode
- ✅ GitHub Copilot (VS Code, JetBrains, Eclipse, Xcode) - Native MCP support
- ✅ Windsurf - Built-in MCP integration with Cascade agent
- ✅ Claude Desktop - Anthropic's flagship MCP client
- ✅ VS Code with GitHub Copilot extension
- ✅ Zed - Early MCP adopter
- ✅ Neovim - Via MCP plugin
- `src/talk_to_figma_mcp/` - TypeScript MCP server for Figma integration
- `src/mcp_plugin/` - Figma plugin for communicating with AI assistants (Cursor, GitHub Copilot, Windsurf, Claude Desktop, etc.)
- `src/socket.ts` - WebSocket server that facilitates communication between the MCP server and Figma plugin (started automatically by the MCP server); a minimal connection sketch follows
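For orientation, here is a minimal sketch of how a client such as the plugin UI might talk to the WebSocket relay: open a socket on the default port and join a channel. This is an illustration only, not the plugin's actual code; the `join` message shape and the channel ID are assumptions.

```typescript
// Minimal sketch of a client joining the relay (assumed message shape, default port 3055).
const PORT = 3055; // default; see FIGMA_WEBSOCKET_PORT under "Environment Variables"

const ws = new WebSocket(`ws://localhost:${PORT}`);

ws.onopen = () => {
  // Join the channel whose ID the Figma plugin displays to the user (hypothetical format).
  ws.send(JSON.stringify({ type: "join", channel: "vblckgfu" }));
};

ws.onmessage = (event) => {
  // Commands relayed from the MCP server would arrive here.
  console.log("relay message:", event.data);
};
```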
- Node.js version 18 or higher:
  node --version
- AI assistant with MCP support (see compatibility list above)
- Figma plugin - install from the Figma Community page or install locally
- Set up the project (installs dependencies and builds):
npm run setup
- Start the MCP server (this will also start the WebSocket server automatically):
npx ai-figma-mcp
GitHub Copilot has native MCP support across all major IDEs.
Method 1: Using VS Code UI (Recommended)
- Open the Command Palette (`Cmd+Shift+P` on macOS, `Ctrl+Shift+P` on Windows/Linux)
- Type "MCP: Add Server" and select it
- Choose `Command (stdio)` as the server type
- Enter the MCP configuration:
{
  "mcpServers": {
    "TalkToFigma": {
      "command": "npx",
      "args": ["ai-figma-mcp@latest"]
    }
  }
}
Method 2: Manual Configuration
Add to your VS Code `settings.json`:
{
"mcp": {
"servers": {
"TalkToFigma": {
"command": "npx",
"args": ["ai-figma-mcp@latest"]
}
}
}
}
For JetBrains IDEs:
- Click ⚙️ in the lower right corner
- Select "Edit settings"
- Under the MCP section, click "Edit in `mcp.json`"
- Add the configuration:
{
  "servers": {
    "TalkToFigma": {
      "command": "npx",
      "args": ["ai-figma-mcp@latest"]
    }
  }
}
For Eclipse:
- Open the Copilot Chat panel (click the Copilot icon in the status bar)
- Select "Edit preferences"
- Navigate to Copilot Chat → MCP
- Add the same JSON configuration as above
For Xcode:
- Open the GitHub Copilot for Xcode extension
- In agent mode, click the tools icon
- Select "Edit config"
- Add the configuration to `mcp.json`
Cursor has excellent MCP support with Agent mode.
- Open Cursor settings (`Cmd+,` on macOS, `Ctrl+,` on Windows/Linux)
- Navigate to the MCP configuration
- Add the server configuration:
{
  "mcpServers": {
    "TalkToFigma": {
      "command": "npx",
      "args": ["ai-figma-mcp@latest"]
    }
  }
}
- Restart Cursor
- Switch to Agent mode in the chat interface
- The MCP tools will appear automatically in the available tools list
Windsurf has native MCP support with its Cascade agent.
- Open Windsurf settings
- Navigate to Cascade → MCP Servers
- Click "Add MCP Server"
- Configure the server:
{ "servers": { "TalkToFigma": { "command": "npx", "args": ["ai-figma-mcp@latest"] } } }
- The server will appear in your MCP tools in Cascade mode
- Locate the Claude Desktop config file:
  - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
  - Windows: `%APPDATA%/Claude/claude_desktop_config.json`
- Add the MCP server configuration:
{
  "mcpServers": {
    "TalkToFigma": {
      "command": "npx",
      "args": ["ai-figma-mcp@latest"]
    }
  }
}
- Restart Claude Desktop
Once configured, you can use natural language to interact with Figma through your AI assistant:
"Join the Figma channel 'vblckgfu' and tell me about the current design"
"What elements are currently selected in Figma?"
"Read the design and describe the layout structure"
"Get information about all text nodes in the current design"
"Analyze the current Figma design and suggest improvements"
"What colors are being used in this design?"
"Check if there are any accessibility issues with the text contrast"
"Change the background color of the selected frame to blue"
"Update all heading text to use a larger font size"
"Create a new button component with consistent styling"
"Add annotations to explain the design decisions"
"Create documentation for this component library"
"Scan for any missing annotations in the design system"
Bulk text content replacement
Thanks to @dusskapark for contributing the bulk text replacement feature. Here is the demo video.
Instance Override Propagation: Another contribution from @dusskapark. Propagate component instance overrides from a source instance to multiple target instances with a single command. This feature dramatically reduces repetitive design work when working with component instances that need similar customizations. Check out our demo video.
If you want to build from source or contribute to the project:
- Clone the repository:
git clone https://github.com/your-repo/ai-figma-mcp.git
cd ai-figma-mcp
- Install dependencies and build:
npm run setup
- For local development, you can point directly to the built server:
{
"mcpServers": {
"TalkToFigmaLocal": {
"command": "node",
"args": ["/path/to/your/project/dist/talk_to_figma_mcp/server.js"]
}
}
}
Replace `/path/to/your/project/` with the actual absolute path to this cloned repository.
Option 1: From Figma Community (Recommended)
- Go to MCP Figma Plugin on Figma Community
- Click "Install"
- The plugin will be available in your Figma plugins menu
Option 2: Manual Installation (Development)
- In Figma, go to Plugins > Development > New Plugin
- Choose "Link existing plugin"
- Select the `src/mcp_plugin/manifest.json` file from this project.
- The plugin should now be available in your Figma development plugins.
- Start the MCP Server: Run `npx ai-figma-mcp` in your terminal. This also starts the WebSocket server.
- Configure your AI Assistant: Follow the instructions in the "AI Assistant Configuration" section so your specific assistant recognizes the MCP server.
- Open Figma and run the MCP Figma Plugin: Find it in your plugins list.
- Connect: In your AI assistant, use a command like "Join Figma channel abcdefg" (replace `abcdefg` with the channel ID displayed in the Figma plugin).
- Interact: Start giving commands to your AI assistant to interact with Figma.
For GitHub Copilot coding agent, you can configure MCP servers at the repository level:
- Navigate to your repository settings on GitHub
- Go to Copilot → Coding agent
- Add MCP configuration:
{ "mcpServers": { "TalkToFigma": { "command": "npx", "args": ["ai-figma-mcp@latest"], "tools": ["*"] } } }
The MCP server respects the following environment variables:
- `FIGMA_WEBSOCKET_PORT`: (Optional) Custom port for the WebSocket server. Defaults to `3055`.
  export FIGMA_WEBSOCKET_PORT=3056
- `DEBUG`: (Optional) Set to `true` for more verbose logging from the MCP server.
  export DEBUG=true
If you are running the MCP server via `npx`, these variables should be set in the environment where you run the `npx` command.
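As a rough sketch of how the server is likely to consume these variables (the exact parsing is an assumption), the values would come straight from `process.env`:

```typescript
// Assumed sketch: how the server probably reads the variables documented above.
const port = Number(process.env.FIGMA_WEBSOCKET_PORT ?? "3055"); // WebSocket port, default 3055
const debug = process.env.DEBUG === "true";                      // verbose logging toggle

if (debug) {
  console.log(`[ai-figma-mcp] WebSocket server will use port ${port}`);
}
```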
- MCP Server not found or `npx` command fails:
  - Ensure Node.js (v18+) is installed and that `npx` is available in your system's PATH.
  - Try running `npx ai-figma-mcp --version` manually in your terminal to see if it executes.
  - If you've recently published or updated the package, there might be a delay in the npm registry: try `npx ai-figma-mcp@latest`, or run `npm cache clean --force` and then try `npx` again.
- WebSocket connection failed / No connection to Figma plugin:
  - Check MCP Server Logs: The MCP server (e.g., `npx ai-figma-mcp`) automatically starts the WebSocket server. Examine the console output from the MCP server for messages such as "Socket script not found", "Starting WebSocket server", or "WebSocket server process exited".
  - Port Conflicts: Ensure that port `3055` (or your configured `FIGMA_WEBSOCKET_PORT`) is not being used by another application. You can check this with `lsof -i :3055` (macOS/Linux) or `netstat -ano | findstr "3055"` (Windows).
  - Firewall: Verify that your firewall is not blocking connections on the WebSocket port.
  - Figma Plugin Channel: Double-check that the channel ID you used with the `join_channel` command in your AI assistant matches the channel ID displayed in the Figma plugin UI.
  - WSL Issues: If you are using Windows Subsystem for Linux (WSL), see the "Windows + WSL Guide".
- AI Assistant not recognizing tools / MCP server not listed:
  - Correct Configuration: Double-check the JSON configuration in your AI assistant's settings. Typos in the command (`npx`), package name (`ai-figma-mcp`), or arguments can cause issues.
  - Agent Mode: Ensure your AI assistant is in "Agent" or "Tools" mode, not just a simple "Chat" or "Ask" mode.
  - Restart Assistant/IDE: After adding or modifying the MCP configuration, restart your AI assistant or IDE (Cursor, VS Code, etc.).
  - MCP Server Status: Some assistants show the status of connected MCP servers in their settings or a dedicated panel. Check whether "TalkToFigma" (or the name you used) is listed and whether there are any error messages.
- Permission issues with Figma Plugin:
  - Ensure the MCP Figma Plugin is installed and actively running in your Figma file.
  - The plugin needs to be the active window/context in Figma for some operations.
If you are running the MCP server (Node.js part) within WSL and Figma on Windows:
- Node.js in WSL: Ensure Node.js v18+ is installed in your WSL distribution.
node --version
npm --version
- Host Configuration for WebSocket Server:
  The WebSocket server (`src/socket.ts`) by default listens on `localhost`. When running the MCP server in WSL and the Figma plugin on Windows, the plugin needs to connect to WSL's IP address.
  - Modify `src/socket.ts` in your local project:
    // const HOST = process.env.WS_HOST || 'localhost';
    const HOST = process.env.WS_HOST || '0.0.0.0'; // Change 'localhost' to '0.0.0.0'
  - Rebuild the project: `npm run build`.
  - When you run the MCP server (e.g., `npx ai-figma-mcp`), it will now use the built `socket.js` that listens on `0.0.0.0`.
- Figma Plugin Connection:
  - In the Figma plugin, when it asks for the WebSocket server address, provide the IP address of your WSL instance. You can find it by running `hostname -I` in your WSL terminal.
  - For example, if `hostname -I` gives `172.23.x.x`, you'd enter `ws://172.23.x.x:3055` (or your configured port) into the Figma plugin.
  - Alternatively: if your `npx` command runs the server from your WSL environment, the MCP server will start the socket server within WSL. The Figma plugin on Windows then needs to connect to `ws://<WSL_IP_ADDRESS>:<PORT>`. The `join_channel` command from your AI assistant (which also runs in WSL or connects to the MCP server in WSL) will correctly interact with this setup. The crucial part is that the Figma plugin on Windows must be able to reach the WebSocket server running inside WSL.
- Port Forwarding (If needed):
  In some WSL setups, you might need to explicitly forward the port from Windows to WSL:
  # Run in PowerShell as Administrator on Windows
  netsh interface portproxy add v4tov4 listenport=3055 listenaddress=0.0.0.0 connectport=3055 connectaddress=<Your_WSL_IP_Address>
  Replace `<Your_WSL_IP_Address>` with the actual IP of your WSL instance.
  And to remove it:
  netsh interface portproxy delete v4tov4 listenport=3055 listenaddress=0.0.0.0
Note: a separate `npm run socket` step is no longer needed in the WSL workflow, because the MCP server starts the WebSocket server itself. Modifying `src/socket.ts` to listen on `0.0.0.0` is still relevant for WSL scenarios if you are building from source and need the WebSocket server to be accessible from Windows. If you are using the published `npx` package, you cannot modify `src/socket.ts` directly; a `FIGMA_WEBSOCKET_HOST` environment variable would be a better way to control this for the packaged version and is a planned addition to `socket.ts`.
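As a sketch of what that could look like (the `FIGMA_WEBSOCKET_HOST` variable does not exist yet, and the `ws` package is assumed here purely for illustration):

```typescript
import { WebSocketServer } from "ws"; // `ws` is assumed here for illustration

// Proposed, not yet implemented: let the packaged server bind to a configurable host.
// Binding to 0.0.0.0 makes the relay reachable from Windows when the server runs in WSL.
const HOST = process.env.FIGMA_WEBSOCKET_HOST || "localhost";
const PORT = Number(process.env.FIGMA_WEBSOCKET_PORT || "3055");

const wss = new WebSocketServer({ host: HOST, port: PORT }, () => {
  console.log(`WebSocket server listening on ws://${HOST}:${PORT}`);
});
```

With something like this in place, a WSL user could run `FIGMA_WEBSOCKET_HOST=0.0.0.0 npx ai-figma-mcp` instead of editing the source.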
The MCP server provides the following tools for interacting with Figma:
Document & Selection
- `get_document_info` - Get information about the current Figma document
- `get_selection` - Get information about the current selection
- `read_my_design` - Get detailed node information about the current selection without parameters
- `get_node_info` - Get detailed information about a specific node
- `get_nodes_info` - Get detailed information about multiple nodes by providing an array of node IDs

Annotations
- `get_annotations` - Get all annotations in the current document or a specific node
- `set_annotation` - Create or update an annotation with markdown support
- `set_multiple_annotations` - Batch create/update multiple annotations efficiently
- `scan_nodes_by_types` - Scan for nodes with specific types (useful for finding annotation targets)

Prototyping & Connections
- `get_reactions` - Get all prototype reactions from nodes, with visual highlight animation
- `set_default_connector` - Set a copied FigJam connector as the default connector style for creating connections (must be set before creating connections)
- `create_connections` - Create FigJam connector lines between nodes, based on prototype flows or a custom mapping

Creating Elements
- `create_rectangle` - Create a new rectangle with position, size, and optional name
- `create_frame` - Create a new frame with position, size, and optional name
- `create_text` - Create a new text node with customizable font properties

Text Content
- `scan_text_nodes` - Scan text nodes with intelligent chunking for large designs
- `set_text_content` - Set the text content of a single text node
- `set_multiple_text_contents` - Batch update multiple text nodes efficiently

Auto Layout & Spacing
- `set_layout_mode` - Set the layout mode and wrap behavior of a frame (NONE, HORIZONTAL, VERTICAL)
- `set_padding` - Set padding values for an auto-layout frame (top, right, bottom, left)
- `set_axis_align` - Set primary and counter axis alignment for auto-layout frames
- `set_layout_sizing` - Set horizontal and vertical sizing modes for auto-layout frames (FIXED, HUG, FILL)
- `set_item_spacing` - Set the distance between children in an auto-layout frame

Styling
- `set_fill_color` - Set the fill color of a node (RGBA)
- `set_stroke_color` - Set the stroke color and weight of a node
- `set_corner_radius` - Set the corner radius of a node, with optional per-corner control

Node Operations
- `move_node` - Move a node to a new position
- `resize_node` - Resize a node with new dimensions
- `delete_node` - Delete a node
- `delete_multiple_nodes` - Delete multiple nodes at once efficiently
- `clone_node` - Create a copy of an existing node with optional position offset

Components & Styles
- `get_styles` - Get information about local styles
- `get_local_components` - Get information about local components
- `create_component_instance` - Create an instance of a component
- `get_instance_overrides` - Extract override properties from a selected component instance
- `set_instance_overrides` - Apply extracted overrides to target instances

Export
- `export_node_as_image` - Export a node as an image (PNG, JPG, SVG, or PDF); image support is limited and currently returns base64-encoded data as text

Connection Management
- `join_channel` - Join a specific channel to communicate with Figma
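To illustrate how a tool like these is typically exposed over MCP, here is a hypothetical registration of a single command using the official TypeScript SDK (`@modelcontextprotocol/sdk`) and `zod`; the shipped server's parameter names and relay plumbing will differ.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Placeholder for the call that forwards a command over the WebSocket relay to the
// Figma plugin; it is not part of this project's public API.
declare function sendToFigma(message: unknown): Promise<void>;

const server = new McpServer({ name: "TalkToFigma", version: "0.0.0" });

// Hypothetical registration of one tool; the real server defines all the tools listed above.
server.tool(
  "set_fill_color",
  "Set the fill color of a node (RGBA)",
  {
    nodeId: z.string(),
    r: z.number().min(0).max(1),
    g: z.number().min(0).max(1),
    b: z.number().min(0).max(1),
    a: z.number().min(0).max(1).default(1),
  },
  async ({ nodeId, r, g, b, a }) => {
    await sendToFigma({ command: "set_fill_color", params: { nodeId, r, g, b, a } });
    return { content: [{ type: "text", text: `Set fill of node ${nodeId}` }] };
  }
);
```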
The MCP server includes several helper prompts to guide you through complex design tasks:
- `design_strategy` - Best practices for working with Figma designs
- `read_design_strategy` - Best practices for reading Figma designs
- `text_replacement_strategy` - Systematic approach for replacing text in Figma designs
- `annotation_conversion_strategy` - Strategy for converting manual annotations to Figma's native annotations
- `swap_overrides_instances` - Strategy for transferring overrides between component instances in Figma
- `reaction_to_connector_strategy` - Strategy for converting Figma prototype reactions to connector lines, using the output of `get_reactions` and then guiding the use of `create_connections` in sequence
- Navigate to the Figma plugin directory:
cd src/mcp_plugin
- Edit `code.js` and `ui.html`.
To build the TypeScript files for the MCP server and WebSocket server:
npm run build
This will output files to the `dist` directory.
To watch for changes and rebuild automatically during development:
npm run dev
When working with the Figma MCP:
- Always join a channel before sending commands to ensure the MCP server is connected to the correct Figma plugin instance.
- Get an overview of the document using `get_document_info` first.
- Check the current selection with `get_selection` before making modifications intended for selected items.
- Use the appropriate creation tools based on needs:
  - `create_frame` for containers that might hold other elements or use auto-layout.
  - `create_rectangle` for basic vector shapes.
  - `create_text` for text elements.
- Verify changes using `get_node_info` or by visually inspecting Figma.
- Utilize component instances (`create_component_instance`, `get_instance_overrides`, `set_instance_overrides`) for design consistency and efficiency.
- Handle potential errors gracefully, as all commands can throw exceptions (e.g., if a node ID is not found or parameters are invalid).
- For large designs:
  - Use chunking parameters if available in tools like `scan_text_nodes`.
  - Monitor progress messages if provided by the tool for long operations.
- For text operations:
  - Use batch operations like `set_multiple_text_contents` when possible for better performance.
- For converting legacy annotations:
  - Scan text nodes to identify markers and descriptions.
  - Use `scan_nodes_by_types` to find UI elements that annotations refer to.
  - Match markers with their target elements.
  - Categorize annotations appropriately.
  - Create native annotations with `set_multiple_annotations` in batches.
- Visualize prototype interactions as FigJam connectors (see the sketch after this list):
  - Use `get_reactions` to extract prototype flows.
  - Set a default connector style using `set_default_connector` (by copying a FigJam connector first).
  - Generate connector lines with `create_connections`.
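For the connector workflow above, an MCP client drives the three tools in order. The sketch below uses the TypeScript SDK client purely for illustration; normally your AI assistant issues these calls for you, and the argument shapes shown are assumptions.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Hypothetical driver for the reactions -> connectors workflow. Your AI assistant
// normally issues these calls itself; argument shapes here are illustrative only.
async function reactionsToConnectors(channel: string) {
  const client = new Client({ name: "workflow-demo", version: "0.0.0" });
  await client.connect(
    new StdioClientTransport({ command: "npx", args: ["ai-figma-mcp@latest"] })
  );

  // Connect to the same channel the Figma plugin displays.
  await client.callTool({ name: "join_channel", arguments: { channel } });

  // 1. Extract prototype flows.
  const reactions = await client.callTool({ name: "get_reactions", arguments: {} });
  console.log(reactions);

  // 2. Adopt the FigJam connector you copied as the default connector style.
  await client.callTool({ name: "set_default_connector", arguments: {} });

  // 3. Create connector lines; in practice the assistant maps the `get_reactions`
  //    output into whatever structure `create_connections` expects.
  await client.callTool({ name: "create_connections", arguments: {} });

  await client.close();
}

reactionsToConnectors("vblckgfu").catch(console.error);
```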
MIT