An experimental AI agent that controls Blender through an MCP (Model Context Protocol) interface, enabling advanced, autonomous manipulation of 3D scenes using natural language or agentic reasoning.
This project connects an agentic AI to Blender via MCP (Model Context Protocol), which it uses to issue and manage Blender commands. The AI can create, move, modify, and animate objects within Blender with no human intervention once a goal is defined.
Main goal: Enable fully autonomous agents to interact with Blender’s 3D environment, making it possible to design or modify complex scenes via high-level tasks like:
- “Build a simple 3D house”
- “Add lighting and animate a camera movement around the scene”
- “Export a short render of the animation”
- AI Agent: Reasoning model (e.g. OpenAI GPT / Claude / Local LLM) that plans tasks and generates commands.
- MCP Server: Parses, validates, and executes sequences of commands for Blender.
- Blender Interface: Python script that listens to MCP commands and manipulates the 3D scene accordingly.
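The parse/validate/execute loop in the MCP Server component could be sketched as below. This is a minimal illustration, not the project's actual code: the `command` decorator, the `COMMANDS` registry, and the JSON message shape are all hypothetical, and the handler is a stub standing in for real `bpy` calls inside Blender.

```python
import json

# Hypothetical registry mapping MCP command names to handler functions.
# Inside Blender, handlers would call bpy operators; here they are stubs.
COMMANDS = {}

def command(name):
    """Register a handler under an MCP command name."""
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command("add_object")
def add_object(params):
    # In Blender this might be:
    # bpy.ops.mesh.primitive_cube_add(location=params["location"])
    return {"status": "ok", "created": params.get("type", "cube")}

def execute(message: str):
    """Parse, validate, and dispatch one JSON-encoded MCP command."""
    cmd = json.loads(message)
    name = cmd.get("command")
    if name not in COMMANDS:
        return {"status": "error", "reason": f"unknown command: {name}"}
    return COMMANDS[name](cmd.get("params", {}))
```

Unknown command names are rejected during validation rather than reaching Blender, which keeps a misbehaving agent from crashing the scene.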
✅ Natural language-driven 3D scene creation
✅ Object transformation and animation
✅ Multi-step task execution with memory
✅ Logging and error handling
✅ Extendable command set
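Multi-step execution with memory, logging, and error handling could look roughly like the sketch below. The `TaskRunner` class, its handler signature, and the step format are illustrative assumptions, not the repository's real API; the handlers are stubs in place of Blender operations.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp")

class TaskRunner:
    """Hypothetical executor that runs a sequence of MCP commands,
    keeping a history so later steps can reference earlier results."""

    def __init__(self, handlers):
        self.handlers = handlers   # name -> callable(params, history)
        self.history = []          # memory of executed steps

    def run(self, steps):
        for step in steps:
            name = step["command"]
            handler = self.handlers.get(name)
            if handler is None:
                log.error("unknown command: %s", name)
                self.history.append({"command": name, "status": "error"})
                continue
            result = handler(step.get("params", {}), self.history)
            log.info("%s -> %s", name, result)
            self.history.append(
                {"command": name, "status": "ok", "result": result}
            )
        return self.history

# Example: two stub handlers standing in for real Blender operations.
handlers = {
    "add_object": lambda p, h: f"created {p.get('type', 'cube')}",
    "count_steps": lambda p, h: len(h),  # reads memory of prior steps
}
runner = TaskRunner(handlers)
history = runner.run([
    {"command": "add_object", "params": {"type": "cube"}},
    {"command": "count_steps"},
])
```

Passing the accumulated history into each handler is what lets a later step ("animate the camera around the house") refer to objects created by an earlier one. Extending the command set is just adding an entry to the handler map.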
- Clone the repository
git clone https://github.com/yourusername/blender-mcp.git
cd blender-mcp