a magical desktop app that puts the power of LLMs and MCP in the hands of everyone
🔮 Download the Tome Desktop App: Windows | macOS
Tome is a desktop app that lets anyone harness the magic of LLMs and MCP. Download Tome, connect any local or remote LLM and hook it up to thousands of MCP servers to create your own magical AI-powered spellbook.
🫥 Want it to be 100% local, 100% private? Use Ollama and Qwen3 with only local MCP servers to cast spells in your own pocket universe. ⚡ Want state of the art cloud models with the latest remote MCP servers? You can have that too. It's all up to you!
🏗️ This is a Technical Preview, so bear in mind things will be rough around the edges. Join us on Discord to share tips, tricks, and issues you run into. Star this repo to stay on top of updates and feature releases!
- 🧙 Streamlined Beginner Friendly Experience
- Simply download and install Tome and hook up the LLM of your choice
- No fiddling with JSON, Docker, Python, or Node
- 🤖 AI Model Support
- Remote: Google Gemini, OpenAI, or any OpenAI API-compatible endpoint (see the request sketch after this list)
- Local: Ollama, LM Studio, Cortex, or any OpenAI API-compatible endpoint
- 🔮 Enhanced MCP support
- UI to install, remove, and turn MCP servers on/off
- npm, uvx, node, and Python MCP servers supported out of the box
- 🏪 Integration with the Smithery.ai registry
- Thousands of MCP servers available via one-click installation
- ✏️ Customization of context windows and temperature
- 🧰 Native support for tool calls and reasoning models
- UI enhancements that clearly delineate tool calls and thinking messages
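To make "any OpenAI API-compatible endpoint" concrete: Tome can talk to any server that exposes the standard `/v1/chat/completions` route. Here is a minimal sketch using the `openai` Python client; the base URL assumes LM Studio's default, and the model name is hypothetical, so swap in whatever your endpoint actually serves:

```python
# Minimal sketch of the "OpenAI API-compatible" contract, independent of Tome.
# The base_url assumes LM Studio's default; Ollama's equivalent is
# http://localhost:11434/v1. The model name below is hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # any compatible endpoint works
    api_key="not-needed-for-local",       # local servers typically ignore this
)

response = client.chat.completions.create(
    model="qwen3",  # whichever model your endpoint serves
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If a provider answers a request like this correctly, it should also work when added to Tome as a custom provider.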
Demo video: Tome.Demo.README.mp4
- macOS or Windows (Linux coming soon!)
- An LLM provider of your choice: Ollama (local) and a Gemini API key (remote) are both easy, free ways to start
- Download the latest release of Tome
- Install Tome
- Connect your preferred LLM provider. OpenAI, Ollama, and Gemini are preset, but you can also add providers like LM Studio by using http://localhost:1234/v1 as the URL
- Open the MCP tab in Tome and install your first MCP server (Fetch is an easy one to get started with; just paste `uvx mcp-server-fetch` into the server field).
- Chat with your MCP-powered model! Ask it to fetch the top story on Hacker News. A sketch of writing your own Python MCP server follows these steps.
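Beyond registry servers, the out-of-the-box uvx/Python support means Tome can also run a server you wrote yourself. Here is a minimal sketch using the official `mcp` Python SDK; the server name and tool are hypothetical, not something Tome ships:

```python
# Minimal custom MCP server sketch using the official Python SDK
# (pip install "mcp[cli]"). The server name and tool are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hello-tome")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport that MCP clients expect
```

Saved as, say, `hello_tome.py`, a command like `python hello_tome.py` (with the `mcp` package installed) in Tome's server field should launch it over stdio, the same way Tome launches `mcp-server-fetch`.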
We want to make local LLMs and MCP accessible to everyone. We're building a tool that allows you to be creative with LLMs, regardless of whether you're an engineer, tinkerer, hobbyist, or anyone in between.
- Tome is local first: You are in control of where your data goes.
- Tome is for everyone: You shouldn't have to manage programming languages, package managers, or json config files.
We've gotten a lot of amazing feedback in the weeks since releasing Tome, but we've got big plans for the future. We want to break LLMs out of their chatbox, and we've got a lot of features coming to help y'all do that.
- Scheduled tasks: LLMs should be doing helpful things even when you're not in front of the computer.
- Native integrations: MCP servers are a great way to access tools and information, but we want to add more powerful integrations to interact with LLMs in unique ways.
- App builder: we believe that, long term, the best experiences won't live in a chat interface. We plan to add tools that let you create powerful applications and workflows.
- ??? Let us know what you'd like to see! Join our community via the links below, we'd love to hear from you.