Confront, Collaborate, Create: A new way to interact with LLMs.
TriMo-Chat is a local-first web interface built around a simple idea: what if you could run up to three different LLMs side by side, compare their responses to the same prompt, and then pass answers from one model directly into another?
It is designed for developers, researchers, and curious people who want to experiment with model behaviour, cross-check outputs, and build workflows across multiple AI providers — without sending data to any third-party service beyond the model APIs themselves.
- Node.js 18 or higher
- API keys for the providers you want to use (OpenAI, Anthropic, Google, Mistral, etc.)
- Or Ollama running locally — no key needed
```sh
# 1. Clone the repository
git clone https://github.com/gioppino/trimo-chat.git
cd trimo-chat

# 2. Install dependencies
npm install

# 3. Install Playwright browsers (only needed for tests)
npx playwright install --with-deps chromium

# 4. Start the development server
npm run dev
```

Then open your browser at the address shown in the terminal (usually http://localhost:5173).
The backend server starts automatically on the first available port from 3001 upward. If 3001 is already in use on your machine, it will silently try 3002, 3003, and so on — no configuration needed.
The "TriMo" philosophy is about making different AI models confront, collaborate, and check each other. Instead of treating each model as a silo:
- Compare responses to the same prompt across three models simultaneously (Broadcast Mode)
- Integrate an answer from one model into a new question for another via drag and drop
- Control and steer responses using custom, reusable prompt templates
- Verify the output of one model using the analytical capabilities of another
- Triple-column chat — interact with up to three different LLMs in a parallel layout
- Broadcast Mode — send the same message to all three models at once with a single input
- Universal model support — OpenAI, Anthropic, Google, Mistral, DeepSeek, Alibaba Qwen, and local models via Ollama
- Drag & drop workflow — drag any message between columns; on drop, choose how to use it
- Custom drop actions — define reusable prompt templates with a `{{content}}` placeholder
- The Dock — a persistent clipboard panel; drag messages in, drag them back out into any chat
- WYSIWYG editor — double-click any message or dock card to edit it with a full markdown editor
- Chat import / export — save and restore conversation history per column as JSON
- Local-first — API keys and settings are stored only on your machine, never committed to git
All configuration lives in the Settings panel, opened via the gear icon in the top-right corner.
The Models tab lets you choose which model runs in each of the three slots (A, B, C).
- Select a Provider from the dropdown (OpenAI, Anthropic, Google, Mistral, Ollama, …)
- Select a Model from the list, or type a custom model ID directly into the field
In the API Keys tab, paste the key for each provider you intend to use.
- Keys are stored in `server/config.json` on your local machine and are never committed to git
- The last four characters of each saved key are shown as confirmation
- Ollama does not require a key
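Putting the Models and API Keys tabs together, the stored configuration might look roughly like this. This is a hypothetical shape for illustration only; the actual fields in `server/config.json` may differ:

```json
{
  "models": {
    "A": { "provider": "openai", "model": "gpt-4o" },
    "B": { "provider": "anthropic", "model": "claude-sonnet-4" },
    "C": { "provider": "ollama", "model": "llama3" }
  },
  "apiKeys": {
    "openai": "sk-...",
    "anthropic": "sk-ant-..."
  }
}
```

Note that the Ollama slot has no corresponding entry under `apiKeys`, since local models need none.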
In the Prompts tab, define reusable prompt templates that appear whenever you drag a message.
- Give the action a short Title (e.g. `Find flaws`, `Translate to Italian`)
- Write a Prompt Template and use `{{content}}` as the placeholder for the dragged text
Example:
- Title: `Find flaws`
- Template: `Critically analyse the following text, highlighting potential flaws, logical errors, or inconsistencies:\n\n{{content}}`
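When a message is dropped, expanding the template amounts to a plain string substitution. A minimal sketch (the helper name is hypothetical, not the project's actual code):

```javascript
// Hypothetical helper: expand a prompt template by substituting the dragged
// message text for every {{content}} marker.
function fillTemplate(template, content) {
  return template.split("{{content}}").join(content);
}

// fillTemplate("Summarise:\n\n{{content}}", "Some dragged text")
// produces "Summarise:\n\nSome dragged text"
```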
Any message bubble — yours or the model's — can be dragged to another column.
- Grab a message bubble (click and hold)
- Drag it over another column — the drop zone lights up
- Drop — the "Action Required" menu appears
- Choose an action — the prompt template is filled with the dragged text and sent immediately
This lets you pass a response from Model A to Model B and ask it to critique, translate, summarise, or continue it — in one gesture.
The Dock is the panel at the bottom of the screen. It works as a persistent clipboard.
- Save: drag any message down to the Dock — it becomes a named card
- Reuse: drag a card back up to any column and choose an action
- Rename: double-click a card's title
- Reorder: drag cards within the Dock
- Delete: use the trash icon
TriMo-Chat is designed to run locally on your own machine. The backend is a thin proxy between the browser and the LLM provider APIs, with no authentication layer.
A few things worth being aware of:
- Do not expose the backend to the public internet. Anyone who can reach it could use your API keys.
- API keys are sent from the browser to the local backend in each request. This is safe as long as both run on `localhost`.
- `server/config.json` is in `.gitignore` and will never be committed. If you clone this repo, you will need to enter your own keys via the Settings panel.
Released under the GNU General Public License v3.0.
Free as in freedom. Not free as in beer.