Solvia is a distributed LLM chat application that allows you to run AI conversations on your desktop and control them remotely through a web interface.
The system consists of three main components:
```
┌─────────────────────────────────────────────┐
│           Sync Server (Node.js)             │
│          WebSocket Message Relay            │
└─────────────────────┬───────────────────────┘
                      │
             WebSocket Messages
                      │
       ┌──────────────┼───────────────┐
       ▼              ▼               ▼
Desktop Client     Desktop 2   Web Control Panel
(Tauri + Rust)      (Tauri)        (HTML/JS)
       │
       │ OpenAI API
       ▼
  LLM Service
```
- Desktop Client (`solvia-desktop/`): Tauri application with Rust backend
  - Direct LLM API integration (OpenAI-compatible)
  - Local chat interface
  - WebSocket sync with server
  - Session management
- Sync Server (`solvia-server/`): Node.js WebSocket server
  - Message relay between desktop and web clients
  - Session management
  - No LLM API access (security by design)
- Web Interface (`solvia-web/`): Static HTML/CSS/JS
  - Remote viewing of desktop conversations
  - Send messages through desktop to LLM
  - Real-time sync with desktop state
Key design principles:

- Desktop-First: All LLM interactions happen on the desktop
- Remote Control: View and control desktop sessions from web browser
- Real-time Sync: WebSocket-based state synchronization
- Session-Based: Simple UUID session identification
- Security: Web clients cannot directly access LLM APIs
- Extensible: OpenAI-compatible API support
Configure required environment variables for the desktop client:

```bash
cd solvia-desktop
cp .env.example .env
# Edit .env with your API and Gmail OAuth credentials
```

Then install and start the sync server:

```bash
cd solvia-server
npm install
npm run dev
```

The server will start on http://localhost:3000.
Next, install and run the desktop client:

```bash
cd solvia-desktop
npm install
cargo install tauri-cli
npm run tauri dev
```

The desktop app will:

- Generate a unique session ID
- Display the session ID in the UI
- Connect to the sync server
- Be ready for LLM conversations
To control the desktop from a browser:

- Open `solvia-web/index.html` in your browser
- Enter the session ID from the desktop app
- Click "Connect"
- You can now view the desktop's conversation and send messages
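Under the hood, "Connect" boils down to sending a `register_web` message with the entered session ID over the WebSocket. A sketch of building that message, assuming the JSON envelope described in the message protocol section; `buildRegisterWeb` is an illustrative helper, not Solvia's actual code:

```javascript
// Sketch: the registration message a web client sends after the user
// enters a session ID. Field names follow the documented envelope;
// the helper name is illustrative.
function buildRegisterWeb(sessionId) {
  return JSON.stringify({
    type: "register_web",
    session_id: sessionId,
    timestamp: new Date().toISOString(),
    payload: {},
  });
}

console.log(buildRegisterWeb("123e4567-e89b-12d3-a456-426614174000"));
```

A browser client would then pass this string to `WebSocket.send` on a socket opened against the server URL.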
Desktop client configuration (`solvia-desktop/.env`):

```
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4
OPENAI_FAST_MODEL=gpt-4o-mini
SYNC_SERVER_URL=ws://localhost:3000
TOOL_RESPONSE_THRESHOLD=65536
E2B_KEY=your-e2b-key-here
GMAIL_CLIENT_ID=your-gmail-oauth-client-id
GMAIL_CLIENT_SECRET=your-gmail-oauth-client-secret
GMAIL_REDIRECT_URI=http://localhost:9527/callback
```

The Gmail redirect URI should match the local OAuth callback server that ships with Solvia (http://localhost:9527/callback by default).
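The `.env` format above is plain KEY=VALUE lines with `#` comments. A minimal sketch of parsing it if you load it without a dotenv library; `parseDotEnv` is an illustrative helper, not part of Solvia:

```javascript
// Sketch: parse KEY=VALUE lines, skipping blanks and # comments.
// Only the first "=" splits key from value, so values may contain "=".
function parseDotEnv(text) {
  const vars = {};
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // comment or blank
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue; // not a KEY=VALUE line
    vars[trimmed.slice(0, eq)] = trimmed.slice(eq + 1);
  }
  return vars;
}

const sample = "# comment\nOPENAI_API_KEY=sk-test\nSYNC_SERVER_URL=ws://localhost:3000";
const env = parseDotEnv(sample);
console.log(env.SYNC_SERVER_URL); // ws://localhost:3000
```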
Sync server configuration (`solvia-server/.env`):

```
PORT=3000
SESSION_TIMEOUT=3600000
MAX_CLIENTS_PER_SESSION=10
```

Desktop client tech stack:

- Framework: Tauri (Rust + React/TypeScript)
- LLM Client: async-openai
- WebSocket: tokio-tungstenite
- State Management: Arc<Mutex>
Sync server tech stack:

- Runtime: Node.js
- WebSocket: ws
- Session Storage: In-memory Map
Web interface tech stack:

- Frontend: Vanilla HTML/CSS/JavaScript
- WebSocket: Native WebSocket API
- UI: Custom CSS with responsive design
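The server's in-memory Map together with the SESSION_TIMEOUT setting suggests a simple expiry sweep. A sketch under those assumptions; the helper names and record shape are illustrative, not Solvia's actual code:

```javascript
// Sketch: in-memory session store with timeout-based expiry.
// SESSION_TIMEOUT matches the sample config (1 hour in ms).
const SESSION_TIMEOUT = 3600000;

const sessions = new Map(); // session_id -> { clients, lastSeen }

// Create or refresh a session whenever a client touches it.
function touchSession(id) {
  const s = sessions.get(id) ?? { clients: new Set(), lastSeen: 0 };
  s.lastSeen = Date.now();
  sessions.set(id, s);
  return s;
}

// Drop sessions idle for longer than SESSION_TIMEOUT.
function sweepExpired(now = Date.now()) {
  for (const [id, s] of sessions) {
    if (now - s.lastSeen > SESSION_TIMEOUT) sessions.delete(id);
  }
}

touchSession("abc");
sweepExpired(Date.now() + SESSION_TIMEOUT + 1);
console.log(sessions.size); // 0
```

A real server would run the sweep on an interval; this shows only the data shape implied by the README.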
WebSocket messages use JSON format:
```
{
  "type": "register_desktop|register_web|state_update|send_message",
  "session_id": "uuid-string",
  "timestamp": "ISO8601",
  "payload": { /* type-specific data */ }
}
```

Message types:

- `register_desktop`: Desktop client connects
- `register_web`: Web client connects to a session
- `state_update`: Desktop broadcasts conversation state
- `send_message`: Web client sends a message to the desktop
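Given this envelope, the relay's routing can be pictured as a dispatch over the four message types. The sketch below is an assumption-laden illustration, not Solvia's actual implementation; the session record shape and client handles are made up for the example:

```javascript
// Sketch: route one message, returning [recipient, message] pairs.
// sessions: Map of session_id -> { desktop, web }; sender is an
// opaque client handle. All names here are illustrative.
function routeMessage(msg, sessions, sender) {
  const session = sessions.get(msg.session_id) ?? { desktop: null, web: new Set() };
  sessions.set(msg.session_id, session);

  switch (msg.type) {
    case "register_desktop":
      session.desktop = sender; // desktop owns the session
      return [];
    case "register_web":
      session.web.add(sender); // web clients join the session
      return [];
    case "state_update":
      // Desktop broadcasts conversation state to every web client.
      return [...session.web].map((c) => [c, msg]);
    case "send_message":
      // Web client's message is forwarded to the desktop only.
      return session.desktop ? [[session.desktop, msg]] : [];
    default:
      return [];
  }
}

const sessions = new Map();
routeMessage({ type: "register_desktop", session_id: "s1" }, sessions, "desktop1");
routeMessage({ type: "register_web", session_id: "s1" }, sessions, "web1");
const out = routeMessage({ type: "state_update", session_id: "s1" }, sessions, "desktop1");
console.log(out.length); // 1
```

Note how the dispatch enforces the security model: `send_message` only ever reaches the desktop, so web clients never talk to an LLM API directly.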
Security considerations:

- API Keys: Stored only on the desktop, never transmitted
- Session IDs: Temporary UUIDs, no persistent authentication
- Network: WebSocket messages contain no sensitive data
- Isolation: Web clients cannot directly call LLM APIs
All LLM communications are logged according to UNIX principles:
- Desktop logs all API requests/responses
- Server logs connection events and message routing
- Errors are logged with full context
This is an MVP implementation with these limitations:
- In-memory session storage (server restarts lose sessions)
- No user authentication
- No message persistence
- Limited error recovery
- No horizontal scaling
Planned enhancements:

- Persistent session storage (Redis/Database)
- User authentication and authorization
- Message history persistence
- Multiple LLM provider support
- End-to-end encryption
- Mobile app clients
- Docker deployment
Common issues:

- API Key Error: Check `.env` file configuration
- Sync Connection Failed: Ensure the server is running on the correct port
- Build Errors: Install Rust and Node.js dependencies
- Port in Use: Change PORT in `.env` or kill the existing process
- Connection Refused: Check firewall settings
- Can't Connect: Verify session ID and server URL
- No Messages: Check whether the desktop client is connected
- CORS Errors: Serve the web files from a web server, not file://
This project is MIT licensed. See LICENSE file for details.
To contribute:

- Fork the repository
- Create a feature branch
- Make changes following existing code style
- Test all three components
- Submit a pull request
For questions or issues, please open a GitHub issue.