Manus Twin is a local Windows application that replicates the architecture and functionality of Manus.im. It provides a powerful AI assistant interface that supports multiple AI models including Gemini and Claude, with the ability to add custom API keys. The application includes a virtual Linux sandbox, memory modules for longer context, and an enhanced knowledge system supporting up to 100 knowledge entries.
Manus Twin follows a modular architecture with the following key components:
- Server: Express.js server that handles API requests and serves the frontend
- Model Integration: Interface for interacting with different AI models
- API Management: System for securely storing and managing API keys
- Workflow Engine: Orchestrates the execution of tasks and function calls
- Memory Module: Maintains conversation history and context across sessions
- Knowledge System: Stores and retrieves knowledge entries for context
- Virtual Linux Sandbox: Provides a secure environment for executing commands
- Windows Compatibility Layer: Ensures the application runs smoothly on Windows
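As an illustrative sketch of how these components might fit together (this is an assumption about the layout, not the actual implementation), the server can be viewed as a dispatcher that hands each request to the matching subsystem:

```javascript
// Minimal sketch of the modular server layout: each subsystem registers a
// handler with a central dispatcher. Module and route contents are illustrative.
const modules = {
  "/api/models": () => ({ models: ["gemini-1.5-flash", "claude-3.7"] }),
  "/api/memory": () => ({ history: [] }),
  "/api/knowledge": () => ({ entries: [] }),
};

function dispatch(path) {
  const handler = modules[path];
  if (!handler) return { status: 404, body: { error: "unknown route" } };
  return { status: 200, body: handler() };
}
```

In the real application an Express.js router plays this role, but the principle is the same: one entry point, one handler per subsystem.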
- Dashboard: Main interface for interacting with the AI assistant
- Chat Interface: Where conversations with the AI take place
- Settings: Configuration options for the application
- Model Selection: Interface for choosing which AI model to use
- User inputs a message through the chat interface
- The message is sent to the backend server
- The workflow engine processes the message and determines the appropriate action
- If needed, the memory module provides context from previous conversations
- If relevant, the knowledge system provides additional context
- The model integration sends the message to the selected AI model
- The AI model generates a response
- The response is returned to the frontend and displayed to the user
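The steps above can be sketched as a single async pipeline; the module interfaces shown here are hypothetical:

```javascript
// Hypothetical message pipeline mirroring the flow described above.
async function handleMessage(message, { memory, knowledge, model }) {
  const history = await memory.getContext();     // context from prior sessions
  const facts = await knowledge.lookup(message); // relevant knowledge entries
  const reply = await model.generate([...history, ...facts, message]);
  await memory.store(message, reply);            // persist for the next turn
  return reply;                                  // returned to the frontend
}
```

Each dependency (`memory`, `knowledge`, `model`) maps to one of the components listed earlier, which is what keeps the architecture modular: any piece can be swapped without touching the pipeline.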
- Gemini Models: Support for various Gemini models including 1.5 Flash and 2.0 experimental
- Claude Models: Support for Claude 3.7 and other Claude models
- Custom API Integration: Ability to add custom API endpoints for other AI models
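A custom provider entry might be registered like the following; the function and field names are assumptions for illustration, not the application's actual schema:

```javascript
// Hypothetical registry for custom AI providers.
const providers = new Map();

function registerProvider(name, { apiKey, endpoint }) {
  if (!apiKey || !endpoint) {
    throw new Error("custom providers need both an apiKey and an endpoint");
  }
  providers.set(name, { apiKey, endpoint });
}
```

Once registered, such a provider would appear in the model-selection dropdown alongside Gemini and Claude.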
- Long-term Memory: Maintains context across sessions
- Enhanced Knowledge System: Supports up to 100 knowledge entries (compared to 20 in Manus)
- Context-aware Responses: AI responses take into account previous conversations and stored knowledge
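As a sketch of how the 100-entry cap could be enforced (the application's actual storage format is not specified here):

```javascript
// Illustrative knowledge store enforcing the 100-entry limit noted above.
class KnowledgeStore {
  constructor(limit = 100) {
    this.limit = limit;
    this.entries = [];
  }
  add(name, content, tags = []) {
    if (this.entries.length >= this.limit) {
      throw new Error(`knowledge limit of ${this.limit} entries reached`);
    }
    this.entries.push({ name, content, tags, active: true });
  }
  activeEntries() {
    return this.entries.filter((e) => e.active);
  }
}
```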
- Command Execution: Run Linux commands in a secure sandbox environment
- File Operations: Create, read, update, and delete files within the sandbox
- Shell Sessions: Interactive shell sessions for complex operations
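One common way to keep sandboxed execution safe is an allowlist gate in front of the shell. The sketch below shows the idea; the binaries listed are assumptions, not the application's actual policy:

```javascript
// Illustrative allowlist check before a command reaches the sandbox shell.
const ALLOWED_BINARIES = new Set(["ls", "cat", "echo", "grep", "mkdir"]);

function isCommandAllowed(commandLine) {
  const [binary] = commandLine.trim().split(/\s+/);
  return ALLOWED_BINARIES.has(binary);
}
```

Commands that fail the check would be rejected before ever touching the sandbox filesystem.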
- Local Deployment: Runs locally on Windows computers
- Desktop Integration: Creates shortcuts and registers with Windows
- Portable Version: Option to create a portable version of the application
- Windows 10 or later
- 8GB RAM minimum (16GB recommended)
- 2GB free disk space
- Internet connection for AI model access
- Download the installer from the releases page
- Run the installer and follow the on-screen instructions
- Launch the application from the Start menu or desktop shortcut
- Download the portable version from the releases page
- Extract the ZIP file to a location of your choice
- Run the `start.bat` file to launch the application
- Open the application and navigate to the Settings page
- Click on "API Keys" in the sidebar
- Enter your API keys for the models you want to use:
- For Gemini: Enter your Google API key
- For Claude: Enter your Anthropic API key
- For custom providers: Enter the API key and endpoint URL
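The saved key configuration might resemble the object validated below; the exact field names the app uses are assumptions:

```javascript
// Hypothetical validator for the stored API-key settings: every provider
// needs an apiKey, and custom providers also need an endpoint URL.
function validateKeySettings(settings) {
  const problems = [];
  if (settings.custom && !settings.custom.endpoint) {
    problems.push("custom providers also need an endpoint URL");
  }
  for (const [provider, cfg] of Object.entries(settings)) {
    if (!cfg.apiKey) problems.push(`${provider} is missing an apiKey`);
  }
  return problems;
}
```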
- Theme: Choose between light and dark mode
- Memory Retention: Configure how long conversations are stored
- Knowledge Management: Add, edit, or remove knowledge entries
- Sandbox Configuration: Configure the virtual Linux sandbox environment
- Launch the application
- Select the AI model you want to use from the dropdown menu
- Type your message in the input field and press Enter or click Send
- The AI will respond based on the selected model and your input
- In the chat interface, ask the AI to perform a task that requires the sandbox
- The AI will execute the necessary commands in the sandbox environment
- Results will be displayed in the chat interface
- Navigate to the Settings page
- Click on "Knowledge" in the sidebar
- Add new knowledge entries with a name, content, and optional tags
- Activate or deactivate knowledge entries as needed
- `/api/models`: Manage AI models
- `/api/functions`: Register and execute functions
- `/api/settings`: Configure application settings
- `/api/integration`: Interact with AI models
- `/api/management`: Manage API keys
- `/api/workflow`: Control the workflow engine
- `/api/memory`: Access the memory module
- `/api/knowledge`: Manage knowledge entries
- `/api/sandbox`: Interact with the virtual Linux sandbox
- `/api/platform`: Access platform-specific functionality
- `/api/deployment`: Manage application deployment
- `/ws`: Real-time communication for streaming responses
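A small helper for building requests against these routes; the base URL and port are assumptions, since the server's actual address depends on local configuration:

```javascript
// Known API routes from the list above; the helper validates a route
// before building a URL for it.
const API_ROUTES = new Set([
  "/api/models", "/api/functions", "/api/settings", "/api/integration",
  "/api/management", "/api/workflow", "/api/memory", "/api/knowledge",
  "/api/sandbox", "/api/platform", "/api/deployment",
]);

function apiUrl(route, baseUrl = "http://localhost:3000") {
  if (!API_ROUTES.has(route)) throw new Error(`unknown route: ${route}`);
  return `${baseUrl}${route}`;
}
```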
- Error: "Invalid API key"
- Solution: Verify that you've entered the correct API key in the Settings page
- Error: "Model not available"
- Solution: Some models may be deprecated or renamed. Try using a different model or check the provider's documentation for the latest model names.
- Issue: Slow response times
- Solution: Close other resource-intensive applications, ensure you have a stable internet connection, or try a different AI model that may be more efficient.
- Clone the repository
- Install dependencies with `npm install`
- Build the application with `npm run build`
- Start the application with `npm start`
- `/backend`: Server-side code
  - `/src`: Source code
    - `/api`: API routes
    - `/models`: Model integration
    - `/workflow`: Workflow engine
    - `/memory`: Memory module
    - `/knowledge`: Knowledge system
    - `/sandbox`: Virtual Linux sandbox
    - `/platform`: Platform-specific code
- `/frontend`: Client-side code
  - `/src`: Source code
    - `/components`: React components
    - `/pages`: Page layouts
    - `/utils`: Utility functions
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.