A multilingual interface that helps non-English speakers understand content through:
- DeepSeek R1 Local LLM (via Ollama) - Contextual reasoning and English processing
- Native language translation using Gemini 2.5 Pro
- 🚀 Real-time translation workflow
- 🌐 Local language support through the configured models
- ⚡ Instant results with loading indicators
- 🔒 Fully local reasoning via Ollama (only the Gemini 2.5 Pro translation call leaves your machine)
1. Get User Input: Capture the user's prompt.
2. DeepSeek R1 Processing: Send the input to DeepSeek R1 for contextual analysis.
3. Retrieve Response: Receive the analyzed output from DeepSeek R1.
4. Gemini 2.5 Pro Translation: Forward the DeepSeek response to Gemini 2.5 Pro for translation.
5. Display Output: Present the translated content to the user (see the sketch below).
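A minimal sketch of this pipeline as a Next.js route handler, assuming Ollama's default local endpoint and the `deepseek-r1` model tag, plus Gemini's REST `generateContent` endpoint. The route path, the inlined constants, and the prompt wording are illustrative only, not the project's actual code:

```ts
// src/app/api/translate/route.ts (illustrative path)
import { NextResponse } from "next/server";

// In the real project these values live in src/app/utils/constants.ts;
// they are inlined here so the sketch stays self-contained.
const GEMINI_API_KEY = "YOUR_GEMINI_API_KEY";
const LOCAL_LANGUAGE = "Spanish";

const OLLAMA_URL = "http://localhost:11434/api/generate"; // Ollama's default port
const GEMINI_URL =
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:generateContent";

export async function POST(req: Request) {
  // Step 1: capture the user's prompt.
  const { prompt } = await req.json();

  // Steps 2-3: contextual analysis with the local DeepSeek R1 model via Ollama.
  const ollamaRes = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "deepseek-r1", prompt, stream: false }),
  });
  const { response: englishAnswer } = await ollamaRes.json();

  // Step 4: forward the English answer to Gemini 2.5 Pro for translation.
  const geminiRes = await fetch(`${GEMINI_URL}?key=${GEMINI_API_KEY}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [
        {
          parts: [
            { text: `Translate the following into ${LOCAL_LANGUAGE}:\n\n${englishAnswer}` },
          ],
        },
      ],
    }),
  });
  const geminiJson = await geminiRes.json();
  const translated =
    geminiJson.candidates?.[0]?.content?.parts?.[0]?.text ?? englishAnswer;

  // Step 5: return the translated content for the UI to display.
  return NextResponse.json({ translated });
}
```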
- Next.js: Used for the frontend and API routes.
- Ollama: Facilitates local model execution.
- DeepSeek: Provides deep contextual understanding.
- Gemini 2.5 Pro: Enables smooth local language translation.
- React Hot Toast: Delivers user feedback through notifications (see the snippet below).
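As a rough sketch of how the loading indicators and notifications could be wired together, the snippet below wraps a call to the hypothetical `/api/translate` route from the previous sketch in react-hot-toast's `toast.promise` helper; the function name and route are assumptions, not the project's actual code:

```ts
"use client";
import toast from "react-hot-toast";

// Posts the prompt to the translation route and surfaces progress via toasts.
export function translateWithFeedback(prompt: string): Promise<string> {
  const request = fetch("/api/translate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  }).then(async (res) => {
    if (!res.ok) throw new Error("Translation failed");
    const { translated } = await res.json();
    return translated as string;
  });

  // toast.promise shows a loading toast, then a success or error toast.
  return toast.promise(request, {
    loading: "Translating…",
    success: "Translation ready",
    error: "Something went wrong",
  });
}
```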
- Node.js 18+
- Ollama: Must be running locally.
- Gemini API key from Google AI Studio
In src/app/utils/constants.ts:
- Gemini API Key: Insert your Gemini API key.
- Local Language: Set your desired language by modifying the LOCAL_LANGUAGE variable (see the example below).
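An illustrative shape for that file; only LOCAL_LANGUAGE is named above, so the key constant's name here is a guess:

```ts
// src/app/utils/constants.ts (illustrative content; the actual file may differ)
export const GEMINI_API_KEY = "YOUR_GEMINI_API_KEY"; // hypothetical name - paste your Google AI Studio key
export const LOCAL_LANGUAGE = "Spanish"; // target language used for Gemini translation
```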
Feel free to enhance or contribute to this project. For any questions or suggestions, please contact the project maintainers.
