# Llama Chat Proxy

A production-ready, collaborative AI chat interface designed for power users. It integrates structured workflows, prompt-engineering tools, and real-time collaboration into a single, sleek application.
- Features
- Tech Stack
- Project Structure
- Getting Started
- Running Locally
- Deployment
- Contributing
- License
## Features

### Smart Composer

A powerful input area that combines multiple modalities:
- Multi-modal Input: Text, Voice (Speech-to-Text), and File Uploads (PDF/Text).
- Smart Toolbar: Quick access to Prompts, Optimization, and Regeneration.
- Auto-Expanding: Distraction-free writing experience.
### Guided Workflows

Transform complex tasks into guided, step-by-step processes.
- Pre-built Templates: Job Search, Content Writing, Code Development, Email Strategy.
- Interactive Steps: Execute prompts sequentially, with context from previous steps passed forward.
- Rich Results: Markdown-formatted outputs with expand/collapse capability.
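The step-chaining described above can be sketched as a simple loop. This is an illustrative sketch, not the app's actual implementation; `runWorkflow` and `callModel` are hypothetical names standing in for the real chat-completion call.

```javascript
// Run workflow steps in order, feeding each step's output into the
// next step's prompt as context. `callModel` is a stand-in for
// whatever chat-completion call the app makes.
async function runWorkflow(steps, callModel) {
  let context = "";
  const results = [];
  for (const step of steps) {
    // Prepend accumulated context from earlier steps, if any.
    const prompt = context
      ? `${step.prompt}\n\nContext from previous steps:\n${context}`
      : step.prompt;
    const output = await callModel(prompt);
    results.push({ title: step.title, output });
    context += `\n[${step.title}]\n${output}`;
  }
  return results;
}
```

Because each step sees the outputs of all previous steps, later prompts can reference earlier results without the user copying anything by hand.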
### Prompt Optimizer

An AI-powered assistant that refines your prompts before you send them.
- Analysis: Detects issues like brevity, lack of context, or weak instructions.
- Suggestions: Generates 3 optimized variations (e.g., "More Creative", "More Professional").
- One-Click Apply: Instantly use the best version.
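The kind of pre-send analysis described above could look like the following sketch. The heuristics and the `analyzePrompt` name are assumptions for illustration, not the app's real rules.

```javascript
// Hypothetical pre-send checks: flag prompts that are too short,
// lack context, or give no clear instruction verb.
function analyzePrompt(prompt) {
  const issues = [];
  if (prompt.trim().split(/\s+/).length < 5) issues.push("too brief");
  if (!/context|given|background/i.test(prompt)) issues.push("lacks context");
  if (!/\b(write|list|explain|summarize|create)\b/i.test(prompt))
    issues.push("weak instructions");
  return issues;
}
```

Any detected issues can then be fed to the model as hints when generating the optimized variations.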
### Prompt Library

- Dynamic Suggestions: Shows prompts relevant to the current chat category (e.g., Coding prompts for "Code Writer" mode).
- Quick Access: Accessible via a popover menu in the composer.
### Real-Time Collaboration

- Rooms: Create or join named rooms to chat with others.
- Live Sync: Messages update in real-time across all connected clients.
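Under the polling model (see Tech Stack below), the client side of live sync boils down to repeatedly asking the server for messages newer than the last one it has. The sketch below uses assumed names (`fetchSince`, message `id` fields), not the app's actual API:

```javascript
// One poll cycle: fetch messages newer than state.lastId, advance the
// cursor, and hand any new messages to the UI callback.
async function pollOnce(state, fetchSince, onMessages) {
  const fresh = await fetchSince(state.lastId);
  if (fresh.length > 0) {
    state.lastId = fresh[fresh.length - 1].id;
    onMessages(fresh);
  }
}

// Repeat on a timer; a WebSocket upgrade would replace this loop with
// a single push subscription, which is why delivery is kept behind
// one callback.
function startPolling(state, fetchSince, onMessages, intervalMs = 2000) {
  return setInterval(() => pollOnce(state, fetchSince, onMessages), intervalMs);
}
```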
### Context Sidebar

- Right Sidebar: Shows specialized tools based on the active chat mode (e.g., SEO keywords for Writing, code snippets for Coding).
## Tech Stack

### Frontend

- Framework: React (Vite)
- Styling: Tailwind CSS
- Animations: Framer Motion
- Icons: Lucide React
- State Management: Custom Hooks & Context API
- PDF Processing: PDF.js
### Backend

- Runtime: Node.js
- Framework: Express.js
- AI Integration: OpenAI-compatible API Proxy (works with Llama, Groq, OpenAI)
- Real-time: Polling / WebSocket ready architecture
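The "OpenAI-compatible proxy" pattern means the server simply forwards chat requests to whichever provider is configured, adding the API key on the way. A minimal sketch of that forwarding step, assuming the `.env` variables shown later in this README (`buildUpstreamRequest` is a hypothetical helper, not necessarily how `server.js` is structured):

```javascript
// Build the fetch arguments for any OpenAI-compatible chat endpoint
// (Groq, OpenAI, Together AI, ...). The provider is swapped by
// changing env vars only -- no code changes needed.
function buildUpstreamRequest(env, messages) {
  return {
    url: env.AI_API_URL,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${env.AI_API_KEY}`,
      },
      body: JSON.stringify({ model: env.AI_MODEL, messages }),
    },
  };
}

// An Express route could then do roughly:
//   const { url, options } = buildUpstreamRequest(process.env, req.body.messages);
//   const upstream = await fetch(url, options);
//   res.status(upstream.status).send(await upstream.text());
```

Keeping the key server-side is the point of the proxy: the browser never sees `AI_API_KEY`.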
## Project Structure

```
llama-chat-proxy/
├── client/                  # Frontend React Application
│   ├── public/
│   ├── src/
│   │   ├── components/      # UI Components
│   │   │   ├── Composer.jsx         # Main input area
│   │   │   ├── WorkflowBuilder.jsx  # Workflow logic
│   │   │   ├── PromptOptimizer.jsx  # AI optimization modal
│   │   │   ├── Sidebar.jsx          # Navigation & Rooms
│   │   │   └── ...
│   │   ├── state/           # Global State (ChatStore)
│   │   ├── utils/           # Helper functions
│   │   ├── App.jsx          # Main App Component
│   │   └── styles.css       # Global Styles & Tailwind directives
│   ├── package.json
│   ├── tailwind.config.js
│   └── vite.config.js
├── server.js                # Backend Express Server
├── package.json             # Root dependencies (concurrently)
└── .env                     # Environment variables
```

## Getting Started

### Prerequisites

- Node.js (v16 or higher)
- npm or yarn
- An API Key for your AI Provider (e.g., Groq, OpenAI, Together AI)
### Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/llama-chat-proxy.git
   cd llama-chat-proxy
   ```

2. Install root dependencies:

   ```bash
   npm install
   ```

3. Install client dependencies:

   ```bash
   cd client
   npm install
   cd ..
   ```

4. Create a `.env` file in the root directory:

   ```bash
   cp .env.example .env
   ```

5. Add your API configuration to `.env`:

   ```env
   PORT=3001

   # Example for Groq (Llama 3)
   AI_API_URL=https://api.groq.com/openai/v1/chat/completions
   AI_API_KEY=your_api_key_here
   AI_MODEL=llama3-70b-8192
   ```
## Running Locally

To run both the backend server and the frontend client simultaneously:

```bash
# From the root directory
npm start
```

- Frontend: http://localhost:3000
- Backend: http://localhost:3001
Note: The project uses `concurrently` to run both processes in a single terminal window.
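For reference, a root `package.json` for this kind of setup typically looks something like the sketch below. This is an assumed example, not the project's actual file; check the repository's `package.json` for the real scripts and versions.

```json
{
  "scripts": {
    "start": "concurrently \"node server.js\" \"npm run dev --prefix client\""
  },
  "dependencies": {
    "concurrently": "^8.0.0",
    "express": "^4.18.0"
  }
}
```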
## Deployment

### Frontend (Vercel / Netlify)

1. Push your code to GitHub.
2. Import the `client` directory as the root of your project in Vercel/Netlify.
3. Set the Build Command to `npm run build`.
4. Set the Output Directory to `dist`.
5. Important: You will need to update the API endpoint in the frontend code to point to your deployed backend URL instead of `localhost:3001`.
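One common way to handle the endpoint switch (an assumption about how you might wire it, since Vite is the build tool) is to read the backend URL from a `VITE_`-prefixed environment variable rather than hardcoding it:

```javascript
// Hypothetical helper: resolve the backend base URL from a Vite-style
// env object, falling back to the local dev server. Trailing slashes
// are stripped so path joins stay clean.
function apiBase(env) {
  return (env.VITE_API_URL || "http://localhost:3001").replace(/\/$/, "");
}

// In client code:
//   fetch(`${apiBase(import.meta.env)}/api/chat`, { ... })
// with VITE_API_URL set in the Vercel/Netlify dashboard.
```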
### Backend

1. Push your code to GitHub.
2. Deploy the root directory.
3. Set the Build Command to `npm install`.
4. Set the Start Command to `node server.js`.
5. Add your environment variables (`AI_API_KEY`, etc.) in the platform's dashboard.
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the project
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## License

Distributed under the MIT License. See `LICENSE` for more information.