
🦙 TechMate AI Chat - Advanced AI Workflow Platform

Llama Chat Banner

A production-ready, collaborative AI chat interface designed for power users. It integrates structured workflows, prompt engineering tools, and real-time collaboration into a single, sleek application.


📑 Table of Contents

  • Features
  • Screenshots
  • Tech Stack
  • Project Structure
  • Getting Started
  • Running Locally
  • Deployment
  • Contributing
  • License

✨ Features

1. 🎨 Unified Composer Interface

A powerful input area that combines multiple modalities:

  • Multi-modal Input: Text, Voice (Speech-to-Text), and File Uploads (PDF/Text).
  • Smart Toolbar: Quick access to Prompts, Optimization, and Regeneration.
  • Auto-Expanding: Distraction-free writing experience.

2. ⚡ Workflow Builder

Transform complex tasks into guided, step-by-step processes.

  • Pre-built Templates: Job Search, Content Writing, Code Development, Email Strategy.
  • Interactive Steps: Execute prompts sequentially, with context from previous steps passed forward.
  • Rich Results: Markdown-formatted outputs with expand/collapse capability.
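The step chaining described above can be sketched as a small async loop. This is a minimal illustration; `runWorkflow` and `callModel` are hypothetical names, not the repository's actual API:

```javascript
// Sketch of sequential workflow execution: each step's prompt is combined
// with the accumulated context from earlier steps before being sent to the
// model. `steps` is an array of { title, prompt }; `callModel` is any
// async function that takes a prompt string and returns the model's reply.
async function runWorkflow(steps, callModel) {
  const results = [];
  let context = "";
  for (const step of steps) {
    const prompt = context
      ? `${step.prompt}\n\nContext from previous steps:\n${context}`
      : step.prompt;
    const output = await callModel(prompt); // e.g. a POST to the Express proxy
    results.push({ title: step.title, output });
    context += `\n[${step.title}]\n${output}`; // pass results forward
  }
  return results;
}
```

Each step therefore sees everything produced before it, which is what lets a "Job Search" workflow, for example, draft a cover letter from the résumé summary generated in an earlier step.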

3. 🪄 Prompt Optimizer

AI-powered assistant to refine your prompts before sending.

  • Analysis: Detects issues like brevity, lack of context, or weak instructions.
  • Suggestions: Generates 3 optimized variations (e.g., "More Creative", "More Professional").
  • One-Click Apply: Instantly use the best version.
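The analysis phase can be illustrated with a few heuristic checks of the kind described above. The thresholds, regexes, and rule names below are assumptions for the sketch, not the app's real logic:

```javascript
// Illustrative prompt analysis: flag prompts that are too short, give no
// context, or lack a clear instruction verb. Returns a list of issue labels.
function analyzePrompt(prompt) {
  const issues = [];
  const words = prompt.trim().split(/\s+/).filter(Boolean);
  if (words.length < 5) issues.push("too brief");
  if (!/context|background|given|for/i.test(prompt)) issues.push("lacks context");
  if (!/write|explain|list|summarize|generate|create/i.test(prompt)) {
    issues.push("weak instruction verb");
  }
  return issues;
}
```

The detected issues would then steer which of the optimized variations (e.g. "More Creative", "More Professional") the model is asked to generate.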

4. 📚 Context-Aware Prompt Library

  • Dynamic Suggestions: Shows prompts relevant to the current chat category (e.g., Coding prompts for "Code Writer" mode).
  • Quick Access: Accessible via a popover menu in the composer.
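Context-aware filtering boils down to matching library entries against the active chat category. The prompt texts and category names below are illustrative, not the app's real library:

```javascript
// Minimal sketch of a category-filtered prompt library.
const promptLibrary = [
  { text: "Refactor this function for readability", category: "coding" },
  { text: "Draft a cover letter for this job posting", category: "job-search" },
  { text: "Outline a blog post about this topic", category: "writing" },
];

// Return only the prompts tagged with the currently active category,
// e.g. promptsFor("coding") when the chat is in "Code Writer" mode.
function promptsFor(category) {
  return promptLibrary.filter((p) => p.category === category);
}
```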

5. 🌐 Real-Time Collaboration

  • Rooms: Create or join named rooms to chat with others.
  • Live Sync: Messages update in real-time across all connected clients.

6. 🛠️ Category-Specific Tools

  • Right Sidebar: Specialized tools based on the active chat mode (e.g., SEO keywords for Writing, code snippets for Coding).

Screenshots


💻 Tech Stack

Frontend

  • Framework: React (bundled with Vite)
  • Styling: Tailwind CSS
  • State Management: Lightweight global store (ChatStore)

Backend

  • Runtime: Node.js
  • Framework: Express.js
  • AI Integration: OpenAI-compatible API Proxy (works with Llama, Groq, OpenAI)
  • Real-time: Polling, with a WebSocket-ready architecture
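Because the proxy speaks the OpenAI-compatible chat completions format, the upstream request it forwards can be sketched as below. `buildUpstreamRequest` is a hypothetical helper for illustration, not code from server.js; the env keys match those shown in Environment Setup:

```javascript
// Assemble an OpenAI-compatible chat completions request from environment
// settings (AI_API_URL, AI_API_KEY, AI_MODEL). The same shape works for
// Llama via Groq, OpenAI, or any other compatible provider.
function buildUpstreamRequest(env, messages) {
  return {
    url: env.AI_API_URL,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${env.AI_API_KEY}`,
      },
      body: JSON.stringify({ model: env.AI_MODEL, messages }),
    },
  };
}
```

Swapping providers is then just a matter of changing the three environment variables, with no client changes.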

📁 Project Structure

llama-chat-proxy/
β”œβ”€β”€ client/                 # Frontend React Application
β”‚   β”œβ”€β”€ public/
β”‚   β”œβ”€β”€ src/
β”‚   β”‚   β”œβ”€β”€ components/     # UI Components
β”‚   β”‚   β”‚   β”œβ”€β”€ Composer.jsx        # Main input area
β”‚   β”‚   β”‚   β”œβ”€β”€ WorkflowBuilder.jsx # Workflow logic
β”‚   β”‚   β”‚   β”œβ”€β”€ PromptOptimizer.jsx # AI optimization modal
β”‚   β”‚   β”‚   β”œβ”€β”€ Sidebar.jsx         # Navigation & Rooms
β”‚   β”‚   β”‚   └── ...
β”‚   β”‚   β”œβ”€β”€ state/          # Global State (ChatStore)
β”‚   β”‚   β”œβ”€β”€ utils/          # Helper functions
β”‚   β”‚   β”œβ”€β”€ App.jsx         # Main App Component
β”‚   β”‚   └── styles.css      # Global Styles & Tailwind directives
β”‚   β”œβ”€β”€ package.json
β”‚   β”œβ”€β”€ tailwind.config.js
β”‚   └── vite.config.js
β”œβ”€β”€ server.js               # Backend Express Server
β”œβ”€β”€ package.json            # Root dependencies (concurrently)
└── .env                    # Environment variables

🚀 Getting Started

Prerequisites

  • Node.js (v16 or higher)
  • npm or yarn
  • An API Key for your AI Provider (e.g., Groq, OpenAI, Together AI)

Installation

  1. Clone the repository:

    git clone https://github.com/yourusername/llama-chat-proxy.git
    cd llama-chat-proxy
  2. Install Root Dependencies:

    npm install
  3. Install Client Dependencies:

    cd client
    npm install
    cd ..

Environment Setup

  1. Create a .env file in the root directory:

    cp .env.example .env
  2. Add your API configuration to .env:

    PORT=3001
    # Example for Groq (Llama 3)
    AI_API_URL=https://api.groq.com/openai/v1/chat/completions
    AI_API_KEY=your_api_key_here
    AI_MODEL=llama3-70b-8192

▶️ Running Locally

To run both the backend server and the frontend client simultaneously:

# From the root directory
npm start

Note: The project uses concurrently to run both processes in a single terminal window.
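For reference, a root package.json wired up with concurrently typically looks something like this (the script names and layout here are illustrative, not necessarily the repository's exact file):

```json
{
  "scripts": {
    "server": "node server.js",
    "client": "npm run dev --prefix client",
    "start": "concurrently \"npm run server\" \"npm run client\""
  }
}
```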


🚢 Deployment

Frontend (Vercel/Netlify)

  1. Push your code to GitHub.
  2. Import the client directory as the root of your project in Vercel/Netlify.
  3. Set the Build Command to npm run build.
  4. Set the Output Directory to dist.
  5. Important: You will need to update the API endpoint in the frontend code to point to your deployed backend URL instead of localhost:3001.
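One common way to avoid hard-coding the backend URL is to resolve it from a build-time environment variable (Vite exposes any `VITE_`-prefixed variable on `import.meta.env`). The variable name `VITE_API_URL` and the helper below are assumptions for the sketch, not the project's current code:

```javascript
// Pick the backend base URL from a Vite-style env object, falling back to
// the local dev server used by `npm start`.
function resolveApiBase(env) {
  return (env && env.VITE_API_URL) || "http://localhost:3001";
}

// In the client this would be called as resolveApiBase(import.meta.env),
// so production builds point at the deployed backend automatically.
```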

Backend (Render/Railway/Heroku)

  1. Push your code to GitHub.
  2. Deploy the root directory.
  3. Set the Build Command to npm install.
  4. Set the Start Command to node server.js.
  5. Add your environment variables (AI_API_KEY, etc.) in the platform's dashboard.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the project
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

📄 License

Distributed under the MIT License. See LICENSE for more information.
