One transcript. Seven AI agents. Instant clarity.
PrismAI transforms any meeting transcript into structured intelligence — summaries, action items, sentiment analysis, follow-up emails, calendar suggestions, and a meeting health score — powered by a multi-agent pipeline on Groq + LLaMA 3.3 70B.
Frontend is ready for deployment on Vercel. The backend remains on Render.
An orchestrator LLM reads your transcript and dynamically routes it to the right specialized agents, all running in parallel:
| Agent | Output |
|---|---|
| Summarizer | Concise 2-3 sentence TL;DR |
| Action Items | Who owns what, with due dates |
| Decisions | What was actually agreed or resolved |
| Sentiment | Tone score + conflict detection |
| Email Drafter | Ready-to-send follow-up email |
| Calendar Suggester | Follow-up meeting recommendation |
| Health Score | 0-100 meeting quality score with breakdown |
Plus a Chat interface to ask questions about any meeting in natural language.
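The fan-out described above can be sketched with `asyncio` — a minimal illustration where stub coroutines stand in for the real Groq/LLaMA agent calls, and every function and agent name is hypothetical:

```python
import asyncio

# Stub agents standing in for the real LLM-backed agents (hypothetical names).
async def summarizer(transcript: str) -> str:
    return f"TL;DR of {len(transcript)}-char meeting"

async def action_items(transcript: str) -> str:
    return "Alice: send recap by Friday"

async def sentiment(transcript: str) -> str:
    return "tone: positive, no conflict detected"

AGENTS = {"summary": summarizer, "actions": action_items, "sentiment": sentiment}

async def analyze(transcript: str, routes: list) -> dict:
    """Run only the agents the orchestrator selected, all in parallel."""
    selected = {name: AGENTS[name] for name in routes}
    results = await asyncio.gather(*(agent(transcript) for agent in selected.values()))
    return dict(zip(selected, results))

# The orchestrator decides the routes; here we hard-code two of them.
report = asyncio.run(analyze("Standup: ship v2 on Friday.", ["summary", "actions"]))
```

Because `asyncio.gather` awaits all selected agents concurrently, total latency is bounded by the slowest agent rather than the sum of all of them.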
- Paste a transcript directly
- Record live audio via browser microphone (Web Speech API)
- Upload an audio file — transcribed via Groq Whisper large-v3
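All three input modes converge on a single transcript before the pipeline runs. A minimal normalization sketch — the `transcribe_audio` stub stands in for the Groq Whisper large-v3 call, and all names here are hypothetical:

```python
from typing import Optional

def transcribe_audio(audio_bytes: bytes) -> str:
    # Placeholder for the server-side Groq Whisper large-v3 transcription call.
    return "<transcribed text>"

def get_transcript(text: Optional[str] = None, audio: Optional[bytes] = None) -> str:
    """Normalize pasted text, live-captured speech, or an uploaded audio file."""
    if text:        # pasted directly, or captured in-browser via the Web Speech API
        return text.strip()
    if audio:       # uploaded file -> transcribed on the backend
        return transcribe_audio(audio)
    raise ValueError("Provide either a transcript or an audio file")

transcript = get_transcript(text="  We agreed to ship Friday.  ")
```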
| Layer | Tech |
|---|---|
| Frontend | React + Vite + Tailwind CSS |
| Backend | FastAPI (Python) |
| AI | Groq API — LLaMA 3.3 70B + Whisper large-v3 |
| Hosting | Vercel (frontend) + Render (backend) |
```bash
# Backend
cp backend/.env.example backend/.env  # add your GROQ_API_KEY
cd backend && pip install -r requirements.txt && uvicorn main:app --reload

# Frontend (new terminal)
cd frontend && npm install && npm run dev
```

Get a free Groq API key at https://console.groq.com.
- Import this repo into Vercel.
- Keep the project root at the repo root so `vercel.json` is used.
- Add `VITE_API_URL` and point it at your Render backend, for example `https://meeting-copilot-api.onrender.com`.
- Deploy.
The frontend no longer depends on a GitHub Pages base path, so it works cleanly on a root Vercel domain.
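A minimal `vercel.json` consistent with the setup above might look like the sketch below — this is an illustrative example, not the repo's actual file. The rewrite tells Vercel to serve `index.html` for every path, so client-side routes in the React app resolve on a root domain:

```json
{
  "rewrites": [
    { "source": "/(.*)", "destination": "/index.html" }
  ]
}
```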