This is a drop-in Next.js app you can drag into GitHub and deploy on Vercel.
- Talk to Emome page (`/voice`) using ElevenLabs Agents via `@elevenlabs/react`
- After speaking, click “Convert to interaction event”
- A server route calls Gemini to turn the transcript into:
  - structured event fields (person, intensity, duration, valence, tags)
  - a short reflection
- The Dashboard updates the Relationship Pulse (a time-weighted emotional index)
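As a rough illustration of what a time-weighted emotional index can look like, here is a sketch using exponential decay so recent interactions outweigh old ones. The field names and the 7-day half-life are assumptions for illustration, not the app's actual schema:

```typescript
// Sketch of a time-weighted emotional index: each event's contribution
// decays exponentially with age. Field names and the 7-day half-life are
// illustrative assumptions, not Emome's real schema.

interface InteractionEvent {
  valence: number;   // -1 (negative) .. +1 (positive)
  intensity: number; // 0 .. 1
  timestamp: number; // ms since epoch
}

const HALF_LIFE_MS = 7 * 24 * 60 * 60 * 1000; // weight halves every 7 days

export function relationshipPulse(
  events: InteractionEvent[],
  now: number = Date.now()
): number {
  let weighted = 0;
  let totalWeight = 0;
  for (const e of events) {
    const age = Math.max(0, now - e.timestamp);
    const weight = Math.pow(0.5, age / HALF_LIFE_MS) * e.intensity;
    weighted += weight * e.valence;
    totalWeight += weight;
  }
  // Pulse lands in [-1, 1]; 0 when there are no events yet.
  return totalWeight === 0 ? 0 : weighted / totalWeight;
}
```

A week-old negative event then counts half as much as a fresh one of the same intensity, so the pulse drifts back toward recent experience.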
- Install deps: `npm install`
- Add env vars (copy `.env.example` to `.env.local`): `cp .env.example .env.local`
- Create an ElevenLabs Agent
- In ElevenLabs Agents dashboard, create an agent for “Emome voice logging”.
- Recommended: set it as a private agent and use signed URLs (more secure).
- Put your `ELEVENLABS_AGENT_ID` and `ELEVENLABS_API_KEY` into `.env.local`.
Signed URL route used: `POST /api/elevenlabs/signed-url`
Docs reference:
- Signed URLs are the recommended way to secure client-side connections.
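A minimal sketch of what such a route handler can look like (App Router style). The ElevenLabs endpoint path and response field shown here are assumptions based on their ConvAI docs; verify them against the current API reference before relying on this:

```typescript
// app/api/elevenlabs/signed-url/route.ts — sketch of a signed-URL route.
// ASSUMPTION: the ElevenLabs get_signed_url endpoint path and its
// { signed_url } response shape; check the current ConvAI API reference.

// Endpoint construction kept in a pure helper so it is easy to test.
export function signedUrlEndpoint(agentId: string): string {
  const base = "https://api.elevenlabs.io/v1/convai/conversation/get_signed_url";
  return `${base}?agent_id=${encodeURIComponent(agentId)}`;
}

export async function POST(): Promise<Response> {
  const agentId = process.env.ELEVENLABS_AGENT_ID;
  const apiKey = process.env.ELEVENLABS_API_KEY;
  if (!agentId || !apiKey) {
    return Response.json({ error: "Missing ElevenLabs env vars" }, { status: 500 });
  }

  const res = await fetch(signedUrlEndpoint(agentId), {
    headers: { "xi-api-key": apiKey }, // API key stays server-side
  });
  if (!res.ok) {
    return Response.json({ error: "Failed to get signed URL" }, { status: 502 });
  }

  const body = (await res.json()) as { signed_url?: string };
  return Response.json({ signedUrl: body.signed_url });
}
```

The client then fetches this route and hands the returned URL to the ElevenLabs React SDK, so the API key never reaches the browser.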
- Get a Gemini API key
- Put it in `GOOGLE_GEMINI_API_KEY`.
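The transcript-to-event step can be sketched as below. The prompt wording, model name, and field list are illustrative assumptions; the URL is the public `generateContent` REST endpoint, but verify the details against the current Gemini docs:

```typescript
// Sketch of the transcript -> structured event step. Prompt wording and
// field list are illustrative assumptions; verify the endpoint and
// response shape against the current Gemini REST docs.

// Pure prompt builder, kept separate so it is easy to test.
export function buildExtractionPrompt(transcript: string): string {
  return [
    "Extract a relationship interaction event from this transcript.",
    "Reply with JSON only, using these fields:",
    '{ "person": string, "intensity": number, "duration": string,',
    '  "valence": number, "tags": string[], "reflection": string }',
    "",
    `Transcript: ${transcript}`,
  ].join("\n");
}

export async function extractEvent(transcript: string): Promise<unknown> {
  const key = process.env.GOOGLE_GEMINI_API_KEY;
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    `gemini-1.5-flash:generateContent?key=${key}`;

  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{ parts: [{ text: buildExtractionPrompt(transcript) }] }],
    }),
  });
  const data = await res.json();
  // The model's text lives at candidates[0].content.parts[0].text; real
  // code should also strip markdown fences before parsing.
  const text = data.candidates?.[0]?.content?.parts?.[0]?.text ?? "{}";
  return JSON.parse(text);
}
```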
- Run `npm run dev`
- This project includes a placeholder `public/logo.svg`.
- Replace it with your original Emome logo file if you already have one.
- Colors are defined in `tailwind.config.ts` and `app/globals.css`.
- Add env vars in Vercel project settings: `GOOGLE_GEMINI_API_KEY`, `ELEVENLABS_API_KEY`, `ELEVENLABS_AGENT_ID`
- Deploy.
Keep wording non-medical:
- Emome is a “clarity tool” / “relationship tracker” (not therapy, not diagnosis).
Good luck 😈