Connect with peers who've been through exactly what you're facing.
College is full of moments where you feel completely alone: transferring to a new campus, burning out mid-semester, navigating dating, housing, or just not knowing who to talk to. Existing resources either feel too heavy (counselors) or too impersonal (forums and Reddit threads).
The most helpful person for a struggling student is almost always another student who has been through the exact same thing. PeerPath was built to make that connection happen: quickly, warmly, and without awkwardness.
PeerPath is a web platform that matches students going through a hard moment with peers who have lived through something genuinely similar.
- Students describe their situation in their own words and select relevant tags (e.g. `transfer student`, `burnout`, `housing stress`)
- PeerPath analyzes the description and finds peers whose past experiences best match the student's context, emotional state, and the kind of support they need
- Returns the top 3 matches, each with a clear explanation of why they were chosen
- Every match comes with an AI-generated opening message: personal, warm, and ready to send
- Designed so reaching out never feels awkward or forced
- Students can message matched peers directly without exchanging contact info upfront
- Full chat history and unread notifications included
| Layer | Technology |
|---|---|
| Backend | Python + FastAPI |
| LLM | GPT-4o (OpenAI API) |
| Frontend | React + TypeScript + Tailwind CSS |
| Database | PostgreSQL |
| Auth | JWT-based authentication |
Student descriptions are parsed by an LLM into four structured dimensions:
| Dimension | Example |
|---|---|
| `context` | transfer adjustment, academic stress, dating |
| `struggle_type` | social isolation, procedural confusion, anxiety |
| `emotional_signal` | lonely, overwhelmed, frustrated |
| `help_needed` | reassurance, peer experience, practical tips |
These are then scored against peer profiles using a combination of rule-based field matching and LLM semantic reasoning.
Scoring Algorithm:
Fast rule-based scoring runs across all candidates first; LLM reranking is applied only to the top candidates β keeping response times low without sacrificing quality.
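As a rough sketch of this two-stage flow (illustrative names, not the actual PeerPath code): Stage 1 scores every candidate with cheap field matching, and only the top few proceed to the LLM reranker.

```python
# Illustrative two-stage scorer; field names mirror the parsed challenge
# profile, but this is a sketch, not the actual PeerPath implementation.

def rule_score(student: dict, peer: dict) -> float:
    """Stage 1: cheap scoring from tag overlap plus exact field matches."""
    score = float(len(set(student["tags"]) & set(peer["tags"])))
    for challenge in peer.get("past_challenges", []):
        parsed = challenge.get("parsed", {})
        for field in ("context", "struggle_type", "emotional_signal"):
            value = student["profile"].get(field)
            if value is not None and value == parsed.get(field):
                score += 1.0
    return score

def shortlist(student: dict, peers: list[dict], top_k: int = 5) -> list[dict]:
    """Rank all candidates by rule score; only the top_k go on to the LLM."""
    ranked = sorted(peers, key=lambda p: rule_score(student, p), reverse=True)
    return ranked[:top_k]  # Stage 2 (LLM semantic rerank) runs on these only
```

Because `rule_score` touches no LLM, it can run over the full candidate pool in milliseconds; the expensive semantic pass sees only the shortlist.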
A dedicated LLM prompt grounds each opening message in the specific peer's past experience and the match reason, making every starter feel genuinely personal.
```
User Input
├── Select Tags (multi-select)
└── Natural language challenge description
        ↓
Stage 1: Tag Filter
└── Filter candidate peers by overlapping tags
        ↓
Stage 2: LLM Problem Parser
└── Description → Structured Challenge Profile (JSON)
        ↓
Stage 3: Relevance Scoring
└── Challenge Profile vs. each candidate's past_challenges
        ↓
Stage 4: Ranking & Output
└── Tag score + LLM relevance score → top peers + explanations
```
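The Stage 4 combination can be sketched as a weighted blend of the two scores. The weights and candidate layout below are hypothetical, not PeerPath's actual values:

```python
# Hypothetical Stage-4 ranking: blend the normalized tag score (0-1) with the
# LLM relevance score (0-10) from Stage 3. Weights are illustrative.

def rank_matches(candidates: list[dict], w_tag: float = 0.4,
                 w_llm: float = 0.6, top_n: int = 3) -> list[dict]:
    """Each candidate: {'peer': ..., 'tag_score': 0-1,
    'llm': {'score': 0-10, 'reason': str}}."""
    scored = []
    for c in candidates:
        final = w_tag * c["tag_score"] + w_llm * (c["llm"]["score"] / 10.0)
        scored.append({"peer": c["peer"], "score": round(final, 3),
                       "why": c["llm"]["reason"]})
    scored.sort(key=lambda m: m["score"], reverse=True)
    return scored[:top_n]  # top 3 matches, each with its explanation
```

Carrying the LLM's one-sentence `reason` through to the output is what lets each of the top 3 matches ship with its explanation.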
```json
{
  "id": "peer_001",
  "name": "Jordan",
  "major": "Computer Science",
  "year": "Senior",
  "tags": ["transfer student", "making friends", "joining clubs"],
  "past_challenges": [
    {
      "raw": "I transferred in sophomore year and felt disconnected socially.",
      "parsed": {
        "context": "transfer adjustment",
        "struggle_type": "social isolation",
        "emotional_signal": "lonely",
        "resolution_type": "peer experience"
      }
    }
  ],
  "help_topics": ["transfer adjustment", "campus social life"],
  "comfort_level": "open to messages"
}
```

Prompt 1: Problem Parser

```
You are analyzing a university student's challenge description.
Extract the following fields. You MUST choose values ONLY from the provided options.
Return ONLY a JSON object, no explanation.

"context": ["transfer adjustment", "freshman adjustment", "academic stress",
            "course registration", "campus navigation", "social life",
            "dating", "internship search", "research decisions",
            "housing", "international student adjustment", "changing majors"]
"struggle_type": ["social isolation", "procedural confusion", "academic difficulty",
                  "anxiety", "burnout", "lack of information", "difficulty adapting",
                  "relationship confusion", "time management", "financial stress"]
"emotional_signal": ["lonely", "overwhelmed", "frustrated", "uncertain",
                     "anxious", "lost", "stressed", "hopeful", "confused"]
"help_needed": ["reassurance", "step-by-step advice", "peer experience",
                "practical tips", "emotional support", "introductions to people",
                "information about resources"]

Student input: "{user_description}"
```
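Because the parser must return only values from these fixed lists, the backend can reject malformed or out-of-vocabulary replies before they reach scoring. A minimal sketch (option lists abbreviated; names are illustrative, not the actual code):

```python
# Validate the problem parser's JSON reply against the fixed option lists.
# The lists are abbreviated here for brevity.

import json

ALLOWED = {
    "context": {"transfer adjustment", "academic stress", "dating", "housing"},
    "struggle_type": {"social isolation", "anxiety", "burnout"},
    "emotional_signal": {"lonely", "overwhelmed", "frustrated"},
    "help_needed": {"reassurance", "peer experience", "practical tips"},
}

def parse_profile(raw_reply: str) -> dict:
    """Reject out-of-vocabulary LLM replies before they reach the scorer."""
    profile = json.loads(raw_reply)
    for field, options in ALLOWED.items():
        if profile.get(field) not in options:
            raise ValueError(f"invalid value for {field}: {profile.get(field)!r}")
    return profile
```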
Prompt 2: Relevance Scorer

```
You are comparing a student's current challenge with a peer's past experience.

Student Challenge Profile:
- Context: {student.context}
- Struggle type: {student.struggle_type}
- Emotional signal: {student.emotional_signal}
- Help needed: {student.help_needed}

Peer's Past Experience: "{peer.past_challenges[n].raw}"

Score this match from 0 to 10. Return ONLY:
{ "score": <0-10>, "reason": "<one sentence>" }
```
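Filling this template and guarding its reply might look like the sketch below. The template text is shortened and the helper names are assumptions, not the actual PeerPath code:

```python
# Sketch: fill the relevance prompt and validate the model's JSON reply.

import json

TEMPLATE = (
    "Student challenge: {context} / {struggle_type} / {emotional_signal} / {help_needed}\n"
    'Peer experience: "{peer_raw}"\n'
    'Score this match from 0 to 10. Return ONLY: {{"score": <0-10>, "reason": "<one sentence>"}}'
)

def build_prompt(profile: dict, peer_raw: str) -> str:
    """Substitute the parsed challenge profile and peer text into the template."""
    return TEMPLATE.format(peer_raw=peer_raw, **profile)

def parse_relevance(raw_reply: str) -> tuple[int, str]:
    """Clamp the score into 0-10 and keep the one-sentence reason."""
    data = json.loads(raw_reply)
    score = max(0, min(10, int(data["score"])))
    return score, str(data.get("reason", ""))
```

The clamp matters because LLMs occasionally return out-of-range scores even when the prompt asks for 0-10.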
Prompt 3: Conversation Starter

```
A student is reaching out to a peer for the first time.

Student situation: "{user_description}"
Peer background: "{peer.past_challenges[n].raw}"
Match reason: "{reason from Prompt 2}"

Write a warm, natural opening message (2-3 sentences) the student could send.
Do not be overly formal. Make it feel like one student talking to another.
```
| ID | Feature | Priority |
|---|---|---|
| PP-1 | Tag-based peer filtering | High |
| PP-2 | Challenge description input | High |
| PP-3 | LLM semantic matching | High |
| PP-4 | Peer recommendation ranking | Medium |
| PP-5 | Explanation generation | Medium |
| PP-6 | Conversation starter generation | Medium |
1. **LLM cost vs. performance trade-off.** Our initial plan was to use LLM reasoning for every candidate comparison, but this proved too token-heavy and expensive at scale. We designed a two-stage scoring algorithm: fast rule-based field matching across all candidates first, with LLM semantic reasoning applied only to the top finalists.
2. **Structured LLM output for downstream scoring.** For the similarity scoring to work, the LLM needed to return data in a consistent, structured format, not free-form text. We carefully designed prompt templates with a fixed output schema so the parsed challenge profile could be reliably consumed by the scoring pipeline.
3. **Matching latency and user experience.** The multi-step matching pipeline introduced noticeable loading time. Rather than hiding it, we leaned into it, designing a loading screen with rotating micro-messages that set honest expectations and keep users engaged while results load.
4. **Scope expansion mid-build.** We started with a single matching page but quickly realized users needed a way to revisit past results, which led us to design and build a full match history system with persistent storage.
- A matching system that understands emotional context and lived experience, not just keyword overlap
- Conversation starters that students actually want to send
- A complete end-to-end product built at a hackathon: registration, profiles, matching, messaging, and history
The algorithm is only half the product. How a match is explained, and whether the opening message sounds human, determines whether a student actually reaches out. Getting the tone right mattered just as much as getting the ranking right: clear explanations, intuitive design, and a friendly interaction flow make it far more likely that students act on a match.
- Anonymous matching: matched peers can choose to stay anonymous until they feel comfortable sharing
- University email verification: requiring a `.edu` email (e.g. `umich.edu`) to keep the platform within the student community
- Custom tags: letting students search, create, and contribute their own tags beyond the preset list
- AI navigation agent: an in-app assistant powered by curated campus resources and support data
- Smarter ranking over time: fine-tuning matching weights based on real outcome data and peer activity signals
- User feedback and data expansion: collecting interaction data to expand the database and continuously refine match quality using machine learning
```bash
# Clone the repo
git clone https://github.com/your-org/peerpath.git
cd peerpath

# Backend
cd backend
pip install -r requirements.txt
uvicorn main:app --reload

# Frontend
cd frontend
npm install
npm run dev
```

Set your environment variables:

```bash
OPENAI_API_KEY=your_key_here
DATABASE_URL=postgresql://...
JWT_SECRET=your_secret_here
```

This product earned 3rd place at the CampusAI x MDC Hackathon 2026. More details on Devpost.