ankitlade12/loopbreaker

LoopBreaker — AI That Detects Conversational Loops & Breaks Them

Python 3.13+ · FastAPI · Groq · React 19 · Vite · License: MIT

HackHazards '26 Submission · "Stop talking in circles. Start making progress."

Detect when a conversation is going nowhere — and generate the one question that breaks the cycle.

Quick Highlights

  • Semantic Loop Detection: NLP embeddings + cosine similarity detect when the same argument repeats in different words — not keyword matching
  • Loop Pattern Classification: Categorizes loops as Deadlock, Ping-Pong, Escalation, Avoidance, or Echo Chamber
  • AI Interventions + Summary: Groq LLaMA 3.3 diagnoses unstated assumptions, generates loop-breaking questions, and writes a plain-English summary
  • Meeting Cost Calculator: Shows the dollar cost of loops — total meeting cost, $ wasted, % time wasted, projected yearly waste
  • Conversation DNA Fingerprint: Unique radial visualization — each ring is a turn, color = tone, red = looped, with A-F health grade
  • Debate Scoreboard: Grades each speaker A-F on idea novelty vs repetition with stacked bar charts
  • Audio Upload + Whisper: Upload MP3/WAV/M4A files, Groq Whisper transcribes, auto-assigns speakers, ready for analysis
  • Emotional Tone Tracking: Tracks frustration, defensiveness, urgency, and agreement across the conversation timeline
  • Speaker Analytics: Per-speaker breakdown showing who loops the most, who initiates loops, and their dominant emotional tone
  • Live Meeting Mode: Real-time transcription via Web Speech API with rolling loop detection every 30 seconds
  • Visual Analytics: Similarity heatmap, conversation path visualizer with loop arcs, escalation timeline
  • 24 Features Total: File upload, adjustable sensitivity, conversation comparison, loop resolution tracker, PDF export, dark/light theme, skeleton loading, toast notifications, keyboard shortcuts, conversation replay, mobile responsive

The Problem

The #1 killer of productive meetings, relationships, and negotiations is conversational looping — when people keep restating the same positions in slightly different words, believing they're making progress while going nowhere.

  • The average professional spends 31 hours/month in unproductive meetings (Atlassian research)
  • 71% of meetings are considered unproductive (Harvard Business Review)
  • Nobody notices loops while they're happening — they only feel the frustration afterward

| Current Approach | Why It Fails |
| --- | --- |
| Meeting transcription (Otter, Fireflies) | Answers "what was said?" — doesn't detect repetition patterns |
| Meeting summaries (Copilot, Read.ai) | Post-meeting only, no loop detection, no intervention generation |
| Sentiment analysis | Measures how people feel, not where they're stuck |
| Action item extraction | Only useful if the meeting actually made progress |

No tool exists that detects semantic repetition patterns in conversation, visualizes them, and generates targeted interventions to break them.

The Solution

LoopBreaker is an AI-powered conversation debugger that answers: "Where is this conversation stuck, why, and what question would unstick it?"

How It Works

  1. Semantic Chunking: Transcript is split into argument units (one per speaker turn)

  2. Embedding Generation: Each chunk gets a 384-dimensional semantic embedding via all-MiniLM-L6-v2

  3. Similarity Matrix: Pairwise cosine similarity computed for all chunks:

    similarity[i][j] = cosine_similarity(embedding[i], embedding[j])
    
  4. Loop Detection: Loops identified when:

    - similarity(chunk_i, chunk_j) > 0.72
    - j - i >= 2 (not adjacent turns)
    - Connected components form loop clusters via union-find
    
  5. Pattern Classification: Each loop is categorized by speaker distribution, alternation pattern, and escalation trajectory

  6. AI Diagnosis (Groq): For each loop cluster, LLaMA 3.3 identifies the unstated assumption, generates a loop-breaking question, provides a reframe, and finds common ground

  7. Tone Analysis: Keyword/pattern-based scoring across frustration, defensiveness, urgency, agreement, and questioning dimensions
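Steps 2–4 can be sketched in a few lines of plain Python. This uses toy 3-dimensional vectors in place of real embeddings (the actual pipeline gets 384-dim vectors from sentence-transformers' all-MiniLM-L6-v2); the threshold and gap values match the ones above.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def similarity_matrix(embeddings):
    """Pairwise cosine similarity for all chunks: similarity[i][j]."""
    n = len(embeddings)
    return [[cosine_similarity(embeddings[i], embeddings[j]) for j in range(n)]
            for i in range(n)]

def loop_pairs(matrix, threshold=0.72, min_gap=2):
    """Non-adjacent chunk pairs whose similarity exceeds the threshold."""
    n = len(matrix)
    return [(i, j) for i in range(n)
            for j in range(i + min_gap, n)
            if matrix[i][j] > threshold]

# Toy embeddings: turn 3 restates turn 0 in a slightly different direction.
emb = [[1.0, 0.1, 0.0], [0.0, 1.0, 0.2], [0.2, 0.0, 1.0], [0.9, 0.15, 0.05]]
pairs = loop_pairs(similarity_matrix(emb))
# pairs == [(0, 3)] — the repeated argument, skipping adjacent turns
```

These pairs then feed the union-find clustering in step 4 to form loop clusters.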

High-Level Workflow

flowchart LR
    subgraph "LoopBreaker Pipeline"
        A["Transcript Input\nPaste / File / Audio / Live Mic"] --> B["Semantic Chunker\nSpeaker Turn Parsing"]
        B --> C["Embedding Engine\nall-MiniLM-L6-v2"]
        C --> D["Similarity Matrix\nCosine Similarity"]
        D --> E["Loop Detector\nUnion-Find Clustering"]
        E --> F["AI Diagnosis\nGroq LLaMA 3.3"]
    end

    subgraph "Analysis Layer"
        G["Pattern Classifier\nDeadlock / Ping-Pong / Escalation"]
        H["Tone Analyzer\nFrustration / Defensiveness / Urgency"]
        I["Speaker Stats\nLoop Contribution Tracking"]
        J["React Dashboard\nTabbed Visualization"]
    end

    E --> G
    B --> H
    E --> I
    H --> I
    F --> J
    G --> J
    I --> J

    style A fill:#8B5CF6,color:#fff
    style B fill:#6366f1,color:#fff
    style C fill:#3b82f6,color:#fff
    style D fill:#0ea5e9,color:#fff
    style E fill:#10b981,color:#fff
    style F fill:#f59e0b,color:#fff
    style G fill:#ec4899,color:#fff
    style H fill:#ef4444,color:#fff
    style I fill:#14b8a6,color:#fff
    style J fill:#a855f7,color:#fff

Analysis Pipeline

| Stage | Component | Input | Output |
| --- | --- | --- | --- |
| Parse | Chunker | Raw transcript | Array of ChunkUnit (speaker, text, index) |
| Embed | Embedding Engine | Chunk texts | 384-dim vectors via all-MiniLM-L6-v2 |
| Compare | Similarity Matrix | Embedding vectors | N x N cosine similarity matrix |
| Detect | Loop Detector | Similarity matrix + threshold | LoopCluster objects with severity scores |
| Classify | Pattern Classifier | Loop clusters | Pattern type (Deadlock, Ping-Pong, etc.) |
| Diagnose | Groq Client | Loop cluster text | Unstated assumption + loop-breaking question |
| Tone | Tone Analyzer | All chunks | Per-turn tone scores + escalation metric |
| Score | Debate Scorer | Chunks + loops | Per-speaker new vs repeated points + novelty grade |
| Stats | Speaker Analyzer | Chunks + loops + tones | Per-speaker loop contribution + dominant tone |
| Summarize | Groq Client | Chunks + loops + efficiency | Plain-English AI summary of what went wrong |
| Transcribe | Groq Whisper | Audio file (MP3/WAV/M4A) | Speaker-attributed transcript text |

Architecture & Technical Overview

System Architecture

graph TB
    subgraph Frontend ["Frontend — React 19 + TypeScript + Vite"]
        INPUT["Transcript Input\nPaste / Demo / File / Audio"]
        LIVE["Live Mode\nWeb Speech API"]
        VIZ["Loop Visualizer\nCanvas Path Diagram"]
        HEAT["Similarity Heatmap\nN x N Matrix"]
        DNA["Conversation DNA\nRadial Fingerprint"]
        TONE_CHART["Tone Timeline\nEscalation Chart"]
        SPEAKER["Speaker Analytics\nLoop Contribution Bars"]
        DEBATE["Debate Scoreboard\nNovelty Grades A-F"]
        PATTERN["Pattern Badges\nDeadlock / Ping-Pong / etc."]
        REPLAY["Conversation Replay\nAnimated Playback"]
        INTERV["Intervention Cards\nAI Diagnosis + Resolution"]
        COST["Meeting Cost Calculator\nDollar Impact"]
        EXPORT["Export Report\nPDF / txt / json"]
    end

    subgraph APILayer ["API Layer — FastAPI"]
        ANALYZE["/api/analyze\nFull Pipeline"]
        DETECT["/api/detect-loops\nLightweight Detection"]
        TRANSCRIBE["/api/transcribe\nAudio → Text (Whisper)"]
        HEALTH["/api/health\nHealth Check"]
    end

    subgraph CoreEngine ["Core Engine — Python"]
        CHUNKER["Chunker\nTranscript → Semantic Units"]
        EMBED["Embedding Engine\nall-MiniLM-L6-v2"]
        SIM["Similarity Matrix\nCosine Similarity (scikit-learn)"]
        DETECTOR["Loop Detector\nUnion-Find Clustering"]
        CLASSIFIER["Pattern Classifier\n5 Loop Types"]
        TONE_ENGINE["Tone Analyzer\n5 Emotional Dimensions"]
        STATS["Speaker Stats + Debate Scores\nPer-Speaker Aggregation"]
    end

    subgraph ExternalServices ["External Services"]
        GROQ["Groq API\nLLaMA 3.3 70B"]
        WHISPER["Groq Whisper\nAudio Transcription"]
        SPEECH["Web Speech API\nBrowser-Native STT"]
    end

    Frontend -->|"REST"| APILayer
    ANALYZE --> CoreEngine
    DETECT --> CoreEngine
    TRANSCRIBE --> WHISPER
    DETECTOR --> GROQ
    LIVE --> SPEECH
    SPEECH --> DETECT

    style INPUT fill:#8B5CF6,color:#fff
    style LIVE fill:#ef4444,color:#fff
    style VIZ fill:#3b82f6,color:#fff
    style HEAT fill:#0ea5e9,color:#fff
    style DNA fill:#a855f7,color:#fff
    style TONE_CHART fill:#f59e0b,color:#fff
    style SPEAKER fill:#14b8a6,color:#fff
    style DEBATE fill:#06b6d4,color:#fff
    style PATTERN fill:#ec4899,color:#fff
    style REPLAY fill:#6366f1,color:#fff
    style INTERV fill:#10b981,color:#fff
    style COST fill:#dc2626,color:#fff
    style EXPORT fill:#6b7280,color:#fff
    style ANALYZE fill:#10b981,color:#fff
    style DETECT fill:#3b82f6,color:#fff
    style TRANSCRIBE fill:#f59e0b,color:#fff
    style HEALTH fill:#6b7280,color:#fff
    style CHUNKER fill:#8B5CF6,color:#fff
    style EMBED fill:#6366f1,color:#fff
    style SIM fill:#3b82f6,color:#fff
    style DETECTOR fill:#10b981,color:#fff
    style CLASSIFIER fill:#ec4899,color:#fff
    style TONE_ENGINE fill:#ef4444,color:#fff
    style STATS fill:#14b8a6,color:#fff
    style GROQ fill:#F55036,color:#fff
    style WHISPER fill:#F55036,color:#fff
    style SPEECH fill:#4285F4,color:#fff

Data Pipeline

sequenceDiagram
    participant U as User
    participant FE as React Dashboard
    participant API as FastAPI Server
    participant CHK as Chunker
    participant EMB as Embedding Engine
    participant DET as Loop Detector
    participant CLS as Pattern Classifier
    participant TONE as Tone Analyzer
    participant GROQ as Groq API

    Note over U,GROQ: Paste & Analyze Flow
    U->>FE: Paste transcript / click demo
    FE->>API: POST /api/analyze
    API->>CHK: parse transcript
    CHK-->>API: ChunkUnit[] (speaker turns)
    API->>EMB: generate embeddings
    EMB-->>API: 384-dim vectors
    API->>API: build cosine similarity matrix
    API->>DET: detect loops (threshold=0.72)
    DET-->>API: LoopCluster[] with severity
    API->>CLS: classify loop patterns
    CLS-->>API: LoopPattern[] (Deadlock, Ping-Pong, etc.)
    API->>TONE: analyze emotional tones
    TONE-->>API: ToneData[] per turn
    API->>API: compute speaker stats
    API->>API: compute debate scores
    API->>GROQ: diagnose each loop cluster
    GROQ-->>API: Intervention[] (assumption + question)
    API->>GROQ: generate AI summary
    GROQ-->>API: Plain-English summary
    API-->>FE: Full AnalysisResult
    FE-->>U: Dashboard (visualizer, heatmap, DNA, cost, scoreboard, interventions)

    Note over U,GROQ: Audio Upload Flow
    U->>FE: Upload MP3/WAV/M4A file
    FE->>API: POST /api/transcribe (multipart)
    API->>GROQ: Whisper transcription
    GROQ-->>API: Transcribed text with segments
    API-->>FE: Speaker-attributed transcript
    FE-->>U: Transcript loaded, ready for analysis

    Note over U,GROQ: Live Meeting Flow
    U->>FE: Click "Start Listening"
    FE->>FE: Web Speech API → live transcript
    loop Every 30 seconds
        FE->>API: POST /api/detect-loops
        API->>CHK: parse accumulated turns
        API->>EMB: generate embeddings
        API->>DET: detect loops
        DET-->>API: LoopCluster[]
        API-->>FE: Loop detection results
        FE-->>U: Alert if loop detected
    end
    U->>FE: Click "Stop"
    FE-->>U: Full transcript available for analysis

Technical Deep Dive

Loop Detection Engine (core/detector.py)

Union-find based clustering of semantically similar non-adjacent turns:

| Component | Method | Purpose |
| --- | --- | --- |
| detect_loops() | Main entry point | Finds all high-similarity pairs, clusters them, scores severity |
| Similarity threshold | 0.72 | Tuned for conversational repetition (lower than paraphrase detection) |
| Minimum gap | 2 turns | Prevents flagging adjacent follow-ups as loops |
| Union-find | Path compression | Groups connected similar turns into loop clusters |
| Severity scoring | min(10, avg_similarity × cluster_size × 2) | Higher score = more repetitions at higher similarity |
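The clustering and scoring steps can be sketched as below. Function names are illustrative, not copied from core/detector.py; the severity formula follows the table above.

```python
def find(parent, x):
    """Find with path compression: flatten the tree while walking to the root."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def cluster_loops(n_chunks, pairs):
    """Merge high-similarity (i, j) pairs into connected loop clusters."""
    parent = list(range(n_chunks))
    for i, j in pairs:
        parent[find(parent, i)] = find(parent, j)
    clusters = {}
    for i, j in pairs:
        clusters.setdefault(find(parent, i), set()).update((i, j))
    return [sorted(c) for c in clusters.values()]

def severity(cluster, pairs, sims):
    """min(10, avg_similarity * cluster_size * 2), per the table above."""
    member = [(i, j) for i, j in pairs if i in cluster and j in cluster]
    avg = sum(sims[i][j] for i, j in member) / len(member)
    return min(10.0, avg * len(cluster) * 2)
```

For example, pairs `(0, 3)`, `(3, 6)`, and `(2, 5)` collapse into two clusters, `[0, 3, 6]` and `[2, 5]`; a three-turn cluster at ~0.85 average similarity scores 5.1 out of 10.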

Pattern Classifier (core/pattern_classifier.py)

Each detected loop is categorized into one of five patterns:

| Pattern | Icon | Detection Rule | Description |
| --- | --- | --- | --- |
| Deadlock | 🔒 | Default (fixed positions) | Both sides repeat without new information |
| Ping-Pong | 🏓 | 2 speakers, strict alternation, 4+ chunks | Two speakers trading the same two arguments |
| Escalation | 📈 | Urgency keywords increase in later half | Same arguments with rising emotional intensity |
| Avoidance | 🙈 | turns_consumed > cluster_size × 2 | Topic resurfacing because it's deflected, not addressed |
| Echo Chamber | 🔁 | Single speaker in cluster | One person restating their own point repeatedly |
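Three of the five rules follow from the speaker distribution alone and can be sketched as below (Escalation and Avoidance also need tone and turn-gap data, omitted here; the actual core/pattern_classifier.py may differ).

```python
def classify_loop(speakers):
    """Classify a loop cluster from the ordered list of its chunks' speakers."""
    unique = set(speakers)
    if len(unique) == 1:
        return "Echo Chamber"   # one person restating their own point
    alternating = all(a != b for a, b in zip(speakers, speakers[1:]))
    if len(unique) == 2 and alternating and len(speakers) >= 4:
        return "Ping-Pong"      # two speakers trading the same two arguments
    return "Deadlock"           # default: fixed positions, no new information
```

So `["Alice", "Bob", "Alice", "Bob"]` classifies as Ping-Pong, while the same speakers without strict alternation fall back to Deadlock.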

Tone Analyzer (core/tone_analyzer.py)

Pattern-based emotional tracking across five dimensions:

| Dimension | Example Patterns | Purpose |
| --- | --- | --- |
| Frustration | "I'm telling you", "already said", "ridiculous" | Detects repetition fatigue |
| Defensiveness | "that's not what I said", "don't blame me" | Detects positional entrenchment |
| Urgency | "need to", "deadline", "immediately" | Detects pressure escalation |
| Agreement | "you're right", "good point", "let's do" | Detects forward progress |
| Questioning | "what if", "how about", "could we" | Detects constructive exploration |

Escalation metric = frustration + defensiveness + urgency — plotted on the timeline to show emotional trajectory.
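A minimal sketch of the keyword-based scoring, using the example patterns from the table (the real core/tone_analyzer.py pattern lists are longer):

```python
# Condensed keyword lists; the real analyzer's lists are more extensive.
TONE_PATTERNS = {
    "frustration": ["i'm telling you", "already said", "ridiculous"],
    "defensiveness": ["that's not what i said", "don't blame me"],
    "urgency": ["need to", "deadline", "immediately"],
    "agreement": ["you're right", "good point", "let's do"],
    "questioning": ["what if", "how about", "could we"],
}

def tone_scores(text):
    """Count pattern hits per dimension for one conversation turn."""
    lowered = text.lower()
    return {dim: sum(p in lowered for p in pats)
            for dim, pats in TONE_PATTERNS.items()}

def escalation(scores):
    """Escalation metric = frustration + defensiveness + urgency."""
    return scores["frustration"] + scores["defensiveness"] + scores["urgency"]

turn = tone_scores("I'm telling you, we need to hit the deadline!")
# frustration 1 + urgency 2 → escalation 3 for this turn
```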

Groq Integration (core/groq_client.py)

For each detected loop, LLaMA 3.3 70B (via Groq's ultra-fast inference) generates:

| Output | Description |
| --- | --- |
| Unstated Assumption | The hidden disagreement both parties are talking around |
| Loop-Breaking Question | A single specific question to force the conversation onto new ground |
| Reframe | An alternative framing of the core disagreement |
| Common Ground | A statement both parties would agree with |
| Severity Rating | 1-10 with explanation |

Groq is optional — the app works without an API key (loop detection, heatmap, analytics all still function).
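A sketch of what the diagnosis request might look like. The prompt wording is illustrative rather than copied from core/groq_client.py, and `llama-3.3-70b-versatile` is assumed as the Groq model id for LLaMA 3.3 70B.

```python
def build_diagnosis_prompt(loop_turns):
    """Assemble a loop cluster's turns into a structured diagnosis prompt."""
    transcript = "\n".join(f"{t['speaker']}: {t['text']}" for t in loop_turns)
    return (
        "These conversation turns repeat the same argument:\n"
        f"{transcript}\n\n"
        "Identify: (1) the unstated assumption, (2) one loop-breaking question, "
        "(3) a reframe, (4) common ground, (5) a severity rating 1-10 with explanation."
    )

# The actual call (requires GROQ_API_KEY; the app degrades gracefully without it):
# from groq import Groq
# client = Groq()
# resp = client.chat.completions.create(
#     model="llama-3.3-70b-versatile",
#     messages=[{"role": "user", "content": build_diagnosis_prompt(turns)}],
# )
```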

Features

Core Analysis

| # | Feature | Description |
| --- | --- | --- |
| 1 | Paste & Analyze | Paste any conversation transcript, email thread, or chat log. Supports Speaker: text format. |
| 2 | File Upload | Drag & drop .txt, .json, or .csv files. Auto-parses structured formats. |
| 3 | Adjustable Sensitivity | Slider to tune similarity threshold (0.50–0.95) and minimum turn gap. See loops appear/disappear in real-time. |
| 4 | AI Summary | Groq generates a 2-3 sentence plain-English summary of what went wrong and the key sticking point. |
| 5 | Loop Path Visualizer | Canvas-rendered path diagram. Forward progress = horizontal. Loop arcs = curved arrows going backward. |
| 6 | Similarity Heatmap | Color-coded N x N matrix. Red = loop (>0.72). Yellow = similar. Diagonal = self. |
| 7 | Speaker Analytics | Per-speaker cards with loop percentage bars, initiated loop count, dominant emotional tone. |
| 8 | Emotional Escalation Timeline | Line chart tracking frustration + defensiveness + urgency over time. Dots colored by dominant tone. |
| 9 | Loop Pattern Classification | Each loop categorized as Deadlock, Ping-Pong, Escalation, Avoidance, or Echo Chamber. |
| 10 | Conversation Replay | Animated turn-by-turn playback with loop alerts popping up at repetition points. Play/pause controls. |
| 11 | Live Meeting Mode | Web Speech API real-time transcription. Rolling loop detection every 30 seconds. Instant alerts. |

Advanced Features

| # | Feature | Description |
| --- | --- | --- |
| 12 | Loop Resolution Tracker | Mark loops as "resolved" after applying the loop-breaking question. Track resolution rate. |
| 13 | Conversation Comparison | Paste a second conversation in the Compare tab. Side-by-side metrics: efficiency, loops, looped turns. |
| 14 | PDF Export | Styled HTML report opens in print dialog. Includes stats, AI summary, loop details, speaker table. |
| 15 | Multi-format Export | Export as .txt report, .json data, or copy to clipboard. |
| 16 | Dark/Light Theme | Toggle between dark and light themes. Persists across the session. |
| 17 | Skeleton Loading | Shimmer skeleton cards during analysis instead of a plain spinner. |
| 18 | Toast Notifications | Non-blocking alerts for "Analysis complete", "Copied to clipboard", errors, etc. |
| 19 | Keyboard Shortcuts | Cmd+Enter = analyze, Cmd+K = new analysis, Esc = exit live mode. |
| 20 | Mobile Responsive | Full responsive layout for all screen sizes. |

Novel Features

| # | Feature | Description |
| --- | --- | --- |
| 21 | Meeting Cost Calculator | Input participants, hourly rate, meeting duration. Shows: total meeting cost, $ wasted on loops, % time wasted, projected yearly waste. |
| 22 | Audio Upload + Whisper | Upload MP3/WAV/M4A/FLAC audio files. Groq Whisper transcribes to text, auto-assigns speakers, ready for analysis. |
| 23 | Conversation DNA Fingerprint | Radial visualization where each ring = one turn. Color = emotional tone. Red = looped. Creates a unique visual fingerprint per conversation. Includes A-F health grade. |
| 24 | Debate Scoreboard | Scores each speaker on new ideas vs repeated points. Stacked bar (green = new, red = repeated). Novelty percentage + letter grade (A-F). |

API Endpoints

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /api/analyze | Full analysis: chunks + similarity + loops + interventions + tones + patterns + speaker stats + debate scores |
| POST | /api/detect-loops | Lightweight: just loop detection without AI diagnosis |
| POST | /api/transcribe | Upload audio file, transcribe with Groq Whisper, return speaker-attributed text |
| GET | /api/health | Health check |

Request Body (/api/analyze)

{
  "title": "Sprint Planning Meeting",
  "turns": [
    { "speaker": "Alice", "text": "We need to ship by Friday." },
    { "speaker": "Bob", "text": "Friday is impossible, the API isn't done." }
  ],
  "threshold": 0.72,
  "min_gap": 2
}
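Calling the endpoint with that body from Python, using only the standard library (assumes the backend is running locally on port 8000, the dev default):

```python
import json
from urllib import request

payload = {
    "title": "Sprint Planning Meeting",
    "turns": [
        {"speaker": "Alice", "text": "We need to ship by Friday."},
        {"speaker": "Bob", "text": "Friday is impossible, the API isn't done."},
    ],
    "threshold": 0.72,
    "min_gap": 2,
}

req = request.Request(
    "http://localhost:8000/api/analyze",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# result = json.loads(request.urlopen(req).read())  # uncomment with the server running
```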

Query Parameters

| Parameter | Default | Description |
| --- | --- | --- |
| threshold | 0.72 | Cosine similarity threshold for loop detection (0.50–0.95) |
| min_gap | 2 | Minimum turn gap between similar chunks to count as a loop |

Response Fields

| Field | Type | Description |
| --- | --- | --- |
| chunks | ChunkUnit[] | Parsed conversation turns |
| similarity_matrix | float[][] | N x N cosine similarity matrix |
| loops | LoopCluster[] | Detected loop clusters with severity |
| interventions | Intervention[] | Groq-generated diagnosis per loop |
| efficiency_score | float | % of turns that are forward progress |
| tone_data | ToneData[] | Per-turn emotional tone scores |
| loop_patterns | LoopPattern[] | Pattern classification per loop |
| speaker_stats | SpeakerStat[] | Per-speaker loop analytics |
| ai_summary | string | Plain-English AI summary of what went wrong |
| debate_scores | DebateScore[] | Per-speaker new vs repeated points + novelty grade |
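For reference, efficiency_score is derivable from chunks and loops alone; a sketch of the presumed computation (the backend's exact formula may differ):

```python
def efficiency_score(n_chunks, loop_clusters):
    """Percent of turns that are forward progress, i.e. part of no loop cluster."""
    looped = {i for cluster in loop_clusters for i in cluster}
    return 100.0 * (n_chunks - len(looped)) / n_chunks

# 10 turns, 5 of them caught in two loop clusters → 50% efficiency
score = efficiency_score(10, [[0, 3, 6], [2, 5]])
```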

Quick Start

Prerequisites

  • Python 3.13+
  • Node.js 18+
  • uv (Python package manager)

Setup

# Clone
git clone https://github.com/ankitlade12/loopbreaker.git
cd loopbreaker

# Install backend dependencies
uv sync

# Configure Groq API key (optional — app works without it)
cp .env.example .env
# Edit .env and add your GROQ_API_KEY

Run

# Start backend (terminal 1)
uv run uvicorn loopbreaker.main:app --reload

# Start frontend (terminal 2)
cd frontend && npm install && npm run dev

Demo

Open http://localhost:5173 (frontend) · API docs at http://localhost:8000/docs

| Step | Action | What You'll See |
| --- | --- | --- |
| 1 | Click a demo button (Meeting Loop / Budget Debate / Hiring Discussion) | Pre-loaded transcript with clear loops |
| 2 | Click Analyze or press Cmd+Enter | Skeleton loader → efficiency score, AI summary, loop visualizer, interventions |
| 3 | Adjust the threshold slider | See loops appear/disappear as you tune sensitivity |
| 4 | Click Mark Resolved on a loop | Resolution tracker updates — track which loops have been addressed |
| 5 | Switch to Deep Analysis tab | Similarity heatmap, speaker analytics, tone timeline |
| 6 | Switch to Replay tab | Animated conversation playback with loop alerts |
| 7 | Switch to Compare tab | Paste a second conversation for side-by-side comparison |
| 8 | Click Export PDF | Styled report opens in browser print dialog |
| 9 | Try Live Meeting Mode | Speak into mic, get real-time loop detection every 30s |
| 10 | Upload a .txt / .json / .csv file | Drag & drop or click to upload transcripts |

Project Structure

loopbreaker/
├── loopbreaker/
│   ├── main.py                    # FastAPI app entrypoint
│   ├── api/
│   │   └── routes.py              # /api/analyze, /api/detect-loops, /api/transcribe
│   ├── core/
│   │   ├── chunker.py             # Transcript → semantic chunks
│   │   ├── embeddings.py          # all-MiniLM-L6-v2 embeddings
│   │   ├── similarity.py          # Cosine similarity matrix
│   │   ├── detector.py            # Loop detection (union-find clustering)
│   │   ├── groq_client.py         # Groq LLaMA 3.3 diagnosis
│   │   ├── pattern_classifier.py  # Loop pattern classification
│   │   └── tone_analyzer.py       # Emotional tone tracking
│   └── models/
│       └── schemas.py             # Pydantic models
├── frontend/
│   ├── src/
│   │   ├── App.tsx                # Main app with tabbed dashboard
│   │   ├── store/
│   │   │   └── conversation.ts    # Zustand state management
│   │   ├── components/
│   │   │   ├── TranscriptInput.tsx     # Paste input + demo buttons + Cmd+Enter
│   │   │   ├── FileUpload.tsx          # Drag & drop file upload (.txt/.json/.csv)
│   │   │   ├── ThresholdSlider.tsx     # Adjustable sensitivity controls
│   │   │   ├── EfficiencyScore.tsx     # Conversation health meter
│   │   │   ├── AISummary.tsx           # Groq-generated plain-English summary
│   │   │   ├── LoopVisualizer.tsx      # Canvas path diagram with arcs
│   │   │   ├── InterventionCard.tsx    # AI diagnosis + resolution tracker
│   │   │   ├── SimilarityHeatmap.tsx   # N x N heatmap visualization
│   │   │   ├── SpeakerAnalytics.tsx    # Per-speaker loop stats
│   │   │   ├── ToneTimeline.tsx        # Emotional escalation chart
│   │   │   ├── LoopPatternBadge.tsx    # Pattern type cards
│   │   │   ├── ConversationReplay.tsx  # Animated playback
│   │   │   ├── LiveMode.tsx            # Web Speech API live mode
│   │   │   ├── ComparisonView.tsx      # Side-by-side conversation comparison
│   │   │   ├── ExportReport.tsx        # Export PDF/txt/json/clipboard
│   │   │   ├── MeetingCostCalculator.tsx # Dollar cost of loops
│   │   │   ├── AudioUpload.tsx         # Audio file upload + Whisper transcription
│   │   │   ├── ConversationDNA.tsx     # Radial fingerprint visualization
│   │   │   ├── DebateScoreboard.tsx    # New vs repeated points per speaker
│   │   │   ├── SkeletonLoader.tsx      # Shimmer loading skeleton
│   │   │   ├── ToastContainer.tsx      # Toast notification system
│   │   │   └── ThemeToggle.tsx         # Dark/light theme switch
│   │   └── types/
│   │       └── index.ts           # TypeScript interfaces
│   ├── package.json
│   └── vite.config.ts
├── pyproject.toml                 # uv project config
├── .env.example                   # Environment template
└── LICENSE                        # MIT

What Makes This Novel

| # | Innovation | Why It Matters |
| --- | --- | --- |
| 1 | Semantic loop detection | First tool to detect conversational repetition via NLP embeddings — not keyword matching |
| 2 | Loop pattern classification | Categorizes how conversations get stuck (Deadlock vs Ping-Pong vs Escalation vs Avoidance vs Echo Chamber) |
| 3 | AI-generated interventions | Doesn't just detect the problem — generates the specific question that would break the cycle |
| 4 | AI conversation summary | Plain-English summary of what went wrong, powered by Groq LLaMA 3.3 |
| 5 | Emotional escalation tracking | Correlates tone trajectory with loop formation — shows why people get stuck |
| 6 | Conversation path visualization | Makes loops literally visible — the "aha moment" when you see the arcs going backward |
| 7 | Adjustable sensitivity | Users can tune the detection threshold live and see loops appear/disappear — no other tool offers this |
| 8 | Live meeting mode | Real-time loop detection during a meeting — not a post-meeting report |
| 9 | Loop resolution tracking | Mark loops as resolved after applying interventions — track your progress |
| 10 | Conversation comparison | Compare two conversations side-by-side to measure improvement after coaching |
| 11 | Meeting cost calculator | Shows the dollar cost of loops — "$187 wasted, $15.6k/year projected" — makes the problem tangible |
| 12 | Audio transcription | Upload a meeting recording, Groq Whisper transcribes it, and it's immediately analyzed for loops |
| 13 | Conversation DNA fingerprint | Unique radial visualization — healthy conversations look different from loopy ones at a glance |
| 14 | Debate scoreboard | Grades each speaker A-F on idea novelty — "Alice: 3 new, 4 repeated = Grade C" |
| 15 | Speaker accountability | Shows who initiates loops and what % of their turns are repetitive |
| 16 | Works without AI | Loop detection, heatmap, analytics all function without Groq — AI adds diagnosis on top |

Tech Stack

| Layer | Technology | Purpose |
| --- | --- | --- |
| Core NLP | sentence-transformers, scikit-learn | Semantic embeddings + cosine similarity |
| AI Inference | Groq API (LLaMA 3.3 70B) | Ultra-fast loop diagnosis + AI summary (~200ms) |
| Audio Transcription | Groq Whisper (whisper-large-v3) | Audio file → speaker-attributed transcript |
| Backend | FastAPI, Uvicorn, Pydantic | REST API serving analysis results |
| Frontend | React 19, TypeScript, Vite 8 | Interactive dashboard with tabbed views |
| Styling | Tailwind CSS v4 | Dark-themed UI |
| State | Zustand | Lightweight global state management |
| Animation | Framer Motion | Smooth transitions and component animations |
| Visualization | Canvas API | Heatmap, path diagrams, tone timeline |
| Live Audio | Web Speech API | Browser-native real-time transcription |
| Package Management | uv | Fast Python dependency management |

Environment Variables

| Variable | Required | Description |
| --- | --- | --- |
| GROQ_API_KEY | No | Groq API key for AI diagnosis. App works without it — loop detection still runs. |

Use Cases

| Domain | Application |
| --- | --- |
| Business | Detect unproductive meeting patterns, improve meeting efficiency |
| Negotiation | Identify stalemate patterns, generate reframes |
| Therapy | Detect patient rumination patterns, identify avoidance loops |
| Education | Teach argumentation skills by visualizing debate patterns |
| Couples Counseling | Identify recurring argument cycles in relationships |
| Diplomacy | Detect deadlocks in diplomatic negotiations |

License

MIT License — see LICENSE file for details.


Built for HackHazards '26. Stop talking in circles. Start making progress.
