
# 🤖 Interview Bot

An AI-powered interview assistant that represents Sanath Kumar J S in technical interviews using a personal knowledge base, multi-signal RAG search, real-time voice interaction, and a live job market skill gap analyzer.

Built with Next.js 14 · .NET 8 · PostgreSQL + pgvector · Groq LLM · HuggingFace Embeddings · Groq Whisper STT


## ✨ Features

- 🧠 3-signal RAG search — body embedding + title embedding + AI-generated question variants per chunk
- 🎙️ Voice input — the interviewer asks questions via microphone (Groq Whisper STT)
- 🔊 Voice playback — answers read aloud via browser TTS (Web Speech API)
- 📝 Unanswered question tracking — questions outside the KB are stored for prep
- 📚 Prep dashboard — review, answer, and promote unanswered questions to the KB
- 📖 Prepare for interview — read all KB Q&A in a clean study mode with AI-generated interview angles per section
- 📋 Session history — browse past interviews with full transcripts and per-message feedback
- 📊 Confidence scoring — every answer shows a confidence % from weighted vector similarity
- 🔒 Security — prompt injection detection, system prompt protection, persona guard
- 🎯 Skill Gap Analyzer — live job search from Adzuna + Remotive, skill matching, salary insights, company rankings, auto-digest toggle
- 🎨 5 chat themes — Obsidian (default), Monolith, Counsel, Signal (light mode), Slate Editorial — selectable from the homepage
- 🛡️ Fallback database — automatic failover from Supabase to Neon when the primary DB is unreachable
- 📱 Mobile responsive — works on all screen sizes

πŸ—οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     HTTP      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   Next.js 14 UI     β”‚ ──────────── β”‚   .NET 8 Web API     β”‚
β”‚   (Vercel)          β”‚              β”‚   (Render.com)        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                                β”‚
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚                           β”‚                      β”‚
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”
          β”‚  PostgreSQL 16  β”‚         β”‚   Groq Cloud   β”‚     β”‚  HuggingFace  β”‚
          β”‚  + pgvector    β”‚         β”‚  LLM + STT     β”‚     β”‚  Embeddings   β”‚
          β”‚  Primary:      β”‚         β”‚  (Free tier)   β”‚     β”‚  (Free tier)  β”‚
          β”‚    Supabase    β”‚         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
          β”‚  Fallback:     β”‚
          β”‚    Neon         β”‚         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜         β”‚  Adzuna API    β”‚     β”‚  Remotive API  β”‚
                                     β”‚  (Job search)  β”‚     β”‚  (Remote jobs) β”‚
                                     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

### Database Failover

The `DatabaseConnectionManager` singleton manages automatic failover:

- Primary: Supabase (may pause after 7 days of inactivity on the free tier)
- Fallback: Neon (serverless, never pauses)
- On primary failure, the app switches to the fallback for 5 minutes, then retries the primary
- `/ping` runs `SELECT 1` to keep both Render and the DB alive
- `/health` returns which DB is active: `{"status":"healthy","db":"primary (Supabase)"}`
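The cooldown-and-retry policy above can be sketched roughly as follows. This is a minimal TypeScript illustration, not the actual C# `DatabaseConnectionManager`; the class name and connect callbacks are hypothetical stand-ins.

```typescript
// Minimal sketch of the primary -> fallback policy with a retry cooldown.
// The real implementation is the C# DatabaseConnectionManager singleton;
// the connect callbacks here are hypothetical synchronous stand-ins.
type Connect = () => string; // returns a connection handle, throws if the DB is unreachable

class FailoverSketch {
  private fallbackUntil = 0; // timestamp until which the primary is skipped

  constructor(
    private primary: Connect,
    private fallback: Connect | null,
    private cooldownMs = 5 * 60 * 1000, // retry the primary after 5 minutes
    private now: () => number = Date.now,
  ) {}

  getConnection(): { db: "primary" | "fallback"; conn: string } {
    if (this.now() >= this.fallbackUntil) {
      try {
        return { db: "primary", conn: this.primary() };
      } catch {
        if (!this.fallback) throw new Error("primary down and no fallback configured");
        this.fallbackUntil = this.now() + this.cooldownMs; // stick to the fallback for 5 min
      }
    }
    if (!this.fallback) throw new Error("no fallback configured");
    return { db: "fallback", conn: this.fallback() };
  }
}
```

Note that when no fallback URL is configured, the sketch (like the real service) simply surfaces the primary's failure.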

πŸ“ Project Structure

interview-bot/
β”œβ”€β”€ interview-bot-ui/                  # Next.js 14 frontend
β”‚   β”œβ”€β”€ app/
β”‚   β”‚   β”œβ”€β”€ page.tsx                   # Home page + theme picker
β”‚   β”‚   β”œβ”€β”€ chat/page.tsx              # Chat interface (theme-aware)
β”‚   β”‚   β”œβ”€β”€ prep/page.tsx              # Prep dashboard (PIN protected)
β”‚   β”‚   β”œβ”€β”€ prepare/page.tsx           # KB reading mode for interview prep
β”‚   β”‚   β”œβ”€β”€ skill-gap/page.tsx         # Skill Gap Analyzer
β”‚   β”‚   └── sessions/
β”‚   β”‚       β”œβ”€β”€ page.tsx               # Session list
β”‚   β”‚       └── [id]/page.tsx          # Transcript view with feedback
β”‚   β”œβ”€β”€ components/
β”‚   β”‚   β”œβ”€β”€ Navbar.tsx                 # Admin nav bar (theme-aware)
β”‚   β”‚   β”œβ”€β”€ ThemeProvider.tsx          # React context for chat themes
β”‚   β”‚   └── chat/
β”‚   β”‚       β”œβ”€β”€ InputBar.tsx           # (theme-aware)
β”‚   β”‚       β”œβ”€β”€ MessageBubble.tsx      # (theme-aware)
β”‚   β”‚       └── TypingIndicator.tsx    # (theme-aware)
β”‚   └── lib/
β”‚       β”œβ”€β”€ api.ts                     # All fetch wrappers
β”‚       └── themes.ts                  # 5 chat theme definitions
β”‚
β”œβ”€β”€ interview-bot-api/                 # .NET 8 Web API
β”‚   β”œβ”€β”€ Controllers/
β”‚   β”‚   β”œβ”€β”€ ChatController.cs
β”‚   β”‚   β”œβ”€β”€ KnowledgeController.cs     # KB file list + chunk reader
β”‚   β”‚   β”œβ”€β”€ TranscribeController.cs
β”‚   β”‚   β”œβ”€β”€ IngestionController.cs
β”‚   β”‚   └── SkillGapController.cs
β”‚   β”œβ”€β”€ Services/
β”‚   β”‚   β”œβ”€β”€ ChatService.cs
β”‚   β”‚   β”œβ”€β”€ KnowledgeSearchService.cs
β”‚   β”‚   β”œβ”€β”€ IngestionService.cs
β”‚   β”‚   β”œβ”€β”€ EmbeddingService.cs
β”‚   β”‚   β”œβ”€β”€ DatabaseConnectionManager.cs  # Primary/fallback DB failover
β”‚   β”‚   β”œβ”€β”€ ChunkMetadataHelper.cs
β”‚   β”‚   └── SkillGapService.cs
β”‚   β”œβ”€β”€ Models/
β”‚   β”‚   β”œβ”€β”€ ChatModels.cs
β”‚   β”‚   β”œβ”€β”€ KnowledgeChunk.cs
β”‚   β”‚   └── SkillGapModels.cs
β”‚   β”œβ”€β”€ knowledge-base/                # Personal KB β€” .md files
β”‚   β”‚   β”œβ”€β”€ introduction.md
β”‚   β”‚   β”œβ”€β”€ career-journey.md
β”‚   β”‚   β”œβ”€β”€ ai-rag.md
β”‚   β”‚   β”œβ”€β”€ dotnet.md
β”‚   β”‚   β”œβ”€β”€ dotnet-interview-qa.md
β”‚   β”‚   └── ...
β”‚   └── Dockerfile                     # Used by Render.com for deployment
β”‚
└── docs/
    β”œβ”€β”€ schema.sql                     # Core DB schema
    β”œβ”€β”€ skill_gap_migration.sql        # Skill Gap tables migration
    └── RENDER_MIGRATION.md            # Step-by-step Render.com setup guide

## 🧠 How RAG Works — 3-Signal Search

```
Question asked
      ↓
HuggingFace BAAI/bge-base-en-v1.5 → embed query (768d)
      ↓
3 parallel SQL queries (all use HNSW indexes):
  Signal 1: top 15 by body_embedding      <=> queryVec
  Signal 2: top 15 by title_embedding     <=> queryVec
  Signal 3: top 15 by questions_embedding <=> queryVec
      ↓
Merge all candidates by chunk_id in C#
      ↓
Compute weighted score per chunk:
  titleWeight = title_word_count >= 5 ? 0.30 : 0.15
  bodyWeight  = title_word_count >= 5 ? 0.25 : 0.40
  finalScore  = (questionsSim × 0.45)
              + (titleSim     × titleWeight)
              + (bodySim      × bodyWeight)
      ↓
Tag overlap soft boost (+0.05) + file keyword boost (+0.05–0.08)
      ↓
Confidence gate:
  ≥ 0.62 → HIGH   → answer from KB
  ≥ 0.52 → MEDIUM → answer from KB
  < 0.52 → LOW    → store as unanswered question
      ↓
Groq llama-3.3-70b-versatile generates the answer
      ↓
Save to chat_messages with the confidence score
```
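The weighted-score and confidence-gate steps of the pipeline can be sketched in TypeScript. The field and function names here are illustrative; the real logic lives in the C# `KnowledgeSearchService`.

```typescript
// Sketch of the "compute weighted score" and "confidence gate" steps above.
// Field names are illustrative, not the actual C# types.
interface Candidate {
  chunkId: number;
  titleWordCount: number;
  bodySim: number;      // cosine similarity from signal 1 (body_embedding)
  titleSim: number;     // signal 2 (title_embedding)
  questionsSim: number; // signal 3 (questions_embedding)
}

function finalScore(c: Candidate): number {
  // Adaptive weights: long, descriptive headings earn more title weight.
  const longTitle = c.titleWordCount >= 5;
  const titleWeight = longTitle ? 0.30 : 0.15;
  const bodyWeight = longTitle ? 0.25 : 0.40;
  return c.questionsSim * 0.45 + c.titleSim * titleWeight + c.bodySim * bodyWeight;
}

type Confidence = "HIGH" | "MEDIUM" | "LOW";

function gate(score: number): Confidence {
  if (score >= 0.62) return "HIGH";   // answer from KB
  if (score >= 0.52) return "MEDIUM"; // answer from KB
  return "LOW";                       // store as unanswered question
}
```

For example, a chunk with a 6-word title and similarities (0.8, 0.7, 0.6) for questions/title/body scores 0.8 × 0.45 + 0.7 × 0.30 + 0.6 × 0.25 = 0.72, which clears the HIGH gate.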

### Why 3 signals?

| Signal | Weight | Purpose |
|--------|--------|---------|
| `questions_embedding` | 0.45 | 5 AI-generated question variants per chunk at ingest time. Q↔Q matching is the most accurate signal and solves the Q↔A vector-space mismatch. |
| `title_embedding` | 0.30 / 0.15 | Section heading embedded alone. Adaptive: headings of 5+ words get 0.30; short headings get 0.15. |
| `body_embedding` | 0.25 / 0.40 | Semantic content of the full chunk. Safety net for paraphrasing; gets the higher weight when the title is short. |

## 🎯 Skill Gap Analyzer

Live job market analysis comparing Sanath's profile against current job postings.

```
User enters role keywords + location
        ↓
POST /api/skill-gap
        ↓
Adzuna API (IN → GB → US fallback) + Remotive (remote) fetched in parallel
        ↓
Groq LLM extracts required / nice-to-have / trending skills from JDs in batches
        ↓
Compare extracted skills against Sanath's KB-derived skill profile
        ↓
Score each job:
  ATS Score   = keyword match % (Sanath's skills vs JD text)
  Match Score = required-skill overlap %
        ↓
Return:
  - Ranked job listings with ATS + match scores
  - Matched skills ✅ | Missing skills ❌ | Trending skills 🔥
  - Salary range (min / median / max from Adzuna data)
  - Top hiring companies ranked by job count
        ↓
Jobs persisted to DB (job_listings table)
User can save jobs → tracked in job_applications table
Auto-digest toggle → settings stored in user_settings table
```
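The two per-job scores can be illustrated with simple keyword overlap. These helpers are hypothetical; the real pipeline scores against Groq-extracted skill lists rather than raw string matching.

```typescript
// Hypothetical illustration of the two scores in the pipeline above.
// ATS Score: what fraction of the candidate's skills appear in the JD text.
function atsScore(candidateSkills: string[], jdText: string): number {
  const text = jdText.toLowerCase();
  const hits = candidateSkills.filter((s) => text.includes(s.toLowerCase()));
  return Math.round((hits.length / candidateSkills.length) * 100);
}

// Match Score: what fraction of the JD's required skills the candidate has.
function matchScore(candidateSkills: string[], requiredSkills: string[]): number {
  if (requiredSkills.length === 0) return 0;
  const mine = new Set(candidateSkills.map((s) => s.toLowerCase()));
  const overlap = requiredSkills.filter((s) => mine.has(s.toLowerCase()));
  return Math.round((overlap.length / requiredSkills.length) * 100);
}
```

The asymmetry is deliberate: ATS Score is candidate-centric (resume keyword coverage), while Match Score is job-centric (required-skill coverage).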

### Skill Gap API Endpoints

| Method | Path | Description |
|--------|------|-------------|
| POST | `/api/skill-gap` | Run the full analysis — fetch jobs + gap report |
| POST | `/api/skill-gap/save-job` | Save / update job application status |
| GET | `/api/skill-gap/saved-jobs` | List all saved + tracked jobs |
| GET | `/api/skill-gap/settings` | Get user settings (auto-digest, keywords) |
| POST | `/api/skill-gap/settings` | Update user settings |

## 🗄️ Database Schema

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE knowledge_chunks (
    id                   SERIAL PRIMARY KEY,
    source_file          TEXT,
    section_title        TEXT,
    chunk_text           TEXT,
    chunk_index          INTEGER,
    embedding            VECTOR(768),
    topic                TEXT,
    tags                 TEXT[],
    hit_count            INTEGER DEFAULT 0,
    last_used_at         TIMESTAMPTZ,
    created_at           TIMESTAMPTZ DEFAULT NOW(),
    title_embedding      VECTOR(768),
    questions_embedding  VECTOR(768),
    questions_text       TEXT[],
    title_word_count     INT
);

CREATE TABLE interview_sessions (
    id               SERIAL PRIMARY KEY,
    session_code     TEXT UNIQUE,
    company_name     TEXT,
    interviewer_name TEXT,
    round_number     INTEGER,
    started_at       TIMESTAMPTZ DEFAULT NOW(),
    ended_at         TIMESTAMPTZ,
    status           TEXT DEFAULT 'active',
    overall_rating   SMALLINT,
    notes            TEXT
);

CREATE TABLE chat_messages (
    id                SERIAL PRIMARY KEY,
    session_id        INTEGER REFERENCES interview_sessions(id),
    sequence_number   INTEGER,
    role              TEXT,
    message_text      TEXT,
    confidence_score  FLOAT,
    answer_source     TEXT,
    fallback_provider TEXT,
    response_time_ms  INTEGER,
    was_helpful       BOOLEAN,
    created_at        TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE unanswered_questions (
    id                  SERIAL PRIMARY KEY,
    session_id          INTEGER REFERENCES interview_sessions(id),
    message_id          INTEGER REFERENCES chat_messages(id),
    question_text       TEXT,
    question_embedding  VECTOR(768),
    times_asked         INTEGER DEFAULT 1,
    status              TEXT DEFAULT 'new',
    priority            TEXT DEFAULT 'low',
    sanath_answer       TEXT,
    kb_chunk_id         INTEGER REFERENCES knowledge_chunks(id),
    question_category   TEXT,
    sanath_answered_at  TIMESTAMPTZ,
    updated_at          TIMESTAMPTZ,
    first_asked_at      TIMESTAMPTZ DEFAULT NOW(),
    last_asked_at       TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE session_analytics (
    id                   SERIAL PRIMARY KEY,
    session_id           INTEGER REFERENCES interview_sessions(id) UNIQUE,
    total_questions      INTEGER DEFAULT 0,
    answered_from_kb     INTEGER DEFAULT 0,
    unanswered_count     INTEGER DEFAULT 0,
    avg_confidence_score FLOAT,
    top_kb_files         JSONB,
    weak_topic_areas     JSONB,
    duration_minutes     INTEGER
);

-- Skill Gap tables (run docs/skill_gap_migration.sql)
CREATE TABLE job_listings ( ... );
CREATE TABLE job_applications ( ... );
CREATE TABLE user_settings ( ... );

-- HNSW indexes
CREATE INDEX ON knowledge_chunks USING hnsw (embedding           vector_cosine_ops);
CREATE INDEX ON knowledge_chunks USING hnsw (title_embedding     vector_cosine_ops);
CREATE INDEX ON knowledge_chunks USING hnsw (questions_embedding vector_cosine_ops);
```
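With `vector_cosine_ops`, pgvector's `<=>` operator returns cosine distance, so the similarities used in the 3-signal scoring are typically computed as `1 - distance`. The equivalent math, as a quick TypeScript illustration (the helper name is not from the codebase):

```typescript
// pgvector's `<=>` with vector_cosine_ops returns cosine distance;
// similarity = 1 - distance. Same math, illustrated in TypeScript.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Identical vectors give distance 0 (similarity 1), orthogonal vectors give 1, and opposite vectors give 2.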

πŸ“ Knowledge Base

14 .md files in interview-bot-api/knowledge-base/. answering-guidelines.md excluded from search.

File Chunks Topic
introduction.md 7 Who I am, career summary, specialization
career-journey.md 5 Toyota Tsusho, Capgemini, Euromonitor, Ingenio
recent-project.md 11 RAG systems, JWT migration, integrations
ai-rag.md 13 3 production RAG pipelines, tech decisions
dotnet-interview-qa.md 20 20 Q&A pairs: DI, async/await, GC, EF Core, Polly
dotnet.md 6 Years, strongest areas, design patterns
leadership.md 4 3 leadership stories + philosophy
general-hr.md 9 Strengths, weakness, salary, notice, education
my-approach.md 7 System design examples
arrays-strings.md 7 DSA: two pointers, sliding window, hashmap
trees.md 6 BST, traversal, DFS vs BFS
dynamic-programming.md 4 DP patterns and approach
complexity-cheatsheet.md 4 Big-O reference
answering-guidelines.md β€” EXCLUDED β€” internal prompt instructions

Total indexed: 101 chunks across 13 files


## 🚀 Getting Started (Local)

```bash
# 1. Clone
git clone https://github.com/sanathjs/interview-bot.git
cd interview-bot

# 2. Backend
cd interview-bot-api
cp appsettings.example.json appsettings.json
dotnet restore && dotnet run          # http://localhost:5267

# 3. Frontend (new terminal)
cd interview-bot-ui
cp .env.example .env.local
npm install && npm run dev            # http://localhost:3000

# 4. Ingest the KB
curl -X POST http://localhost:5267/api/ingest -H "X-Admin-Key: your-key"
```

## ⚙️ Configuration

### Backend — `appsettings.json`

```json
{
  "DATABASE_URL": "Host=localhost;...",
  "FALLBACK_DATABASE_URL": "Host=ep-xxx.neon.tech;Port=5432;Database=neondb;Username=...;Password=...;SSL Mode=Require;Trust Server Certificate=true",
  "ADMIN_INGEST_KEY": "your-secret-key",
  "LlmProvider": "groq",
  "HuggingFace": { "ApiKey": "hf_..." },
  "Groq": {
    "ApiKey": "gsk_...",
    "Model": "llama-3.3-70b-versatile",
    "BaseUrl": "https://api.groq.com/openai/v1"
  },
  "Adzuna": {
    "AppId":  "your_adzuna_app_id",
    "AppKey": "your_adzuna_app_key"
  }
}
```

### Frontend — `.env.local`

```
NEXT_PUBLIC_API_URL=http://localhost:5267
NEXT_PUBLIC_PREP_PIN=1234
```

### Render Environment Variables

```
DATABASE_URL             = (Supabase pooler connection string)
FALLBACK_DATABASE_URL    = (Neon connection string — optional but recommended)
ADMIN_INGEST_KEY         = your-secret-key
LlmProvider              = groq
Groq__ApiKey             = gsk_...
Groq__Model              = llama-3.3-70b-versatile
Groq__BaseUrl            = https://api.groq.com/openai/v1
HuggingFace__ApiKey      = hf_...
Adzuna__AppId            = your_adzuna_app_id
Adzuna__AppKey           = your_adzuna_app_key
ASPNETCORE_URLS          = http://+:10000
PORT                     = 10000
```

Render Docker services use port 10000 by default.

### Vercel Environment Variables

```
NEXT_PUBLIC_API_URL  = https://your-app.onrender.com
NEXT_PUBLIC_PREP_PIN = your-pin
```

## 🔑 API Keys Required

| Service | Purpose | Get it at | Cost |
|---------|---------|-----------|------|
| Groq | LLM chat + Whisper STT | console.groq.com | Free |
| HuggingFace | BAAI/bge-base-en-v1.5 embeddings | huggingface.co/settings/tokens | Free |
| Adzuna | Job search API (India + global fallback) | developer.adzuna.com | Free (1000 calls/day) |
| Remotive | Remote job search | remotive.com/api | Free, no key needed |

## 📡 API Endpoints

### Chat & Sessions

| Method | Path | Description | Auth |
|--------|------|-------------|------|
| POST | `/api/chat` | RAG chat — answer + confidence + sources + botSequenceNumber | None |
| GET | `/api/sessions` | List sessions with stats | None |
| GET | `/api/sessions/{id}/detail` | Full transcript with per-message feedback (was_helpful) | None |
| PATCH | `/api/sessions/{code}/details` | Update interviewer name + company (UPSERT) | None |
| POST | `/api/sessions/{code}/end` | End session | None |
| PATCH | `/api/messages/{seqNum}/feedback` | Save thumbs up/down (by DB sequence_number) | None |
| GET | `/api/unanswered` | Prep dashboard questions | None |
| PATCH | `/api/unanswered/{id}/answer` | Save answer | None |
| POST | `/api/unanswered/{id}/promote` | Add answer to KB | None |
| DELETE | `/api/unanswered/{id}` | Delete question | None |
| POST | `/api/transcribe` | Audio → Groq Whisper → text | None |
| POST | `/api/ingest` | Re-ingest all KB files | X-Admin-Key header |
| GET | `/ping` | Keep-alive — runs SELECT 1 to prevent Supabase pause | None |
| GET | `/health` | Diagnostic — shows the active DB (primary/fallback) | None |

### Knowledge Base (Prepare)

| Method | Path | Description | Auth |
|--------|------|-------------|------|
| GET | `/api/knowledge/files` | List all KB files with display names + chunk counts | None |
| GET | `/api/knowledge/files/{file}` | All sections from a file — title, body, question variants | None |

### Skill Gap

| Method | Path | Description | Auth |
|--------|------|-------------|------|
| POST | `/api/skill-gap` | Fetch jobs + run the full gap analysis | None |
| POST | `/api/skill-gap/save-job` | Save / update job application status | None |
| GET | `/api/skill-gap/saved-jobs` | List all saved + tracked jobs | None |
| GET | `/api/skill-gap/settings` | Get user settings | None |
| POST | `/api/skill-gap/settings` | Update user settings | None |

## 🌐 Deployment

### Free Stack — $0/month

| Layer | Platform | Cost | Notes |
|-------|----------|------|-------|
| Frontend | Vercel | Free | Auto-deploys on git push |
| Backend | Render.com | Free | Spins down after 15 min idle |
| Database (primary) | Supabase | Free | 500 MB limit, pauses after 7 days idle |
| Database (fallback) | Neon | Free | Serverless, never pauses, auto-failover |
| Embeddings | HuggingFace | Free | |
| LLM + STT | Groq | Free | |
| Job search | Adzuna + Remotive | Free | 1000 calls/day |
| Keep-alive | cron-job.org | Free | Pings /health every 10 min |
| Total | | $0/month | |

Cold start: the Render free tier spins down after 15 minutes of inactivity, so the first request takes ~30 s. Set up a cron-job.org ping to `https://your-app.onrender.com/health` every 10 minutes to avoid this. The `/health` endpoint runs `SELECT 1`, which also keeps Supabase from pausing.

### Production Stack (future)

| Layer | Platform | Cost |
|-------|----------|------|
| Frontend | Vercel | Free |
| Backend | Render Starter | ~$7/mo (always on) |
| Database | Supabase Pro | ~$25/mo |
| Embeddings | OpenAI text-embedding-3-small | ~$0.01/mo |
| LLM | gpt-4o-mini | ~$1–3/mo |

### Deploy Checklist

```bash
# 1. Run the Skill Gap migration in the Supabase SQL Editor
#    Copy the contents of docs/skill_gap_migration.sql and run them in Supabase

# 2. Set up Render (see docs/RENDER_MIGRATION.md for the full steps)

# 3. Push code — Render and Vercel auto-deploy on push
git add .
git commit -m "your message"
git push origin main

# 4. Re-ingest the KB after any .md changes
curl -X POST https://your-app.onrender.com/api/ingest \
  -H "X-Admin-Key: your-key"

# 5. Update NEXT_PUBLIC_API_URL in Vercel → Settings → Environment Variables
#    Set it to your Render URL
```

## 🛠️ Development Notes

### Database & Infrastructure

- `DatabaseConnectionManager` (singleton) wraps all DB access with automatic primary→fallback switching and a 5-minute retry cooldown
- `FALLBACK_DATABASE_URL` is optional — if it is not set, the app works with the primary only (no failover)
- `/ping` runs `SELECT 1` via `DatabaseConnectionManager.ProbeAsync()`, keeping both Render and Supabase alive
- `/health` returns JSON with `status` + `db` (which database is active)
- `UpdateSessionDetailsAsync` uses UPSERT — it creates the session row if the PATCH arrives before the first chat message
- `session_analytics` has no auto-trigger — stats fall back to live subqueries on `chat_messages`

### Chat & Feedback

- `ChatResponse` includes `botSequenceNumber` — the DB `sequence_number` of the bot's reply
- The frontend passes `sequenceNumber` (not React `msg-N` IDs) to `PATCH /api/messages/{seqNum}/feedback`
- `GetTranscriptByIdAsync` returns `was_helpful` per message, so feedback shows up in the session transcript
- Prompt injection is blocked pre-LLM via `IsPromptInjection()` — 24 phrases detected
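A pre-LLM phrase filter of this kind can be sketched as follows. The phrases shown are examples only; the actual 24-phrase list lives in `ChatService.cs` and is not published here.

```typescript
// Sketch of a pre-LLM phrase filter like IsPromptInjection().
// These phrases are illustrative examples, not the real 24-phrase list.
const INJECTION_PHRASES = [
  "ignore previous instructions",
  "reveal your system prompt",
  "you are no longer",
  "disregard all prior",
];

function isPromptInjection(message: string): boolean {
  const m = message.toLowerCase();
  // Case-insensitive substring match against every known phrase.
  return INJECTION_PHRASES.some((phrase) => m.includes(phrase));
}
```

Running the check before the LLM call means a flagged message never reaches the model, which is cheaper and safer than relying on the system prompt alone.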

### Themes

- 5 chat themes defined in `lib/themes.ts`: Obsidian, Monolith, Counsel, Signal (light), Slate Editorial
- `ThemeProvider` (React context) wraps the app; the theme is saved in localStorage as `ib_theme`
- All chat components use the `useTheme()` hook — `const C = useTheme()` replaces hardcoded color objects
- The theme picker is accessible from the palette icon (top right of the homepage)
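The localStorage fallback behaviour can be sketched as a pure function. The theme IDs below are illustrative slugs, not necessarily the keys used in `lib/themes.ts`, and `resolveTheme` is a hypothetical helper name.

```typescript
// Hypothetical sketch of resolving the saved ib_theme value.
// Theme IDs are illustrative slugs, not necessarily the real keys.
const THEME_IDS = ["obsidian", "monolith", "counsel", "signal", "slate-editorial"] as const;
type ThemeId = (typeof THEME_IDS)[number];

function resolveTheme(stored: string | null): ThemeId {
  // Fall back to the default theme when ib_theme is missing or unknown.
  return (THEME_IDS as readonly string[]).includes(stored ?? "")
    ? (stored as ThemeId)
    : "obsidian";
}
```

Guarding against unknown stored values matters because a renamed or removed theme would otherwise leave returning users with a broken palette.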

### Knowledge Base

- `answering-guidelines.md` is excluded from search at ingest time — do not rename it
- Re-ingestion is required after any KB change (~5 min for 101 chunks)
- KB headings should be written as the exact question an interviewer would ask
- `/api/knowledge/files` returns all files with display names (e.g., `dotnet-interview-qa.md` → "DotNet Interview Q&A")
- `/api/knowledge/files/{file}` strips the `"Topic: X\nSection: Y\n\n"` prefix from `chunk_text` for clean reading
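The prefix strip described in the last bullet amounts to a one-line transformation; the helper name below is hypothetical, and the prefix format is taken from the bullet above.

```typescript
// Sketch of stripping the ingest-time metadata prefix from chunk_text.
// Chunks are stored as "Topic: X\nSection: Y\n\n<body>".
function stripChunkPrefix(chunkText: string): string {
  return chunkText.replace(/^Topic: .*\nSection: .*\n\n/, "");
}
```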

### Voice & Input

- Voice input: MediaRecorder → Groq Whisper → auto-sends the transcribed text
- Voice playback: `window.speechSynthesis` (browser TTS, no API key needed)

### Admin & Auth

- The prep dashboard is PIN-protected via the `NEXT_PUBLIC_PREP_PIN` env variable
- The admin gear icon is visible only when `localStorage.ib_role === "admin"`
- The Prepare tab is visible in the admin nav and on the admin dashboard homepage

### Deployment

- Adzuna fetches IN → GB → US; it falls back to international results if India returns fewer than 10 jobs
- `job_listings.external_id` is unique — re-running an analysis updates scores without creating duplicates
- Render uses port 10000 — set `ASPNETCORE_URLS=http://+:10000` and `PORT=10000`
- Keep Render warm: cron-job.org pings `/health` every 10 minutes

## 🎨 Chat Themes

5 selectable themes — the user picks from the palette icon on the homepage, and the choice persists in localStorage.

| Theme | Palette | Mode | Vibe |
|-------|---------|------|------|
| Obsidian (default) | Amber on deep black | Dark | Warm, the original |
| Monolith | Platinum + electric blue | Dark | Cold intellect, zero noise |
| Counsel | Charcoal + gold, serif | Dark | Senior partner's office, gravitas |
| Signal | Navy + white + green | Light | Trust-first, corporate-safe |
| Slate Editorial | Near-black + coral | Dark | Editorial confidence, shipped-product feel |

Themes affect the chat background, message bubbles, avatars, input bar, typing indicator, navbar, follow-up buttons, source banners, bold/list text highlighting, and the live-dot color.


## 🗺️ Roadmap

### Phase 2 — Resume & Cover Letter

- Upload a PDF resume → ingested into the KB as `resume.md`
- `POST /api/skill-gap/resume` → Groq tailors the resume to a JD → `.docx` download
- `POST /api/skill-gap/cover` → Groq writes a cover letter → `.docx` download
- ATS score with a per-job keyword breakdown

### Phase 3 — Tracking & Automation

- Job application status board (saved → applied → interview → offer)
- `IHostedService` daily background job at 9 am
- Auto-digest — the top 10 matched jobs saved daily when enabled

### Future

- End-session UI — star rating + notes modal
- Analytics dashboard — KB hit rate, weak topics, confidence trends
- ElevenLabs voice cloning (~$6/mo)
- Upgrade embeddings to OpenAI text-embedding-3-small

## 🔐 Security

- `appsettings.json` and `.env.local` are gitignored
- Old exposed keys were rotated; git history was purged via `filter-branch`
- `appsettings.example.json` and `.env.example` contain no real values
- The prep dashboard is PIN-protected via an env variable
- API endpoints currently have no auth — add JWT for production
- Prompt injection is blocked pre-LLM: 24 phrases in `ChatService.cs`
- The system prompt is never revealed — LLM-level security rules live in the Groq system message

## 📄 License

MIT — feel free to fork and adapt this for your own interview assistant.

Built with ❤️ by Sanath Kumar J S
