An intelligent application that leverages Generative AI to analyze and score resumes against job descriptions in real-time. This project uses a FastAPI backend hosted on Hugging Face Spaces and a reactive Next.js frontend deployed on Vercel.
- Frontend (Vercel): https://nebula-resume.vercel.app/
- Backend API Docs (Hugging Face): https://letapreemas-nebula-resume-api.hf.space/docs
- Concurrent Batch Processing: Upload and analyze multiple resumes against a job description simultaneously.
- Real-Time Progress Streaming: See the status of each analysis, from text extraction to final scoring, live in the UI.
- Multi-Layered AI Analysis:
  - Hard Keyword Matching: Fuzzy search for technical keywords.
  - Semantic Similarity Scoring: Uses Google's Gemini embeddings for a strict, requirement-focused semantic match.
  - LLM-Powered Verdict: A final verdict and actionable suggestions are generated by a Gemini language model.
- Persistent Job Descriptions: Save, manage, and reuse job descriptions via an integrated database.
- Decoupled Architecture: A robust FastAPI backend and a sleek, modern Next.js frontend for a scalable and maintainable solution.
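The hard keyword matching layer is a fuzzy search, so close variants and minor typos in a resume still count as hits. The project uses `thefuzz` for this; the sketch below illustrates the same idea with only the standard library's `difflib` (function name and threshold are illustrative, not the project's actual code):

```python
from difflib import SequenceMatcher

def fuzzy_keyword_hits(keywords: list[str], text: str, threshold: float = 0.85) -> list[str]:
    """Return the keywords whose best match against any word in `text`
    reaches the similarity threshold (0.0..1.0), case-insensitively."""
    words = text.lower().split()
    hits = []
    for kw in keywords:
        kw_lower = kw.lower()
        best = max((SequenceMatcher(None, kw_lower, w).ratio() for w in words), default=0.0)
        if best >= threshold:
            hits.append(kw)
    return hits

# "FastAPl" (typo) still matches "FastAPI" at ~0.86 similarity.
print(fuzzy_keyword_hits(["FastAPI", "LangGraph"], "Built services with FastAPl and LangGraph"))
# → ['FastAPI', 'LangGraph']
```

A per-word comparison like this keeps matching strict; lowering the threshold trades precision for recall.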
The application is built with a decoupled frontend and backend architecture:
+--------------------------+ +--------------------------------------+
| Next.js Frontend | | FastAPI Backend on HF Spaces |
| (Hosted on Vercel) | | |
| | | +--------------------------------+ |
| - File Uploads | HTTP | | API Endpoints (/analyze) | |
| - Real-time SSE Display | <------> | +--------------------------------+ |
| - Job Management UI | | +--------------------------------+ |
| | | | LangGraph Workflow Engine | |
+--------------------------+ | | - Text Extraction | |
| | - Normalization | |
| | - AI Comparisons (Gemini) | |
| +--------------------------------+ |
+--------------------------------------+
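The "Real-time SSE Display" in the diagram works by the backend streaming Server-Sent Events over the `/analyze` connection, one message per analysis stage. As a rough sketch of the wire format (the event name and payload fields below are hypothetical, not the project's actual schema):

```python
import json

def sse_event(event: str, data: dict) -> str:
    """Frame one Server-Sent Events message: an `event:` line, a
    `data:` line carrying a JSON payload, and a blank-line terminator."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

# Hypothetical progress update for one resume in a batch:
print(sse_event("progress", {"file": "resume.pdf", "stage": "extracting_text"}), end="")
```

The blank line after each `data:` line is what tells the client one event has ended and the next may begin.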
Backend (Hugging Face Spaces):
- Framework: FastAPI
- Orchestration: LangGraph
- AI / LLMs: LangChain with Google Gemini (LLM & Embeddings)
- Text Processing: NLTK, thefuzz
- Database: SQLite
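The embedding-based similarity score ultimately reduces to comparing two vectors, typically via cosine similarity. A minimal sketch with toy vectors (the project obtains real vectors from Gemini embeddings):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors:
    1.0 = same direction, 0.0 = orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy 3-dimensional vectors; real embedding vectors have hundreds of dimensions.
print(round(cosine_similarity([1.0, 2.0, 0.0], [2.0, 4.0, 0.0]), 3))  # → 1.0
```

Because the score depends only on direction, not magnitude, resumes of different lengths can be compared fairly against the same job description.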
Frontend (Vercel):
- Framework: Next.js
- Language: TypeScript
- UI: React, Shadcn/ui, Tailwind CSS
- Real-time: `@microsoft/fetch-event-source` for POST request streaming (SSE)
Follow these instructions to set up and run the project locally.
- Node.js (v18.0 or later)
- Python (v3.9 or later)
- A Google AI API key (for Gemini)
You can set up the backend either with a traditional virtual environment and pip, or with Poetry (recommended).
Option A — Virtualenv + pip
# 1. Clone the repository
git clone https://github.com/urffsamhunt/nebula.git
cd nebula/backend
# 2. Create and activate a virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# 3. Install Python dependencies
pip install -r requirements.txt
# 4. Set up environment variables
# Create a .env file in the `backend` directory and add your API key:
echo "GOOGLE_API_KEY='your_google_api_key_here'" > .env
# 5. Download NLTK data
python -c "import nltk; nltk.download('punkt'); nltk.download('stopwords'); nltk.download('wordnet')"
# 6. Run the backend server
uvicorn main:app --reload --port 8000
Option B — Poetry (recommended)
# 1. Clone the repository
git clone https://github.com/urffsamhunt/nebula.git
cd nebula/backend
# 2. Install Poetry if you don't have it (see https://python-poetry.org/docs/)
# Example (Linux/macOS):
# curl -sSL https://install.python-poetry.org | python3 -
# 3. Install dependencies (creates a virtualenv automatically)
poetry install
# 4. Activate the Poetry shell (optional)
poetry shell
# 5. Set up environment variables
# Create a .env file in the `backend` directory and add your API key:
# echo "GOOGLE_API_KEY='your_google_api_key_here'" > .env
# 6. Download NLTK data
poetry run python -c "import nltk; nltk.download('punkt'); nltk.download('stopwords'); nltk.download('wordnet')"
# 7. Run the backend server
poetry run uvicorn main:app --reload --port 8000
The API server will now be running at http://localhost:8000.
# 1. From the repository root (this project contains the Next.js app at the root)
npm install
# 2. Set up environment variables (Next.js expects .env.local at the project root)
# Point the client to the backend API (use NEXT_PUBLIC_ prefix so it's exposed to the browser):
echo "NEXT_PUBLIC_API_BASE=http://localhost:8000" > .env.local
# 3. Run the frontend development server
npm run dev