InterviewAI Backend

AI-powered mock interview API built with FastAPI and OpenAI GPT-4. This service generates tailored interview questions based on a candidate's profile and provides real-time conversational interviews with AI-driven feedback.

Live API: interviewai-backend-z8rg.onrender.com
API Docs: interviewai-backend-z8rg.onrender.com/docs
Frontend Repo: InterviewAI-Frontend


Architecture

Streamlit Frontend ──► FastAPI Backend ──► OpenAI GPT-4
                            │
                        SQLAlchemy
                            │
                         SQLite DB

The backend handles all AI logic and session management, keeping the OpenAI API key secure on the server side. The frontend communicates exclusively through REST endpoints.


Tech Stack

Layer              Technology
API Framework      FastAPI
AI/LLM             OpenAI GPT-4
Database           SQLite + SQLAlchemy
Containerization   Docker
Deployment         Render (Docker Web Service)

Project Structure

InterviewAI-Backend/
├── 7_final.py           # Main FastAPI application (routes, session logic, OpenAI calls)
├── database.py          # SQLAlchemy models and database connection
├── llm.py               # LLM prompt construction and OpenAI API wrapper
├── requirements.txt     # Python dependencies
├── Dockerfile           # Production Docker image
├── docker-compose.yml   # Local development setup
└── .gitignore

API Endpoints

Health Check

GET /

Returns server status and active session count.

{
  "status": "running",
  "active_sessions": 0
}
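The active_sessions counter suggests interviews are tracked in an in-memory store. A minimal sketch of that pattern (the names here are illustrative, not taken from the actual source):

```python
import uuid

# In-memory session store keyed by a generated id; each interview
# keeps its own conversation history. Illustrative only.
sessions: dict[str, list[dict]] = {}

def create_session() -> str:
    """Register a new interview session and return its id."""
    session_id = str(uuid.uuid4())
    sessions[session_id] = []  # conversation history for this interview
    return session_id

def active_session_count() -> int:
    """The value reported by the health check."""
    return len(sessions)
```

Note that an in-memory store resets on every redeploy; sessions do not survive a restart.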

Start Interview

POST /start_interview

Request Body:

{
  "name": "Ramu",
  "experience": "3 years ML",
  "skills": "Python, ML, Deep Learning",
  "position": "Data Scientist",
  "company": "Amazon"
}

Response: AI-generated interview question tailored to the candidate's profile, role, and target company.
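A hypothetical sketch of how llm.py might turn the request body above into a GPT-4 system prompt; the function name and prompt wording are assumptions, not the actual implementation:

```python
# Illustrative prompt construction from the candidate profile fields
# shown in the request body above.

def build_interview_prompt(profile: dict) -> str:
    """Compose a system prompt from the candidate profile."""
    return (
        f"You are a technical interviewer at {profile['company']}. "
        f"Interview {profile['name']} for the {profile['position']} role. "
        f"Candidate experience: {profile['experience']}. "
        f"Skills: {profile['skills']}. "
        "Ask one tailored question at a time and wait for the answer."
    )
```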

Continue Interview

Submit answers and receive follow-up questions throughout the interview session.

Get Feedback

After completing the interview, receive a performance summary with scoring and improvement suggestions.
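The feedback step presumably hands the stored transcript back to GPT-4 with a scoring instruction. A hedged sketch of that shape (function name and prompt wording are illustrative):

```python
# Illustrative feedback-prompt construction from a stored transcript.

def build_feedback_prompt(history: list[dict]) -> str:
    """Turn the interview transcript into a scoring request."""
    transcript = "\n".join(
        f"{turn['role']}: {turn['text']}" for turn in history
    )
    return (
        "The interview is over. Score the candidate from 1 to 10 and "
        "give three concrete improvement suggestions.\n\n" + transcript
    )
```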

Full interactive docs available at /docs (Swagger UI) when the server is running.


Getting Started

Prerequisites

  • Python 3.10+
  • OpenAI API key

Local Setup

  1. Clone the repo:

     git clone https://github.com/RamuGanta/InterviewAI-Backend.git
     cd InterviewAI-Backend

  2. Install dependencies:

     pip install -r requirements.txt

  3. Set environment variables by creating a .env file in the project root:

     OPENAI_API_KEY=your_openai_api_key_here

  4. Run the server:

     uvicorn 7_final:app --host 0.0.0.0 --port 8000 --reload

The API will be available at http://localhost:8000 and docs at http://localhost:8000/docs.
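Since a missing key only surfaces later as an opaque OpenAI auth error, a fail-fast check at startup helps. This is a sketch, not the repo's actual startup code; python-dotenv can populate os.environ from the .env file first:

```python
import os

# Illustrative fail-fast check for the OpenAI key at startup.
def require_api_key(env=os.environ) -> str:
    """Return OPENAI_API_KEY or raise a clear error immediately."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; add it to .env")
    return key
```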

Docker

docker build -t interviewai-backend .
docker run -p 8000:8000 --env-file .env interviewai-backend

Or with Docker Compose:

docker-compose up

Deployment

Deployed on Render as a Docker Web Service.

  • Service type: Docker Web Service
  • Health check: GET /
  • Environment variables: OPENAI_API_KEY is stored securely in Render's environment settings (never committed to source control)

CORS Configuration

The backend allows cross-origin requests from the Streamlit frontend:

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)
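With allow_origins=["*"], any site can call the API. Once the frontend's deployed URL is fixed, the list can be narrowed; the origin below is a placeholder, not the real one:

```python
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://your-frontend.streamlit.app"],  # placeholder origin
    allow_methods=["*"],
    allow_headers=["*"],
)
```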

Security

  • API keys are stored only in environment variables, never in source code
  • .env is included in .gitignore
  • GitHub push protection is enabled to prevent accidental secret exposure
  • All OpenAI calls are proxied through the backend — the frontend never touches the API key

Related

  • InterviewAI-Frontend: Streamlit frontend for this API

Author

Ramu Ganta · LinkedIn · GitHub
