AI-powered mock interview API built with FastAPI and OpenAI GPT-4. This service generates tailored interview questions based on a candidate's profile and provides real-time conversational interviews with AI-driven feedback.
- Live API: https://interviewai-backend-z8rg.onrender.com
- API Docs: https://interviewai-backend-z8rg.onrender.com/docs
- Frontend Repo: InterviewAI-Frontend
```
Streamlit Frontend ──► FastAPI Backend ──► OpenAI GPT-4
                            │
                       SQLAlchemy
                            │
                        SQLite DB
```
The backend handles all AI logic and session management, keeping the OpenAI API key secure on the server side. The frontend communicates exclusively through REST endpoints.
| Layer | Technology |
|---|---|
| API Framework | FastAPI |
| AI/LLM | OpenAI GPT-4 |
| Database | SQLite + SQLAlchemy |
| Containerization | Docker |
| Deployment | Render (Docker Web Service) |
```
InterviewAI-Backend/
├── 7_final.py            # Main FastAPI application (routes, session logic, OpenAI calls)
├── database.py           # SQLAlchemy models and database connection
├── llm.py                # LLM prompt construction and OpenAI API wrapper
├── requirements.txt      # Python dependencies
├── Dockerfile            # Production Docker image
├── docker-compose.yml    # Local development setup
└── .gitignore
```
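The actual contents of `llm.py` live in the repo; as an illustration only, a prompt-construction wrapper of the kind described might look roughly like this. The function names and prompt wording are assumptions, and the model name comes from the stack table above:

```python
# Hypothetical sketch of an llm.py-style wrapper.
# Function names and prompt text are assumptions, not taken from the repo.

def build_prompt(profile: dict) -> str:
    """Turn a candidate profile into a tailored interviewer prompt."""
    return (
        f"You are interviewing {profile['name']} for a {profile['position']} "
        f"role at {profile['company']}. Experience: {profile['experience']}. "
        f"Skills: {profile['skills']}. Ask one tailored interview question."
    )

def ask_llm(profile: dict) -> str:
    """Send the constructed prompt to GPT-4 and return the question text."""
    from openai import OpenAI  # reads OPENAI_API_KEY from the environment
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": build_prompt(profile)}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    profile = {
        "name": "Ramu", "experience": "3 years ML",
        "skills": "Python, ML, Deep Learning",
        "position": "Data Scientist", "company": "Amazon",
    }
    print(build_prompt(profile))
```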
`GET /`

Returns server status and active session count.

```json
{
  "status": "running",
  "active_sessions": 0
}
```

`POST /start_interview`
Request Body:

```json
{
  "name": "Ramu",
  "experience": "3 years ML",
  "skills": "Python, ML, Deep Learning",
  "position": "Data Scientist",
  "company": "Amazon"
}
```

Response: AI-generated interview question tailored to the candidate's profile, role, and target company.
Submit answers and receive follow-up questions throughout the interview session.
After completing the interview, receive a performance summary with scoring and improvement suggestions.
Full interactive docs are available at `/docs` (Swagger UI) when the server is running.
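The documented endpoints above can be exercised with a small Python client. This is a sketch, not code from the repo: it assumes a server running locally on port 8000 (as in the setup steps below), and uses the example payload from the request-body section:

```python
# Minimal client sketch for the documented endpoints.
# Assumes the API is running locally on port 8000.
import requests

BASE_URL = "http://localhost:8000"

# Example candidate profile from the API docs above.
PROFILE = {
    "name": "Ramu",
    "experience": "3 years ML",
    "skills": "Python, ML, Deep Learning",
    "position": "Data Scientist",
    "company": "Amazon",
}

def check_health() -> dict:
    """GET / returns server status and active session count."""
    return requests.get(f"{BASE_URL}/").json()

def start_interview() -> str:
    """POST /start_interview and return the AI-generated question."""
    resp = requests.post(f"{BASE_URL}/start_interview", json=PROFILE)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(check_health())
    print(start_interview())
```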
- Python 3.10+
- OpenAI API key
- Clone the repo:

```
git clone https://github.com/RamuGanta/InterviewAI-Backend.git
cd InterviewAI-Backend
```

- Install dependencies:

```
pip install -r requirements.txt
```

- Set environment variables — create a `.env` file in the project root:

```
OPENAI_API_KEY=your_openai_api_key_here
```

- Run the server:

```
uvicorn 7_final:app --host 0.0.0.0 --port 8000 --reload
```

The API will be available at http://localhost:8000 and docs at http://localhost:8000/docs.
```
docker build -t interviewai-backend .
docker run -p 8000:8000 --env-file .env interviewai-backend
```

Or with Docker Compose:

```
docker-compose up
```

Deployed on Render as a Docker Web Service.

- Service type: Docker Web Service
- Health check: `GET /`
- Environment variables: `OPENAI_API_KEY` is stored securely in Render's environment settings (never committed to source control)
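The repo's `docker-compose.yml` is not reproduced here; for local development it plausibly looks something like the sketch below. The service name, port mapping, and env wiring are assumptions, not taken from the repo:

```yaml
# Hypothetical docker-compose.yml sketch for local development.
services:
  api:
    build: .
    ports:
      - "8000:8000"   # expose the FastAPI server on localhost:8000
    env_file:
      - .env          # supplies OPENAI_API_KEY without committing it
```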
The backend allows cross-origin requests so the Streamlit frontend can reach it (currently open to all origins):

```python
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)
```

- API keys are stored only in environment variables, never in source code
- `.env` is included in `.gitignore`
- GitHub push protection is enabled to prevent accidental secret exposure
- All OpenAI calls are proxied through the backend — the frontend never touches the API key
- Frontend: InterviewAI-Frontend — Streamlit UI for the interview experience
- Earlier Prototype: Interview-and-Feedback-tool — Single-file Streamlit + OpenAI version