- Introduction
- Features
- Project Architecture
- Installation
- Environment Variables
- Usage
- Example
- Tech Stack
## Introduction

Interview AI is a tool designed to aid interviewers by dynamically generating relevant follow-up questions based on the candidate's responses. This project ensures that the interviewer never runs out of insightful questions, keeping the interview flowing smoothly and enhancing its depth.
## Features

- Real-Time Follow-Up Questions: Using AI-powered models, the app listens to interview responses, converts them from speech to text, and suggests contextually relevant follow-up questions in real time.
- Large Language Model Integration: Hosts an LLM for text processing, accessible via a Docker container.
- Streamlit Interface: Interviewers can view generated questions and other insights in an intuitive Streamlit dashboard.
- Docker Deployment: Containerizes the model and deploys it using Docker.
## Project Architecture

- Speech-to-Text Conversion: Captures the candidate's spoken answers and converts them into text.
- Vector Database: Stores candidate responses and retrieves similar past responses to inform follow-up questions.
- LLM Model: Processes responses and suggests relevant questions based on context.
- Streamlit Interface: Displays the generated questions to the interviewer in real time.
- S3 Storage: Stores the transcribed audio (as text) as part of the interview logs.
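The vector-database step above can be sketched in miniature. This is an illustrative stand-in only: responses are embedded as bag-of-words vectors and compared by cosine similarity, whereas the real pipeline would use sentence embeddings and a FAISS index.

```python
# Minimal sketch of the "store responses, retrieve similar ones" step.
# Bag-of-words + cosine similarity stands in for real embeddings + FAISS.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lower-cased word counts (not a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    num = sum(a[w] * b[w] for w in a.keys() & b.keys())
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

class ResponseStore:
    """In-memory stand-in for the vector database."""
    def __init__(self) -> None:
        self._items: list[tuple[str, Counter]] = []

    def add(self, response: str) -> None:
        self._items.append((response, embed(response)))

    def most_similar(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self._items, key=lambda it: cosine(q, it[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

store = ResponseStore()
store.add("I rewrote the slow SQL queries and cut page latency in half.")
store.add("I mentored two interns on our frontend team last summer.")
match = store.most_similar("how did you reduce the SQL latency", k=1)[0]
```

Swapping `embed()` for a sentence-embedding model and `ResponseStore` for a FAISS index would recover the architecture described above.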
## Installation

- Clone the repository:

  ```bash
  git clone https://github.com/UBH-Fall-2024/ub-hacking-create-your-repo-here-transformers.git
  ```

- Build the Docker image (note the trailing `.` for the build context):

  ```bash
  docker build -t speech-to-text .
  ```

- Start the Docker container:

  ```bash
  docker run -p 8000:8000 speech-to-text
  ```

- Install dependencies for local Python development (optional):

  ```bash
  pip install -r requirements.txt
  ```
## Environment Variables

- MODEL_PATH: Path to the Vosk model used for speech-to-text recognition.
- LOG_PATH: Directory path for logs and the FAISS index.
- Set these in a .env file or as Docker environment variables as needed.
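One way the service might read these settings is shown below; the fallback paths are illustrative assumptions, not values taken from the repository.

```python
# Read the service's configuration from the environment.
# The default paths below are placeholders (assumptions), shown only
# so the sketch runs when the variables are unset.
import os

MODEL_PATH = os.environ.get("MODEL_PATH", "models/vosk-model-small-en-us")
LOG_PATH = os.environ.get("LOG_PATH", "logs/")

config = {"model_path": MODEL_PATH, "log_path": LOG_PATH}
```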
## Usage

- Start the service: run the Docker container as described in the Installation section.
- Make requests: the speech-to-text transcription API is available at http://localhost:8000.
- Example transcription request:

  ```bash
  curl http://localhost:8000/ask
  ```
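The same request can be made from Python. This is a hedged client sketch: the `/ask` route is taken from the curl example above, and a JSON reply is an assumption about the response format.

```python
# Hypothetical Python client for the service; the /ask endpoint comes
# from the curl example, and the JSON response shape is an assumption.
import json
from urllib import request

SERVICE_URL = "http://localhost:8000/ask"

def ask(url: str = SERVICE_URL, timeout: float = 10.0) -> dict:
    """Call the service and decode its JSON reply."""
    with request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

# questions = ask()  # requires the Docker container to be running
```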
## Tech Stack

- Speech-to-Text: Vosk
- Vector Database: FAISS
- Large Language Model: Llama-3.2-3B-Instruct
- UI Framework: Streamlit
- Cloud Storage: Amazon S3
- Containerization: Docker
