A web-based AI agent that provides academic advising for De La Salle University's Computer Engineering program, powered by Pinecone vector database for semantic search.
- Interactive web interface for asking questions about CpE courses
- Powered by a CodeAgent with RAG (Retrieval-Augmented Generation)
- Pinecone Vector Database for semantic search and scalability
- Beautiful, responsive UI with gradient design
- Real-time chat interface
- Fast similarity-based course lookup
- Python 3.8+
- Pinecone account (free tier available)
- Hugging Face API token
1. Install dependencies:

   ```bash
   pip install smolagents langchain langchain-core langchain-community flask pinecone-client sentence-transformers
   ```
2. Set up Pinecone:
   - Create a free account at https://www.pinecone.io/
   - Create an index named `cpe-curriculum` with dimension 384
   - Copy your API key
3. Configure environment variables:
   - Copy `.env.example` to `.env`
   - Add your Pinecone API key and Hugging Face token
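The Pinecone API key and Hugging Face token end up in `.env`. A minimal sketch, assuming hypothetical variable names — check `.env.example` for the names the code actually reads:

```env
# Hypothetical variable names — confirm against .env.example
PINECONE_API_KEY=your-pinecone-api-key
HF_TOKEN=your-hugging-face-token
```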
4. Run the application:

   ```bash
   python Main.py
   ```
5. Access the web interface:
   - Open http://localhost:5000 in your browser
- Type your question in the input field (e.g., "What are the prerequisites for THSCP4A?")
- Click "Ask the Adviser" to get a response
- The AI will search the Pinecone vector database for the most relevant courses
- `API.py`: Agent logic, Pinecone integration, and tools
- `Main.py`: Flask web application with HTML/CSS frontend
- `checklist.json`: Curriculum data (automatically loaded into Pinecone)
- `PINECONE_SETUP.md`: Detailed Pinecone setup guide
- `.env.example`: Environment configuration template
```
User Query
    ↓
Embedding Generation (all-MiniLM-L6-v2)
    ↓
Pinecone Vector Search
    ↓
Top 3 Similar Courses Retrieved
    ↓
LLM Agent Processes Results
    ↓
Response to User
```
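The "Top 3 Similar Courses Retrieved" step is a cosine-similarity ranking over embedding vectors. Pinecone performs this at scale as a managed service, but the core operation can be sketched in plain Python. The toy 3-dimensional vectors and the `COURSE_B`/`COURSE_C`/`COURSE_D` codes below are placeholders — real embeddings come from all-MiniLM-L6-v2 and have 384 dimensions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, catalog, k=3):
    """Return the k course codes whose embeddings are most similar to the query."""
    ranked = sorted(catalog.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [code for code, _ in ranked[:k]]

# Toy 3-d embeddings; only THSCP4A is a real course code from this README.
catalog = {
    "THSCP4A": [0.9, 0.1, 0.0],
    "COURSE_B": [0.1, 0.8, 0.2],
    "COURSE_C": [0.0, 0.2, 0.9],
    "COURSE_D": [0.3, 0.3, 0.3],
}
query = [0.85, 0.15, 0.05]
print(top_k(query, catalog))  # the three catalog entries closest to the query
```

This is what `index.query(..., top_k=3)` does under the hood, with the difference that Pinecone uses approximate nearest-neighbor indexes so the lookup stays fast for large course catalogs.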
- Backend: Python, Flask, SmolAgents, LangChain
- Vector Database: Pinecone
- Embeddings: Sentence-Transformers (all-MiniLM-L6-v2)
- Frontend: HTML5, CSS3, JavaScript
- AI Model: Meta Llama 3.3 70B via Hugging Face Inference API
✅ Semantic search based on meaning, not just keywords
✅ Highly scalable for large course catalogs
✅ Fast query response times
✅ Managed service (no infrastructure to maintain)
✅ Supports metadata filtering
See `PINECONE_SETUP.md` for the complete configuration guide.