A production-ready, Full-Stack Academic Management System that integrates Agentic AI to automate student support and administrative tasks. Unlike traditional portals, this system uses RAG (Retrieval-Augmented Generation) to provide instant, context-aware answers regarding university regulations, course syllabi, and student performance.
- Student & Faculty Portals: Comprehensive CRUD for course enrollment, grade management, and attendance tracking.
- AI Academic Assistant: An integrated chatbot capable of:
  - Contextual Q&A: Answering questions based on uploaded university PDFs (using RAG).
  - Performance Analysis: Summarizing student grades and suggesting focus areas.
  - Action Execution: Navigating the UI or initiating administrative requests via natural language.
- Vectorized Search: Hybrid search capabilities using SQL and Semantic Search.
- Real-time Notifications: AI-generated summaries of campus announcements and academic updates.
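The hybrid search above combines SQL keyword matching with semantic (vector) retrieval. One common way to merge the two ranked lists is reciprocal rank fusion; the sketch below is illustrative and is not taken from the project's actual code.

```python
def reciprocal_rank_fusion(keyword_results, vector_results, k=60):
    """Merge two ranked lists of document IDs into one hybrid ranking.

    Each document earns 1 / (k + rank) per list it appears in; the
    constant k dampens top ranks so neither retriever dominates.
    """
    scores = {}
    for results in (keyword_results, vector_results):
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A document that appears in both lists (like `"b"` here) outranks one that tops only a single list, which is the behavior hybrid search is after.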
The system is built with a focus on Separation of Concerns and Scalability:
- Clean Architecture: Separation between the Domain, Use Cases, and Infrastructure layers.
- Repository Pattern: Abstracting data access to allow seamless switching between database providers (e.g., migrating from SQLite to PostgreSQL).
- Strategy Pattern: Implemented in the LLM service to toggle between different models (GPT-4o, Llama 3, Claude 3.5) based on cost and task complexity.
- RAG Pipeline: Utilizing `pgvector` for efficient document retrieval, ensuring the AI stays within the scope of university-verified data.
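The Strategy Pattern for model routing can be sketched roughly as follows. Class names and the length-based heuristic are illustrative stand-ins; the real service presumably routes on cost and task complexity rather than prompt length.

```python
from abc import ABC, abstractmethod

class LLMStrategy(ABC):
    """Common interface every model strategy implements."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class CheapModel(LLMStrategy):
    def complete(self, prompt: str) -> str:
        # Stand-in for a low-cost call (e.g. Llama 3)
        return f"[cheap] {prompt[:20]}"

class FrontierModel(LLMStrategy):
    def complete(self, prompt: str) -> str:
        # Stand-in for a high-capability call (e.g. GPT-4o)
        return f"[frontier] {prompt[:20]}"

class LLMService:
    """Selects a strategy per request via a crude complexity heuristic."""
    def __init__(self):
        self.cheap, self.frontier = CheapModel(), FrontierModel()

    def complete(self, prompt: str) -> str:
        strategy = self.frontier if len(prompt) > 200 else self.cheap
        return strategy.complete(prompt)
```

Because callers only depend on the `LLMStrategy` interface, adding a new provider (e.g. Claude 3.5) means adding one class, not touching the routing consumers.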
- Frontend: Next.js 14+ (App Router), Tailwind CSS, Shadcn/UI.
- Backend: FastAPI (Python) for high-performance asynchronous AI processing.
- Database: PostgreSQL with pgvector for relational and vector data.
- AI/LLM: LangChain / LangGraph, OpenAI API.
- Authentication: Clerk or NextAuth.js.
- DevOps: Docker, GitHub Actions (CI/CD).
Challenge: Initial queries to the vector database and subsequent LLM processing were taking upwards of 5 seconds. Solution: Implemented Semantic Caching with Redis to store common queries and utilized Streaming Responses via FastAPI to deliver text to the UI as it is generated, improving the perceived performance significantly.
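The semantic-cache lookup can be sketched as a similarity search over previously embedded queries. This in-memory version is a hedged stand-in for the Redis-backed cache; the threshold value and class names are illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SemanticCache:
    """In-memory stand-in for a Redis semantic cache.

    Stores (query_embedding, answer) pairs and returns a cached answer
    when a new query's embedding is similar enough to a stored one.
    """
    def __init__(self, threshold=0.92):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer)

    def get(self, query_embedding):
        best_answer, best_sim = None, 0.0
        for emb, answer in self.entries:
            sim = cosine(query_embedding, emb)
            if sim > best_sim:
                best_answer, best_sim = answer, sim
        return best_answer if best_sim >= self.threshold else None

    def put(self, query_embedding, answer):
        self.entries.append((query_embedding, answer))
```

On a cache hit the expensive retrieval-plus-LLM path is skipped entirely; on a miss the generated answer is stored for the next near-duplicate query.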
Challenge: Ensuring the AI does not leak one student's grades to another student. Solution: Integrated Row-Level Security (RLS) in PostgreSQL and injected strict user-specific metadata filters into the vector search queries.
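On top of PostgreSQL RLS, the metadata-filter safeguard can be sketched as building the vector-search query with a mandatory per-user predicate. Table and column names (`documents`, `owner_id`, `embedding`) are illustrative, not the project's actual schema.

```python
def build_retrieval_query(user_id: str, query_vec, top_k: int = 5):
    """Build a parameterized pgvector search scoped to one user.

    The owner_id predicate is always present, so retrieval can never
    return documents (e.g. grades) belonging to another student.
    """
    sql = """
        SELECT content
        FROM documents
        WHERE owner_id = %(user_id)s          -- strict per-user filter
        ORDER BY embedding <=> %(query_vec)s  -- pgvector cosine distance
        LIMIT %(top_k)s
    """
    params = {"user_id": user_id, "query_vec": query_vec, "top_k": top_k}
    return sql, params
```

Using parameterized queries (rather than string interpolation) also keeps the injected filter safe from SQL injection, and RLS acts as a second line of defense if application code ever forgets the filter.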
- Docker & Docker Compose
- Python 3.10+
- Node.js 18+
- Clone the repository:
`git clone https://github.com/youruser/smart-campus-os.git`
- Environment Setup:
Create a `.env` file in the root directory and add:
`DATABASE_URL=your_postgres_url`
`OPENAI_API_KEY=your_key`
- Run with Docker:
`docker-compose up --build`
Author: Fellipe Ferreira Lopes
Role: AI Software Engineer
Portfolio: []
LinkedIn: https://www.linkedin.com/in/fellipeferreiral/