RAG MCP Server & Next.js Frontend

A full-stack Retrieval-Augmented Generation (RAG) application built with a Model Context Protocol (MCP) server backend and a modern Next.js frontend. This project enables users to chat with their documents (PDFs) using OpenAI's models and a vector database for context retrieval.

🚀 Features

  • MCP Server Backend (Node.js/TypeScript)

    • Implements the Model Context Protocol (MCP).
    • Document Ingestion: Parses PDFs via a Python bridge (pypdf) and splits the text into chunks.
    • Vector Search: Uses Qdrant for high-performance similarity search.
    • Embeddings: OpenAI text-embedding-3-small.
    • Tools: ingest_document (for admin/setup) and query_knowledge_base (for RAG).
  • Next.js Frontend

    • Chat Interface: Clean, responsive UI built with Tailwind CSS and shadcn/ui.
    • Streaming: Real-time token streaming using Vercel AI SDK (v6).
    • Input Management: Robust handling of user inputs and tool invocations.
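
The ingestion feature above splits parsed PDF text into chunks before embedding. A minimal sketch of a fixed-size chunker with overlap (illustrative only; the project's actual chunk size and strategy may differ):

```typescript
// Split text into fixed-size chunks with a small overlap so context
// isn't lost at chunk boundaries. Parameters are illustrative defaults.
function chunkText(text: string, chunkSize = 500, overlap = 50): string[] {
  if (chunkSize <= overlap) throw new Error("chunkSize must exceed overlap");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}
```

Overlap keeps a sentence that straddles a boundary retrievable from at least one chunk.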

📂 Project Structure

├── backend/                  # MCP Server Implementation
│   ├── src/
│   │   ├── lib/              # OpenAI & Qdrant clients
│   │   ├── tools/            # MCP Tools (ingest.ts, query.ts)
│   │   ├── mcp-server.ts     # Main server entry point
│   │   └── index.ts          # Server startup
│   ├── scripts/
│   │   ├── manual_ingest.ts  # Script to manually trigger ingestion
│   │   └── parse_doc.py      # Python script for PDF parsing
│   ├── data/                 # Directory for storing raw documents
│   └── package.json
│
├── frontend/                 # Next.js App Router Application
│   ├── src/
│   │   ├── app/
│   │   │   ├── api/chat/     # Route Handler for AI SDK
│   │   │   └── page.tsx      # Main Chat UI Component
│   │   └── components/       # UI Components (shadcn/ui)
│   └── package.json
└── README.md

🛠️ Prerequisites

  • Node.js (v18 or higher)
  • Python (v3.10+) with pip
  • Docker (Desktop or Engine) for running Qdrant
  • OpenAI API Key

⚡ Setup Instructions

1. Clone the Repository

git clone <your-repo-url>
cd <repo-name>

2. Backend Setup

  1. Navigate to the backend:
    cd backend
    npm install
  2. Install Python dependencies (for PDF parsing):
    pip install pypdf
    # If pip fails, try: python -m pip install pypdf
  3. Configure Environment Variables: Copy .env.example to .env and fill in your keys.
    OPENAI_API_KEY=sk-...  # Your OpenAI Key
    QDRANT_URL=http://localhost:6333
    PORT=3001
    MODE=sse
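
The server should fail fast if any of the variables above are missing. A hedged sketch of a startup check (the helper name is an assumption, not part of the project's API; variable names match the `.env` example above):

```typescript
// Validate that required environment variables are set, throwing a
// descriptive error at startup instead of failing later mid-request.
function requireEnv(
  names: string[],
  env: Record<string, string | undefined> = process.env,
): Record<string, string> {
  const missing = names.filter((n) => !env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  return Object.fromEntries(names.map((n) => [n, env[n] as string]));
}

// Example usage at server startup:
// const cfg = requireEnv(["OPENAI_API_KEY", "QDRANT_URL", "PORT", "MODE"]);
```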

3. Frontend Setup

  1. Navigate to the frontend:
    cd frontend
    npm install
  2. Configure Environment Variables: Create .env.local:
    OPENAI_API_KEY=sk-...  # Required for AI SDK
    MCP_SERVER_URL=http://localhost:3001/sse

4. Start the Vector Database (Qdrant)

Docker must be running (on macOS/Windows, start Docker Desktop first).

docker run -d -p 6333:6333 -v $(pwd)/qdrant_storage:/qdrant/storage:z qdrant/qdrant
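
The Qdrant collection must be created with a vector size matching the embedding model; OpenAI's text-embedding-3-small produces 1536-dimensional vectors by default. A sketch of the collection parameters (the collection name is an assumption; the payload shape follows Qdrant's collection-creation API):

```typescript
// Collection parameters for the RAG vector store. The vector size must
// match the embedding model's output dimension, or upserts will fail.
const COLLECTION_NAME = "documents"; // hypothetical name, not from the repo
const EMBEDDING_DIM = 1536; // text-embedding-3-small default dimension

function collectionConfig() {
  return {
    vectors: { size: EMBEDDING_DIM, distance: "Cosine" as const },
  };
}
```

A dimension mismatch between the collection and the embeddings is a common cause of silent retrieval failures.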

5. Ingest Documents

Place your PDF files in backend/src/tools/PDF/ (or update the path). Run the manual ingestion script:

cd backend
npx tsx scripts/manual_ingest.ts
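
The ingestion script's flow can be sketched with the pypdf/OpenAI/Qdrant steps injected as functions (all names here are illustrative, not the project's actual API):

```typescript
// Sketch of the manual ingestion flow: embed each chunk, then upsert it
// into the vector store. Dependencies are injected so the real OpenAI and
// Qdrant clients can be swapped in.
type Embedder = (chunk: string) => Promise<number[]>;
type Upserter = (
  id: number,
  vector: number[],
  payload: { text: string },
) => Promise<void>;

async function ingest(
  chunks: string[],
  embed: Embedder,
  upsert: Upserter,
): Promise<number> {
  let stored = 0;
  for (const [i, chunk] of chunks.entries()) {
    const vector = await embed(chunk); // e.g. text-embedding-3-small
    await upsert(i, vector, { text: chunk }); // e.g. Qdrant point upsert
    stored++;
  }
  return stored;
}
```

Storing the raw chunk text alongside the vector lets query_knowledge_base return readable passages, not just IDs.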

6. Run the Application

You need to run both servers (use two terminals).

Terminal 1 (Backend):

cd backend
npm run dev

Terminal 2 (Frontend):

cd frontend
npm run dev

Access the chat interface at http://localhost:3000.

🚧 Current Issues

  • Embedding retrieval: The chat interface works, but the frontend cannot yet retrieve data from the embeddings. This is actively being fixed.
  • RAG connection: The query_knowledge_base tool depends on a local Qdrant instance. If Qdrant is not running (e.g., due to Docker issues), the chat falls back to general knowledge.
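
The fallback behaviour described above can be made explicit in the chat route. A minimal sketch (function names are illustrative, not the project's actual API):

```typescript
// Try the RAG tool first; if Qdrant or the MCP server is unreachable,
// fall back to answering from the model's general knowledge and flag it.
async function answerWithRag(
  question: string,
  queryKnowledgeBase: (q: string) => Promise<string>,
  generalAnswer: (q: string) => Promise<string>,
): Promise<{ answer: string; usedRag: boolean }> {
  try {
    return { answer: await queryKnowledgeBase(question), usedRag: true };
  } catch {
    // e.g. Qdrant container stopped, MCP SSE connection refused
    return { answer: await generalAnswer(question), usedRag: false };
  }
}
```

Surfacing the `usedRag` flag in the UI makes the silent-fallback failure mode visible while debugging the retrieval issue.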
