
πŸš€ DocuGenius AI

Intelligent Document Processing Assistant


Transform your documents into intelligent conversations

✨ Features β€’ πŸš€ Quick Start β€’ πŸ“– Documentation β€’ 🀝 Contributing


πŸ“– About

DocuGenius AI is a cutting-edge document processing platform that leverages the power of AI to help you extract, summarize, and interact with your documents in natural language. Built with modern web technologies and powered by Ollama's local AI models.

🎯 Why DocuGenius?

  • πŸ”’ Privacy First - All processing happens locally, your data never leaves your machine
  • ⚑ Lightning Fast - Optimized for speed with efficient AI processing
  • 🎨 Beautiful UI - Modern, intuitive interface built with Tailwind CSS v4
  • 🌐 Multi-Format - Support for PDF, DOCX, and TXT files
  • πŸ’¬ Interactive - Chat naturally with your documents
  • πŸ†“ Completely Free - No subscriptions, no API keys, no hidden costs

✨ Features

πŸ“„ Smart Document Processing

  • Multi-format support (PDF, DOCX, TXT)
  • Drag & drop interface
  • Real-time progress tracking
  • File size up to 10MB
  • Batch processing ready

πŸ€– AI-Powered Intelligence

  • Automatic summarization
  • Context-aware Q&A
  • Support for multiple AI models
  • Health monitoring
  • Performance optimization

🎨 Modern User Experience

  • Glassmorphism design
  • Smooth animations
  • Responsive layout
  • Dark mode ready
  • Accessibility focused

⚑ Developer Experience

  • TypeScript for type safety
  • Hot reload development
  • Component library
  • API documentation
  • Easy deployment

πŸš€ Quick Start

Prerequisites

Before you begin, ensure you have:

  • Node.js 18.0+ installed (Download)
  • Ollama installed (Download)
  • npm or yarn package manager

Installation

# 1️⃣ Clone the repository
git clone https://github.com/khoale-dev-code/Docugenius-Project.git
cd Docugenius-Project

# 2️⃣ Install dependencies
npm install

# 3️⃣ Set up Ollama AI
ollama pull phi3:mini
ollama serve

# 4️⃣ Configure environment
cp .env.example .env.local

# 5️⃣ Start development server
npm run dev

πŸŽ‰ That's it! Open http://localhost:3000 in your browser.
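The keys expected in `.env.local` come from `.env.example`; a typical local-Ollama setup might look like the following (variable names are illustrative — check `.env.example` for the actual keys):

```shell
# .env.local — illustrative keys only; use the names from .env.example
OLLAMA_BASE_URL=http://localhost:11434   # Ollama's default port
OLLAMA_MODEL=phi3:mini                   # the model pulled in step 3
```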


πŸ“š Documentation

πŸ—οΈ Project Structure

docugenius-project/
β”œβ”€β”€ πŸ“ src/
β”‚   β”œβ”€β”€ πŸ“ app/                     # Next.js App Router
β”‚   β”‚   β”œβ”€β”€ πŸ“ api/                 # API Routes
β”‚   β”‚   β”‚   β”œβ”€β”€ chat/               # Chat endpoint
β”‚   β”‚   β”‚   β”œβ”€β”€ summarize/          # Summarization
β”‚   β”‚   β”‚   β”œβ”€β”€ upload-and-process/ # File processing
β”‚   β”‚   β”‚   └── health/             # Health check
β”‚   β”‚   β”œβ”€β”€ globals.css             # Global styles
β”‚   β”‚   └── page.tsx                # Home page
β”‚   β”œβ”€β”€ πŸ“ components/              # React Components
β”‚   β”‚   β”œβ”€β”€ FileUploader.tsx        # Main upload component
β”‚   β”‚   └── πŸ“ ui/                  # UI Components
β”‚   β”‚       └── Button.tsx          # Button component
β”‚   β”œβ”€β”€ πŸ“ lib/                     # Utilities
β”‚   β”‚   └── services/
β”‚   β”‚       └── ollamaService.ts    # Ollama integration
β”‚   └── πŸ“ types/                   # TypeScript types
β”œβ”€β”€ πŸ“ public/                      # Static assets
β”œβ”€β”€ πŸ“„ package.json
β”œβ”€β”€ πŸ“„ tsconfig.json
└── πŸ“„ tailwind.config.ts
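For orientation, `ollamaService.ts` talks to Ollama's local REST API. A minimal sketch of such a service, assuming the default endpoint and Ollama's documented `/api/generate` route (the actual implementation in `src/lib/services/ollamaService.ts` may differ):

```typescript
// Minimal sketch of an Ollama client; assumes the default local endpoint.
// The real ollamaService.ts in this repo may be structured differently.
const OLLAMA_URL = 'http://localhost:11434'

interface GenerateResponse {
  response: string   // the model's completion text
  done: boolean
}

async function generate(prompt: string, model = 'phi3:mini'): Promise<string> {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, stream: false }),
  })
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`)
  const data = (await res.json()) as GenerateResponse
  return data.response
}
```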

πŸ”Œ API Reference

POST /api/upload-and-process - Upload and process document

Request:

FormData {
  file: File
}

Response:

{
  success: boolean
  data: string          // Extracted content
  fileName: string
  fileSize: number
  metadata?: {
    pages?: number
  }
}

Example:

const formData = new FormData()
formData.append('file', file)

const response = await fetch('/api/upload-and-process', {
  method: 'POST',
  body: formData
})

POST /api/summarize - Generate AI summary

Request:

{
  content: string
  filename: string
}

Response:

{
  success: boolean
  summary: string
  chunksCount: number
  message: string
}
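Example (a sketch of a typed client helper; the interfaces mirror the request/response shapes above):

```typescript
interface SummarizeRequest {
  content: string    // extracted document text
  filename: string
}

interface SummarizeResponse {
  success: boolean
  summary: string
  chunksCount: number
  message: string
}

// Sketch of a client call mirroring the documented shapes.
async function summarize(req: SummarizeRequest): Promise<SummarizeResponse> {
  const res = await fetch('/api/summarize', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(req),
  })
  if (!res.ok) throw new Error(`Summarize failed: ${res.status}`)
  return res.json()
}
```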

POST /api/chat - Chat with document

Request:

{
  question: string
  documentContent: string
}

Response:

{
  success: boolean
  answer: string
  message: string
}
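Example (sketch; unwraps the documented response and surfaces the `message` field on failure):

```typescript
// Sketch: ask a question against previously extracted document content.
async function askDocument(question: string, documentContent: string): Promise<string> {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question, documentContent }),
  })
  const data = await res.json()   // { success, answer, message }
  if (!data.success) throw new Error(data.message)
  return data.answer
}
```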

GET /api/health - Check system health

Response:

{
  status: 'healthy' | 'unhealthy'
  responseTime: string
  performance: 'good' | 'slow' | 'very_slow'
  model: string
}
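A client might gate the chat UI on this endpoint (sketch; types mirror the response shape above):

```typescript
interface HealthResponse {
  status: 'healthy' | 'unhealthy'
  responseTime: string
  performance: 'good' | 'slow' | 'very_slow'
  model: string
}

// Sketch: check /api/health before enabling chat; warn on slow models.
async function isReady(): Promise<boolean> {
  const res = await fetch('/api/health')
  const health = (await res.json()) as HealthResponse
  if (health.performance !== 'good') {
    console.warn(`Model ${health.model} responding slowly (${health.responseTime})`)
  }
  return health.status === 'healthy'
}
```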

πŸ› οΈ Technology Stack

| Category | Technologies |
|----------|--------------|
| Frontend | Next.js 15, React 18, TypeScript 5.0, Tailwind CSS v4 |
| AI Engine | Ollama (phi3:mini, llama2, mistral) |
| File Processing | pdf-parse, mammoth, Buffer API |
| UI Components | Lucide React, Custom Components |
| Development | ESLint, Prettier, Hot Reload |
| Deployment | Vercel, Netlify, Docker |

🎯 Usage Guide

1. πŸ“€ Upload Your Document

Drag & Drop

  • Simply drag your file into the upload zone
  • Instant feedback and validation
  • Visual drop indicators

Click to Browse

  • Click the upload area
  • Select file from your computer
  • Supports all accepted formats (PDF, DOCX, TXT)

2. πŸ€– AI Processing

The system automatically:

  • βœ… Extracts text content
  • βœ… Analyzes document structure
  • βœ… Generates intelligent summary
  • βœ… Prepares for Q&A

3. πŸ’¬ Interactive Chat

  • πŸ’‘ Ask questions about the content
  • 🎯 Get context-aware answers
  • ⚑ Use sample questions for quick start
  • πŸ“Š View conversation history

πŸ’‘ Pro Tips

For Best Results:

  • Keep questions concise (under 10 words)
  • Ask specific questions instead of general ones
  • Wait 5-10 seconds between questions
  • Use the suggested question buttons

🚒 Deployment

Deploy to Vercel (Recommended)


npm install -g vercel
vercel

Deploy to Netlify

npm run build
# Requires a static export: set output: 'export' in next.config.js,
# then upload the generated 'out' directory to Netlify

Docker Deployment

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]

docker build -t docugenius .
docker run -p 3000:3000 docugenius

πŸ› Troubleshooting

πŸ”΄ Ollama Connection Error

Problem: Cannot connect to Ollama service

Solution:

# Check if Ollama is running
ollama serve

# Verify models are installed
ollama list

# Pull required model
ollama pull phi3:mini

# Test connection
curl http://localhost:11434/api/tags

⚠️ Timeout Errors

Problem: Requests timing out

Solutions:

  • Reduce file size (keep under 5MB)
  • Use shorter questions
  • Check that the Ollama service is reachable (processing is local, so no internet connection is required)
  • Restart Ollama service
  • Try a smaller AI model

πŸ”§ Build Errors

Problem: Build fails

Solution:

# Clear cache and reinstall
rm -rf node_modules package-lock.json .next
npm install

# Check TypeScript errors
npm run type-check

# Verify Node.js version
node --version  # Should be 18.0+

🀝 Contributing

We love contributions! Here's how you can help:

🌟 Ways to Contribute

  • πŸ› Report bugs - Open an issue
  • πŸ’‘ Suggest features - Share your ideas
  • πŸ“– Improve docs - Help others learn
  • πŸ”§ Submit PRs - Add new features
  • ⭐ Star the repo - Show your support

πŸ“ Contribution Steps

# 1. Fork the repository
# 2. Create your feature branch
git checkout -b feature/AmazingFeature

# 3. Commit your changes
git commit -m 'Add some AmazingFeature'

# 4. Push to the branch
git push origin feature/AmazingFeature

# 5. Open a Pull Request

βœ… Guidelines

  • Follow the existing code style
  • Write clear commit messages
  • Add tests for new features
  • Update documentation
  • Ensure all tests pass

πŸ“Š Performance Metrics

| Metric | Value |
|--------|-------|
| First Load | < 2s |
| Time to Interactive | < 3s |
| Lighthouse Score | 95+ |
| Bundle Size | < 500KB |
| API Response | < 1s |

πŸ—ΊοΈ Roadmap

  • βœ… Basic document upload
  • βœ… AI summarization
  • βœ… Chat interface
  • βœ… Health monitoring
  • πŸ”² Multi-language support
  • πŸ”² Dark mode
  • πŸ”² Voice input
  • πŸ”² Export conversations
  • πŸ”² Batch processing
  • πŸ”² Cloud storage integration

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

MIT License - Free to use, modify, and distribute

πŸ‘¨β€πŸ’» Author

Khoa Le

GitHub Email LinkedIn


πŸ™ Acknowledgments

Special thanks to:

  • Ollama - For the amazing local AI engine
  • Next.js - For the powerful React framework
  • Tailwind CSS - For the beautiful styling
  • Lucide - For the elegant icons
  • Vercel - For seamless deployment
  • Open Source Community - For inspiration and support



πŸ’– Support This Project

If you find this project helpful, please consider:

  • ⭐ Starring the repository
  • πŸ› Reporting bugs
  • πŸ’‘ Suggesting features
  • πŸ”„ Sharing with others


Built with ❀️ by developers, for developers

"Transform documents into conversations, powered by AI"

⬆️ Back to Top
