A local AI-powered code companion. Keep your code on your machine while exploring code translation, reviews, and debugging with LLMs. A learning project exploring local AI integration in developer workflows.
CodePapi AI is an experimental, open-source project that brings Large Language Models (LLMs) to your local development environment. Translate code between languages, get AI-powered code reviews, and explore debugging workflows—all without sending your code to external services.
Note: This is a hobby/learning project. While functional, it's not optimized for production use. Performance depends heavily on your hardware, model choice, and code size; expect AI responses to take 10-90+ seconds.
✅ Private — Your code stays on your machine (no cloud uploads)
✅ Open-Source — Inspect the full codebase
✅ Free — MIT licensed, no subscriptions
✅ Learning Tool — Explore local LLM integration in practice
Convert code between supported languages: JavaScript, TypeScript, Python, Go, Rust, Java, C++, PHP, Ruby, Swift, and C#. Quality depends on model accuracy and code complexity.
Get AI-generated feedback on:
- Performance optimization ideas
- Potential security issues
- Code quality observations
- Best practice suggestions
Note: AI suggestions should be reviewed carefully and aren't a substitute for human code review.
The Diff View shows AI-suggested fixes side-by-side with original code. Always test fixes before committing.
Code processing happens locally using Qwen2.5-Coder via Ollama—nothing leaves your machine.
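Concretely, this means the backend talks to Ollama's local HTTP API on your machine. As an illustration only (the helper name and prompt wording below are not taken from the CodePapi AI codebase), a translation request to Ollama's documented `/api/generate` endpoint might be composed like this:

```typescript
// Sketch only: buildTranslationRequest and the prompt text are
// illustrative, not the project's actual implementation.
interface OllamaGenerateRequest {
  model: string;   // local model tag pulled via `ollama pull`
  prompt: string;  // instruction plus code, sent to localhost only
  stream: boolean; // false = wait for the complete response
}

function buildTranslationRequest(
  code: string,
  from: string,
  to: string,
): OllamaGenerateRequest {
  return {
    model: "qwen2.5-coder:1.5b",
    prompt: `Translate this ${from} code to ${to}. Reply with code only.\n\n${code}`,
    stream: false,
  };
}

// The request never leaves your machine:
// await fetch("http://localhost:11434/api/generate", {
//   method: "POST",
//   body: JSON.stringify(buildTranslationRequest(source, "Python", "Go")),
// });
```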
Before you begin, ensure you have the following installed:
- Docker & Docker Compose (easiest way to get started)
- Alternatively: Node.js 18+ and Ollama running locally
# Clone the repository
git clone https://github.com/codepapi/codepapi-ai.git
cd codepapi-ai
# Start the entire stack with one command
docker-compose up -d
⚠️ First Run: The first startup downloads the AI model (~1.5GB). Ensure stable internet and available disk space.
After starting the containers, pull the required model:
docker exec ollama ollama pull qwen2.5-coder:1.5b
Initial Request Times: Expect 10-90 seconds for initial responses depending on:
- Your CPU/GPU specs
- Code size
- Available system memory
- Background processes
Once the models are downloaded and containers are running:
- 🖥️ Frontend: Open http://localhost in your browser
- 🔌 API: Backend runs at http://localhost:3000
- 🤖 AI Engine: Ollama API available at http://localhost:11434
- Paste or type code into the left editor
- Select a source language
- Choose an action:
- Translate: Pick a target language
- Review: Get feedback on code
- Check Bugs: See suggested fixes
- Click "Run AI" and wait for results
- Copy or review the output
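If you want to script these actions instead of using the UI, the same work goes through the backend at http://localhost:3000. The sketch below is a guess at what such a payload could look like: the endpoint path, field names, and action strings are assumptions, so check backend/src/converter for the real API contract.

```typescript
// Hypothetical payload shape — not the backend's actual contract.
type Action = "translate" | "review" | "check-bugs";

interface ConvertPayload {
  action: Action;
  sourceLanguage: string;
  targetLanguage?: string; // only meaningful for "translate"
  code: string;
}

function buildPayload(
  action: Action,
  code: string,
  from: string,
  to?: string,
): ConvertPayload {
  // Mirrors the UI rule: translating requires picking a target language.
  if (action === "translate" && !to) {
    throw new Error("translate requires a target language");
  }
  return { action, sourceLanguage: from, targetLanguage: to, code };
}

// e.g. await fetch("http://localhost:3000/converter", {  // path is a guess
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildPayload("review", source, "typescript")),
// });
```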
Tips:
- Smaller code snippets get faster responses
- Review AI suggestions before using them in production
- Results vary based on code complexity and quality
Check out the project in action: Watch on YouTube
| Component | Technology | Purpose |
|---|---|---|
| AI Engine | Ollama + Qwen2.5-Coder | Local LLM inference |
| Orchestration | LangChain.js | AI workflow management |
| Backend | NestJS (Node.js) | REST API & business logic |
| Frontend | React + TailwindCSS + Lucide | Modern, responsive UI |
| Editor | Monaco Editor | VS Code-powered code editing |
| Quality | Biome | Fast linting & formatting |
Want to support more programming languages? It's easy!
See the Frontend Documentation for detailed instructions on adding languages to frontend/src/constants/languages.ts.
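For example, a new entry might look like the sketch below. The field names here are an assumption: mirror the shape of the existing entries in frontend/src/constants/languages.ts rather than copying this verbatim.

```typescript
// Hypothetical entry shape — check the real entries in
// frontend/src/constants/languages.ts for the actual fields.
interface Language {
  id: string;       // value sent to the backend
  label: string;    // name shown in the language dropdown
  monacoId: string; // Monaco Editor language id for syntax highlighting
}

const kotlin: Language = {
  id: "kotlin",
  label: "Kotlin",
  monacoId: "kotlin",
};

// Then append it to the exported list, e.g.:
// export const LANGUAGES: Language[] = [...existingLanguages, kotlin];
```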
We use Biome for linting and formatting. Before submitting a PR, run:
npm run biome:lint # Check for issues
npx @biomejs/biome check --apply . # Auto-fix issues
codepapi-ai/
├── backend/ # NestJS API server
│ └── src/converter/ # Code conversion logic
├── frontend/ # React UI application
│ └── src/constants/ # Language definitions
├── docker-compose.yml # Full stack orchestration
└── README.md # This file
This is a hobby project, so keep it relaxed. Have ideas? Found a bug? Want to improve something?
- See CONTRIBUTING.md for details on setup and submitting changes
- Be nice: See CODE_OF_CONDUCT.md — just basic respect
No strict requirements, no bureaucracy. Just open a PR or issue and let's build together!
As an experimental AI project, CodePapi AI follows responsible practices:
- No telemetry: We don't collect usage analytics
- Local processing: All code stays on your machine
- No training: Your code never trains models
- Open source: Full code inspection available
- Clear limitations: We're honest about what works and what doesn't
- Review all AI suggestions before implementing
- Don't rely solely on AI output for security-critical code
- Test thoroughly in your environment
- Report security issues privately
This is an experimental project with real limitations:
- Speed: Not fast. Responses take 10-90+ seconds per request
- Quality: AI output varies. Some translations work well, others need manual fixes
- Hardware-dependent: Performance heavily depends on your CPU/GPU and available RAM
- Model limitations: Qwen2.5-Coder is a smaller model; results aren't comparable to larger proprietary models
- Error handling: Limited error checking and validation
- Production use: Not suitable for mission-critical workflows without thorough testing
Found a bug? Have a cool idea? Just want to chat about it?
- Issues: Report bugs or request features
- Discussions: Ask questions, share ideas, get help
- See CONTRIBUTING.md if you want to contribute code
Found a security vulnerability? Please email oshiesam@gmail.com with:
- Description of the issue
- Steps to reproduce
- Potential impact
Please allow 48 hours before public disclosure.
See frontend/README.md for detailed customization guides.
- Docker & Docker Compose (recommended) or
- Node.js 18+ and Ollama (for local development)
- Minimum 2GB of free RAM (to hold the Qwen2.5-Coder model)
- Stable internet for initial model download
- macOS, Linux, or Windows (with WSL2)
- Frontend Guide — UI customization and adding languages
- Backend Guide — API development and extending converters
- Docker Compose Configuration — Service orchestration
Distributed under the MIT License. See LICENSE for details.
- Report bugs: GitHub Issues
- Ask questions: GitHub Discussions
- Documentation: CONTRIBUTING.md, CODE_OF_CONDUCT.md
A learning project exploring local LLMs in development workflows