Unified invoice parsing system with Next.js frontend, Express backend, and Python LLM service.
```
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│   Next.js       │─────▶│    Express      │─────▶│   Python LLM    │
│   Frontend      │      │    Backend      │      │   (FastAPI)     │
│   Port: 3000    │      │   Port: 3001    │      │   Port: 8000    │
└─────────────────┘      └─────────────────┘      └─────────────────┘
                                                           │
                                                           ▼
                                                  ┌─────────────────┐
                                                  │     Ollama      │
                                                  │   Port: 11434   │
                                                  └─────────────────┘
```
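The last hop in this chain is the Python service asking Ollama to read the invoice image. Below is a minimal sketch of that hop, assuming the LLM service uses FastAPI with httpx and Ollama's standard `/api/generate` vision endpoint; the `/parse` route name and the prompt are illustrative, not the actual service code.

```python
# Hedged sketch of the LLM-service tier: accept an uploaded invoice image and
# forward it to Ollama for field extraction. Route name, prompt, and response
# handling are illustrative assumptions, not the real service code.
import base64
import os

import httpx
from fastapi import FastAPI, UploadFile

app = FastAPI()

OLLAMA_HOST = os.getenv("OLLAMA_HOST", "http://localhost:11434")
MODEL_NAME = os.getenv("MODEL_NAME", "qwen2.5vl:latest")


@app.post("/parse")
async def parse_invoice(file: UploadFile):
    # Ollama's vision models take base64-encoded images in the "images" field.
    image_b64 = base64.b64encode(await file.read()).decode()
    async with httpx.AsyncClient(timeout=120.0) as client:
        resp = await client.post(
            f"{OLLAMA_HOST}/api/generate",
            json={
                "model": MODEL_NAME,
                "prompt": "Extract the invoice fields as JSON.",
                "images": [image_b64],
                "stream": False,
            },
        )
    resp.raise_for_status()
    # /api/generate returns the generated text under "response".
    return {"result": resp.json()["response"]}
```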
- Node.js 18+
- Python 3.9+
- Ollama installed and running
```bash
make setup
```

This will:
- Install all dependencies (npm + pip)
- Check if Ollama is running
- Set up environment variables
```bash
# Copy your existing projects into:
# - frontend/     (your Next.js app)
# - backend/      (your Express app)
# - llm-service/  (your Python FastAPI app)

make dev
```

That's it! All three services will start simultaneously.
| Command | Description | 
|---|---|
| make help | Show all available commands | 
| make dev | Start all services in dev mode |
| make status | Check service status |
| make restart | Restart all services |
| make info | Show project information |
| Command | Description | 
|---|---|
| make dev-next | Start Next.js only | 
| make dev-express | Start Express only | 
| make dev-python | Start Python only | 
| Command | Description | 
|---|---|
| make install | Install all dependencies | 
| make install-next | Install Next.js dependencies | 
| make install-express | Install Express dependencies | 
| make install-python | Install Python dependencies | 
| Command | Description | 
|---|---|
| make build | Build all services | 
| make build-next | Build Next.js | 
| make build-express | Build Express | 
| Command | Description | 
|---|---|
| make clean | Clean all build artifacts | 
| make clean-next | Clean Next.js only | 
| make clean-express | Clean Express only | 
| make clean-python | Clean Python cache | 
| Command | Description | 
|---|---|
| make ports-check | Check if ports are available | 
| make ports-kill | Kill processes on ports | 
| make logs | Show logs from all services | 
| make test | Run all tests | 
| make lint | Lint all code | 
| Command | Description | 
|---|---|
| make ollama-check | Check if Ollama is running | 
| make ollama-pull | Pull qwen2.5vl model | 
| make ollama-list | List installed models | 
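These targets presumably shell out to the Ollama CLI or its HTTP API. As a rough equivalent of `make ollama-check` and `make ollama-list` in Python (the `/api/tags` route is standard Ollama; the Makefile may implement the checks differently):

```python
# Hedged sketch of an Ollama health/model check via its HTTP API.
import requests

OLLAMA_HOST = "http://localhost:11434"

try:
    # GET /api/tags returns the locally installed models.
    tags = requests.get(f"{OLLAMA_HOST}/api/tags", timeout=5).json()
except requests.ConnectionError:
    raise SystemExit("Ollama is not running - start it with `ollama serve`")

models = [m["name"] for m in tags.get("models", [])]
print("Installed models:", models)
if not any(name.startswith("qwen2.5vl") for name in models):
    print("qwen2.5vl not found - run `make ollama-pull`")
```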
Edit the .env file:

```env
# Frontend
NEXT_PUBLIC_API_URL=http://localhost:3001

# Backend
PORT=3001
LLM_SERVICE_URL=http://localhost:8000

# Python LLM
PORT=8000
OLLAMA_HOST=http://localhost:11434
MODEL_NAME=qwen2.5vl:latest
```
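PORT appears under both Backend and Python LLM, which suggests each service reads its own copy of these variables (the project layout below includes a separate llm-service/.env). A minimal sketch of how the Python service could load its values, assuming python-dotenv is available (not confirmed by requirements.txt):

```python
# Hedged sketch: load the LLM service's own .env; python-dotenv is an assumption.
import os

from dotenv import load_dotenv

load_dotenv()  # reads llm-service/.env when run from that directory

PORT = int(os.getenv("PORT", "8000"))
OLLAMA_HOST = os.getenv("OLLAMA_HOST", "http://localhost:11434")
MODEL_NAME = os.getenv("MODEL_NAME", "qwen2.5vl:latest")
```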
```
invoice-parser-system/
├── Makefile              # All commands
├── package.json          # Root npm config
├── .env                  # Environment variables
├── README.md
│
├── frontend/             # Next.js app
│   ├── package.json
│   ├── next.config.js
│   └── src/
│
├── backend/              # Express API
│   ├── package.json
│   ├── server.js
│   └── src/
│
├── llm-service/          # Python FastAPI
│   ├── requirements.txt
│   ├── main.py
│   └── .env
│
└── shared/               # Shared code
    ├── types/
    └── utils/
```
Common troubleshooting commands:

```bash
# Kill processes on the ports
make ports-kill

# Check status
make status

# Check ports
make ports-check

# Restart everything
make restart

# Start Ollama
ollama serve

# Check if running
make ollama-check

# Clean and reinstall from scratch
make clean
make install
make dev
```

Typical workflow:

```bash
# Morning - start working
make dev

# Check if everything is running
make status

# Install all dependencies and start
make install
make dev

# Lint and run tests
make lint
make test

# Build for production, then start
make build
make start
```

The running services are available at:

- Frontend: http://localhost:3000
- Backend API: http://localhost:3001
- LLM Service: http://localhost:8000
- Ollama: http://localhost:11434
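To confirm all four endpoints answer after `make dev`, here is a hedged smoke-test sketch; the root paths are assumptions, so substitute real health routes if the individual apps expose them.

```python
# Hedged smoke test: after `make dev`, each service should answer on its port.
import requests

SERVICES = {
    "Frontend": "http://localhost:3000",
    "Backend API": "http://localhost:3001",
    "LLM Service": "http://localhost:8000",
    "Ollama": "http://localhost:11434",
}

for name, url in SERVICES.items():
    try:
        status = requests.get(url, timeout=3).status_code
        print(f"{name:12} {url} -> HTTP {status}")
    except requests.RequestException as exc:
        print(f"{name:12} {url} -> unreachable ({exc.__class__.__name__})")
```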
License: MIT