Freshtify is an AI-driven system that automatically estimates supermarket shelf stock levels from images.
It integrates a React + Tailwind frontend with a FastAPI backend and a hybrid AI pipeline (GroundingDino, SAM2, Depth-Anything-v2, and Gemini).
- Automate shelf monitoring using computer vision and AI.
- Provide real-time stock visualization and low-stock alerts.
- Store results as JSON for analytics.
- Upload shelf images and get instant AI-based stock estimation.
- Interactive dashboard showing stock trends by category and time.
- Automatic low-stock alerts (below 30% threshold).
- Real-time backend–frontend synchronization.
- Multi-model AI pipeline: detection, segmentation, depth, and refinement.
- Frontend: React + Vite + TailwindCSS for the web dashboard.
- Backend: FastAPI for API routing, AI inference, and data exchange.
- AI Layer: GroundingDino (Detection), SAM2 (Segmentation), Depth-Anything-v2 (Depth Estimation), and Gemini (Refinement).
- Deployment: Dockerized services running on TensorDock / AWS / GCP.
cd backend
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python start_server.py

- API docs are available at http://localhost:8000/docs
cd frontend
npm install
npm run dev

- Frontend runs at http://localhost:5173
Freshtify/
├── backend/ # FastAPI backend
│ ├── README.md # Backend API file structure
├── backend_model/ # AI Pipeline models
├── dataset/ # Training / testing image data
├── docs/ # Documentation, diagrams (e.g., architecture.png)
├── front_end/ # React + Vite frontend dashboard
│ └── README.md # Frontend file structure
├── result_images/ # Output visualization results from AI model
├── docker-compose.yml # Multi-service deployment config
├── env_example # Example environment variables
├── main.py # Main AI entry point
└── README.md # Root documentation file
- Reliable accuracy between AI estimation and actual shelf stock.
- Average processing time: 30–40 seconds per image on a local machine and 15–20 seconds when deployed publicly.
- Fully documented API and modular FastAPI service.
- Integrate cloud database (PostgreSQL / Firebase).
- Support multi-camera live tracking.
- Deploy full system on AWS / GCP.
A modern web application for AI-powered stock estimation and freshness analysis of produce items. Built with React Router v7, TypeScript, and TailwindCSS.
- Image Upload & Analysis: Upload images for AI-powered stock estimation
- Real-time Dashboard: Visualize stock levels and freshness data
- Alert System: Monitor and manage stock alerts
- Model Selection: Choose between different AI models for analysis
- Responsive Design: Modern UI with TailwindCSS and shadcn/ui components
- Server-side Rendering: Fast initial page loads with React Router SSR
- Hot Module Replacement: Lightning-fast development experience
- Framework: React Router v7
- Language: TypeScript
- Styling: TailwindCSS v4
- UI Components: Radix UI & shadcn/ui
- Charts: Recharts
- HTTP Client: Axios
- Icons: Lucide React
- Build Tool: Vite
front_end/
├── app/ # Application source code
│ ├── routes/ # Route components
│ │ ├── _layout.tsx # Layout wrapper for nested routes
│ │ ├── index.tsx # Home page
│ │ ├── upload.tsx # Image upload page
│ │ ├── dashboard.tsx # Dashboard with analytics
│ │ └── alert.tsx # Alerts management page
│ │
│ ├── components/ # Reusable React components
│ │ ├── Header.tsx # Navigation header
│ │ ├── Footer.tsx # Footer component
│ │ ├── ModelSelector.tsx # AI model selection component
│ │ ├── SectionToggle.tsx # Section toggle component
│ │ ├── StatusPill.tsx # Status indicator component
│ │ ├── TimeToggle.tsx # Time filter toggle
│ │ └── ui/ # shadcn/ui components
│ │ ├── button.tsx
│ │ ├── card.tsx
│ │ ├── dialog.tsx
│ │ ├── dropdown-menu.tsx
│ │ ├── input.tsx
│ │ ├── label.tsx
│ │ ├── navigation-menu.tsx
│ │ ├── select.tsx
│ │ └── table.tsx
│ │
│ ├── lib/ # Utility libraries
│ │ ├── api.ts # API client functions
│ │ └── utils.ts # Helper utilities
│ │
│ ├── assets/ # Static assets
│ │ ├── avatars/ # Team member avatars
│ │ ├── sampleImages/ # Sample images for demo
│ │ └── teamlogo.png # Team logo
│ │
│ ├── welcome/ # Welcome page assets
│ │ ├── welcome.tsx
│ │ ├── logo-dark.svg
│ │ └── logo-light.svg
│ │
│ ├── root.tsx # Root application component
│ ├── routes.ts # Route configuration
│ └── app.css # Global styles
│
├── public/ # Public static files
│ └── favicon.ico
│
├── build/ # Production build output
│ ├── client/ # Client-side assets
│ └── server/ # Server-side code
│
├── components.json # shadcn/ui configuration
├── Dockerfile # Docker configuration
├── env.example # Environment variables template
├── package.json # Dependencies and scripts
├── react-router.config.ts # React Router configuration
├── tsconfig.json # TypeScript configuration
├── vite.config.ts # Vite configuration
└── README.md # This file
- Node.js 18+
- npm, pnpm, or yarn
npm install

Copy the environment variables template:

cp env.example .env

Update the .env file with your configuration:

VITE_API_URL=http://localhost:8000
# Add other environment variables as needed

Start the development server with Hot Module Replacement:

npm run dev

The application will be available at http://localhost:5173
- npm run dev - Start development server
- npm run build - Build for production
- npm run start - Start production server (on port 12355)
- npm run typecheck - Run TypeScript type checking
Create an optimized production build:
npm run build

This generates:

- build/client/ - Static assets (HTML, CSS, JS)
- build/server/ - Server-side code
The application can be deployed to any platform that supports Node.js or Docker:
- Cloud Platforms: AWS ECS, Google Cloud Run, Azure Container Apps
- PaaS: Heroku, Railway, Fly.io, Render
- Edge: Cloudflare Pages, Vercel, Netlify
- VPS: Digital Ocean, Linode, Vultr
To run the production build locally:
npm run start

The server will start on port 12355 (configurable via the PORT environment variable).
This project uses:
- TailwindCSS v4 for utility-first styling
- shadcn/ui for pre-built accessible components
- Radix UI for unstyled, accessible component primitives
- class-variance-authority for component variants
- clsx & tailwind-merge for conditional class composition
Use the shadcn/ui CLI to add new components:
npx shadcn@latest add [component-name]

Routes:

- / - Home page with overview
- /upload - Upload images for analysis
- /dashboard - View analytics and stock data
- /alert - Manage alerts and notifications
The frontend communicates with the backend API defined in app/lib/api.ts. Update the base URL in your environment variables:
// app/lib/api.ts
const API_BASE_URL = import.meta.env.VITE_API_URL || "http://localhost:8000";

Built with ❤️ using React Router v7 and modern web technologies.
A FastAPI-based backend service for automatically estimating supermarket stock levels using integrated AI models. This system combines detection, segmentation, depth estimation, and Gemini refinement for accurate stock level analysis.
- Integrated AI Pipeline: Detection → Segmentation → Depth Estimation → Gemini Refinement
- Multiple Image Processing: Process multiple images with T0, T1, T2... grouping
- Section-Based Analysis: Detect individual sections for each product type
- Real-Time Processing: Fast processing with detailed logging
- RESTful API: Clean, documented API endpoints with automatic OpenAPI documentation
- Frontend Integration: Seamless integration with React frontend
- GPU Support: Optimized for GPU acceleration when available
- Modular Architecture: Extensible design for adding new features
backend/
├── app/
│ ├── api/
│ │ └── routes/
│ │ ├── health.py # Health check endpoints
│ │ └── stock_estimation.py # Main stock estimation endpoints
│ ├── core/
│ │ ├── config.py # Configuration management
│ │ └── logging_config.py # Logging setup
│ ├── models/
│ │ └── schemas.py # Pydantic models for API schemas
│ ├── services/
│ │ ├── ai_engine.py # AI model integration
│ │ └── file_processor.py # File upload and processing
│ ├── utils/
│ │ └── helpers.py # Utility functions
│ └── main.py # FastAPI application entry point
├── logs/ # Application logs
├── model_cache/ # AI model cache directory
├── outputs/ # Output files
├── uploads/ # Uploaded files
└── requirements.txt # Python dependencies
- Python 3.8+
- CUDA-compatible GPU (recommended for AI models)
- 8GB+ RAM (16GB+ recommended)
- Backend model files in the backend_model/ folder
1. Navigate to the backend folder:

   cd backend

2. Create a virtual environment (recommended):

   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate

3. Install dependencies:

   pip install -r requirements.txt

4. Set up environment variables:

   cp env.example .env
   # Edit .env with your configuration if needed

5. Create necessary directories (if they don't exist):

   mkdir -p uploads outputs model_cache logs

6. Set up API keys (for the Gemini model):

   # Edit backend_model/.env
   GEMINI_API_KEY=your_api_key_here
- PORT: 8000 (default)
- HOST: 0.0.0.0 (accessible from all interfaces)
- Allowed Origins:
Copy env.example to .env and modify as needed. Default settings are:
- Port: 8000
- Debug mode: enabled
- File size limit: 50MB
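A matching .env might look like the following. HOST, PORT, and DEBUG also appear in the docker-compose file later in this document; the file-size key name below is a hypothetical placeholder, so check env.example for the real variable names.

```
HOST=0.0.0.0
PORT=8000
DEBUG=true
# hypothetical key name - see env.example for the real one
MAX_FILE_SIZE_MB=50
```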
python start_server.py

Alternatively, run uvicorn directly:

python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

Or use the simple starter:

python start_simple.py

Once the server is running, access the interactive API documentation:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
curl http://localhost:8000/api/v1/health
curl http://localhost:8000/api/v1/models
curl http://localhost:8000/api/v1/products

curl -X POST "http://localhost:8000/api/v1/estimate-stock-integrated" \
  -F "file=@supermarket_shelf.jpg" \
  -F "products=potato section,onion,eggplant section,tomato,cucumber" \
  -F "confidence_threshold=0.7"

curl -X POST "http://localhost:8000/api/v1/estimate-stock-multiple" \
  -F "files=@image1.jpg" \
  -F "files=@image2.jpg" \
  -F "products=potato section,onion,eggplant section,tomato,cucumber" \
  -F "confidence_threshold=0.7"

Health:

- GET /api/v1/health - Basic health check
- GET /api/v1/health/detailed - Detailed system information

Stock estimation:

- POST /api/v1/estimate-stock - Estimate stock levels from a single file (legacy)
- POST /api/v1/estimate-stock-integrated - Recommended for a single image with the integrated AI pipeline
- POST /api/v1/estimate-stock-multiple - Recommended for multiple images with T0, T1 grouping
- GET /api/v1/models - Get available AI models
- GET /api/v1/products - Get supported product types
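The same calls can be made from Python. Below is a sketch using the third-party requests package; the endpoint path and form-field names are taken from the curl examples above.

```python
API_BASE = "http://localhost:8000/api/v1"

def build_stock_request(products, confidence=0.7):
    """Assemble the form fields expected by the estimation endpoints."""
    return {
        "products": ",".join(products),  # comma-separated, as in the curl examples
        "confidence_threshold": str(confidence),
    }

def estimate_stock(image_path, products, confidence=0.7):
    """POST one image to /estimate-stock-integrated and return the parsed JSON."""
    import requests  # third-party: pip install requests

    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{API_BASE}/estimate-stock-integrated",
            files={"file": f},
            data=build_stock_request(products, confidence),
            timeout=300,  # the pipeline can take tens of seconds per image
        )
    resp.raise_for_status()
    return resp.json()
```

For example, `estimate_stock("supermarket_shelf.jpg", ["tomato", "onion"])` mirrors the first POST example above.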
The system currently supports estimation for:
- Potato Section: Potato display sections
- Onion: Individual onions
- Eggplant Section: Eggplant display sections
- Tomato: Individual tomatoes
- Cucumber: Individual cucumbers
- Low Stock: < 30% of shelf capacity (Low)
- Normal Stock: 30% - 80% of shelf capacity (Medium)
- Overstocked: > 80% of shelf capacity (High)
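The thresholds above can be expressed as a small helper. This is a sketch; the backend's own classification lives in its services and may differ in edge handling.

```python
def stock_status(percentage):
    """Map a shelf-fullness fraction (0.0-1.0) to a status label."""
    if percentage < 0.30:
        return "low"          # triggers a low-stock alert
    if percentage <= 0.80:
        return "normal"
    return "overstocked"
```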
Single-image response (estimate-stock-integrated):

{
  "success": true,
  "message": "Stock estimation completed successfully",
  "processing_time": 2.34,
  "timestamp": "2024-01-15T10:30:00Z",
  "results": [
    {
      "product": "potato section section 1",
      "stock_percentage": 0.65,
      "stock_status": "normal",
      "confidence": 0.87,
      "bounding_box": null,
      "reasoning": "AI model detected potato section section 1 with 65% stock level"
    }
  ],
  "model_used": "integrated-ai-pipeline",
  "image_metadata": {
    "filename": "supermarket_shelf.jpg"
  }
}

Multiple-image response (estimate-stock-multiple):

{
  "success": true,
  "message": "Stock estimation completed successfully for 2 images",
  "processing_time": 4.56,
  "timestamp": "2024-01-15T10:30:00Z",
  "results": {
    "T0": [
      {
        "product": "potato section section 1",
        "stock_percentage": 0.65,
        "stock_status": "normal",
        "confidence": 0.87,
        "bounding_box": null,
        "reasoning": "AI model detected potato section section 1 with 65% stock level"
      }
    ],
    "T1": [
      {
        "product": "onion section 1",
        "stock_percentage": 0.45,
        "stock_status": "normal",
        "confidence": 0.82,
        "bounding_box": null,
        "reasoning": "AI model detected onion section 1 with 45% stock level"
      }
    ]
  },
  "model_used": "integrated-ai-multiple",
  "image_metadata": {
    "image_count": 2,
    "images_processed": ["T0", "T1"]
  }
}

- Detection: Object detection using YOLO
- Segmentation: Segment detection using SAM2
- Depth Estimation: Calculate depth for fullness estimation
- Stock Calculation: Compute stock percentage for each section
- Gemini Refinement (optional): Refine results using Gemini model
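The stages above can be sketched as one orchestration function. Every stage below is a no-op placeholder, not the real implementation (the actual models are wired up in main.py and backend_model/); only the data flow between stages is meant to be accurate.

```python
# Placeholder stages: each stands in for a real model call.
def detect_sections(image, products):
    return [{"product": p} for p in products]   # one box per requested product

def segment(image, boxes):
    return boxes                                # one mask per detected box

def estimate_depth(image):
    return None                                 # dense depth map in the real pipeline

def compute_fullness(mask, depth):
    return {**mask, "stock_percentage": 0.0}    # stock % from mask + depth

def refine(image, results):
    return results                              # optional Gemini pass

def run_pipeline(image, products, use_gemini=True):
    boxes = detect_sections(image, products)    # 1. detection
    masks = segment(image, boxes)               # 2. segmentation
    depth = estimate_depth(image)               # 3. depth estimation
    results = [compute_fullness(m, depth) for m in masks]  # 4. stock calculation
    return refine(image, results) if use_gemini else results  # 5. refinement
```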
The backend runs main.py directly without modification:
- User uploads images (T0.jpg, T1.jpg, etc.)
- Backend calls main.py via subprocess
- main.py processes images using the integrated AI pipeline
- Backend parses the output using regex pattern matching
- Results are grouped by T0, T1, T2...
The print_result() method outputs:
potato section - section 1: 85.2%
potato section - section 2: 72.5%
onion - section 1: 45.8%
This output is parsed and grouped by image for frontend display.
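Parsing those lines into structured records might look like this — a sketch of the regex approach; the backend's actual pattern may differ.

```python
import re

# One line per detected section, e.g. "potato section - section 1: 85.2%"
LINE_RE = re.compile(r"^(?P<product>.+?) - (?P<section>section \d+): (?P<pct>[\d.]+)%$")

def parse_output(text):
    """Turn main.py's printed result lines into structured records."""
    results = []
    for line in text.splitlines():
        match = LINE_RE.match(line.strip())
        if match:
            results.append({
                "product": match["product"],
                "section": match["section"],
                "stock_percentage": float(match["pct"]) / 100.0,
            })
    return results
```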
The backend is designed to work seamlessly with the React frontend:
- Upload Images: Frontend sends images to /estimate-stock-multiple
- Backend Processing: main.py processes images with the AI pipeline
- Results Grouping: Results are grouped by T0, T1, T2...
- Frontend Display: Frontend displays results in timeline view
1. Port Already in Use

   # Change PORT in .env or use a different port
   python -m uvicorn app.main:app --reload --port 8001

2. Module Not Found

   # Install dependencies
   pip install -r requirements.txt

3. CUDA Out of Memory

   - Reduce batch size in configuration
   - Use CPU-only mode

4. Gemini API Key Missing

   - Add GEMINI_API_KEY to backend_model/.env
   - The system will gracefully fall back without Gemini refinement
Application logs are stored in the logs/ directory:
- app.log - Application logs with rotation
- Console output for development
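A minimal sketch of such a rotating-file setup (assumed configuration; the real setup lives in app/core/logging_config.py, and the maxBytes/backupCount values here are illustrative):

```python
import logging
from logging.handlers import RotatingFileHandler

def setup_logging(log_file="logs/app.log"):
    """Log to a rotating file plus the console."""
    handler = RotatingFileHandler(log_file, maxBytes=10_000_000, backupCount=5)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    )
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    root.addHandler(handler)                  # app.log with rotation
    root.addHandler(logging.StreamHandler())  # console output for development
```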
- Processing Time:
- Single image: ~1-2 minutes
- Multiple images (2): ~2-3 minutes
- Model Loading: Models are cached after first load
- Memory Usage: ~4-8GB depending on models
python test_api.py

Format and lint:

black app/
flake8 app/

This project is part of the Freshtify Stock Level Estimation system.
# 1. Navigate to backend folder
cd backend
# 2. Install dependencies
pip install -r requirements.txt
# 3. Start server
python start_server.py
# 4. Open browser
# http://localhost:8000/docs

- Current Version: v1
- Base Path: /api/v1
- Supported Formats: JSON, Multipart Form Data
- Pre-requisites
- Step 1: Install Docker
- Step 2: Install NVIDIA Container Toolkit
- Step 3: Configure Docker for NVIDIA Runtime
- Step 4: Verify Installation
- Step 5: Build and Run Docker Containers
- Step 6: Set Up Domain and Nginx
Before you start, make sure you have:
- A VPS or host running Ubuntu 22.04 or Debian 12
- An NVIDIA GPU (for CUDA support)
- Docker (rootless or with root privileges)
Run the following commands to install Docker:
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
If you want to use Docker as a non-root user, install rootless Docker:
dockerd-rootless-setuptool.sh install
Follow the NVIDIA Container Toolkit installation guide for your system:
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
sudo apt-get update && sudo apt-get install -y --no-install-recommends \
curl \
gnupg2
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo sed -i -e '/experimental/ s/^#//g' /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.18.0-1
sudo apt-get install -y \
nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}
sudo nvidia-ctk runtime configure --runtime=docker
This updates /etc/docker/daemon.json to include the NVIDIA runtime.
sudo systemctl restart docker
If you are using rootless Docker:
nvidia-ctk runtime configure --runtime=docker --config=$HOME/.config/docker/daemon.json
systemctl --user restart docker
sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
Test if the NVIDIA runtime works correctly:
docker run --rm --gpus all nvidia/cuda:12.8.1-devel-ubuntu24.04 nvidia-smi
If successful, you’ll see your GPU details displayed.
From your frontend folder:
cd front_end
docker build -t fe:latest-sv .
From your project root:
docker build -t backend:latest-sv -f backend/Dockerfile .
Create docker-compose.yml at your project root:
services:
  backend:
    image: backend:latest-sv
    container_name: ai-stock-backend
    ports:
      - "8000:8000"
    environment:
      - HOST=0.0.0.0
      - PORT=8000
      - DEBUG=true
      - ENABLE_GPU=true
      - CUDA_VISIBLE_DEVICES=0
    env_file:
      - .env
    volumes:
      - ./backend/uploads:/app/uploads
      - ./backend/outputs:/app/outputs
      - ./backend/model_cache:/app/model_cache
      - ./backend/logs:/app/logs
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/api/v1/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    networks:
      - ai-stock-network

  frontend:
    image: fe:latest-sv
    container_name: ai-stock-frontend
    ports:
      - "12355:12355"
    depends_on:
      - backend
    restart: unless-stopped
    networks:
      - ai-stock-network

networks:
  ai-stock-network:
    driver: bridge

volumes:
  model_cache:
    driver: local
  uploads:
    driver: local
  outputs:
    driver: local

docker-compose up -d
Check container status:
docker ps
Access your app at:
http://<your-server-ip>:12355
Update your DNS A record to point your domain (e.g., example.com) to your server’s public IP.
sudo apt update
sudo apt install nginx
sudo nano /etc/nginx/sites-available/your-domain.com
Add this configuration:
server {
    listen 80;
    server_name your-domain.com www.your-domain.com;

    location / {
        proxy_pass http://localhost:12355;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
sudo ln -s /etc/nginx/sites-available/your-domain.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx
Your application is now accessible via your domain.
Use Cloudflare to enable free SSL:
- Point your domain’s nameservers to Cloudflare.
- Enable SSL in Cloudflare’s dashboard.
If you cannot open ports (no root access), use Cloudflare Tunnel:
https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/get-started/create-remote-tunnel/
This method exposes your application securely to the internet.
This project is built by the Chill guys team. Team member information and avatars are located in app/assets/avatars/.
