A comprehensive, production-ready parking monitoring and management system powered by Artificial Intelligence (Computer Vision) for real-time parking spot detection and Automatic License Plate Recognition (ALPR). The solution integrates IoT hardware (ESP32), a high-performance FastAPI backend, a modern React web dashboard, and a cross-platform mobile application built with React Native/Expo.
- Features
- System Architecture
- Technology Stack
- Prerequisites
- Installation
- Configuration (.env)
- Running the System
- API Documentation
- IoT Integration (ESP32)
- Project Structure
- Troubleshooting
- Real-Time Spot Detection: Uses a trained CNN model (ResNet/Custom) to classify parking spots as "Available" or "Occupied" from live video feeds (IP/RTSP cameras or video files).
- Automatic License Plate Recognition (ALPR): Integration with `fast-alpr` (YOLO-based) for automated plate reading at entry and exit gates.
- Reservation Validation: Automatically verifies whether a vehicle parked in a reserved spot holds a valid authorization.
- Access Control: Automatic logging of vehicle entries and exits via gate cameras.
- Session Management: Automated calculation of parking duration and fees.
- Reservation System: Allows users to reserve specific spots for a given time period.
- Payments: Supports payment processing via Card, MBWay, or Cash.
- Web Dashboard (Frontend): A modern React-based interface for real-time visualization of parking lot status.
- Admin Panel: Spot management, access log viewing, and financial statistics.
- WebSocket: Instant state updates pushed to clients without page refresh.
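The session-fee logic above (parking duration × hourly rate) can be sketched as follows. This is an illustrative assumption, not the backend's actual implementation: the `compute_fee` helper and the per-minute rounding are invented here, and the €1.50 default mirrors the `PARKING_RATE_PER_HOUR` setting described later.

```python
from datetime import datetime, timedelta

def compute_fee(entry_time: datetime, exit_time: datetime,
                rate_per_hour: float = 1.50) -> float:
    """Charge proportionally to parked time, rounded up to the next full minute."""
    seconds = max((exit_time - entry_time).total_seconds(), 0)
    minutes = -(-int(seconds) // 60)  # ceiling division
    return round(rate_per_hour * minutes / 60, 2)

entry = datetime(2024, 5, 1, 10, 0)
print(compute_fee(entry, entry + timedelta(hours=2)))     # 3.0  (2 h at 1.50/h)
print(compute_fee(entry, entry + timedelta(minutes=90)))  # 2.25 (1.5 h at 1.50/h)
```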
The mobile app provides a full-featured experience for end users:
- Login / Registration: User authentication.
- Dashboard: Quick overview of parking statistics (available/occupied spots).
- Reservations: Reserve specific spots by zone and time period.
- History: View past sessions and payment records.
- Payments: Process payments for active parking sessions.
The system consists of four main modules that communicate with each other:
```mermaid
graph TD
    subgraph IoT_Hardware
        ESP32_In[ESP32 Entry Gate] -->|POST Image| API
        ESP32_Out[ESP32 Exit Gate] -->|POST Image| API
        Camera[ESP32 Lot Camera] -->|RTSP Stream| CV_Engine
    end
    subgraph Backend_Server
        API[FastAPI Server]
        CV_Engine[Computer Vision Engine]
        WS[WebSocket Manager]
        CV_Engine -->|Update State| WS
        API -->|CRUD| DB[(PostgreSQL)]
        API -->|Upload| Storage[Supabase Storage]
    end
    subgraph Frontend_Client
        Web[React Web App]
        Web -->|HTTP| API
        Web -->|WS| WS
    end
    subgraph Mobile_Client
        Mobile[React Native App]
        Mobile -->|HTTP| API
    end
```
- Python 3.13
- FastAPI: High-performance async web framework.
- Uvicorn: ASGI server.
- AsyncPG: Asynchronous PostgreSQL driver.
- PyTorch & Torchvision: Deep Learning model inference.
- OpenCV: Image processing pipeline.
- Fast-ALPR: License plate detection and OCR.
- React: UI component library.
- Vite: Fast build tool and dev server.
- TailwindCSS (via index.css): Utility-first styling.
- Axios: HTTP client.
- React Native: Cross-platform mobile framework.
- Expo SDK ~54: Development platform and toolchain.
- expo-haptics: Haptic feedback.
- expo-linear-gradient: Gradient support for splash screen.
- react-native-toast-message: Toast notifications.
- @react-native-async-storage/async-storage: Local data persistence.
- PostgreSQL: Relational database.
- Supabase: Image storage (optional but recommended).
- Docker (Optional): Containerization support.
Before getting started, ensure you have the following installed:
- Python 3.10+ (3.13 recommended)
- Node.js 18+ and npm
- PostgreSQL 13+
- Git
- Expo CLI (for mobile development): `npm install -g expo-cli`
- Expo Go app on your mobile device (iOS/Android) for testing
- Clone the repository:

  ```bash
  git clone https://github.com/AlexPT2k22/AI_SE2.git
  cd AI_SE2
  ```

- Create and activate a virtual environment:

  ```bash
  # Windows
  python -m venv .venv
  .\.venv\Scripts\activate

  # Linux/Mac
  python3 -m venv .venv
  source .venv/bin/activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

  Note: If you encounter issues with `fast-alpr` or `torch`, refer to the official documentation of those libraries for platform-specific installation instructions (CUDA vs CPU).
- Navigate to the frontend directory:

  ```bash
  cd frontend
  ```

- Install Node dependencies:

  ```bash
  npm install
  ```
- Navigate to the mobile directory:

  ```bash
  cd mobile
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Configure the API URL: in the `App.js` file, update the `API_URL` constant to point to your backend server:

  ```javascript
  const API_URL = 'http://YOUR_IP:8000';
  ```

  Use your machine's local network IP address (e.g., `192.168.1.100`) instead of `localhost` when testing on physical devices.
- Create the database in PostgreSQL:

  ```sql
  CREATE DATABASE aiparking;
  ```

- Run the table creation script: use the `tables.txt` file (SQL content) to create the required tables (`parking_sessions`, `parking_payments`, `parking_web_users`, `parking_manual_reservations`):

  ```bash
  psql -d aiparking -f tables.txt
  ```
Create a `.env` file in the project root (`AI_SE2/`) with the following variables:

| Variable | Description | Default / Example |
|---|---|---|
| **DATABASE** | | |
| `DATABASE_URL` | PostgreSQL connection URL | `postgresql://user:pass@localhost:5432/aiparking` |
| **SUPABASE (Optional)** | Image storage | |
| `SUPABASE_URL` | Supabase project URL | `https://xyz.supabase.co` |
| `SUPABASE_KEY` | API key (Service Role/Anon) | `eyJ...` |
| `SUPABASE_BUCKET` | Bucket name | `parking-images` |
| `SUPABASE_PUBLIC_BUCKET` | Whether the bucket is public | `false` |
| **GENERAL SETTINGS** | | |
| `VIDEO_SOURCE` | Video file path or RTSP URL | `video.mp4`, `rtsp://...`, or `0` (webcam) |
| `SPOTS_FILE` | JSON file with spot coordinates | `parking_spots.json` |
| `MODEL_FILE` | Trained model file (`.pth`) | `spot_classifier.pth` |
| `DEVICE` | Inference device | `auto` (uses CUDA if available), `cpu`, `cuda` |
| `SPOT_THRESHOLD` | Minimum confidence for occupancy | `0.7` |
| `PARKING_RATE_PER_HOUR` | Hourly rate (€) | `1.50` |
| `SESSION_SECRET` | Secret key for HTTP sessions | `dev-secret-change-me` |
| **ALPR (License Plates)** | | |
| `ENABLE_ALPR` | Enable plate recognition | `true` |
| `ALPR_WORKERS` | ALPR processing threads | `1` |
| `ALPR_DETECTOR_MODEL` | Detection model | `yolo-v9-s-608-license-plate-end2end` |
| `ALPR_OCR_MODEL` | OCR model | `cct-s-v1-global-model` |
The `parking_spots.json` file defines the polygon coordinates for each parking spot. It can be generated using the `mark_parking_spots.py` helper script.

```json
{
  "reference_size": {"width": 1920, "height": 1080},
  "spots": [
    {
      "name": "A1",
      "points": [{"x": 100, "y": 200}, ...],
      "reserved": false,
      "authorized_plates": []
    }
  ]
}
```
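The `reference_size` field lets the backend rescale polygons when the live frame resolution differs from the one the spots were marked on. A minimal sketch of that rescaling — the `scale_spots` helper and the inline config dict are illustrative, not the project's actual code:

```python
def scale_spots(config: dict, frame_w: int, frame_h: int) -> list:
    """Rescale each spot polygon from the reference resolution to the frame's."""
    ref = config["reference_size"]
    sx, sy = frame_w / ref["width"], frame_h / ref["height"]
    scaled = []
    for spot in config["spots"]:
        points = [(round(p["x"] * sx), round(p["y"] * sy)) for p in spot["points"]]
        scaled.append({"name": spot["name"], "points": points})
    return scaled

config = {
    "reference_size": {"width": 1920, "height": 1080},
    "spots": [{"name": "A1",
               "points": [{"x": 100, "y": 200}, {"x": 300, "y": 200},
                          {"x": 300, "y": 400}, {"x": 100, "y": 400}],
               "reserved": False, "authorized_plates": []}],
}
print(scale_spots(config, 1280, 720))  # each coordinate scaled by 2/3
```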
⚠️ IMPORTANT: Before running the system, you must configure the parking spots for your specific lot. The included `parking_spots.json` file is only a sample and will not work with your camera or video feed.
If using an ESP32-CAM, capture a reference frame:
```bash
# First, update the camera IP address in the script (ESP32_URL)
python capture_esp32_frame.py
```

This generates the `esp32_reference_frame.jpg` file.
If using a video file, you can skip this step and use the video directly in the next step.
Use the interactive tool to draw polygons for each parking spot:
```bash
# From an image (ESP32 capture or screenshot)
python mark_parking_spots.py --source esp32_reference_frame.jpg --output parking_spots.json --show

# From a video file (uses the first frame)
python mark_parking_spots.py --source video.mp4 --output parking_spots.json --show

# From a specific video frame
python mark_parking_spots.py --source video.mp4 --frame 100 --output parking_spots.json --show
```

Interface Controls:
| Key | Action |
|---|---|
| Left Click | Add a point (4 points = 1 spot) |
| Right Click | Remove the last point |
| Enter | Confirm the current spot and move to the next |
| S | Save the JSON file |
| Q / ESC | Quit |
Visualize the spots overlaid on the video to confirm they are correctly positioned:
```bash
python visualize_spots_on_video.py --video video.mp4 --spots parking_spots.json
```

Once the spots are configured, start the backend as described in the Running the System section.
It is recommended to open three terminal windows:
```bash
# From the project root (with the virtual environment activated)
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

The server will start at `http://localhost:8000`. Swagger API documentation is available at `/docs`.
```bash
# From the frontend/ directory
npm run dev
```

The web application will be available (typically) at `http://localhost:5173`.
```bash
# From the mobile/ directory
npm start
# or
npx expo start
```

Scan the QR code with the Expo Go app on your phone, or press `a` to open the Android emulator / `i` for the iOS simulator.
Key available endpoints:
- `GET /parking`: Current status of all parking spots (JSON).
- `GET /video_feed`: MJPEG video stream with real-time annotations.
- `WS /ws`: WebSocket for spot state change events.

- `POST /api/entry`: Registers a vehicle entry. Accepts `camera_id` and `image` (file). Returns `session_id`.
- `POST /api/exit`: Registers a vehicle exit. Accepts `camera_id` and `image` (file). Calculates the amount due.

- `GET /api/reservations`: Lists active reservations.
- `POST /api/reservations`: Creates a new reservation (requires authentication).
- `DELETE /api/reservations/{spot}`: Cancels a reservation.

- `POST /api/payments`: Records a payment for a session.
- `GET /api/sessions`: Retrieves session history.
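The exact response schema of `GET /parking` is defined by the backend; assuming it maps spot names to per-spot state objects, a client could summarize availability as below. The `summarize` helper and the sample payload are assumptions for illustration, not the documented schema.

```python
def summarize(status: dict) -> dict:
    """Count available vs. occupied spots from a /parking-style payload."""
    occupied = sum(1 for spot in status.values() if spot["occupied"])
    return {"total": len(status),
            "occupied": occupied,
            "available": len(status) - occupied}

# Hypothetical payload shape, assumed for this sketch
sample = {
    "spot01": {"occupied": True, "reserved": False},
    "spot02": {"occupied": False, "reserved": True},
    "spot03": {"occupied": False, "reserved": False},
}
print(summarize(sample))  # {'total': 3, 'occupied': 1, 'available': 2}
```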
The system expects IoT devices (gate cameras) to send HTTP POST multipart/form-data requests to the entry and exit endpoints.
Example workflow:
- A vehicle approaches the gate.
- The ESP32 captures an image.
- The ESP32 sends a POST request to `http://SERVER_IP:8000/api/entry` with the image.
- The server processes ALPR, creates a session, and returns a success response.
- The ESP32 opens the barrier.
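From a desktop machine you can replay step 3 of this workflow without any hardware. A hedged sketch using `requests`: the `build_entry_request` helper below only assembles the request pieces; the `camera_id` and `image` field names come from the endpoint description above, while everything else (camera name, file name) is an assumption.

```python
from pathlib import Path

def build_entry_request(server: str, camera_id: str, image_path: str):
    """Assemble the multipart POST an entry-gate ESP32 would send."""
    url = f"{server}/api/entry"
    data = {"camera_id": camera_id}
    # Placeholder bytes stand in for the captured JPEG frame
    files = {"image": (Path(image_path).name, b"<jpeg bytes>", "image/jpeg")}
    return url, data, files

url, data, files = build_entry_request(
    "http://localhost:8000", "gate_entry", "frame.jpg")
print(url)  # http://localhost:8000/api/entry

# To actually send it (requires a running backend):
# import requests
# with open("frame.jpg", "rb") as f:
#     resp = requests.post(url, data=data, files={"image": f}, timeout=10)
# print(resp.json())  # expected to include a session_id
```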
Endpoints to manually override the state of a parking spot (useful for testing without a camera):
```bash
# Force a spot as AVAILABLE
curl -X POST http://localhost:8000/api/debug/spot -H "Content-Type: application/json" -d "{\"spot\": \"spot01\", \"occupied\": false}"

# Force a spot as OCCUPIED
curl -X POST http://localhost:8000/api/debug/spot -H "Content-Type: application/json" -d "{\"spot\": \"spot01\", \"occupied\": true}"

# Reset a spot to automatic AI detection
curl -X DELETE http://localhost:8000/api/debug/spot/spot01
```

Responses:

```json
// POST - Success
{"message": "Spot spot01 set as available", "spot": "spot01", "occupied": false}

// DELETE - Success
{"message": "Spot spot01 reset to automatic detection"}
```

```bash
# Customize spot label prefix and starting index
python mark_parking_spots.py --source frame.jpg --output parking_spots.json --label-prefix "spot" --start-index 1
# Result: spot01, spot02, spot03...

# Export annotated video (without preview window)
python visualize_spots_on_video.py --video video.mp4 --spots parking_spots.json --output runs/video_annotated.mp4 --no-preview
```

```
AI_SE2/
├── frontend/              # React/Vite web frontend source code
│   ├── src/
│   │   ├── components/    # Reusable UI components
│   │   ├── pages/         # Application pages
│   │   └── styles/        # CSS stylesheets
│   ├── package.json
│   └── vite.config.js
├── mobile/                # React Native/Expo mobile app source code
│   ├── App.js             # Main application (single-file)
│   ├── package.json
│   └── app.json           # Expo configuration
├── esp32_firmware/        # Arduino code for ESP32 devices
│   ├── center_camera/     # Lot monitoring camera
│   └── entry_gate/        # Entry/exit gate cameras
├── main.py                # Main application (FastAPI)
├── alpr.py                # ALPR wrapper module
├── spot_classifier.py     # PyTorch CNN model definition
├── supabaseStorage.py     # Supabase upload service
├── requirements.txt       # Python dependencies
├── parking_spots.json     # Parking spot configuration
├── tables.txt             # Database schema
├── .env                   # Environment variables
└── ...
```
- `ImportError: fast_alpr`: Ensure `fast-alpr` is installed correctly. On Windows, additional steps or WSL2 may be required if the compiled C++ libraries are not available.
- Database connection error: Verify that the PostgreSQL service is running and that the `DATABASE_URL` in your `.env` file is correct.
- Video not opening: Check the `VIDEO_SOURCE` path. For webcam, try index `0` or `1`. For files, ensure the path is absolute or relative to the project root.
- Frontend cannot connect to backend: Ensure the frontend is configured to point to `localhost:8000` (via the proxy in `vite.config.js` or a VITE environment variable).
- App cannot connect to backend: Use your machine's local network IP (e.g., `192.168.1.100`) instead of `localhost`. Ensure both your phone and computer are on the same Wi-Fi network.
- Expo Go not loading: Make sure your firewall is not blocking the Expo ports (19000, 19001, 8081).
- Haptics not working: Haptic feedback only works on physical devices, not on emulators or simulators.
