Advanced flood area detection using deep learning (UNet & UNet++) with a modern web interface.
- Dual Model Analysis: Compare UNet and UNet++ predictions
- Interactive UI: Modern, responsive design for desktop and mobile
- Real-time Processing: Get results in 1-3 seconds
- Comprehensive Metrics: Flood area percentage, pixel counts, and model agreement
- Visual Overlays: Color-coded segmentation masks
- Disagreement Analysis: See where models differ
- Production Ready: Deployed to Render with Docker
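The disagreement analysis boils down to a pixel-wise XOR of the two binary masks. A minimal sketch with hypothetical toy masks (the actual logic lives in the backend's postprocessing and may differ):

```python
import numpy as np

# Hypothetical binary masks (1 = flood) from the two models
unet_mask = np.array([[1, 1, 0],
                      [0, 1, 0]])
unetpp_mask = np.array([[1, 0, 0],
                        [0, 1, 1]])

# Pixels where the two models disagree
disagreement = np.logical_xor(unet_mask, unetpp_mask)

# Model agreement as a percentage of all pixels
agreement_percent = 100.0 * (1 - disagreement.mean())
print(f"Agreement: {agreement_percent:.2f}%")
```

The disagreement mask is what gets rendered as the color-coded "Disagreement" overlay in the UI.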
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   Next.js   │ ───> │   FastAPI   │ ───> │   PyTorch   │
│  Frontend   │      │   Backend   │      │   Models    │
└─────────────┘      └─────────────┘      └─────────────┘
Frontend:
- Next.js 15 (App Router)
- React 18
- Tailwind CSS
- TypeScript
Backend:
- FastAPI
- PyTorch 2.1 (CPU)
- Segmentation Models PyTorch
- Pillow, NumPy, OpenCV-Headless
Models:
- UNet (ResNet34 encoder)
- UNet++ (ResNet34 encoder)
- Trained on a flood segmentation dataset (290 images)
- Test IoU: 80.35% (UNet), 81.48% (UNet++)
- Python 3.11+
- Node.js 18+
- Clone repository:
git clone https://github.com/yourusername/flood-segmentation.git
cd flood-segmentation
- Setup Backend:
cd backend
# Create virtual environment
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Run backend (models will be loaded from ../Models/)
uvicorn app.main:app --reload --port 8000

Backend will run on http://localhost:8000
- Setup Frontend:
# In new terminal, from project root
npm install
# Create environment file
echo "NEXT_PUBLIC_API_URL=http://localhost:8000" > .env.local
# Run frontend
npm run dev

Frontend will run on http://localhost:3000
- Test:
Open http://localhost:3000 and upload a flood image!
flood-segmentation/
├── app/                     # Next.js pages (App Router)
│   ├── page.tsx             # Main upload page
│   ├── layout.tsx           # Root layout
│   └── globals.css          # Global styles
├── components/              # React components
│   ├── UploadZone.tsx       # Drag & drop upload
│   ├── ImagePreview.tsx     # Image preview
│   ├── LoadingState.tsx     # Loading animation
│   ├── ImageTabs.tsx        # Tabbed image viewer
│   ├── AnalysisPanel.tsx    # Statistics & insights
│   └── ResultsViewer.tsx    # Complete results view
├── lib/                     # Utilities
│   ├── api.ts               # API client
│   ├── types.ts             # TypeScript types
│   └── utils.ts             # Helper functions
├── backend/                 # FastAPI backend
│   ├── app/
│   │   ├── main.py          # FastAPI app
│   │   ├── models.py        # Model loading
│   │   ├── preprocessing.py # Image preprocessing
│   │   ├── postprocessing.py # Analysis generation
│   │   └── utils.py         # Helpers
│   ├── Dockerfile           # Docker configuration for Render
│   └── requirements.txt     # Python dependencies
├── Models/                  # Pre-trained model weights
│   ├── unet_baseline_best.pth
│   └── unetplus.pth
├── render.yaml              # Render deployment configuration
└── README.md                # This file
This application is configured for easy deployment to Render using Docker.
- Fork/clone this repository to your GitHub account
- Go to Render Dashboard
- Click New + → Blueprint
- Connect your GitHub repository
- Render will auto-detect render.yaml and configure everything
- Click Apply to deploy
- Go to Render Dashboard
- Click New + → Web Service
- Connect your GitHub repository
- Configure:
- Runtime: Docker
- Docker Build Context: . (root)
- Dockerfile Path: backend/Dockerfile
- Add environment variables:
  - CORS_ORIGINS: * (or your frontend URL)
  - PYTHONUNBUFFERED: 1
- Click Deploy
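A render.yaml along these lines drives the Blueprint flow described above. This is an illustrative sketch, not the repository's actual file; field values are assumptions:

```yaml
services:
  - type: web
    name: flood-segmentation-backend
    runtime: docker
    dockerfilePath: backend/Dockerfile
    dockerContext: .
    envVars:
      - key: CORS_ORIGINS
        value: "*"
      - key: PYTHONUNBUFFERED
        value: "1"
```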
| Variable | Description | Default |
|---|---|---|
| PORT | Server port (auto-set by Render) | 8000 |
| CORS_ORIGINS | Allowed CORS origins | * |
| PYTHONUNBUFFERED | Python output buffering | 1 |
| MODEL_PATH_UNET | Custom UNet model path | /Models/unet_baseline_best.pth |
| MODEL_PATH_UNETPP | Custom UNet++ model path | /Models/unetplus.pth |
From training on the flood segmentation dataset:
| Model | Test IoU | Test Dice | Pixel Accuracy |
|---|---|---|---|
| UNet | 80.35% | 89.06% | 91.11% |
| UNet++ | 81.48% | 89.77% | 91.58% |
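IoU and Dice in the table follow the standard definitions for binary masks, sketched here on toy arrays:

```python
import numpy as np

def iou(pred, target):
    # Intersection over Union: |A ∩ B| / |A ∪ B|
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

def dice(pred, target):
    # Dice coefficient: 2|A ∩ B| / (|A| + |B|)
    inter = np.logical_and(pred, target).sum()
    return 2 * inter / (pred.sum() + target.sum())

pred = np.array([[1, 1, 0, 0]])
target = np.array([[1, 0, 1, 0]])
print(iou(pred, target))   # 1/3
print(dice(pred, target))  # 0.5
```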
GET /health
Response:
{
"status": "healthy",
"models_loaded": true,
"device": "cpu"
}
POST /api/segment
Content-Type: multipart/form-data
Body:
file: <image-file>
Response:
{
"success": true,
"data": {
"unet": {
"flood_percent": 32.45,
"flood_pixels": 21234,
"total_pixels": 65536,
"summary": "..."
},
"unetpp": { ... },
"comparison": { ... },
"images": {
"original": "data:image/png;base64,...",
"unet_overlay": "data:image/png;base64,...",
"unetpp_overlay": "data:image/png;base64,...",
"disagreement": "data:image/png;base64,..."
}
}
}
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ using Next.js, FastAPI, and PyTorch