The Replicable AI for Microplanning (Ramp) deep learning model is a semantic segmentation model that detects buildings in satellite imagery and delineates their footprints in low- and middle-income countries (LMICs). It enables in-country users to build their own deep learning models for their regions of interest. The architecture and approach were inspired by the Eff-UNet model outlined in a CVPR 2020 paper.
MLHub model id: `model_ramp_baseline_v1`. Browse on Radiant MLHub.
Please review the model architecture, license, applicable spatial and temporal extents and other details in the model documentation.
| Requirement | Value | Documentation |
|---|---|---|
| RAM | 4 GB RAM | View Ramp model card |
First clone this Git repository. Please note: this repository uses Git Large File Support (LFS) to include the model checkpoint file. Either install git lfs support for your git client, or use the official Mac or Windows GitHub client to clone this repository.
```shell
git clone https://github.com/radiantearth/model_ramp_baseline.git
cd model_ramp_baseline/
```
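Because the checkpoint is stored in Git LFS, a clone made without LFS support silently leaves a small text pointer file in place of the real checkpoint. One way to spot this, sketched here on a synthetic pointer file at a hypothetical path rather than the real checkpoint:

```shell
# A Git LFS pointer is a tiny text file that starts with this exact header.
PTR="/tmp/ramp_lfs_pointer_demo.txt"
printf 'version https://git-lfs.github.com/spec/v1\noid sha256:abc123\nsize 12345\n' > "$PTR"

# For the actual repository you would inspect the checkpoint files instead;
# seeing "version" means the LFS content was never fetched and you should
# install git-lfs and run `git lfs pull` inside the clone.
head -c 7 "$PTR"
```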
After cloning the model repository, you can use the Docker Compose runtime files as described below.
Please note: these command-line examples were tested on Linux and MacOS. Windows and WSL users may need to substitute appropriate commands, especially for setting environment variables.
Pull pre-built image from Docker Hub (recommended):
```shell
# cpu
docker pull docker.io/radiantearth/model_ramp_baseline:1

# optional, for NVIDIA gpu
docker pull docker.io/radiantearth/model_ramp_baseline:1-gpu
```
Or build image from source:
```shell
# cpu
docker build -t radiantearth/model_ramp_baseline:1 -f Dockerfile_cpu .

# for NVIDIA gpu
docker build -t radiantearth/model_ramp_baseline:1-gpu -f Dockerfile_gpu .
```
Prepare your input and output data folders. The `data/` folder in this repository contains some placeholder files to guide you.

The `data/` folder must contain:
- `input/chips/`: imagery chips for inferencing. For example, Maxar ODP imagery.
  - File name:
  - File format: GeoTIFF, 256x256
  - Coordinate Reference System: WGS84, EPSG:4326
  - Bands: 3 bands per file:
    - Band 1: Type=Byte, ColorInterp=Red
    - Band 2: Type=Byte, ColorInterp=Green
    - Band 3: Type=Byte, ColorInterp=Blue
- `input/checkpoint.tf`: the model checkpoint folder in TensorFlow format. Please note: the model checkpoint is included in this repository.
- `output/`: the folder where the model will write inferencing results.
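Before launching the container, it can help to confirm the chips folder is actually populated. A minimal sketch, using a throwaway demo directory and hypothetical chip names (real chips go in `data/input/chips/`):

```shell
CHIPS_DIR="/tmp/ramp_chips_demo"          # stand-in for data/input/chips/
mkdir -p "$CHIPS_DIR"
touch "$CHIPS_DIR/chip_001.tif" "$CHIPS_DIR/chip_002.tif"   # hypothetical names

# Count the GeoTIFF chips staged for inference; zero means nothing to predict on.
COUNT=$(find "$CHIPS_DIR" -name '*.tif' | wc -l | tr -d ' ')
echo "found $COUNT chips"
```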
Set the `INPUT_DATA` and `OUTPUT_DATA` environment variables corresponding with your input and output folders. These commands will vary depending on operating system and command-line shell:
```shell
# change paths to your actual input and output folders
export INPUT_DATA="/home/my_user/model_ramp_baseline/data/input/"
export OUTPUT_DATA="/home/my_user/model_ramp_baseline/data/output/"
```
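Once the variables are set, a quick sanity check catches typos before Docker mounts a non-existent path. A sketch using demo paths (substitute your real folders):

```shell
INPUT_DATA="/tmp/ramp_env_demo/input"     # demo paths, not the real ones
OUTPUT_DATA="/tmp/ramp_env_demo/output"
mkdir -p "$INPUT_DATA" "$OUTPUT_DATA"
export INPUT_DATA OUTPUT_DATA

# Both folders must exist; otherwise the container sees empty or missing mounts.
[ -d "$INPUT_DATA" ] && [ -d "$OUTPUT_DATA" ] && echo "folders ok"
```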
Run the appropriate Docker Compose command for your system. Please note: you may need to use `docker compose` or `docker-compose` depending on your system.
```shell
# cpu
docker compose up model_ramp_baseline_v1_cpu

# NVIDIA gpu driver
docker compose up model_ramp_baseline_v1_gpu
```
Wait for the `docker compose` process to finish running, then inspect the `OUTPUT_DATA` folder for results.
Please review the model output format and other technical details in the model documentation.