
Million clandestine gravesites over southeastern China’s land surfaces revealed by satellite images

System Requirements

To run this project, please ensure you have the necessary dependencies installed. All required packages and their versions are specified in the requirements.txt file.

You can install them using:

pip install -r requirements.txt

Installation Guide

Follow the steps below to set up the project environment and get started:

Clone the repository:

git clone https://github.com/geospatialgroup/grave.git
cd grave

Create a virtual environment (optional):

python -m venv venv
source venv/bin/activate  # On Windows use: venv\Scripts\activate

Install the required dependencies:

pip install -r requirements.txt
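
To quickly verify the environment, you can try importing the core deep-learning dependency (assuming PyTorch is among the packages pinned in requirements.txt):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"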

Installation Time

The installation process usually takes around 5 to 15 minutes, depending on your internet speed and system performance. This includes downloading dependencies listed in requirements.txt and setting up the environment.

For GPU-enabled systems, installing CUDA and the related drivers may add extra time if they are not already present.

Demo and Instructions for Use

🔧 Training Options

Essential Parameters

| Parameter | Description | Default |
|---|---|---|
| --data | Dataset config file | coco.yaml |
| --cfg | Model configuration file | yolo.yaml |
| --weights | Pretrained weights path | '' |
| --epochs | Number of training epochs | 100 |
| --batch-size | Total batch size | 16 |
| --imgsz | Input image size | 640 |
| --device | Training device (cpu/cuda) | '' (auto) |
| --workers | Data loading workers | 8 |

Advanced Options

| Parameter | Description |
|---|---|
| --resume | Resume from last.pt |
| --optimizer | Optimizer (SGD/Adam/AdamW/LION) |
| --cos-lr | Use cosine LR scheduler |
| --label-smoothing | Label smoothing epsilon |
| --freeze | Freeze layers (e.g., --freeze 10) |
| --sync-bn | Use SyncBatchNorm (DDP mode) |
| --multi-scale | Vary image sizes (+/- 50%) |
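
For example, a single-GPU training run combining several of these options might look like the following (all paths and values are illustrative, not prescriptive):

python train.py --data data/custom.yaml --cfg yolo.yaml --weights yolov9.pt --epochs 100 --batch-size 16 --imgsz 640 --device 0 --optimizer AdamW --cos-lr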

⚙️ Hyperparameter Configuration

Default hyperparameters are in data/hyps/hyp.scratch-high.yaml. Key parameters:

# Optimizer
lr0: 0.01       # Initial learning rate
lrf: 0.01       # Final learning rate (lr0 * lrf)
momentum: 0.937 # SGD momentum/Adam beta1
weight_decay: 0.0005  # Optimizer weight decay

# Loss
box: 0.05       # Box loss gain
cls: 0.5        # Class loss gain
obj: 1.0        # Object loss gain

# Augmentation
hsv_h: 0.015    # Image HSV-Hue augmentation
hsv_s: 0.7      # Image HSV-Saturation augmentation
hsv_v: 0.4      # Image HSV-Value augmentation
degrees: 0.0    # Image rotation (+/- deg)
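
To train with a modified copy of this file, point the training script at it via the --hyp flag (assumed here, following the standard YOLO training interface; the path is illustrative):

python train.py --hyp data/hyps/hyp.custom.yaml --data data/custom.yaml --weights yolov9.pt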

🔄 Multi-GPU Training

Data Parallel (DP)

python train.py --device 0,1  # Uses 2 GPUs

Distributed Data Parallel (DDP)

python -m torch.distributed.run --nproc_per_node 2 train.py --device 0,1

🧪 Hyperparameter Evolution

To evolve hyperparameters for 300 generations:

python train.py --evolve 300

This will:

  1. Create evolve.csv with optimization results
  2. Generate hyp_evolve.yaml with optimized parameters
  3. Create evolution plots
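
The evolved parameters can then be reused for a regular training run (assuming the same --hyp flag as above; the exact location of hyp_evolve.yaml depends on where the evolution run saved it):

python train.py --hyp hyp_evolve.yaml --data data/custom.yaml --epochs 100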

💾 Model Saving

Checkpoints are saved to runs/train/exp/weights/:

  • best.pt: Best model (highest mAP)
  • last.pt: Last model
  • epoch*.pt: Periodic saves (if --save-period > 0)

Prediction

Basic Detection

python detect.py --weights runs/train/yolov9-e3/weights/best.pt --source data/images --img-size 512

Webcam Detection

python detect.py --source 0 --view-img

🛠 Usage

Basic Commands
| Parameter | Description |
|---|---|
| --weights | Path to model weights |
| --source | Input source (file/dir/URL/webcam) |
| --img-size | Inference size (default: 512) |
| --conf-thres | Confidence threshold (default: 0.53) |
| --view-img | Display results |
| --save-txt | Save results as text files |

Examples

Detect images in a folder:

python detect.py --source data/images --weights best.pt

Real-time webcam detection:

python detect.py --source 0 --view-img --weights best.pt

Video detection with custom confidence:

python detect.py --source input.mp4 --conf-thres 0.6 --weights best.pt

⚙ Configuration

Modify data.yaml to configure:

  • Class names
  • Dataset paths
  • Training parameters
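
A minimal sketch of what data.yaml might look like for this project (the paths and class name are illustrative assumptions, not the repository's actual config):

# Hypothetical data.yaml; adjust paths to your dataset layout
path: ../datasets/graves   # dataset root directory
train: images/train        # training images (relative to path)
val: images/val            # validation images (relative to path)
nc: 1                      # number of classes
names: ['grave']           # class names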

📊 Results

Results are saved to runs/detect/exp by default, containing:

  • Annotated images/videos
  • Text files with detection coordinates
  • Cropped objects (if enabled)

Verification

Basic Validation

python val.py --weights yolov9.pt --data coco.yaml --batch-size 32 --device 0

Speed Benchmark

python val.py --task speed --data coco.yaml --weights yolov9.pt --batch-size 1

🛠 Basic Usage

Command Structure
python val.py --weights [MODEL_PATH] --data [DATASET_YAML] --batch-size [BATCH_SIZE] --device [DEVICE]

Common Examples

Validate on COCO dataset:

python val.py --weights yolov9.pt --data data/coco.yaml --img 640 --batch 32

Test model speed:

python val.py --task speed --data data/coco.yaml --weights yolov9.pt --img 640 --batch 1

Validate with custom dataset:

python val.py --weights best.pt --data data/custom.yaml --img 512

⚙ Advanced Options

Key Parameters

| Parameter | Default | Description |
|---|---|---|
| --weights | yolo.pt | Model weights path |
| --data | data/coco.yaml | Dataset configuration file |
| --batch-size | 32 | Validation batch size |
| --imgsz | 640 | Inference size (pixels) |
| --conf-thres | 0.001 | Confidence threshold |
| --iou-thres | 0.7 | NMS IoU threshold |
| --task | val | Task to run (val/test/speed/study) |
| --device | cpu | Device to use (cpu or 0,1,2,3) |
| --save-json | False | Save COCO-JSON results |
| --save-txt | False | Save results as TXT files |

Full Configuration Example

python val.py \
    --weights runs/train/exp/weights/best.pt \
    --data data/custom.yaml \
    --batch-size 16 \
    --imgsz 512 \
    --conf-thres 0.01 \
    --iou-thres 0.6 \
    --device 0 \
    --save-txt \
    --save-json \
    --name custom_val

📊 Output Interpretation

Results are saved to runs/val/exp by default and include:

  1. Metrics Output:

    Class      Images  Instances      P      R   mAP50  mAP50-95
    all         5000      36380   0.55   0.49    0.51      0.35
    person      5000       4692   0.61   0.55    0.58      0.41
    car         5000       4372   0.72   0.63    0.67      0.48
    
  2. Output Files:

    • labels/: Per-image detection results (TXT format)
    • val_batchX_pred.jpg: Sample detection visualizations
    • confusion_matrix.png: Classification performance
    • predictions.json: COCO-format results (if enabled)
  3. Speed Metrics:

    Speed: 2.1ms pre-process, 5.3ms inference, 1.2ms NMS per image at shape (32, 3, 640, 640)
    

Analysis Overview

The result-analysis code is written as MATLAB scripts (.m files).

Among them, calculate_density.m computes the density of graves on each 1 km grid cell; grave_county.m uses linear regression to analyze the relationship between county-level grave density and the independent variables; and lgb_.py, written in Python, analyzes the nonlinear relationship between grave density on the 1 km grid and the independent variables.
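
A Python analogue of what calculate_density.m computes (the original is a MATLAB script; the coordinate names, projected-meter units, and synthetic points below are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical grave centroids in a projected CRS, units of meters
x = rng.uniform(0, 50_000, size=10_000)
y = rng.uniform(0, 50_000, size=10_000)

cell = 1_000  # 1 km grid spacing
x_edges = np.arange(x.min(), x.max() + cell, cell)
y_edges = np.arange(y.min(), y.max() + cell, cell)

# Count graves per cell; each cell is 1 km^2, so counts equal density (graves/km^2)
density, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
print(density.shape, density.max())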

The software versions used in the experiments are Python 3.12.4 and MATLAB R2018a. The Python packages required to run lgb_.py include lightgbm, scikit-learn, scipy, and hdf5storage. Please replace the absolute data paths in the code to ensure the scripts run properly.
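
A minimal, hypothetical sketch of the lgb_.py workflow: fitting a LightGBM regressor that relates 1 km grave density to the independent variables. The .mat file name and variable keys are illustrative assumptions, not the repository's actual data layout:

import hdf5storage
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Replace with the absolute path to your .mat file, as noted above
data = hdf5storage.loadmat('grid_data.mat')
X = np.asarray(data['predictors'])        # independent variables, shape (n, p)
y = np.asarray(data['density']).ravel()   # grave density per 1 km grid cell

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train)
print('Held-out R^2:', r2_score(y_test, model.predict(X_test)))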

Visualization Overview

All visual figures used in this project are organized under the Figures/ directory and divided into five thematic parts.

Tools Used

The following software tools were used for figure production and post-processing:

| Tool | Version | Purpose |
|---|---|---|
| ArcGIS | 10.3 / 10.7 / 10.8 | Spatial data visualization, shapefile editing |
| Origin | 2022A / 2025 | Statistical plotting (bar charts, line graphs, etc.) |
| Adobe Illustrator | 2022 | Figure refinement, layout adjustment, publication-ready vector export |

File Description

Figure 1/ through Figure 5/ correspond to Figures 1 to 5 in the main text; each folder contains the code and data used to generate the corresponding figure.
