To run this project, ensure the necessary dependencies are installed. All required packages and their versions are specified in the `requirements.txt` file. Install them with:

```bash
pip install -r requirements.txt
```
Follow the steps below to set up the project environment and get started:
```bash
git clone https://github.com/geospatialgroup/grave.git
cd grave
python -m venv venv
source venv/bin/activate  # On Windows use: venv\Scripts\activate
pip install -r requirements.txt
```
The installation process usually takes around 5 to 15 minutes, depending on your internet speed and system performance. This includes downloading dependencies listed in requirements.txt and setting up the environment.
For GPU-enabled systems, installation of CUDA and related drivers may add extra time if not already installed.
| Parameter | Description | Default |
|---|---|---|
| `--data` | Dataset config file | `coco.yaml` |
| `--cfg` | Model configuration file | `yolo.yaml` |
| `--weights` | Pretrained weights path | `''` |
| `--epochs` | Number of training epochs | 100 |
| `--batch-size` | Total batch size | 16 |
| `--imgsz` | Input image size | 640 |
| `--device` | Training device (cpu/cuda) | `''` (auto) |
| `--workers` | Data loading workers | 8 |
| Parameter | Description |
|---|---|
| `--resume` | Resume from last.pt |
| `--optimizer` | Optimizer (SGD/Adam/AdamW/LION) |
| `--cos-lr` | Use cosine LR scheduler |
| `--label-smoothing` | Label smoothing epsilon |
| `--freeze` | Freeze layers (e.g., `--freeze 10`) |
| `--sync-bn` | Use SyncBatchNorm (DDP mode) |
| `--multi-scale` | Vary image sizes (+/- 50%) |
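The flags in the tables above can be mirrored with a minimal `argparse` sketch. This is illustrative only; the project's actual parser in `train.py` may define the options differently.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of the training CLI described above (defaults taken from the table)."""
    p = argparse.ArgumentParser(description="Training options (sketch)")
    p.add_argument("--data", default="coco.yaml", help="Dataset config file")
    p.add_argument("--cfg", default="yolo.yaml", help="Model configuration file")
    p.add_argument("--weights", default="", help="Pretrained weights path")
    p.add_argument("--epochs", type=int, default=100, help="Number of training epochs")
    p.add_argument("--batch-size", type=int, default=16, help="Total batch size")
    p.add_argument("--imgsz", type=int, default=640, help="Input image size")
    p.add_argument("--device", default="", help="Training device (cpu/cuda, '' = auto)")
    p.add_argument("--workers", type=int, default=8, help="Data loading workers")
    p.add_argument("--resume", action="store_true", help="Resume from last.pt")
    p.add_argument("--optimizer", default="SGD", choices=["SGD", "Adam", "AdamW", "LION"])
    p.add_argument("--cos-lr", action="store_true", help="Use cosine LR scheduler")
    return p

args = build_parser().parse_args(["--epochs", "50", "--device", "0,1"])
print(args.epochs, args.device)  # 50 0,1
```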
Default hyperparameters are in `data/hyps/hyp.scratch-high.yaml`. Key parameters:

```yaml
# Optimizer
lr0: 0.01            # Initial learning rate
lrf: 0.01            # Final learning rate (lr0 * lrf)
momentum: 0.937      # SGD momentum/Adam beta1
weight_decay: 0.0005 # Optimizer weight decay

# Loss
box: 0.05 # Box loss gain
cls: 0.5  # Class loss gain
obj: 1.0  # Object loss gain

# Augmentation
hsv_h: 0.015 # Image HSV-Hue augmentation
hsv_s: 0.7   # Image HSV-Saturation augmentation
hsv_v: 0.4   # Image HSV-Value augmentation
degrees: 0.0 # Image rotation (+/- deg)
```

To train on multiple GPUs:

```bash
python train.py --device 0,1  # Uses 2 GPUs
python -m torch.distributed.run --nproc_per_node 2 train.py --device 0,1
```
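The `box`, `cls`, and `obj` gains in the hyperparameter file scale the three loss terms before they are summed; a minimal sketch of that weighting (the per-term loss values here are made up for illustration):

```python
def total_loss(box_loss: float, cls_loss: float, obj_loss: float,
               box: float = 0.05, cls: float = 0.5, obj: float = 1.0) -> float:
    """Combine the three detection loss terms using the gains from the hyperparameter file."""
    return box * box_loss + cls * cls_loss + obj * obj_loss

# Hypothetical raw loss values for one batch:
print(total_loss(2.0, 1.0, 0.5))  # 0.05*2.0 + 0.5*1.0 + 1.0*0.5 = 1.1
```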
To evolve hyperparameters for 300 generations:

```bash
python train.py --evolve 300
```

This will:
- Create `evolve.csv` with optimization results
- Generate `hyp_evolve.yaml` with optimized parameters
- Create evolution plots
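Once evolution finishes, the best generation can be pulled out of `evolve.csv`. A sketch assuming the file has a `fitness` column; the actual column names written by the tool may differ:

```python
import csv
import io

# Tiny synthetic stand-in for evolve.csv (real columns may differ).
evolve_csv = """generation,fitness,lr0,momentum
0,0.412,0.010,0.937
1,0.438,0.008,0.920
2,0.431,0.012,0.941
"""

rows = list(csv.DictReader(io.StringIO(evolve_csv)))
best = max(rows, key=lambda r: float(r["fitness"]))
print(best["generation"], best["fitness"])  # 1 0.438
```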
Checkpoints are saved to `runs/train/exp/weights/`:
- `best.pt`: Best model (highest mAP)
- `last.pt`: Last model
- `epoch*.pt`: Periodic saves (if `--save-period` > 0)
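When `--save-period` produces several `epoch*.pt` files, the most recent one can be picked by parsing the epoch number. A small sketch over hypothetical file names:

```python
import re

# Hypothetical contents of runs/train/exp/weights/
files = ["best.pt", "last.pt", "epoch5.pt", "epoch10.pt", "epoch25.pt"]

def latest_epoch_checkpoint(names):
    """Return the epoch*.pt file with the highest epoch number, or None."""
    epochs = [(int(m.group(1)), n) for n in names
              if (m := re.fullmatch(r"epoch(\d+)\.pt", n))]
    return max(epochs)[1] if epochs else None

print(latest_epoch_checkpoint(files))  # epoch25.pt
```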
```bash
python detect.py --weights runs/train/yolov9-e3/weights/best.pt --source data/images --img-size 512
python detect.py --source 0 --view-img
```

| Command | Description |
|---|---|
| `--weights` | Path to model weights |
| `--source` | Input source (file/dir/URL/webcam) |
| `--img-size` | Inference size (default: 512) |
| `--conf-thres` | Confidence threshold (default: 0.53) |
| `--view-img` | Display results |
| `--save-txt` | Save results as text files |
Detect images in a folder:

```bash
python detect.py --source data/images --weights best.pt
```

Real-time webcam detection:

```bash
python detect.py --source 0 --view-img --weights best.pt
```

Video detection with custom confidence:

```bash
python detect.py --source input.mp4 --conf-thres 0.6 --weights best.pt
```

Modify `data.yaml` to configure:
- Class names
- Dataset paths
- Training parameters
Results are saved to runs/detect/exp by default, containing:
- Annotated images/videos
- Text files with detection coordinates
- Cropped objects (if enabled)
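The text files written by `--save-txt` use one line per detection, in YOLO's normalized `class x_center y_center width height` format. A sketch converting such a line back to pixel coordinates (the sample line and image size are made up):

```python
def yolo_line_to_pixels(line: str, img_w: int, img_h: int):
    """Convert one normalized YOLO label line to (class_id, x1, y1, x2, y2) in pixels."""
    parts = line.split()
    cls, xc, yc, w, h = int(parts[0]), *map(float, parts[1:5])
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return cls, x1, y1, x2, y2

# Hypothetical detection on a 640x640 image:
print(yolo_line_to_pixels("0 0.5 0.5 0.25 0.25", 640, 640))  # (0, 240.0, 240.0, 400.0, 400.0)
```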
```bash
python val.py --weights yolov9.pt --data coco.yaml --batch-size 32 --device 0
python val.py --task speed --data coco.yaml --weights yolov9.pt --batch-size 1
python val.py --weights [MODEL_PATH] --data [DATASET_YAML] --batch-size [BATCH_SIZE] --device [DEVICE]
```

Validate on COCO dataset:

```bash
python val.py --weights yolov9.pt --data data/coco.yaml --img 640 --batch 32
```

Test model speed:

```bash
python val.py --task speed --data data/coco.yaml --weights yolov9.pt --img 640 --batch 1
```

Validate with custom dataset:

```bash
python val.py --weights best.pt --data data/custom.yaml --img 512
```

| Parameter | Default | Description |
|---|---|---|
| `--weights` | `yolo.pt` | Model weights path |
| `--data` | `data/coco.yaml` | Dataset configuration file |
| `--batch-size` | 32 | Validation batch size |
| `--imgsz` | 640 | Inference size (pixels) |
| `--conf-thres` | 0.001 | Confidence threshold |
| `--iou-thres` | 0.7 | NMS IoU threshold |
| `--task` | `val` | Task to run (val/test/speed/study) |
| `--device` | `cpu` | Device to use (cpu or 0,1,2,3) |
| `--save-json` | False | Save COCO-JSON results |
| `--save-txt` | False | Save results as TXT files |
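The `--iou-thres` flag controls NMS: boxes overlapping more than the threshold are suppressed. A minimal intersection-over-union sketch in `(x1, y1, x2, y2)` coordinates:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```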
```bash
python val.py \
  --weights runs/train/exp/weights/best.pt \
  --data data/custom.yaml \
  --batch-size 16 \
  --imgsz 512 \
  --conf-thres 0.01 \
  --iou-thres 0.6 \
  --device 0 \
  --save-txt \
  --save-json \
  --name custom_val
```

Results are saved to `runs/val/exp` by default and include:
- Metrics output:

  ```
  Class    Images  Instances      P      R  mAP50  mAP50-95
  all        5000      36380   0.55   0.49   0.51      0.35
  person     5000       4692   0.61   0.55   0.58      0.41
  car        5000       4372   0.72   0.63   0.67      0.48
  ```

- Output files:
  - `labels/`: Per-image detection results (TXT format)
  - `val_batchX_pred.jpg`: Sample detection visualizations
  - `confusion_matrix.png`: Classification performance
  - `predictions.json`: COCO-format results (if enabled)

- Speed metrics:

  ```
  Speed: 2.1ms pre-process, 5.3ms inference, 1.2ms NMS per image at shape (32, 3, 640, 640)
  ```
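The per-image timings in the speed line add up to an end-to-end latency, from which throughput follows directly; a quick sketch using the numbers from the sample output above:

```python
pre_ms, infer_ms, nms_ms = 2.1, 5.3, 1.2  # per-image times from the sample output
total_ms = pre_ms + infer_ms + nms_ms     # 8.6 ms per image end to end
images_per_sec = 1000.0 / total_ms
print(f"{total_ms:.1f} ms/image -> {images_per_sec:.1f} images/s")  # 8.6 ms/image -> 116.3 images/s
```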
The result-analysis code is written as MATLAB scripts (`.m` files). `calculate_density.m` computes the density of graves on each 1 km grid cell; `grave_county.m` uses regression to analyze the linear relationship between county-level grave density and the independent variables; `lgb_.py` is written in Python and analyzes the nonlinear relationship between grave density on the 1 km grid and the independent variables.

The software versions used in the experiments are Python 3.12.4 and MATLAB R2018a. The Python packages required to run `lgb_.py` are lightgbm, scikit-learn, scipy, and hdf5storage. Please replace the absolute data paths in the code so that it runs properly.
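The grid-density step performed by `calculate_density.m` amounts to counting graves per 1 km x 1 km cell; the same idea in a pure-Python sketch (the coordinates below are made up, and projected metre coordinates are assumed):

```python
from collections import Counter

GRID_M = 1000  # 1 km grid cell size, in metres

def grid_density(points):
    """Count points per 1 km x 1 km cell; keys are (col, row) cell indices."""
    return Counter((int(x // GRID_M), int(y // GRID_M)) for x, y in points)

# Hypothetical projected grave coordinates (metres):
graves = [(120, 450), (980, 30), (1500, 200), (1600, 900), (2100, 100)]
counts = grid_density(graves)
print(counts)  # two cells contain 2 graves each, one cell contains 1
```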
All visual figures used in this project are organized under the Figures/ directory and divided into five thematic parts.
The following software tools were used for figure production and post-processing:
| Tool | Version | Purpose |
|---|---|---|
| ArcGIS | 10.3 / 10.7 / 10.8 | Spatial data visualization, shapefile editing |
| Origin | 2022A / 2025 | Statistical plotting (bar charts, line graphs, etc.) |
| Adobe Illustrator | 2022 | Figure refinement, layout adjustment, publication-ready vector export |
Figure 1/ through Figure 5/ correspond to Figures 1 to 5 in the main text; each folder contains the code and data used to generate the corresponding figure.