
Multi-Source-remote-sensing-detection-model

Intro

This project is based on Multispectral Object Detection with Transformer.

Abstract

Multi-source data can provide models with richer visual information for object recognition tasks, improving both the accuracy and the speed of recognition, and it has broad application prospects in the remote sensing field. This project takes two custom 3-channel images as input and outputs predicted bounding boxes for object detection. It can also load pre-trained models to run inference directly on large remote-sensing raster data and export the results as a shapefile, making it convenient to visualize them in GIS software.

Installation

Python >= 3.6.0 is required, with all dependencies in requirements.txt installed, including PyTorch >= 1.7 (the same as YOLOv5, https://github.com/ultralytics/yolov5).

Clone the repo

git clone https://github.com/WdBlink/Multi-Source-remote-sensing-detection-model.git

Install requirements

cd Multi-Source-remote-sensing-detection-model
pip install -r requirements.txt
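
To verify the environment after installation, a quick check of the Python and PyTorch versions and GPU availability can be run. This is a minimal sketch using only the standard torch API; it is not part of the repository.

# check_env.py -- sanity check of the environment (illustrative, not part of the repo)
import sys
import torch

print("Python :", sys.version.split()[0])        # should be >= 3.6
print("PyTorch:", torch.__version__)             # should be >= 1.7
print("CUDA   :", torch.cuda.is_available())     # True if a GPU build of PyTorch is installed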

Open source dataset

-[FLIR] [Google Drive] [Baidu Drive] (extraction code: qwer); a new, aligned version of the dataset

-[LLVIP] download

-[VEDAI] download

You need to convert all annotations to YOLOv5 format; see the sketch below for the box format.

Refer to: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data
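
YOLOv5 expects one .txt label file per image, where each line holds "class x_center y_center width height" with the box values normalized to [0, 1]. The following is an illustrative sketch of converting a VOC-style pixel box; the function name is only an example and is not part of this repository.

# voc_box_to_yolo.py -- illustrative helper, not part of this repository
def voc_box_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a VOC-style pixel box to YOLOv5's normalized (xc, yc, w, h)."""
    xc = (xmin + xmax) / 2.0 / img_w
    yc = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return xc, yc, w, h

# Example: a 100x200 pixel box at (50, 80) in a 640x512 image.
# The class index is prepended to these four values in the label file.
print(voc_box_to_yolo(50, 80, 150, 280, 640, 512))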

Custom dataset

  1. It is recommended to use ArcGIS Pro for dataset labeling and to export the dataset with its deep-learning dataset export tool.
  2. Format conversion (a quick check of the converted output is sketched after the commands below):
# arcgis to voc
python data/dataset_distribute.py --source_folder path_to_arcgis_exported_dataset --target_folder path_to_voc_format_dataset

# voc to COCO
python VOC2COCO.py --source_folder path_to_voc_format_dataset
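
After running the conversion, the resulting COCO-style JSON can be sanity-checked with a few lines of Python. This sketch assumes a standard COCO layout with "images", "annotations", and "categories" keys; the output path is hypothetical and should be replaced with the actual file produced by VOC2COCO.py.

# check_coco.py -- quick sanity check of a converted COCO-style annotation file
import json

with open("path_to_voc_format_dataset/annotations.json") as f:  # hypothetical output path
    coco = json.load(f)

print("images     :", len(coco["images"]))
print("annotations:", len(coco["annotations"]))
print("categories :", [c["name"] for c in coco["categories"]])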

Run

Download the pretrained weights

YOLOv5 weights (pre-trained)

-[yolov5s] google drive

-[yolov5m] google drive

-[yolov5l] google drive

-[yolov5x] google drive

CFT weights

-[LLVIP] google drive

-[FLIR] google drive

Change the data cfg

Some examples are provided in data/multispectral/.
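
Before editing, it can help to load one of the provided configs and inspect its keys (typically dataset paths, class count, and class names). The sketch below uses PyYAML, which is installed via requirements.txt; the filename is a placeholder, so pick any YAML file that exists under data/multispectral/.

# show_data_cfg.py -- print the contents of a data config before adapting it
import pprint
import yaml

with open("data/multispectral/example.yaml") as f:  # placeholder filename
    cfg = yaml.safe_load(f)

pprint.pprint(cfg)  # shows the keys to adapt for a custom dataset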

Change the model cfg

Some examples are provided in models/transformer/.

Note: using the xxxx_transfomerx3_dataset.yaml configurations is recommended.

Train, Test, and Detect

train:

python train.py
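
The exact command-line flags depend on this fork's train.py; assuming it keeps the upstream YOLOv5-style arguments, a run typically points at the data and model configs chosen above and at a pretrained weight file, for example (the filenames are placeholders; check python train.py --help for the actual options):

python train.py --data data/multispectral/example.yaml --cfg models/transformer/example.yaml --weights yolov5s.pt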

test:

python test.py

detect:

python detect_twostream.py

detect on TIFF:

python predict_tile.py
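
The shapefile written by the tiled prediction can be opened directly in GIS software, or inspected with geopandas. This is a minimal sketch: the output path is hypothetical, and geopandas is not listed in requirements.txt, so it may need to be installed separately.

# inspect_predictions.py -- read an exported shapefile of predicted boxes
import geopandas as gpd  # pip install geopandas (not in requirements.txt)

gdf = gpd.read_file("runs/predict/predictions.shp")  # hypothetical output path
print(gdf.head())                                    # attribute table of the predicted boxes
print(len(gdf), "detections, CRS:", gdf.crs)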

Results

(results figure)

References

YOLOv5

Multispectral Object Detection with Transformer
