conquerShot-backend

This repository contains the backend implementation of the ConquerShot project for hackaTUM 2022, contributed by Mariz Samir Awad, Johannes Getzner, Zhuoling Li, and Melanie Maier. The project was submitted for the challenge "Open Digital Earth Reconstruction", sponsored by Huawei.

ML Models for OSM Feature Classification

All ML-related code lives under ./mlmodels; feel free to take a look:

cd ./mlmodels

1. Set up the Environment

Install all required packages and dependencies via:

pip install -r requirements.txt

2. Data Preparation

Training and evaluation of the model are implemented with torchvision. To load and process data with the library, the data should be organized as follows (a loading sketch follows the directory tree):

<dataset_name>
 ├── train
 │   ├── footway
 │   │   └── xxx.jpg
 │   └── primary
 │       └── xxx.jpg
 ├── val
 │   ├── footway
 │   │   └── xxx.jpg
 │   └── primary
 │       └── xxx.jpg
 └── issues
     ├── footway
     │   └── xxx.jpg
     └── primary
         └── xxx.jpg
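
This layout matches what torchvision's ImageFolder expects, where each subfolder name becomes a class label. A minimal loading sketch (not taken from the repo; the resize size and batch size are assumptions):

# Sketch: load the directory layout above with torchvision's ImageFolder.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # input size is an assumption
    transforms.ToTensor(),
])

# "<dataset_name>" is a placeholder for the actual dataset directory
train_set = datasets.ImageFolder("<dataset_name>/train", transform=transform)
val_set = datasets.ImageFolder("<dataset_name>/val", transform=transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
print(train_set.classes)   # e.g. ['footway', 'primary']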

For training the binary classifier for OSM features, we are provided with the Huawei Challenge Dataset, which has the following structure:

<huawei_dataset>
 ├── train
 │   ├── xxx.jpg
 │   ├── ...
 │   └── labels.csv
 ├── val
 │   ├── xxx.jpg
 │   ├── ...
 │   └── labels.csv
 └── issues
     ├── xxx.jpg
     ├── ...
     └── issues.csv

The following command transforms the Huawei dataset into the required structure (<split> can be train, val, issues, etc.):

python prepare_data.py --split <split>
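
As a rough illustration of what this step does (the actual logic lives in prepare_data.py; the labels.csv column names img and label below are assumptions), the script essentially sorts images into per-class folders based on the CSV:

# Hypothetical sketch of the transformation; not the repo's exact code.
import csv
import shutil
from pathlib import Path

def prepare_split(huawei_dir, out_dir, split):
    src = Path(huawei_dir) / split
    with open(src / "labels.csv", newline="") as f:
        for row in csv.DictReader(f):                 # assumed columns: img, label
            dst = Path(out_dir) / split / row["label"]
            dst.mkdir(parents=True, exist_ok=True)
            shutil.copy(src / row["img"], dst / row["img"])

prepare_split("<huawei_dataset>", "<dataset_name>", "train")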

For training the road/non-road classifier, we additionally sample data from the MS-COCO dataset, ensuring a fair distribution of the training data between road and non-road images. More details can be found here.
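
As an illustration of the balancing idea (a sketch, not the repo's code; the function and variable names are hypothetical), one way to enforce an even road/non-road split is to downsample the larger pool:

import random

def sample_balanced(road_imgs, non_road_imgs, seed=42):
    # keep only as many samples per class as the smaller class provides
    random.seed(seed)
    n = min(len(road_imgs), len(non_road_imgs))
    return random.sample(road_imgs, n), random.sample(non_road_imgs, n)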

3. Training the model

The model is based on a pre-trained backbone from the torchvision library (see the documentation). By default, <backbone_name> is set to resnet18, but feel free to try other ResNet variants such as resnet50, resnet101, etc.
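
For reference, a minimal sketch of how such a fine-tuning setup typically looks with torchvision (an assumption, not the repo's exact code; on torchvision versions before 0.13, pretrained=True is used instead of weights="DEFAULT"):

import torch.nn as nn
from torchvision import models

def build_classifier(backbone_name="resnet18", num_classes=2):
    # load ImageNet-pretrained weights and replace the final layer with a binary head
    backbone = getattr(models, backbone_name)(weights="DEFAULT")
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

model = build_classifier("resnet18")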

Fine-tune the model for road classification based on <backbone_name>:

python fine_road_cls.py --backbone <backbone_name>

and fine-tune the model for OSM classification based on <backbone_name>:

python finetune_osm_cls.py --backbone <backbone_name>

The fine-tuned models will be saved under ./checkpoints/. We also upload our trained models (in .pth format).
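
A saved checkpoint can then be restored for inference roughly like this (a sketch; the checkpoint filename and the assumption that the files are plain state_dicts are ours):

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)   # same binary head as in training
state = torch.load("./checkpoints/osm_cls_resnet18.pth", map_location="cpu")  # filename assumed
model.load_state_dict(state)
model.eval()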

4. Evaluating the model

You can evaluate the model on the test set as follows:

python evaluate.py --batch_size <batch_size> \
				   --input_size <input_size> \
				   --phase <phase> \
				   --cls_type <cls_type>

Note that <cls_type> specifies the classifier type to be evaluated. It can be either osm_cls or road_cls.

The results will be output as .csv files containing two columns (img_id and prediction) and stored under ../results/.
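
A quick way to inspect such an output file (a sketch; the filename is an assumption, the column names come from the description above):

import pandas as pd

results = pd.read_csv("../results/osm_cls_predictions.csv")   # filename assumed
print(results.head())
print(results["prediction"].value_counts())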
