Given two images of the same place taken on different dates, we need to produce a binary mask of the changes.
- Put the data into the folder tree structure shown below
- Put the pretrained models from Selim into the /models/pretrained folder; they are needed for the siamese net
- Change the paths in the config/ files to the proper ones
- Run the training process with train.py; the first argument should be the path to the config file
python train.py config/config_unet++_resnext50.json
python train.py config/config_siamese_seresnext50.json
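A config file is expected to look roughly like the sketch below; the keys are hypothetical and only illustrate the kind of paths you would need to adjust for your environment:

```json
{
  "data_dir": "data/Images",
  "mask_dir": "data/mask",
  "model_save_dir": "models/saved",
  "log_dir": "logs",
  "batch_size": 8,
  "epochs": 50
}
```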
After that you'll have:
- saved models in the /models/saved folder
- logs of the training process in /logs
- predicted non-binary (probability) masks in /predicted_masks
The /notebooks/final_submission.ipynb notebook generates the final submission file by averaging the outputs of those two models.
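The averaging step can be sketched as below; the file format follows the pickled probability masks in /predicted_masks, while the threshold value and function names are assumptions (the actual blending lives in the notebook):

```python
import pickle

import numpy as np


def load_mask(path):
    """Load a pickled probability mask from /predicted_masks."""
    with open(path, "rb") as f:
        return pickle.load(f)


def blend_to_binary(mask_a, mask_b, threshold=0.5):
    """Average two probability masks and binarize the result."""
    avg = (mask_a + mask_b) / 2.0
    return (avg > threshold).astype(np.uint8)
```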
I split the initial large images into small tiles and then applied augmentation.
Trained Unet++ with a resnext50 backbone, using Segmentation Models, on the 1-channel image difference.
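Building the 1-channel difference input can be sketched as follows; the exact reduction (mean absolute difference over channels, rescaled to [0, 1]) is an assumption about how the difference image is formed:

```python
import numpy as np


def channel_difference(img_t1, img_t2):
    """Collapse two multi-channel images into one absolute-difference channel.

    Returns an H x W x 1 float map scaled to [0, 1].
    """
    diff = np.abs(img_t1.astype(np.float32) - img_t2.astype(np.float32))
    diff = diff.mean(axis=-1)          # average over channels -> H x W
    if diff.max() > 0:
        diff /= diff.max()             # rescale to [0, 1]
    return diff[..., None]             # add the channel axis back
```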
Trained a Siamese net with a seresnext50 backbone, using the model architecture from xview2_solution, on RGB images.
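The siamese idea, passing both dates through one shared-weight encoder and comparing the resulting features, can be sketched with a toy numpy encoder; the real model uses a seresnext50 backbone from xview2_solution, so everything here (the linear encoder, shapes, and comparison by absolute difference) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(12, 8))  # ONE weight matrix, shared by both branches


def encode(x, weights):
    """Toy encoder: a single linear layer followed by ReLU."""
    return np.maximum(x @ weights, 0.0)


def siamese_forward(img_a, img_b, weights):
    """Run both inputs through the SAME encoder, then compare features.

    The feature difference would be fed to a change-detection head.
    """
    feat_a = encode(img_a, weights)
    feat_b = encode(img_b, weights)
    return np.abs(feat_a - feat_b)
```

Because the weights are shared, identical inputs always produce a zero feature difference, which is exactly what makes the architecture suitable for change detection.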
├── LICENSE
├── Makefile <- Makefile with commands like `make data` or `make train`
├── README.md
├── data
│ ├── Images <- Original 4-channel images from the first and second dates.
│ ├── Images_composit <- 8-channel images composed from the original images.
│ ├── mask <- Binary masks of image differences.
│ ├── Rucode.xls <- Table to match images from the same location.
│ └── sample_submission.csv <- Sample submission file.
│
├── models <- Trained and serialized models
│ ├── pretrained <- Pretrained models from Selim for the siamese net.
│ └── saved <- Models trained on the competition data.
│
├── notebooks <- Jupyter notebooks.
│
├── logs <- Logs of training process: loss, IoU
│
├── config <- Configuration files for each type of model
│
├── predicted_masks <- Predicted probability masks in pickle format
│
├── requirements.txt <- The requirements file for reproducing the analysis environment,
│ `pip install -r requirements.txt`
│
├── setup.py <- Makes the project pip installable (`pip install -e .`) so src can be imported
├── train.py <- Training script with evaluation at the end.
├── eval.py <- Mask evaluation using a trained model.
├── src <- Source code for use in this project.
│ ├── __init__.py <- Makes src a Python module
│ │
│ ├── models.py <- Siamese models from Selim
│ ├── dataset.py <- Dataset class for satellite images
│ └── utils.py <- Small preprocessing functions such as normalization and decoding
│
└── tox.ini <- tox file with settings for running tox; see tox.readthedocs.io
Project based on the cookiecutter data science project template. #cookiecutterdatascience