This repo contains code and data samples for our encoder-decoder model, presented in "A Deep Learning Approach To Object Affordance Segmentation" (ICASSP 2020) and extended in a journal article currently under review at IEEE Access.
The following are the minimum requirements to replicate the paper experiments:
- Python 3.7.2
- PyTorch 1.0.1
- CUDA 9.1
- Visdom (follow the installation instructions in the official Visdom repo)
Once the requirements are installed, train the model with:

python train.py --train_path path/to/dataset
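A typical session might look like the sketch below. It assumes training progress is plotted through Visdom, so the Visdom server (which listens on port 8097 by default) must be running before training starts; `--train_path` is the only flag documented above, and the dataset path is a placeholder.

```shell
# Start the Visdom server in the background (default port: 8097);
# open http://localhost:8097 in a browser to watch the training curves.
python -m visdom.server &

# Launch training, pointing --train_path at the root of the dataset.
python train.py --train_path path/to/dataset
```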
If you use any code or model from this repo, please cite the following:
@inproceedings{thermos2020affordance,
  author    = "Spyridon Thermos and Petros Daras and Gerasimos Potamianos",
  title     = "A Deep Learning Approach To Object Affordance Segmentation",
  booktitle = "Proc. International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
  year      = "2020"
}
Our code is released under the MIT License (see the LICENSE file for details).