A clean and readable PyTorch implementation of USAR
The code is tested with Python 3.7.x and PyTorch 1.0.1; it has not been tested with earlier versions.
Follow the instructions at pytorch.org to install PyTorch for your setup.
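To verify the installation, a quick sanity check (torch.__version__ and torch.cuda.is_available() are standard PyTorch attributes/calls):

import torch
print("PyTorch version:", torch.__version__)         # should print 1.0.1
print("CUDA available:", torch.cuda.is_available())  # True means the --cuda option below can be used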
First, you will need to download and set up a dataset. The ``PhC-U373'' and ``DIC-HeLa'' datasets from the Cell Tracking Challenge (http://celltrackingchallenge.net/2d-datasets/) are recommended. Unzip the files and put them in the datasets folder.
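A minimal extraction sketch using Python's standard zipfile module; the archive filenames below are placeholders, use whatever the downloaded files are actually called:

import os, zipfile
os.makedirs("datasets", exist_ok=True)            # same effect as the mkdir command below
for archive in ["PhC-U373.zip", "DIC-HeLa.zip"]:  # placeholder names for the downloaded archives
    with zipfile.ZipFile(archive) as zf:
        zf.extractall("datasets")                 # unpacks each dataset into ./datasets/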
mkdir datasets
python train.py --cuda
This command starts a training session using the images under the ./datasets/ directory. You are free to change the default hyperparameters.
If you don't own a GPU, remove the --cuda option, although I advise you to get one!
You need to adjust the hyperparameters to get good segmentation results.
Examples of the generated outputs are saved under the ./result/ directory.
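A small sketch for browsing those outputs, assuming they are saved as ordinary image files (the *.png pattern is an assumption; adjust it to the actual extension):

import glob
from PIL import Image
outputs = sorted(glob.glob("./result/*.png"))  # assumed extension; change if outputs use another format
print(len(outputs), "output images found")
if outputs:
    Image.open(outputs[-1]).show()             # open the most recent output in the default image viewer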
Most of the code is borrowed from the project ``VEGAN: Unsupervised Meta-learning of Figure-Ground Segmentation via Imitating Visual Effects'' (https://arxiv.org/abs/1812.08442).