
VisNet PyTorch Implementation

We provide a PyTorch implementation of VisNet. Paper: VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility

Note: The current software works well with PyTorch 0.4.1+.

You may find useful information in training/test tips and frequently asked questions. To implement custom models and datasets, check out our templates. To help users better understand and adapt our codebase, we provide an overview of the code structure of this repository.

Model Evaluation

| Dataset | Test accuracy | Remarks |
| ------- | ------------- | ------- |
| FROSI   | 0.6652        |         |
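
For reference, the sketch below shows one way such a top-1 test accuracy can be computed. It assumes the network outputs one logit per visibility class and that a test loader yields (image, label) batches; the actual evaluation code in this repository may differ.

```python
import torch

def top1_accuracy(model, test_loader, device="cuda"):
    """Fraction of samples whose argmax prediction matches the label.

    Assumes `model(images)` returns logits of shape (N, num_classes) and
    `test_loader` yields (images, labels) batches -- adapt to the actual
    VisNet outputs and FROSI wrapper used here.
    """
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in test_loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / max(total, 1)
```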

Prerequisites

  • Linux or macOS
  • Python 3
  • CPU or NVIDIA GPU + CUDA CuDNN

Getting Started

Installation

  • Clone this repo:
```bash
git clone https://github.com/JaniceLC/VisNet_Pytorch.git
cd VisNet_Pytorch
```
  • Install [PyTorch](http://pytorch.org) 0.4+ and other dependencies (e.g., torchvision, visdom, and dominate).
    • For pip users, please type the command pip install -r requirements.txt.
    • For Conda users, we provide an installation script ./scripts/conda_deps.sh. Alternatively, you can create a new Conda environment using conda env create -f environment.yml.
    • For Docker users, we provide the pre-built Docker image and Dockerfile. Please refer to our Docker page.
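
After installing, a quick sanity check (this snippet is not part of the repository) is to confirm that the expected packages import and that a CUDA device is visible:

```python
# Quick environment check: verifies that PyTorch, torchvision, and visdom
# import correctly and reports whether a CUDA device is available.
import torch
import torchvision
import visdom  # only needed for the training-time visualizations

print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```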

Train/Test

  • To view training results and loss plots, run python -m visdom.server and click the URL http://localhost:8097.
  • Train a model:
```bash
#!./scripts/train_visnet.sh
python train_vis.py --lr 0.00001 --gpu_ids 1 \
    --batch_size 1 --name maps_visnet_1 \
    --dataroot ./datasets/datasets/FROSI/Fog \
    --TBoardX $TB --save_epoch_freq 1 \
    --niter 1 --niter_decay 0 --model visnet --dataset_mode frosi &> ./outputmd/output_visnet_1.md &
```

To see more intermediate results, check out ./checkpoints/maps_visnet_1/web/index.html.

  • Test the model:
```bash
#!./scripts/test_cyclegan.sh
python test.py --dataroot ./datasets/datasets/FROSI/Fog --name maps_visnet_1 --model visnet
```
  • The test results will be saved to an HTML file here: ./results/checkpoint_name/latest_test/index.html.

  • For your own experiments, you might want to specify --netG, --norm, --no_dropout to match the generator architecture of the trained model.

  • If you would like to apply a pre-trained model to a collection of input images, please use the --model test option. See ./scripts/test_single.sh for how to apply a model to facade label maps (stored in the directory facades/testB).
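
For orientation, here is a minimal, self-contained sketch of running a saved network over a folder of images outside the repository's test pipeline. The checkpoint format, input size, and preprocessing below are assumptions and should be matched to the options you trained with.

```python
# Illustrative only: apply a saved classifier to every image in a directory.
import os
from PIL import Image
import torch
import torchvision.transforms as T

def predict_folder(checkpoint_path, image_dir, device="cuda"):
    # Assumes a full nn.Module was saved with torch.save(model, path);
    # if only a state_dict was saved, build the network first and load into it.
    model = torch.load(checkpoint_path, map_location=device)
    model.eval()
    preprocess = T.Compose([
        T.Resize((224, 224)),  # assumed input size -- match your training options
        T.ToTensor(),
    ])
    results = {}
    with torch.no_grad():
        for name in sorted(os.listdir(image_dir)):
            image = Image.open(os.path.join(image_dir, name)).convert("RGB")
            logits = model(preprocess(image).unsqueeze(0).to(device))
            results[name] = logits.argmax(dim=1).item()  # predicted visibility class
    return results
```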

  • Download pix2pix/CycleGAN datasets and create your own datasets.
  • Best practices for training and testing your models.

Before you post a new question, please first look at the above Q&A and existing GitHub issues.

Custom Model and Dataset

If you plan to implement custom models and datasets for your new applications, we provide a dataset template and a model template as a starting point.
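
As a rough illustration, a custom dataset might look like the sketch below. It assumes this repository keeps the pytorch-CycleGAN-and-pix2pix layout it is based on (a data/ package providing BaseDataset and get_transform); the file name, label format, and field names are purely illustrative, so consult the actual template before copying.

```python
# data/mydata_dataset.py -- illustrative sketch only, not part of this repo.
# Assumes the pytorch-CycleGAN-and-pix2pix data/ package (BaseDataset, get_transform).
import os
from PIL import Image
from data.base_dataset import BaseDataset, get_transform

class MydataDataset(BaseDataset):
    """Loads (image, visibility label) pairs; selected with --dataset_mode mydata."""

    def __init__(self, opt):
        BaseDataset.__init__(self, opt)
        # Assumed label file: one "relative/path.png <int label>" entry per line.
        with open(os.path.join(opt.dataroot, "labels.txt")) as f:
            self.samples = [line.split() for line in f if line.strip()]
        self.transform = get_transform(opt)

    def __getitem__(self, index):
        path, label = self.samples[index]
        image = Image.open(os.path.join(self.opt.dataroot, path)).convert("RGB")
        return {"image": self.transform(image), "label": int(label), "path": path}

    def __len__(self):
        return len(self.samples)
```

If the naming convention is unchanged, saving this as data/mydata_dataset.py lets you select it with --dataset_mode mydata, just as --dataset_mode frosi selects the FROSI loader in the training command above.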

To help users better understand and use our code, we briefly overview the functionality and implementation of each package and each module.

Pull Request

You are always welcome to contribute to this repository by sending a pull request.

Acknowledgments

Our code is inspired by pytorch-CycleGAN-and-pix2pix.
