
Our attempt at improving current outpainting methods using a local & global discriminator and applying residual blocks


etarthur/Outpainting


Our work builds upon the context encoder baseline model for image outpainting proposed in Image Outpainting and Harmonization using Generative Adversarial Networks. This project was completed for the Deep Learning course taught by Professor Jacob Whitehill at Worcester Polytechnic Institute.

Summary

We generate a 192x192 image from a ground truth of the same size that has been masked to reveal only a 128x128 center region. We qualitatively evaluate improvements to the generator and discriminator, including the application of super-resolution upscaling techniques.
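The masking step described above can be sketched as follows. This is a minimal NumPy illustration (the function name and exact preprocessing are assumptions; the repository's actual pipeline may differ):

```python
import numpy as np

def center_mask(img, visible=128):
    """Keep only the central visible x visible region of a square image,
    zeroing the border that the network must outpaint.
    Illustrative sketch only -- not the repo's actual preprocessing."""
    h, w = img.shape[:2]
    top = (h - visible) // 2
    left = (w - visible) // 2
    masked = np.zeros_like(img)
    masked[top:top + visible, left:left + visible] = \
        img[top:top + visible, left:left + visible]
    return masked

# For a 192x192 input, the visible window spans rows/cols 32..159,
# leaving a 32-pixel border on every side for the generator to fill.
```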

Examples

Example of outpainting models

Usage

Our models live in separate folders, but each uses the train and val folders in the repository root for training. The dataset zips linked to these folders contain images from the MIT Places365-Standard dataset.

  • Run each model's train.py to train that network
  • Evaluate a custom input image by running forward.py input.jpg output.jpg
