Generating deep dream visualisations using activation feature maps of a pretrained Google Inception-v3 Convolutional Network
darshanbagul/HalluciNetwork
Deep_Dream

Implementing deep dream using pretrained Google Inception-v3

Dependencies
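The dependency list itself did not survive in this copy of the README. The package names below are an assumption based on a typical TensorFlow deep dream setup, not taken from the repository:

```shell
pip install tensorflow numpy Pillow
```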

Usage

Once you have the dependencies installed via pip, run the demo script from the terminal:

python deep_dream.py --image=<IMAGE_PATH> --layers=<LIST_OF_INCEPTION_LAYERS>

Arguments:

-i or --image

The path of the input image.

-l or --layers

List of Inception-v3 layer names to test for generating deep dream images, e.g. mixed3a_pool_reduce_pre_relu, mixed4b_pool_reduce_pre_relu.

Refer to the file 'Inception_layers.txt' for the names of all layers in the Inception-v3 architecture.
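For each layer passed via --layers, deep dream amounts to gradient ascent on that layer's activations. The sketch below is a minimal illustration, not this repository's code: a tiny randomly initialised convolution stands in for the pretrained Inception-v3 layer, and `dream_step` is a hypothetical helper name.

```python
# Minimal sketch of the deep dream gradient-ascent loop (an illustration,
# not this repository's implementation).
import tensorflow as tf

def dream_step(image, layer, step_size=0.01):
    """One step of gradient ascent: nudge the image so the chosen
    layer's activations (the "dream" objective) grow larger."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        # Objective: mean activation of the chosen layer/feature map.
        loss = tf.reduce_mean(layer(image))
    grad = tape.gradient(loss, image)
    # Normalise the gradient so the step size behaves similarly across layers.
    grad /= tf.math.reduce_std(grad) + 1e-8
    return image + step_size * grad

# Stand-in for an Inception-v3 layer: a single random convolution.
layer = tf.keras.layers.Conv2D(8, 3, activation="relu")
img = tf.random.uniform((1, 64, 64, 3))
for _ in range(10):  # repeated ascent amplifies the layer's patterns
    img = dream_step(img, layer)
```

Repeating the step projects the patterns that most excite the chosen feature map back onto the input image, which is what produces the hallucinated textures.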

Results

Experimenting across different activation layers/feature maps in the Inception-v3 architecture, I generated some interesting visualisations. For each input image, I also generated a GIF animation of the results across different layers, to view the projections of the specific activations of a given feature map onto the input image. Let us visualise the results below:
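Animations like the ones mentioned above can be assembled with Pillow's animated-GIF support. This is a generic sketch under the assumption that each layer's output is available as an image array; the repository's actual GIF tooling is not shown in this README, and the frames here are random placeholders:

```python
import numpy as np
from PIL import Image

# Placeholder frames standing in for deep-dream outputs of successive layers.
frames = [
    Image.fromarray(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
    for _ in range(5)
]

# Write an animated GIF: the first frame saves, the rest are appended.
frames[0].save(
    "dream.gif",
    save_all=True,
    append_images=frames[1:],
    duration=200,  # milliseconds per frame
    loop=0,        # loop forever
)
```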

Brad Pitt

[GIF animation]

Mona Lisa

[GIF animation]

Natural Scene

[GIF animation]

Sky

[GIF animation]

Credits

  1. Google Research blog on Deep Dream
  2. Siraj Raval's excellent video on Deep Dream in TensorFlow
