# Deep_Dream

Implementing deep dream with a pretrained Google Inception-v3 convolutional network: visualisations are generated from the activation feature maps of chosen layers.
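At its core, deep dream runs gradient ascent on the input image to maximise the activations of a chosen Inception layer. Below is a minimal sketch of that loop, assuming TensorFlow 1.x and the frozen `tensorflow_inception_graph.pb` from the TensorFlow DeepDream tutorial; the model path, image path and layer name are illustrative, and `deep_dream.py` may differ in details such as octaves and tiling.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

MODEL_PATH = 'tensorflow_inception_graph.pb'  # hypothetical path to the frozen Inception graph

# Load the frozen graph and wire a mean-subtraction preprocessing node in front of it.
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(MODEL_PATH, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input')
t_preprocessed = tf.expand_dims(t_input - 117.0, 0)  # subtract the ImageNet mean pixel
tf.import_graph_def(graph_def, {'input': t_preprocessed})

def render_deepdream(layer, img, iter_n=10, step=1.5):
    """Gradient ascent on the mean activation of the chosen layer."""
    t_obj = graph.get_tensor_by_name('import/%s:0' % layer)
    t_score = tf.reduce_mean(t_obj)            # the objective being maximised
    t_grad = tf.gradients(t_score, t_input)[0]  # gradient of the objective w.r.t. the image
    img = img.copy()
    for _ in range(iter_n):
        g = sess.run(t_grad, {t_input: img})
        img += step * g / (np.abs(g).mean() + 1e-7)  # normalised gradient ascent step
    return np.clip(img, 0, 255).astype(np.uint8)

img = np.float32(Image.open('data/sky.jpg'))  # hypothetical input image
out = render_deepdream('mixed4b_pool_reduce_pre_relu', img)
Image.fromarray(out).save('images/sky_dream.png')
```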

## Dependencies

The demo builds on TensorFlow; installing `tensorflow`, `numpy` and `Pillow` via pip should cover the script's imports.

## Usage

Once you have the dependencies installed via pip, run the demo script in a terminal:

    python deep_dream.py --image=<IMAGE_PATH> --layers=<LIST_OF_INCEPTION_LAYERS>

Arguments:

* `-i` or `--image` : path of the input image.
* `-l` or `--layers` : list of Inception-v3 layers to test for generating deep dream images, e.g. `mixed3a_pool_reduce_pre_relu`, `mixed4b_pool_reduce_pre_relu`. Refer to the file `Inception_layers.txt` for the names of all layers in the Inception-v3 architecture. An example invocation is shown below.
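For example, to dream on two pooling-reduction layers (the image path below is a placeholder, and the exact separator for multiple layers depends on how `deep_dream.py` parses the argument):

    python deep_dream.py --image=data/sky.jpg --layers=mixed3a_pool_reduce_pre_relu,mixed4b_pool_reduce_pre_relu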

## Results

Experimenting across different activation layers/feature maps in the Inception-v3 architecture, I generated some interesting visualisations. For each input image I combined the results across layers into a GIF animation, so you can watch how the projections of specific activations of a given feature map change the input image from layer to layer (a sketch of how such a GIF can be stitched together follows the gallery). Let us visualise the results below:

**Brad Pitt**

**Mona Lisa**

**Natural Scene**

**Sky**
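If you want to build such animations yourself, the per-layer output frames can be stitched into a GIF with `imageio`; the file paths and naming scheme below are assumptions for illustration, not necessarily what `deep_dream.py` writes.

```python
import glob
import imageio

# Hypothetical naming scheme: one output PNG per layer for a given input image.
frames = [imageio.imread(p) for p in sorted(glob.glob('images/sky_*.png'))]
imageio.mimsave('images/sky_layers.gif', frames, duration=0.5)  # 0.5 s per frame
```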

## Credits

1. Google Research blog on Deep Dream
2. Siraj Raval's excellent video on Deep Dream in TensorFlow