
Commit

Merge pull request #2 from insikk/resnet
Resnet
  • Loading branch information
insikk committed Nov 20, 2017
2 parents 7cb4b6d + b86212e commit 21f164e
Showing 104 changed files with 93,773 additions and 319 deletions.
8 changes: 8 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -0,0 +1,8 @@
# model weight
*.ckpt

# Python junk
__pycache__
.ipynb_checkpoints
*.npy
*.pyc
21 changes: 18 additions & 3 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,12 +1,18 @@
# Grad-CAM-tensorflow

This is tensorflow version of demo for Grad-CAM. I used vgg16 for demo because this model is very popular CNN model.
However grad-cam can be used with any CNN model. Just modify convolution layer in my demo code.
This is the TensorFlow version of the Grad-CAM demo. I used ResNet-v1-101, ResNet-v1-50, and VGG16 for the demo because these models are very popular CNN models.
However, Grad-CAM can be used with any other CNN model. Just modify the convolution layer in my demo code.

![Preview](https://github.com/insikk/Grad-CAM-tensorflow/blob/master/image_preview.png?raw=true)

See [python notebook](https://github.com/insikk/Grad-CAM-tensorflow/blob/master/gradCAM_tensorflow_demo.ipynb) to see demo of this repository.
>To use the VGG networks in this demo, the npy files for [VGG16 NPY](https://mega.nz/#!YU1FWJrA!O1ywiCS2IiOlUCtCpI6HTJOMrneN-Qdv3ywQP5poecM) has to be downloaded.
>To use the VGG networks in this demo, the npy file for [VGG16 NPY](ftp://mi.eng.cam.ac.uk/pub/mttt2/models/vgg16.npy) has to be downloaded.
>To use ResNet-v1-50 or ResNet-v1-101, download the weights from https://github.com/tensorflow/models/tree/master/research/slim
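
Once the conv-layer activations and the gradients of the class score with respect to them are available, Grad-CAM itself is a small computation: global-average-pool the gradients into per-channel weights, take the weighted sum of the feature maps, and apply a ReLU. A minimal NumPy sketch (not the notebooks' TensorFlow code; the function name and normalization step are illustrative):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from one conv layer.

    feature_maps: activations of the chosen conv layer, shape (H, W, K).
    gradients: d(class score)/d(feature_maps), same shape.
    """
    # Global-average-pool the gradients: one importance weight per channel.
    weights = gradients.mean(axis=(0, 1))                    # shape (K,)
    # Weighted sum over channels, then ReLU to keep positive evidence only.
    cam = np.maximum((feature_maps * weights).sum(axis=-1), 0.0)
    # Normalize to [0, 1] for visualization (guard against an all-zero map).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In the demo notebooks the same idea is expressed with TensorFlow ops so the gradients come from the graph; the heatmap is then resized to the input image size for overlay.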

**Any Contributions are Welcome**


## [Original Paper] Grad-CAM: Gradient-weighted Class Activation Mapping

Expand All @@ -16,6 +22,15 @@ Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell,

![Overview](http://i.imgur.com/JaGbdZ5.png)

# Requirements

* GPU Memory: 6GB or higher to run VGG16 and ResNet101 (you may be able to run ResNet50 with less than 6GB)

# Setup

```
export PYTHONPATH=$PYTHONPATH:`pwd`/slim
```

## Acknowledgement

Expand Down
286 changes: 286 additions & 0 deletions gradCAM_tensorflow_ResNet101_demo.ipynb

Large diffs are not rendered by default.

289 changes: 289 additions & 0 deletions gradCAM_tensorflow_ResNet50_demo.ipynb

Large diffs are not rendered by default.

261 changes: 261 additions & 0 deletions gradCAM_tensorflow_VGG16_demo.ipynb

Large diffs are not rendered by default.

284 changes: 0 additions & 284 deletions gradCAM_tensorflow_demo.ipynb

This file was deleted.

15 changes: 8 additions & 7 deletions model/vgg16.py
Original file line number Diff line number Diff line change
Expand Up @@ -17,7 +17,7 @@ def __init__(self, vgg16_npy_path=None, trainable=True):
vgg16_npy_path = path
print(path)

self.data_dict = np.load(vgg16_npy_path, encoding='latin1').item()
self.data_dict = np.load(vgg16_npy_path, encoding='latin1').item()
self.trainable = trainable
print("npy file loaded")

Expand All @@ -33,15 +33,15 @@ def build(self, rgb, train_mode=None):
rgb_scaled = rgb * 255.0

# Convert RGB to BGR
red, green, blue = tf.split(3, 3, rgb_scaled)
red, green, blue = tf.split(rgb_scaled, 3, 3)
assert red.get_shape().as_list()[1:] == [224, 224, 1]
assert green.get_shape().as_list()[1:] == [224, 224, 1]
assert blue.get_shape().as_list()[1:] == [224, 224, 1]
bgr = tf.concat(3, [
bgr = tf.concat([
blue - VGG_MEAN[0],
green - VGG_MEAN[1],
red - VGG_MEAN[2],
])
], 3)
assert bgr.get_shape().as_list()[1:] == [224, 224, 3]

self.conv1_1 = self.conv_layer(bgr, "conv1_1")
Expand Down Expand Up @@ -70,7 +70,7 @@ def build(self, rgb, train_mode=None):
self.fc6 = self.fc_layer(self.pool5, "fc6")
assert self.fc6.get_shape().as_list()[1:] == [4096]
self.relu6 = tf.nn.relu(self.fc6)

if train_mode is not None:
self.relu6 = tf.cond(train_mode, lambda: tf.nn.dropout(self.relu6, 0.5), lambda: self.relu6)
elif self.trainable:
Expand All @@ -79,7 +79,7 @@ def build(self, rgb, train_mode=None):

self.fc7 = self.fc_layer(self.relu6, "fc7")
self.relu7 = tf.nn.relu(self.fc7)

if train_mode is not None:
self.relu7 = tf.cond(train_mode, lambda: tf.nn.dropout(self.relu7, 0.5), lambda: self.relu7)
elif self.trainable:
Expand Down Expand Up @@ -134,4 +134,5 @@ def get_bias(self, name):
return tf.Variable(self.data_dict[name][1], name="biases")

def get_fc_weight(self, name):
return tf.Variable(self.data_dict[name][0], name="weights")
return tf.Variable(self.data_dict[name][0], name="weights")
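
The RGB-to-BGR preprocessing being fixed in the diff above (the `tf.split`/`tf.concat` argument order changed in TensorFlow 1.0) amounts to swapping the red and blue channels and subtracting per-channel means. A plain NumPy sketch of the same transform, assuming the standard VGG BGR means (the values are not shown in this diff):

```python
import numpy as np

# Standard VGG per-channel means, in BGR order (assumed values).
VGG_MEAN = [103.939, 116.779, 123.68]

def rgb_to_bgr(rgb):
    """rgb: float array in [0, 1], shape (H, W, 3).

    Returns the mean-subtracted BGR array the VGG conv layers expect.
    """
    scaled = rgb * 255.0
    r, g, b = scaled[..., 0], scaled[..., 1], scaled[..., 2]
    # Reorder to BGR and subtract the channel means.
    return np.stack([b - VGG_MEAN[0],
                     g - VGG_MEAN[1],
                     r - VGG_MEAN[2]], axis=-1)
```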
