Data shape for saliency map #31

Closed
Priende opened this issue Oct 4, 2020 · 2 comments

Comments


Priende commented Oct 4, 2020

First of all, thank you so much for sharing this package. I am trying to create a saliency map, but have been unable to, and I am hoping that you might be able to offer some advice.

Instead of using images as the input for classification, I am using 2-dimensional arrays of numbers. To create saliency maps for 3 samples, my input data (X) has the shape (3, 1, 33, 128), where 3 is the number of samples, 33 is the number of rows, and 128 is the number of columns, because that is the required input shape during model training. When I create a saliency map with the attentions example code and run print(saliency_map.shape), it shows (3, 1, 33), so it looks like the column information is lost. If I instead first transpose the input shape to (3, 33, 128, 1) to match the shape of the image data in the attentions example, I get the following error:

NotFoundError                             Traceback (most recent call last)
<ipython-input-172-58f751d2ff4e> in <module>()
     20 from tf_keras_vis.saliency import Saliency
     21 from tf_keras_vis.utils import normalize
---> 22 saliency_map = saliency(loss, X_train)
     23 saliency_map = normalize(saliency_map)

13 frames
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

NotFoundError: No algorithm worked! [Op:Conv2D]

My code is as follows:

import tensorflow as tf
from tensorflow.keras.models import load_model
from tf_keras_vis.saliency import Saliency
from tf_keras_vis.utils import normalize

model = load_model('my_model.h5')
model.summary()

# Replace the final softmax with a linear activation so gradients
# are computed on the raw logits.
def model_modifier(m):
    m.layers[-1].activation = tf.keras.activations.linear
    return m

# Score the first output unit for each of the 3 samples.
def loss(output):
    return (output[0][0], output[1][0], output[2][0])

saliency = Saliency(model, model_modifier=model_modifier)
saliency_map = saliency(loss, X_train)
saliency_map = normalize(saliency_map)

And my model, using model.summary(), looks like:

Layer (type)                 Output Shape              Param #   
=================================================================
input_3 (InputLayer)         [(None, 1, 33, 128)]      0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 8, 33, 128)        512       
_________________________________________________________________
batch_normalization_6 (Batch (None, 8, 33, 128)        32        
_________________________________________________________________
depthwise_conv2d_2 (Depthwis (None, 16, 1, 128)        528       
_________________________________________________________________
batch_normalization_7 (Batch (None, 16, 1, 128)        64        
_________________________________________________________________
activation_4 (Activation)    (None, 16, 1, 128)        0         
_________________________________________________________________
average_pooling2d_4 (Average (None, 16, 1, 32)         0         
_________________________________________________________________
dropout_4 (Dropout)          (None, 16, 1, 32)         0         
_________________________________________________________________
separable_conv2d_2 (Separabl (None, 16, 1, 32)         512       
_________________________________________________________________
batch_normalization_8 (Batch (None, 16, 1, 32)         64        
_________________________________________________________________
activation_5 (Activation)    (None, 16, 1, 32)         0         
_________________________________________________________________
average_pooling2d_5 (Average (None, 16, 1, 4)          0         
_________________________________________________________________
dropout_5 (Dropout)          (None, 16, 1, 4)          0         
_________________________________________________________________
flatten (Flatten)            (None, 64)                0         
_________________________________________________________________
dense (Dense)                (None, 2)                 130       
_________________________________________________________________
softmax (Activation)         (None, 2)                 0         
=================================================================
Total params: 1,842
Trainable params: 1,762
Non-trainable params: 80

keisen self-assigned this Oct 5, 2020
keisen added the question label Oct 5, 2020
keisen (Owner) commented Oct 5, 2020

Hi @Priende, thank you for using tf-keras-vis.

I recommend using the keepdims option of Saliency. By default, Saliency is designed for image data, so the channels dimension (the last axis) of the input is reduced away; that is why your map comes back as (3, 1, 33). The default is keepdims=False; when keepdims is True, Saliency keeps that dimension, which should get you what you want.
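
For example, a minimal sketch reusing the saliency, loss, and X_train objects defined in your code above:

# With keepdims=True the last axis is not reduced, so the map should
# come back with the full input shape instead of (3, 1, 33).
saliency_map = saliency(loss, X_train, keepdims=True)
print(saliency_map.shape)  # expected: (3, 1, 33, 128)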

If I instead first transpose the input shape to (3, 33, 128, 1) to match the shape of the image data in the attentions example, I get the following error:

Although I don't know the specific cause, the error does not appear to be caused by tf-keras-vis.
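
One quick way to confirm this (a hedged sketch; X_train here is the original (3, 1, 33, 128) array): run a plain forward pass on the transposed data, bypassing tf-keras-vis entirely. If the same NotFoundError appears, the failure is coming from TensorFlow's Conv2D rather than from this library.

import tensorflow as tf

# Transpose to (3, 33, 128, 1) as described above, then call the model
# directly. An error raised here would rule out tf-keras-vis.
X_transposed = tf.transpose(X_train, perm=[0, 2, 3, 1])
preds = model(X_transposed)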


Priende commented Oct 6, 2020

Perfect, thank you!

Priende closed this as completed Oct 6, 2020