First of all, thank you so much for sharing this package. I am trying to create a saliency map, but have been unable to, and I am hoping that you might be able to offer some advice.
Instead of using images as the input for classification, I am using 2-dimensional arrays of numbers. To create saliency maps for 3 samples, my input data (X) has the shape (3, 1, 33, 128), where 3 is the number of samples, 33 is the number of rows, and 128 is the number of columns, because that is the shape the input data must have during model training. When I create a saliency map with the attentions example code and run print(saliency_map.shape), it shows (3, 1, 33), so it looks like the column information is lost. If I instead first transpose the input shape to (3, 33, 128, 1) to match the shape of the image data in the attentions example, I get the following error:
```
NotFoundError                             Traceback (most recent call last)
<ipython-input-172-58f751d2ff4e> in <module>()
     20 from tf_keras_vis.saliency import Saliency
     21 from tf_keras_vis.utils import normalize
---> 22 saliency_map = saliency(loss, X_train)
     23 saliency_map = normalize(saliency_map)

13 frames
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

NotFoundError: No algorithm worked! [Op:Conv2D]
```
My code is as follows:
```python
from tensorflow.keras import backend as K
from tensorflow.keras.models import load_model
from tf_keras_vis.saliency import Saliency
from tf_keras_vis.utils import normalize
import tensorflow as tf
import keras

model = load_model('my_model.h5')
model.summary()

def model_modifier(m):
    m.layers[-1].activation = tf.keras.activations.linear
    return m

def loss(output):
    return (output[0][0], output[1][0], output[2][0])

saliency = Saliency(model, model_modifier=model_modifier)

saliency_map = saliency(loss, X_train)
saliency_map = normalize(saliency_map)
```
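As an aside, the transpose described above — moving the singleton axis of (3, 1, 33, 128) to the end to get (3, 33, 128, 1) — can be sketched with NumPy (a minimal illustration of the reshaping, not part of the original code):

```python
import numpy as np

# Dummy data with the training shape: (samples, 1, rows, cols).
X = np.zeros((3, 1, 33, 128), dtype=np.float32)

# Move the singleton axis to the end so the data looks like
# channels-last image data: (samples, rows, cols, channels).
X_t = np.transpose(X, (0, 2, 3, 1))
print(X_t.shape)  # (3, 33, 128, 1)
```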
Hi @Priende, thank you for using tf-keras-vis.
I recommend using the keepdims option of Saliency.
By default, Saliency assumes image data, so the channels dimension of the input is reduced away.
With keepdims set to True, Saliency should produce the result you want.
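To illustrate why the last dimension disappears, here is a minimal NumPy sketch of the post-processing as I understand it: the gradient tensor is reduced over the trailing channels axis unless keepdims is set (this is a simplified stand-in for Saliency's internals, not its actual code):

```python
import numpy as np

def to_saliency_map(grads, keepdims=False):
    # Simplified sketch: take absolute gradients, then collapse the
    # trailing "channels" axis unless keepdims is requested.
    m = np.abs(grads)
    if not keepdims:
        m = np.max(m, axis=-1)
    return m

grads = np.ones((3, 1, 33, 128), dtype=np.float32)
print(to_saliency_map(grads).shape)                 # (3, 1, 33)
print(to_saliency_map(grads, keepdims=True).shape)  # (3, 1, 33, 128)
```

In terms of the code above, the fix would be to pass the option at call time, e.g. `saliency_map = saliency(loss, X_train, keepdims=True)`.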
> If I instead first transpose the input shape to (3, 33, 128, 1) to match the shape of the image data in the attentions example, I get the following error:
Although I don't know the specific cause, the error does not seem to be caused by tf-keras-vis.