
How can I get hidden layer representation of the given data? #41

Closed

erogol opened this issue Apr 8, 2015 · 44 comments

Comments

@erogol

erogol commented Apr 8, 2015

After training, I want to extract the hidden-layer representation of the given data instead of the final probabilities. How can I do that with Keras?

@fchollet
Member

fchollet commented Apr 9, 2015

One simple way to do it is to use the weights of your model to build a new model that's truncated at the layer you want to read. Then you can run the ._predict(X_batch) method to get the activations for a batch of inputs.

Example:

from keras.models import Sequential
from keras.layers.core import Dense, Activation

# this is your initial model
model = Sequential()
model.add(Dense(20, 64, init='uniform'))
model.add(Activation('tanh'))
model.add(Dense(64, 1, init='uniform'))
model.add(Activation('softmax'))

# we train it
model.compile(loss='mse', optimizer='sgd')
model.fit(X_train, y_train, nb_epoch=20, batch_size=16)

# we build a new model with the activations of the old model
# this model is truncated after the first layer
model2 = Sequential()
model2.add(Dense(20, 64, weights=model.layers[0].get_weights()))
model2.add(Activation('tanh'))

activations = model2._predict(X_batch)

Note: I haven't tested it.

Another way to do it would be to define a Theano function to get the layer's output:

import theano
get_activations = theano.function([model.layers[0].input], model.layers[1].output(train=False), allow_input_downcast=True)
activations = get_activations(X_batch) # same result as above

Note: also untested.

@erogol
Author

erogol commented Apr 9, 2015

Thanks... :)

@xypan1232
xypan1232 commented Sep 2, 2015

Hi,

I tried to use activations = model2._predict(X_batch), but it seems there is no _predict function. Is this a bug? Thanks.

File "/usr/local/lib/python2.7/dist-packages/Keras-0.1.2-py2.7.egg/keras/models.py", line 469, in predict
return self._predict_loop(self._predict, X, batch_size, verbose)[0]
AttributeError: 'Sequential' object has no attribute '_predict'

@fchollet
Member

fchollet commented Sep 3, 2015

You need to compile your model first.


@damaha

damaha commented Sep 8, 2015

I created the Theano function to get the activations; that's easy. A function to get the activations of any model at any layer can very easily be defined:

def get_activations(model, layer, X_batch):
    get_activations = theano.function([model.layers[0].input], model.layers[layer].get_output(train=False), allow_input_downcast=True)
    activations = get_activations(X_batch) # same result as above
    return activations

But it seems there is no attribute called 'output' on any of the layers, so I guess the 'get_output()' method is what is supposed to be used. That works for me, at least.

@hemanth2090

@damaha I am trying to get the weights of the hidden layers in a Graph model. I used the function you defined above and got an error: AttributeError: 'Graph' object has no attribute 'layers'

get_output() does not give the required output when using a Graph model. Can you please share a code snippet of your implementation?

@fchollet Can you please help as well? I am looking to save the weights so that I can create an image after the convolutions. I used model.save_weights to save an hdf5 file, but I am unable to convert that into a numpy array and save it as an image.
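
For the weights-to-image part, one untested sketch: read the kernel back with get_weights() as numpy arrays and save a slice with matplotlib (which is outside Keras; model.layers[0] below stands in for whichever conv layer/node you mean):

import matplotlib.pyplot as plt

# get_weights() returns numpy arrays, typically [kernel, bias];
# for a conv layer the kernel is 4D: (nb_filter, stack_size, rows, cols)
kernel, bias = model.layers[0].get_weights()

# save the first filter's first input channel as a grayscale image
plt.imsave('filter0.png', kernel[0, 0], cmap='gray')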

@lemuriandezapada

Graph layers are called nodes.

@havaeimo

I have a CNN model where I do a softmax across channels. (I've made a custom activation function and passed it to Convolution2D to do this.)

graph = Graph()
graph.add_input(name='input',input_shape=(img_channels, img_rows, img_cols))
graph.add_node(Convolution2D(16, 3, 3, border_mode='valid'),name='conv_1',input='input')
graph.add_node(Activation('relu'),name='rlu_1',input='conv_1')
graph.add_node(Convolution2D(nb_classes, 22, 22,activation=across_channel_softmax, border_mode='valid'), name='conv_2', input='rlu_1')  # By doing this convolutional operation the shape of the featuremaps will be (nb_classes,1,1)
graph.add_node(Flatten(), name='flatten', input='conv_2')
graph.add_output(name='output',input='flatten')

Now, to get outputs from my across_channel_softmax layer, I define a Theano function:

get_activations = theano.function([graph.inputs['input'].input], graph.nodes['conv_2'].get_output(train=False), allow_input_downcast=True)
results = get_activations(data)

It works if data has the same shape (i.e. the same rows and cols) as the training data. But I'm under the assumption that nothing in the Theano function get_activations restricts the input shape, so I should be able to give it a larger image size and the output would scale accordingly. However, when I do so I get the following error.
Note that getting the outputs of 'conv_1' the same way works, but that's not what I'm interested in.

ValueError: Dimension 2 in Rebroadcast's input was supposed to be 1 (got 11 instead)
Apply node that caused the error: Rebroadcast{?,?,1,1}(GpuDimShuffle{1,0,2,3}.0)
Toposort index: 44
Inputs types: [CudaNdarrayType(float32, 4D)]
Inputs shapes: [(288, 5, 11, 11)]
Inputs strides: [(231, 66528, 11, 1)]
Inputs values: ['not shown']
Outputs clients: [[GpuElemwise{Add}[(0, 0)](Rebroadcast{?,?,1,1}.0, GpuDimShuffle{x,0,x,x}.0)]]

@justdark

justdark commented Nov 4, 2015

@damaha your function has a problem when I use a Merge layer as the first layer of the model. How can I solve it?

@justdark

justdark commented Nov 4, 2015

I see, I should pass the variable to the original Sequential model before it is merged.

@magic282

I have a Graph model and tried to get the output of the hidden layers using the method from the Keras FAQ:

(feature1, feature2, feature3) = self.feature_to_nparray(feature)
feature1= np.array([feature1])
feature2= np.array([feature2])
feature3= np.array([feature3])
data = {'f1': feature1, 'f2': feature2, 'f3': feature3}
get_h1_output = theano.function([self.model.inputs[i].input for i in self.model.input_order],
                           self.model.nodes['h1'].get_output(train=False),
                           on_unused_input='ignore', allow_input_downcast=True)
get_softmax_output = theano.function([self.model.inputs[i].input for i in self.model.input_order],
                                    self.model.nodes['softmax'].get_output(train=False),
                                    on_unused_input='ignore', allow_input_downcast=True)
h1_output = get_h1_output(data)
softmax_output = get_softmax_output(data)

But I got the following error:

...
  File "C:\Anaconda\lib\site-packages\theano-0.7.0-py2.7.egg\theano\compile\function_module.py", line 786, in __call__
    allow_downcast=s.allow_downcast)
  File "C:\Anaconda\lib\site-packages\theano-0.7.0-py2.7.egg\theano\tensor\type.py", line 116, in filter
    data = theano._asarray(data, dtype=self.dtype)
  File "C:\Anaconda\lib\site-packages\theano-0.7.0-py2.7.egg\theano\misc\safe_asarray.py", line 33, in _asarray
    rval = numpy.asarray(a, dtype=dtype, order=order)
  File "C:\Anaconda\lib\site-packages\numpy\core\numeric.py", line 462, in asarray
    return array(a, dtype, copy=False, order=order)
TypeError: ('Bad input argument to theano function with name "Classifier.py:117"  at index 0(0-based)', 'float() argument must be a string or a number')

What does this error mean? Is there something wrong with the code for getting the intermediate layer output?

Thanks!

@MLWave

MLWave commented Apr 19, 2016

@xypan1232

I got the same for Keras 1.0.

AttributeError: 'Sequential' object has no attribute '_predict'

Changing _predict to predict fixed it for me.

@damaha

damaha commented May 12, 2016

Small changes happened in Keras 1.0, and the function I posted earlier should be changed to this:

from keras import backend as K
def get_activations(model, layer, X_batch):
    get_activations = K.function([model.layers[0].input, K.learning_phase()], model.layers[layer].output)
    activations = get_activations([X_batch,0])
    return activations

Here layer is an integer index.
With the new functional API for defining graph-structured models, you should be able to use the same function (above) on any defined model, as long as you know the index of the layer you want hidden activations extracted from.
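
If you know the layer's name rather than its index, the index can be looked up first (a small untested sketch; 'my_layer' is a made-up name):

layer_index = [l.name for l in model.layers].index('my_layer')
activations = get_activations(model, layer_index, X_batch)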

@artemyk

artemyk commented May 15, 2016

I got an error running the previous code and had to modify it so that the output is also a list:

from keras import backend as K
def get_activations(model, layer, X_batch):
    get_activations = K.function([model.layers[0].input, K.learning_phase()], [model.layers[layer].output,])
    activations = get_activations([X_batch,0])
    return activations

@EpochalEngineer

In addition to setting up a test point at every layer I want to inspect, do I have to run the batch through each test point separately?

Is there no way to run one batch and get data from every test point?
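
A single backend function can return several outputs at once, so one pass over the batch should cover every test point. An untested sketch in the same Keras 1.x style as the snippets above:

from keras import backend as K

def get_all_activations(model, X_batch):
    # one compiled function whose outputs are ALL layer outputs
    get_all = K.function([model.layers[0].input, K.learning_phase()],
                         [layer.output for layer in model.layers])
    return get_all([X_batch, 0])  # 0 = test mode; returns one array per layer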

@jdoerrie
Contributor

jdoerrie commented Jun 9, 2016

I just ran into a situation where I was using the functional API, similar to the one described at http://keras.io/getting-started/faq/#how-can-i-visualize-the-output-of-an-intermediate-layer. Unfortunately, I did not have access to the inputs and encoded variables anymore, so the proposed solution

encoder = Model(input=inputs, output=encoded)
X_encoded = encoder.predict(X)

was not possible. My situation was similar to this:

def get_model():
    inputs = Input(shape=(784,), name='input')
    encoded = Dense(32, activation='relu', name='encoded')(inputs)
    decoded = Dense(784)(encoded)
    return Model(input=inputs, output=decoded)

def get_encoded(model, X):
    # Want to get encoder output here

model = get_model()
get_encoded(model, X)

Note that I manually named the relevant layers.

I was hoping a simple

input_layer = model.get_layer('input')
encoded_layer = model.get_layer('encoded')
encoder = Model(input=input_layer, output=encoded_layer)

would work, however since those are layers and not tensors an exception is thrown (https://github.com/fchollet/keras/blob/master/keras/engine/topology.py#L1545-L1549).

After digging through the source of Layer.__call__ (https://github.com/fchollet/keras/blob/master/keras/engine/topology.py#L489-L500), my workaround was:

input_layer = model.get_layer('input')
encoded_layer = model.get_layer('encoded')

input_tensor = input_layer.inbound_nodes[-1].output_tensors[0]
encoded_tensor = encoded_layer.inbound_nodes[-1].output_tensors[0]

encoder = Model(input=input_tensor, output=encoded_tensor)

This of course will break if the __call__ logic changes, but it might be useful anyway. Also I was wondering whether it would make sense to make the Model constructor more forgiving and allow passing layers. Thoughts?
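
For the common case where each named layer has exactly one inbound node, a simpler route (untested) is the layer's output attribute, which sidesteps inbound_nodes entirely:

input_tensor = model.input
encoded_tensor = model.get_layer('encoded').output  # valid when the layer has a single inbound node

encoder = Model(input=input_tensor, output=encoded_tensor)
X_encoded = encoder.predict(X)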

@jinopallickal

@damaha @artemyk

When I run this code for a 3-layer CNN, it works perfectly, but for a 4-layer CNN it shows an error like a core dump. What may be the reason?

from keras import backend as K
def get_activations(model, layer, X_batch):
    get_activations = K.function([model.layers[0].input, K.learning_phase()], model.layers[layer].output)
    activations = get_activations([X_batch,0])
    return activations

@davharris

It seems like this is an ongoing issue (I'm the 15th person to comment in this thread over the course of a year, and the same question was asked in #53, #89, #113, and #621). Would it make sense to add a get_activations function to the package that users can call like get_weights?

@alyato

alyato commented Nov 2, 2016

Hi @magic282, did you solve your question? There is something wrong with my code.

model = graph()
model.add_input(name='input0',input_shape=())
model.add_node(Convolution2D(),name='c1',input='input0')
.......

And I want to see the output of c1. Then:

getFeatureMap = theano.function(model.inputs['input0'].input, model.nodes['c1'].get_output(train=False),
                                allow_input_downcast=True)

But it shows me:
TypeError: list indices must be integers, not str

@magic282

magic282 commented Nov 2, 2016

@alyato Sorry, I am using other tools now.

@alyato

alyato commented Nov 2, 2016

@damaha, I use the Graph model.

model = graph()
model.add_input(name='input0',input_shape=())
model.add_node(Convolution2D(),name='c1',input='input0')
.......

And I want to see the output of c1. Then:

getFeatureMap = theano.function(model.inputs['input0'].input, model.nodes['c1'].get_output(train=False),
                                allow_input_downcast=True)

But it shows me:
TypeError: list indices must be integers, not str

Could you give me some advice, please? Thanks.

@fractalvision

@alyato it's the index of the needed layer in model.layers, so something like dict(enumerate(model.layers)).

@alyato

alyato commented Nov 12, 2016

@fractalvision, thanks, but I still don't understand. Could you explain what you mean in more detail?
I called model.get_config() and got the following result:

[screenshot of the class_name list]

We can check the sequence of class_name entries, like:

the index of Input is 0
the index of ZeroPadding is 1

Is that right?

@generallc

generallc commented Nov 25, 2016

@fchollet @damaha Suppose the model is a fully-convolutional net with a merge layer. When the input image size is not equal to the training patch size, I cannot get the merge layer activations.
For example, the model is: [model diagram: ddaeircnn5_3x3_train30]

When the input size is 64 x 64, I can get any layer's activations using the function you refer to.
When the input size is 576 x 720, I can get the first, second, and third layer activations, but I cannot get the fifth layer (after the merge layer) activations. The error is as follows: [error screenshot]

Maybe this is because the model uses deconvolution networks, whose output shape must be predetermined. This causes the model to fail to predict images of different sizes. https://github.com/titu1994/Image-Super-Resolution/blob/master/models.py

@srv902

srv902 commented Dec 11, 2016

How do I modify the output of a hidden layer before passing it to the next layer, e.g. by division or multiplication?
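
A Lambda layer is one way to apply such an elementwise transformation between layers. An untested sketch (the layer sizes and the division by 2 are made up for illustration):

from keras.models import Sequential
from keras.layers import Dense, Lambda

model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
# divide the hidden representation by 2 before it reaches the next layer
model.add(Lambda(lambda x: x / 2.0))
model.add(Dense(1, activation='sigmoid'))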

@philipperemy

More generally, you can visualise the outputs/activations of every layer of your model. I wrote an example with MNIST that shows how:

https://github.com/philipperemy/keras-visualize-activations

So far it's the least painful approach I've seen.

@cpury

cpury commented Aug 7, 2017

The code posted here by @damaha and @artemyk works great for most layers, but I'm working with RNNs and would like to get the activations of each cell for each datapoint in the input sequence.

It seems like the code only returns the final output after the whole sequence was processed.

Any ideas?

@philipperemy

philipperemy commented Aug 8, 2017

@cpury you should be able to use this trick:

Suppose your sequence is [0, 2, 3, 5, 6].

You are able to extract the activations after the whole sequence is processed, right?

So why don't you just subsample iteratively from the initial sequence? Here we go:

Input your sequence 1: [0] -> get the activations 1 (just after the RNN)
Input your sequence 2: [0, 2] -> get the activations 2 (just after the RNN)
Input your sequence 3: [0, 2, 3] -> get the activations 3 (just after the RNN)
Input your sequence 4: [0, 2, 3, 5] -> get the activations 4 (just after the RNN)
Input your sequence 5: [0, 2, 3, 5, 6] -> get the activations 5 (just after the RNN)

Merge the activations of 1, 2, 3, 4, 5 into a list. You now have the activations for each time step.

It should work nicely but it will require O(n) computation if your sequence is of length n.

If you want something O(1), drop Keras and go straight to TensorFlow!

Hope it helps
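
A rough sketch of that loop (untested; get_activations is the helper defined earlier in this thread, and maxlen and rnn_layer_index are assumptions about your model):

from keras.preprocessing.sequence import pad_sequences

seq = [0, 2, 3, 5, 6]
per_timestep = []
for t in range(1, len(seq) + 1):
    # pad each prefix to the model's fixed input length
    prefix = pad_sequences([seq[:t]], maxlen=maxlen)
    # activations just after the RNN layer
    per_timestep.append(get_activations(model, rnn_layer_index, prefix))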

@leonardltk

Hi, I am using the functional API for a basic classifier:

Inp = Input( shape=(1,969), name='Input' )
x = Dense(units=512, activation='sigmoid', name='Hidden')(Inp)
x = Dense(units=20, activation='softmax', name='Output')(x)
model = Model(Inp, x)

After training, I would like to extract the output of the "Hidden" layer BEFORE the sigmoid activation is applied.
How do I do that?

I know for a fact that extracting the layer output AFTER the sigmoid activation works as follows:

Inp = model.input
Outp = model.get_layer('Hidden').output
curr_layer_model = Model(Inp, Outp)
bottle_feature = curr_layer_model.predict(x)

But how do I extract it BEFORE the sigmoid activation?

Thanks
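
One untested sketch: build the model with the Dense layer kept linear and the sigmoid applied as a separate Activation layer, so the pre-activation tensor becomes an ordinary layer output ('Hidden_preact' is a made-up name, and x_data stands in for your input batch):

from keras.layers import Input, Dense, Activation
from keras.models import Model

Inp = Input(shape=(1,969), name='Input')
z = Dense(units=512, name='Hidden_preact')(Inp)   # linear: no sigmoid yet
h = Activation('sigmoid', name='Hidden')(z)
out = Dense(units=20, activation='softmax', name='Output')(h)
model = Model(Inp, out)

# after training, the pre-sigmoid representation:
pre_act_model = Model(model.input, model.get_layer('Hidden_preact').output)
bottle_feature = pre_act_model.predict(x_data)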

@vinayakumarr

from keras import backend as K

def get_activations(model, layer, X_batch):
    get_activations = K.function([model.layers[0].input, K.learning_phase()], model.layers[layer].output)
    activations = get_activations([X_batch,0])
    print(activations)
    return activations

my_featuremaps = get_activations(cnn, 1, ([X_train[:10], 0])[0])
np.savetxt('featuremap.txt', my_featuremaps)

The above code generates the following error with TensorFlow as the backend:

TypeError: outputs of a TensorFlow backend function should be a list or tuple.

Actually, this works fine with Theano as the backend.
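
As the error says (and as @artemyk noted above), the TensorFlow backend requires the outputs argument of K.function to be a list, so wrapping the output should fix it (untested):

get_activations = K.function([model.layers[0].input, K.learning_phase()],
                             [model.layers[layer].output])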

@YukiYaoxq

@damaha Hello, sorry, I want to know what 'predictions' stands for. Can I treat it as the output I want? Thanks.

@haskarb

haskarb commented Oct 14, 2017

I am using this code to get the representation after the model.add(Dense(hidden_dims)) statement. Since I am using the sigmoid function at every layer, why am I getting negative values?
@damaha @artemyk @fchollet

from __future__ import print_function

from keras.preprocessing import sequence
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import Conv1D, GlobalMaxPooling1D
from keras.datasets import imdb
from keras import backend as K
import numpy as np
# set parameters:
max_features = 500
maxlen = 40
batch_size = 32
embedding_dims = 50
filters = 250
kernel_size = 3
hidden_dims = 20
epochs = 2
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')

print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, embedding_dims, input_length=maxlen))
model.add(Dropout(0.2))
model.add(Conv1D(filters, kernel_size, padding='valid', activation='relu', strides=1))
model.add(GlobalMaxPooling1D())
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
          validation_data=(x_test, y_test))
model.compile(loss='binary_crossentropy', optimizer='adam',   metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test))
def get_activations(model, layer, X_batch):
    get_activations = K.function([model.layers[0].input, K.learning_phase()], [model.layers[layer].output,])
    activations = get_activations([X_batch,0])
    return activations
X_train=np.array(get_activations(model=model,layer=4, X_batch=x_train)[0], dtype=np.float32)
print(X_train)

@FiammettaC

FiammettaC commented May 27, 2018

I have a network in Keras and my last layer is a fully connected layer with activation = softmax. I extracted the weights from the softmax (as suggested in the posts above) and multiplied them by 0 and 1 (I am trying to restrict the predictions).

How can I pass the modified softmax weights matrix to the model? Is there a way to compile and fit a Keras model with updated weights?

I know about the function set_weights(), but my problem is that set_weights() expects input of shape (hidden_layers, vocab_size), while my new weights have shape (len(X_train), vocab_size). Does anybody know how to do this?

This is my model:

def get_activations(model, layer, X_batch):
    get_activations = K.function([model.layers[0].input, K.learning_phase()], [model.layers[layer].output,])
    activations = get_activations([X_batch,0])
    return activations

model = Sequential()
model.add(Embedding(vocab_size, embedding_size, input_length=55, weights=[pretrained_weights]))
model.add(Bidirectional(LSTM(units=embedding_size)))
model.add(Dense(vocab_size, activation='softmax'))

softmax_weights = np.array(get_activations(model, 3, X_train)[0], dtype=np.float32)

modified_softmax_weights = np.multiply(softmax_weights, weights_array)
# weights_array is a binary array containing 0s and 1s, according to the index of the elements I would like to set to 0

@iagorichard

iagorichard commented Oct 26, 2018

Hi all,

I have a big question. I have trained my model and tried to compile my second model. It works fine, but I can't predict my batch_X because I'm using images as input data.

My code to get the training and validation data is as follows:

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

When I try to use my train_generator in model2.predict(), it shows me the error: ValueError: Please provide as model inputs either a single array or a list of arrays. You passed: x=<keras_preprocessing.image.DirectoryIterator object at 0x00000292311E7710>

Can anyone help me? Sorry for my bad English, I'm learning it!

Best Regards.

UPDATE
I have solved that problem using:

x_batch, y_batch = train_generator.next()
activations = model2.predict(x_batch)

Thanks!

@olivierblais

olivierblais commented Oct 29, 2018

Hello, I am trying to get the hidden layer representation of the given data using the above solutions:

from keras import backend as K

def get_activations(model, layer, X_batch):
    get_activations = K.function([model.layers[0].input, K.learning_phase()], [model.layers[layer].output,])
    activations = get_activations([X_batch,0])
    return activations

However, my model has 3 inputs. Do you know how I should modify this function?

Thank you in advance
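
For a multi-input functional model, the backend function can take all of model.inputs plus the learning phase. An untested sketch:

from keras import backend as K

def get_activations_multi_input(model, layer, X_batches):
    # X_batches is a list with one array per model input, in model.inputs order
    get_acts = K.function(model.inputs + [K.learning_phase()],
                          [model.layers[layer].output])
    return get_acts(X_batches + [0])  # 0 = test mode

# e.g. activations = get_activations_multi_input(model, 4, [X1, X2, X3])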

@imairish

imairish commented Oct 1, 2019

@alyato

I was a bit quick in copying your code before and not checking whether it made sense. From Keras >1.0.0, layers don't have a method called get_output(). In my second comment in this thread I also state this and rewrite the proposed get_activations() function. Instead you need to use the attribute layers[index].output

Another thing is wrong in your code: Keras 1.0.3 doesn't even have a model type named graph(); it has Graph(), which is imported from a module named .legacy.models. I highly recommend that you DO NOT use this! Instead, use the new functional API, which has a nice introduction here: https://keras.io/getting-started/functional-api-guide/
This is how you do DAG models in Keras now. In your simple toy case above you could also use Sequential models.

Here is example code that works with the functional API and Keras 1.1.0 (Should work in any version > 1.0.0):

import numpy as np
from keras.layers import Input, Dense, Convolution2D, Flatten
from keras.models import Model
import keras.backend as K

def get_activations(model, layer, X_batch):
    get_activations = K.function([model.layers[0].input, K.learning_phase()], model.layers[layer].output)
    activations = get_activations([X_batch,0])
    return activations

inputs = Input(shape=(1,28,28))

x = Convolution2D(64,3,3)(inputs)
x = Flatten()(x)
predictions = Dense(10, activation='softmax')(x)

model = Model(input=inputs, output=predictions)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

X = np.ones((10,1,28,28))
my_featuremaps = get_activations(model, 1, X)

How can we implement a DAG-RNN in Keras?

@devjaynemorais

Could someone help me solve the following problem?

Environment: Keras==1.1.0 Theano==1.0.2 numpy==1.15.1 scipy==1.3.0

I created a fine-tuning setup and froze all layers except layer [2], because I want to get the activation values only from layer [2].

Network summary before freezing:

Layer (type)         Output Shape    Param #    Connected to
dense_1 (Dense)      (None, 512)     2097664    dense_input_1[0][0]
dropout_1 (Dropout)  (None, 512)     0          dense_1[0][0]
                                                dense_1[0][0]
dense_2 (Dense)      (None, 32)      16416      dropout_1[0][0]
                                                dropout_1[1][0]
dropout_2 (Dropout)  (None, 32)      0          dense_2[0][0]
                                                dense_2[1][0]
dense_3 (Dense)      (None, 1)       33         dropout_2[0][0]
                                                dropout_2[1][0]

Total params: 2114113

Freezing layers:

for layer in model.layers[0:]:
    layer.trainable = False
model.layers[2].trainable = True

Network summary after freezing:

Layer (type)         Output Shape    Param #    Connected to
dense_1 (Dense)      (None, 512)     0          dense_input_1[0][0]
dropout_1 (Dropout)  (None, 512)     0          dense_1[0][0]
dense_2 (Dense)      (None, 32)      16416      dropout_1[1][0]
dropout_2 (Dropout)  (None, 32)      0          dense_2[1][0]
dense_3 (Dense)      (None, 1)       0          dropout_2[1][0]

Total params: 16416

To print the output of layer [2]:

OutFunc = keras.backend.function([model2.input], [model2.layers[2].get_output_at(0)])
out_val = OutFunc([inputs])[0]
print(out_val)

This returns the following error:

MissingInputError                         Traceback (most recent call last)
----> 2 OutFunc = keras.backend.function([model2.input], [model2.layers[2].get_output_at(0)])

~/anaconda3/lib/python3.7/site-packages/keras/backend/theano_backend.py in function(inputs, outputs, updates, **kwargs)
~/anaconda3/lib/python3.7/site-packages/theano/compile/function.py in function(inputs, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input)
~/anaconda3/lib/python3.7/site-packages/theano/compile/pfunc.py in pfunc(params, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input, output_keys)
~/anaconda3/lib/python3.7/site-packages/theano/compile/function_module.py in orig_function(inputs, outputs, mode, accept_inplace, name, profile, on_unused_input, output_keys)
~/anaconda3/lib/python3.7/site-packages/theano/gof/fg.py in import_r(self, variable, reason)

MissingInputError: Input 0 of the graph (indices start from 0), used to compute InplaceDimShuffle{x,x}(keras_learning_phase), was not provided and not given a value. Use the Theano flag exception_verbosity='high' for more information on this error.

Backtrace when that variable is created:

File "/home/jayne/anaconda3/lib/python3.7/site-packages/keras/backend/theano_backend.py", line 23, in <module>
    _LEARNING_PHASE = T.scalar(dtype='uint8', name='keras_learning_phase')  # 0 = test, 1 = train
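
The error says the keras_learning_phase input was not provided. Feeding the learning phase to the backend function, as in earlier comments in this thread, should fix it (untested):

import keras.backend as K

OutFunc = K.function([model2.input, K.learning_phase()],
                     [model2.layers[2].get_output_at(0)])
out_val = OutFunc([inputs, 0])[0]  # 0 = test mode
print(out_val)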
