
No model.predict_proba or model.predict_classes using Functional API #2524

Closed
gammaguy opened this issue Apr 27, 2016 · 22 comments
@gammaguy

I was just trying the Functional API on my binary XOR function:

Preamble

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Input
from keras.models import Model

data = np.array([
[0, 0, 0],
[0, 1, 0],
[1, 0, 0],
[1, 1, 1]
])

X_train = X_test = data[:, :-1]
y_train = y_test = data[:, -1]

The Sequential Model way works fine:

model = Sequential()
model.add(Dense(10, input_dim=2, init='uniform', activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(X_train, y_train, nb_epoch=250, batch_size=1, verbose=1)

predict = model.predict(X_test, batch_size=1)
print(predict)
proba = model.predict_proba(X_test, batch_size=1)
print(proba)
classes = model.predict_classes(X_test, batch_size=1)
print(classes)

But the Functional API version doesn't work: model2.predict_proba and model2.predict_classes fail with the errors
"AttributeError: 'Model' object has no attribute 'predict_proba'" and
"AttributeError: 'Model' object has no attribute 'predict_classes'" respectively (although model2.predict works fine):

inputs = Input(shape=(2,))

x = Dense(7, activation='relu')(inputs)
x = Dense(7, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)

model2 = Model(input=inputs, output=predictions)
model2.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model2.fit(X_train, y_train, nb_epoch=250, batch_size=1, verbose=1)

predict = model2.predict(X_test, batch_size=1)
print(predict)
proba = model2.predict_proba(X_test, batch_size=1)
print(proba)
classes = model2.predict_classes(X_test, batch_size=1)
print(classes)

I understand how to get both from the model.predict and notice that they are not in the Keras Functional API documentation but just wanted to make sure it was done on purpose.
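For reference, both quantities can be recovered from the model.predict output directly; a minimal numpy sketch, assuming a single sigmoid output (the proba values are made up for illustration):

```python
import numpy as np

# Stand-in for what model.predict(X_test) returns for a single
# sigmoid output: shape (n_samples, 1), values in [0, 1].
proba = np.array([[0.02], [0.07], [0.11], [0.93]])

# The sigmoid output already is the probability of class 1,
# so "predict_proba" is simply model.predict.
# "predict_classes" is a 0.5 threshold on that probability.
classes = (proba > 0.5).astype('int32')
print(classes.ravel())  # [0 0 0 1]
```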

@jbencook
Contributor

jbencook commented May 5, 2016

+1 I'm curious about this too. What's the preferred way of retrieving probabilities?

@jbencook
Contributor

jbencook commented May 5, 2016

Ahh I guess it returns probabilities by default. My mistake.

@tjrileywisc
Contributor

I just noticed this too. predict_classes() should be simple to implement, but I don't know where it should go in the functional API. This is from models.py (for the Sequential model):

def predict_classes(self, x, batch_size=32, verbose=1):
    '''Generate class predictions for the input samples
    batch by batch.
    # Arguments
        x: input data, as a Numpy array or list of Numpy arrays
            (if the model has multiple inputs).
        batch_size: integer.
        verbose: verbosity mode, 0 or 1.
    # Returns
        A numpy array of class predictions.
    '''
    proba = self.predict(x, batch_size=batch_size, verbose=verbose)
    if proba.shape[-1] > 1:
        return proba.argmax(axis=-1)
    else:
        return (proba > 0.5).astype('int32')
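The same logic works as a free function on the array returned by any model's predict; a sketch with made-up numpy arrays standing in for Model.predict output:

```python
import numpy as np

def predict_classes(proba):
    # Standalone version of the Sequential.predict_classes logic,
    # applicable to the array returned by Model.predict.
    if proba.shape[-1] > 1:                   # softmax head: most probable class
        return proba.argmax(axis=-1)
    return (proba > 0.5).astype('int32')      # sigmoid head: 0.5 threshold

# Hypothetical prediction arrays:
softmax_out = np.array([[0.1, 0.7, 0.2], [0.8, 0.1, 0.1]])
sigmoid_out = np.array([[0.3], [0.9]])
print(predict_classes(softmax_out))          # [1 0]
print(predict_classes(sigmoid_out).ravel())  # [0 1]
```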

@fchollet
Member

For models that have more than one output, these concepts are ill-defined.
And it would be a bad idea to make available something in the one-output
case but not in other cases (inconsistent API).

For the Sequential model, the reason this is supported is for backwards
compatibility only.


@james18

james18 commented Jan 9, 2017

What does it mean: "For models that have more than one output, these concepts are ill-defined."?

I would like to have a multi input (X1 and X2), multi output model (Y1 and Y2), where I could predict Y1 and Y2 (both values and probabilities) given X1 and X2 inputs. Any suggestions?
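For what it's worth, predict on a multi-output Model returns one array per output head, so each head can be post-processed separately; a numpy sketch (the arrays are hypothetical stand-ins for what model.predict([X1, X2]) would return for a sigmoid head Y1 and a softmax head Y2):

```python
import numpy as np

# Stand-ins for the two arrays returned by a two-output Model:
y1_proba = np.array([[0.9], [0.2]])        # sigmoid head
y2_proba = np.array([[0.1, 0.6, 0.3]])     # softmax head

# Classes per head: threshold the sigmoid head, argmax the softmax head.
print((y1_proba > 0.5).astype('int32').ravel())  # [1 0]
print(y2_proba.argmax(axis=-1))                  # [1]
```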

@james18

james18 commented Jan 12, 2017 via email

@audiofeature

audiofeature commented Jan 13, 2017 via email

@james18

james18 commented Jan 13, 2017

from keras.utils.np_utils import probas_to_classes
should work for 1.2.0
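For readers on Keras 2, where this helper was removed: probas_to_classes was roughly equivalent to the following numpy sketch (argmax over categorical probabilities, 0.5 threshold otherwise):

```python
import numpy as np

def probas_to_classes(y_pred):
    # Rough re-implementation of the Keras 1.x helper that was
    # removed in Keras 2 (an approximation, not the original source).
    if y_pred.ndim > 1 and y_pred.shape[1] > 1:
        return y_pred.argmax(axis=-1)
    return np.array([1 if p > 0.5 else 0 for p in y_pred.ravel()])

print(probas_to_classes(np.array([[0.2, 0.8], [0.6, 0.4]])))  # [1 0]
print(probas_to_classes(np.array([0.3, 0.9])))                # [0 1]
```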

@mratsim

mratsim commented Jan 20, 2017

The KerasClassifier wrapper, in a Pipeline at least, needs predict_classes and can't be used with the functional model.

@alyato

alyato commented May 1, 2017

@james18 I know probas_to_classes is equivalent to predict_classes on Sequential.
Is model.predict in the functional API equal to predict_proba on Sequential?
Is that right?

@root-master

The output of model.predict() and model.predict_proba() is in both cases a numpy array of predicted classes, not probabilities. I am using the VGG16 architecture for a multi-label classification problem with activation='softmax' in the last layer. I am not sure how to obtain the probabilities (the raw output of the network). Using model.predict() will obviously predict wrong classes for a multi-label problem, because the classification threshold is set to 0.5 (a binary threshold). I am also not sure how the training error is computed for a multi-label problem in Keras. I am using keras.__version__ == 2.0.5. Can anyone recommend a way to compute the classification probabilities? And do you think Keras's approach is suitable for multi-label classification at all?
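For the multi-label case, the usual recipe is one sigmoid per label (with binary_crossentropy) rather than a softmax, followed by an independent 0.5 threshold per label; a numpy sketch with made-up probabilities:

```python
import numpy as np

# Per-label sigmoid outputs for 2 samples and 4 labels. In multi-label
# classification each label is an independent binary decision, so no
# softmax coupling across labels is wanted.
proba = np.array([[0.9, 0.2, 0.7, 0.1],
                  [0.4, 0.8, 0.3, 0.6]])

labels = (proba > 0.5).astype('int32')   # threshold each label separately
print(labels)
# [[1 0 1 0]
#  [0 1 0 1]]
```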

@audiofeature

audiofeature commented Jul 10, 2017 via email

@mcamack

mcamack commented Sep 21, 2017

@audiofeature I think you have things reversed? Sigmoid is a binary logistic classifier (2 classes), while softmax gives probabilities over many output classes.

@audiofeature

@mcamack
No, well, i was not precise enough:

For a multi-class problem, where you predict 1 of many classes, you use Softmax output.

However, in both binary and multi-label classification problems, where multiple classes might be 1 in the output, you use a sigmoid output.
(The question of Jacob was for multi label classification)
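The distinction can be seen numerically: softmax probabilities are coupled and sum to 1 (mutually exclusive classes), while sigmoid outputs are independent per class; a quick numpy illustration:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])

# Softmax: a distribution over mutually exclusive classes, sums to 1.
softmax = np.exp(logits) / np.exp(logits).sum()
print(round(softmax.sum(), 6))  # 1.0

# Sigmoid: each output is an independent probability, no sum constraint.
sigmoid = 1.0 / (1.0 + np.exp(-logits))
print(sigmoid.sum() > 1.0)  # True for these logits
```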

@am-firnas

The predict() method in the functional API gives probability values between 0 and 1. How do I get the predicted real values if it is regression with an LSTM? :(

@mratsim

mratsim commented Nov 21, 2017

@AMFIRNAS I think that's a question for Stack Overflow. For regression your last layer shouldn't have an activation (or activation='linear', i.e. f(x) = x).

@jemshit

jemshit commented Aug 26, 2018

y_prob = model.predict(x) 
y_classes = y_prob.argmax(axis=-1)

https://stackoverflow.com/a/45176824/3736955

@scofield7419

multi label class

This is exactly the problem I encountered too.
A multi-label task can be handled with the functional API's Model(), which yields the predicted probabilities via model.predict().

@arpita739

from sklearn import metrics
import matplotlib.pyplot as plt

plt.figure()

# The loop below iterates through your models list
for m in models:
    model = m['model']  # select the model
    #model.fit(X_train, y_train)  # train the model
    y_pred = model.predict(X_test)  # predict the test data
    # Compute false positive rate and true positive rate
    #fpr, tpr, thresholds = metrics.roc_curve(y_test, model.y_pred_bin(X_test)[:,1])
    fpr, tpr, thresholds = metrics.roc_curve(y_test, model.predict_proba(X_test)[:,1])
    # Calculate area under the curve to display on the plot
    auc = metrics.roc_auc_score(y_test, model.predict(X_test))
    # Now, plot the computed values
    plt.plot(fpr, tpr, label='%s ROC (area = %0.2f)' % (m['label'], auc))

# Custom settings for the plot
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('1-Specificity (False Positive Rate)')
plt.ylabel('Sensitivity (True Positive Rate)')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()  # Display

AttributeError: 'Functional' object has no attribute 'predict_proba'

Please help with this error

@chandanmalla

This should be closed.

You can use model.predict() instead of model.predict_proba()
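Concretely, wherever sklearn-style code calls predict_proba, the output of a Keras Functional model's predict can be used directly as the score; a numpy sketch with made-up scores (AUC computed by pairwise ranking, which for tie-free scores matches roc_auc_score):

```python
import numpy as np

y_test = np.array([0, 0, 1, 1])
# Stand-in for model.predict(X_test), which for a Functional model
# already returns probabilities -- no predict_proba needed.
y_score = np.array([[0.1], [0.4], [0.35], [0.8]]).ravel()

# AUC = fraction of (positive, negative) pairs ranked correctly.
pos, neg = y_score[y_test == 1], y_score[y_test == 0]
auc = np.mean([p > n for p in pos for n in neg])
print(auc)  # 0.75
```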

@tanmoy-abcd

You can instead use self.model.predict_on_batch().

The following is a custom implementation that prints the F1 score and AUC after each epoch:

from sklearn.metrics import f1_score, roc_auc_score
import numpy as np
import tensorflow as tf

def y_(y):
    # Convert each row of probabilities / one-hot values back to a single
    # binary column, so sklearn doesn't treat the output as multilabel.
    r = []
    for i in y:
        if i[0] > 0.5:
            r.append([0])
        else:
            r.append([1])
    return np.array(r)

class f1_auc(tf.keras.callbacks.Callback):
    def __init__(self, training_data, validation_data):
        self.x = training_data[0]
        self.y = training_data[1]
        self.x_val = validation_data[0]
        self.y_val = validation_data[1]

    def on_epoch_end(self, epoch, logs={}):
        y_pred_train = self.model.predict_on_batch(self.x)
        roc_train = roc_auc_score(y_(self.y), y_(y_pred_train))
        y_pred_val = self.model.predict_on_batch(self.x_val)
        roc_val = roc_auc_score(y_(self.y_val), y_(y_pred_val))
        f1_train = f1_score(y_(self.y), y_(y_pred_train))
        f1_val = f1_score(y_(self.y_val), y_(y_pred_val))
        print('\rroc-auc_train: %s - roc-auc_val: %s' % (str(round(roc_train, 4)), str(round(roc_val, 4))), end=100 * ' ' + '\n')
        print('f1_train : {} , f1_val : {}'.format(f1_train, f1_val))

f1auc = f1_auc(training_data=(X_train, Y_train), validation_data=(X_test, Y_test))

Here my Y is categorical, so I transformed each output to the [0, 1] categorical type, but when you pass such outputs through sklearn.metrics' f1_score and roc_auc_score they raise an error because they treat it as multilabel output, so y_ is a custom function that just converts them back to a single true value.

Hope it helps !!

@krille90

krille90 commented Jun 9, 2021

I guess this should be closed as both predict_proba and predict_classes are deprecated.
