
predict_classes in score_model.py results in AttributeError #64

Closed
bml1g12 opened this issue Aug 20, 2018 · 8 comments
Labels
priority: MEDIUM medium priority

Comments


bml1g12 commented Aug 20, 2018

  • [x] I'm up-to-date with the latest release:

    pip install --upgrade --user git+https://github.com/autonomio/talos.git@daily-dev
    
  • [x] I've confirmed that my Keras model works outside of Talos.


When I run

h = ta.Scan(X_train, Y_train, params=p, dataset_name="debug", experiment_no="1", model=keras_nn_model_talos, grid_downsample=0.002, talos_log_name="talos.log", reduction_method="spear", reduction_metric="val_fbeta_score_acc")

I get the following error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-9-b4cbea7ca6f1> in <module>()
      8      'second_GRU_layer':[True, False]}
      9 h = ta.Scan(X_train, Y_train, x_val=X_dev, y_val=Y_dev, params=p, dataset_name="debug", experiment_no="1", 
---> 10             model=keras_nn_model_talos, grid_downsample=0.002, talos_log_name="talos.log", reduction_method="spear", reduction_metric="fbeta_score")
     11 
     12 ## I had to edit a line of ~/anaconda3/envs/tfgpu-keras/lib/python3.6/site-packages/talos/metrics/score_model.py

~/anaconda3/envs/tfgpu-keras/lib/python3.6/site-packages/talos/scan/Scan.py in __init__(self, x, y, params, dataset_name, experiment_no, model, x_val, y_val, val_split, shuffle, search_method, reduction_method, reduction_interval, reduction_window, grid_downsample, reduction_threshold, reduction_metric, round_limit, talos_log_name, debug, seed, clear_tf_session, disable_progress_bar)
    140         # input parameters section ends
    141 
--> 142         self._null = self.runtime()
    143 
    144     def runtime(self):

~/anaconda3/envs/tfgpu-keras/lib/python3.6/site-packages/talos/scan/Scan.py in runtime(self)
    145 
    146         self = scan_prepare(self)
--> 147         self = scan_run(self)

~/anaconda3/envs/tfgpu-keras/lib/python3.6/site-packages/talos/scan/scan_run.py in scan_run(self)
     27                      disable=self.disable_progress_bar)
     28     while len(self.param_log) != 0:
---> 29         self = rounds_run(self)
     30         self.pbar.update(1)
     31     self.pbar.close()

~/anaconda3/envs/tfgpu-keras/lib/python3.6/site-packages/talos/scan/scan_run.py in rounds_run(self)
     59 
     60     _hr_out = run_round_results(self, _hr_out)
---> 61     self._val_score = get_score(self)
     62     write_log(self)
     63     self.result.append(_hr_out)

~/anaconda3/envs/tfgpu-keras/lib/python3.6/site-packages/talos/metrics/score_model.py in get_score(self)
     15 
     16     try:
---> 17         y_pred = self.keras_model.predict_classes(self.x_val)
     18        # y_pred = self.keras_model.predict(self.x_val)
     19         return Performance(y_pred, self.y_val, self.shape, self.y_max).result

AttributeError: 'Model' object has no attribute 'predict_classes'

This can seemingly be fixed simply by changing line 17 of talos/metrics/score_model.py
from y_pred = self.keras_model.predict_classes(self.x_val)
to y_pred = self.keras_model.predict(self.x_val)
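(Editorial note, not part of the original report: predict returns raw probabilities, while predict_classes returns hard labels, so the substitution above changes what get_score compares. For a single sigmoid output unit, the equivalent of predict_classes is a 0.5 threshold. A minimal numpy sketch with made-up probabilities:)

```python
import numpy as np

# Made-up sigmoid probabilities as model.predict() might return them
# (samples x timesteps x 1, matching the TimeDistributed sigmoid model below).
probs = np.array([[[0.9], [0.2]],
                  [[0.4], [0.7]]])

# What predict_classes effectively does for a single sigmoid unit:
# threshold the probabilities at 0.5 to get hard 0/1 labels.
classes = (probs > 0.5).astype("int32")
```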

My params dictionary and model:

p = {'adam_learning_rate': [0.01, 0.001, 0.0001],
     'num_filters': [12, 32, 64, 196],
     'gru_hidden_units':[32, 64, 128, 196],
     'dropout_rate':[0.2,0.5,0.8],
     'batch_size': [64, 128, 256],
     'epochs': [3],
     'second_GRU_layer':[True, False]}

def keras_nn_model_talos(x_train, y_train, x_val, y_val, params):

    # https://stackoverflow.com/questions/43547402/how-to-calculate-f1-macro-in-keras
    def my_recall_acc(y_true, y_pred):
        """Recall metric.
    
        Only computes a batch-wise average of recall.
    
        Computes the recall, a metric for multi-label classification of
        how many relevant items are selected.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall
    
    def my_precision_acc(y_true, y_pred):
        """Precision metric.
    
        Only computes a batch-wise average of precision.
    
        Computes the precision, a metric for multi-label classification of
        how many selected items are relevant.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = true_positives / (predicted_positives + K.epsilon())
        return precision
    
    def f1_acc(y_true, y_pred):
        precision = my_precision_acc(y_true, y_pred)
        recall = my_recall_acc(y_true, y_pred)
        return 2*((precision*recall)/(precision+recall+K.epsilon()))
        
    X_input = Input(shape = x_train.shape[1:])

    # Step 1: CONV layer 
    X = Conv1D(filters=int(params["num_filters"]), kernel_size=15,strides=4)(X_input)  # CONV1D
    X = BatchNormalization()(X)                               # Batch normalization
    X = Activation('relu')(X)                                 # ReLu activation
    X = Dropout(rate=params["dropout_rate"])(X)                                 # dropout (use 0.8)

    # Step 2: First GRU Layer
    X = GRU(units = int(params["gru_hidden_units"]), return_sequences = True)(X)         # GRU (use 128 units and return the sequences)
    X = Dropout(rate=params["dropout_rate"])(X)                                  # dropout (use 0.8)
    X = BatchNormalization()(X)                                 # Batch normalization
    
    if params["second_GRU_layer"]:
        # Step 3: Second GRU Layer 
        X = GRU(units = int(params["gru_hidden_units"]), return_sequences = True)(X)                                 # GRU (use 128 units and return the sequences)
        X = Dropout(rate=params["dropout_rate"])(X)                                 # dropout (use 0.8)
        X = BatchNormalization()(X)                                 # Batch normalization
        X = Dropout(rate=params["dropout_rate"])(X)                                   # dropout (use 0.8)
    
    # Step 4: Time-distributed dense layer 
    X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # time distributed  (sigmoid)


    model = Model(inputs = X_input, outputs = X)
    
    opt = Adam(lr=params["adam_learning_rate"], beta_1=0.9, beta_2=0.999, decay=0.01)
    model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["acc", my_recall_acc, my_precision_acc, f1_acc])
    
    
    history = model.fit(x_train, y_train, batch_size = int(params["batch_size"]), 
          validation_data=(x_val, y_val),
          epochs=int(params["epochs"])) 
    
    return history, model  

X_train.shape, Y_train.shape, X_dev.shape, Y_dev.shape
((1800, 201, 41), (1800, 47, 1), (400, 201, 41), (400, 47, 1))

@matthewcarbone
Collaborator

@bml1g12 Yeah, you'll get this error right now if you're not using the Sequential model type. predict_classes only exists on Sequential and doesn't work with the functional Model. I think what you've done is something of a hot fix that seems to work in some cases but not others. I would be very careful to make sure that predict is doing what you want it to! Otherwise, this should be fine.

@ackRow
Contributor

ackRow commented Aug 20, 2018

I was trying to use Talos with a Keras functional Model too, but surprisingly Keras hasn't implemented the predict_classes function on it.

So I came up with a quick fix using argmax(self.keras_model.predict(self.x_val), axis=-1), which worked for me.
(Don't forget to import argmax from numpy.)

However, it seems it didn't work for everybody, resulting in weird arrays full of zeros.

I finally looked up the code of predict_classes in the Keras repo, and it uses argmax much the way I proposed.

So I'm waiting for suggestions.
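(Editorial note: a minimal numpy sketch of the argmax workaround described above, using made-up probabilities. It also shows a likely cause of the "arrays full of zeros": with a single sigmoid output unit the last axis has size 1, so argmax over axis=-1 is always 0.)

```python
import numpy as np
from numpy import argmax

# Made-up softmax probabilities as model.predict() might return them
# (samples x classes): argmax recovers the predicted class indices,
# which is roughly what Sequential.predict_classes does internally.
softmax_probs = np.array([[0.1, 0.7, 0.2],
                          [0.8, 0.1, 0.1]])
y_pred = argmax(softmax_probs, axis=-1)

# Failure mode: a single sigmoid unit gives a last axis of size 1,
# so argmax over axis=-1 is always 0 -- an array full of zeros.
sigmoid_probs = np.array([[0.9], [0.2]])
zeros = argmax(sigmoid_probs, axis=-1)
```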

@mikkokotila
Contributor

This is related to #39 and #42.

My suggestion is that we add an option for those who want to use the above: a parameter experimental_functional_support=True for Scan(), which simply uses the above instead of the default. We need to keep in mind that it's untested (as #42 makes apparent, it might not work in all cases).

@x94carbone @ackRow what do you think about this approach?

@matthewcarbone
Collaborator

I think this is OK. Sort of a use-at-your-own-risk kind of option. It's weird because I see this fix recommended all over the internet, but it seems that for certain kinds of input data it totally breaks everything.

It's kind of crazy that Keras hasn't addressed this issue by now, if you ask me.

@bml1g12
Author

bml1g12 commented Aug 21, 2018

y_pred = self.keras_model.predict(self.x_val) seems to be working for me (I ran a plain Keras run with the optimal parameters selected by Talos and got similar results between the two).

@ackRow
Contributor

ackRow commented Aug 21, 2018

I think it's a good idea too. I can try to code that, and add an error message when a functional Model is detected advising you to use the experimental feature.

@bml1g12 predict and predict_classes output different arrays (one the probabilities, the other the predicted labels to compare against the actual labels), so I don't know why it works for you, but I think it's not an optimal solution.
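(Editorial note: a small numpy sketch, with made-up values, of why the distinction above matters. An equality-based score computed against raw probabilities silently collapses, while the thresholded labels score correctly.)

```python
import numpy as np

y_true = np.array([1, 0, 1])
probs = np.array([0.6, 0.4, 0.9])    # what model.predict() returns (probabilities)
labels = (probs > 0.5).astype(int)   # what predict_classes would return (labels)

# Raw floats almost never equal the integer targets exactly,
# so an equality-based accuracy against probabilities is ~0.
acc_probs = np.mean(probs == y_true)
acc_labels = np.mean(labels == y_true)
```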

@mikkokotila
Contributor

@ackRow that's great, thanks so much. At the moment there are 4 live issues related to this, so it seems like a high-value target :)

@mikkokotila
Contributor

This is now fixed in the current dev branch.
