
Passing variable to Lambda function and loading model JSON #5396

Closed
jessejohns opened this issue Feb 14, 2017 · 10 comments

Comments

@jessejohns

I'm not sure if I'm just ignorant of how Lambda functions work, but if I do the following:

x = Lambda(lambda x: x * 0.1)(x)

I have no problem saving and loading a model. However, if I try to pass a variable:

x = Lambda(lambda x: x * factor)(x)

I get the following error on the function closure when loading a model:

TypeError: arg 5 (closure) must be None or tuple.

When I printed the closure as the model failed to load, it contained whatever value had been passed for the variable ([0.1], for example). I can train and run models with no problem; I just can't load from the JSON.

Relevant JSON of what produces the error - getting a [0.1] where it expects a null:
{"class_name": "Lambda", "config": {"function": ["c\u0001\u0000\u0000\u0000\u0001\u0000\u0000\u0000\u0002\u0000\u0000\u0000\u0013\u0000\u0000\u0000s\b\u0000\u0000\u0000|\u0000\u0000\u0088\u0000\u0000\u0014S(\u0001\u0000\u0000\u0000N(\u0000\u0000\u0000\u0000(\u0001\u0000\u0000\u0000t\u0001\u0000\u0000\u0000x(\u0001\u0000\u0000\u0000t\u000e\u0000\u0000\u0000scale_residual(\u0000\u0000\u0000\u0000s-\u0000\u0000\u0000/home/jesse/Source/dl_code/mdl/model_keras.pyt\b\u0000\u0000\u0000<lambda>5\u0004\u0000\u0000s\u0000\u0000\u0000\u0000", null, [0.1]], "name": "lambda_1", "trainable": true, "function_type": "lambda", "arguments": {}, "output_shape": null, "output_shape_type": "raw"}, "inbound_nodes": [[["batchnormalization_18", 0, 0]]], "name": "lambda_1"}

JSON of working model:
{"class_name": "Lambda", "config": {"function": ["c\u0001\u0000\u0000\u0000\u0001\u0000\u0000\u0000\u0002\u0000\u0000\u0000S\u0000\u0000\u0000s\b\u0000\u0000\u0000|\u0000\u0000d\u0001\u0000\u0014S(\u0002\u0000\u0000\u0000Ng\u009a\u0099\u0099\u0099\u0099\u0099\u00b9?(\u0000\u0000\u0000\u0000(\u0001\u0000\u0000\u0000t\u0001\u0000\u0000\u0000x(\u0000\u0000\u0000\u0000(\u0000\u0000\u0000\u0000s-\u0000\u0000\u0000/home/jesse/Source/dl_code/mdl/model_keras.pyt\b\u0000\u0000\u0000<lambda>5\u0004\u0000\u0000s\u0000\u0000\u0000\u0000", null, null], "name": "lambda_1", "trainable": true, "function_type": "lambda", "arguments": {}, "output_shape": null, "output_shape_type": "raw"}, "inbound_nodes": [[["batchnormalization_18", 0, 0]]], "name": "lambda_1"}

@allanzelener
Contributor

allanzelener commented Feb 14, 2017

I addressed a similar error to this in PR #5350.

Keras seems to have a freeze on PRs while Keras 2 is being finalized. You can install my branch with pip install git+https://github.com/allanzelener/keras.git@func_defaults_to_tuple.

Alternatively I believe saving the model to YAML and the weights separately should also work.

Edit: Just realized this isn't exactly the same error (it's for a different arg), but a similar solution should work. The basic issue is that JSON does not preserve tuples and converts them to lists, but marshal, which Keras uses to load serialized functions, requires tuple arguments.
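The tuple-vs-list mismatch can be reproduced without Keras at all; a minimal sketch of how marshal plus FunctionType behave when the defaults come back from a JSON round trip as a list (the function here is illustrative, not Keras code):

```python
import json
import marshal
import types

def f(x, factor=0.1):
    return x * factor

# Serialize the code object the way Keras does, and push the defaults
# through JSON, which silently turns the tuple into a list.
code = marshal.loads(marshal.dumps(f.__code__))
defaults = json.loads(json.dumps(f.__defaults__))  # (0.1,) -> [0.1]

try:
    types.FunctionType(code, globals(), "f", defaults)
except TypeError as e:
    # TypeError: the defaults argument must be a tuple or None, not a list
    print(e)

# Converting back to a tuple, as in PR #5350, makes loading work again.
g = types.FunctionType(code, globals(), "f", tuple(defaults))
print(g(3))
```

The same constraint applies to arg 5 (the closure): it must be None or a tuple, which is why the list that JSON hands back trips the loader.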

@jessejohns
Author

jessejohns commented Feb 16, 2017

I had tried your fix previously, but with no success. When I converted the closure argument from a list to a tuple:

if isinstance(closure, list): closure = tuple(closure)

the error I get then is:

Traceback (most recent call last):
File "train_imageNet.py", line 169, in
tm.main(train_images,valid_images,params)
File "/Source/dl_code/mdl/train_model.py", line 16, in main
model = cmdl.create_model(params)
File "/Source/dl_code/mdl/core.py", line 293, in create_model
model = model_from_json(open(os.path.join(load_dir,'model.json')).read())
File "/Source/keras/keras/models.py", line 213, in model_from_json
return layer_from_config(config, custom_objects=custom_objects)
File "/Source/keras/keras/utils/layer_utils.py", line 40, in layer_from_config
custom_objects=custom_objects)
File "/Source/keras/keras/engine/topology.py", line 2582, in from_config
process_layer(layer_data)
File "/Source/keras/keras/engine/topology.py", line 2560, in process_layer
custom_objects=custom_objects)
File "/Source/keras/keras/utils/layer_utils.py", line 40, in layer_from_config
custom_objects=custom_objects)
File "/Source/keras/keras/layers/core.py", line 682, in from_config
function = func_load(config['function'], globs=globs)
File "/Source/keras/keras/utils/generic_utils.py", line 199, in func_load
closure=closure)
TypeError: arg 5 (closure) expected cell, found float

@mobeets

mobeets commented Mar 13, 2017

I'm getting the same error, TypeError: arg 5 (closure) must be None or tuple. My model also has a merge with a Lambda.

If I try saving the model as YAML instead, it still doesn't work. In that case I get TypeError: arg 5 (closure) expected cell, found int

@mobeets

mobeets commented Mar 13, 2017

It's probably worth noting that I hit the exact same bug if I try to save the model in the variational autoencoder example here.

This problem definitely has to do with Lambdas, because if you change line 40 of that example to say "z_mean" instead of "z", the problem goes away.

@allanzelener
Contributor

Yeah, it looks like Keras doesn't properly handle serializing functions with closures. It dumps the contents of the closure but doesn't simulate having a closure when loading the dumped contents. Creating closure cells isn't straightforward but there's a way suggested in this StackOverflow post.

Here's a workaround: Pass in every variable in the closure as an argument. When creating the Lambda layer supply the additional arguments with Lambda(f, arguments=kwargs).
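In plain-Python terms, this workaround swaps a closure cell (which round-trips badly through JSON and marshal) for a keyword argument stored as ordinary JSON data; a minimal sketch of the difference, with `factor` standing in for the captured variable:

```python
import json

# Closure version: `factor` lives in a cell on f_closure.__closure__,
# which is exactly the part that breaks when Keras reloads the model.
def make_closure(factor):
    return lambda x: x * factor

f_closure = make_closure(0.1)
print(f_closure.__closure__ is not None)  # True: there is a cell to lose

# Keyword-argument version: the Lambda-layer equivalent is
# Lambda(f, arguments={'factor': 0.1}); `factor` is now plain data.
def f(x, factor):
    return x * factor

arguments = {"factor": 0.1}
# The arguments dict is trivially JSON-serializable, unlike a closure cell.
restored = json.loads(json.dumps(arguments))
print(f(3, **restored))
print(f.__closure__ is None)  # True: nothing extra to serialize
```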

Another workaround is to never serialize your model, just do model.save_weights and whenever you need to instantiate the model run the code used to construct the model and then use model.load_weights.

@bahramlavi

bahramlavi commented Jun 1, 2017

Hello all,

Since I added a Lambda layer to my code, I'm getting an error when loading my saved Keras model with the load_model function. Here is the Lambda code:

distance = Lambda(cosine_distance,output_shape=cosine_distance_output_shape)([proc_a, proc_b])

And the error:

Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python2.7/site-packages/keras/models.py", line 140, in load_model
weight_values = K.batch_get_value(symbolic_weights)
File "/usr/local/lib/python2.7/site-packages/keras/models.py", line 189, in model_from_config
# Raises
File "/usr/local/lib/python2.7/site-packages/keras/utils/layer_utils.py", line 34, in layer_from_config
to_display = ['Layer (type)', 'Output Shape', 'Param #']
File "/usr/local/lib/python2.7/site-packages/keras/engine/topology.py", line 2395, in from_config
# Arguments
File "/usr/local/lib/python2.7/site-packages/keras/engine/topology.py", line 2373, in process_layer
def from_config(cls, config, custom_objects=None):
File "/usr/local/lib/python2.7/site-packages/keras/utils/layer_utils.py", line 34, in layer_from_config
to_display = ['Layer (type)', 'Output Shape', 'Param #']
File "/usr/local/lib/python2.7/site-packages/keras/layers/core.py", line 621, in from_config
x = K.placeholder(shape=input_shape)
File "/usr/local/lib/python2.7/site-packages/keras/utils/generic_utils.py", line 58, in func_load
global custom objects are reverted to state
TypeError: arg 4 (defaults) must be None or tuple

Are there any thoughts on a fix?

@dgorissen
Contributor

Just ran into this. FYI, the workaround from @allanzelener (passing in every variable as an argument) worked for me.

@Miail

Miail commented Jul 17, 2017

I seem to have a similar issue, but am not quite sure how to implement the suggested workaround.

#
#   Python script - Keras RCNN model.
#
import keras
from keras.models import Model
from keras.layers import Input, Dense, Dropout, Flatten, Activation
from keras.layers import merge, Conv2D, MaxPooling2D, Input
from keras.layers.normalization import BatchNormalization
from keras.layers.core import Lambda
import numpy as np
from keras.layers import add
from keras import backend as K


#   RCL:
#   BatchNorm(Relu(conv(L-1) + conv(L)))
#

def make_RCNN(input,number_of_rcl,num_of_filter, filtersize,alpha,pool):
    feed_forward = Conv2D(filters=num_of_filter, kernel_size=1, name='init')(input)
    
    for x in xrange(number_of_rcl):
        output = RCL(feed_forward,num_of_filter,filtersize,alpha,pool)
        feed_forward = output
    
    return feed_forward

def RCL(feed_forward_input,num_of_filter, filtersize, alpha,pool):
    conv = Conv2D(filters=num_of_filter, kernel_size=filtersize, padding='same')
    recurrent_input = conv(feed_forward_input)
    merged = add([feed_forward_input,recurrent_input])
    conv_relu = Lambda(lambda x : K.relu(x,alpha=alpha))(merged)
    conv_relu_batchnorm = BatchNormalization()(conv_relu)
    if pool:
        conv_relu_batchnorm_pool = MaxPooling2D()(conv_relu_batchnorm)
        return conv_relu_batchnorm_pool
    else:
        
        return conv_relu_batchnorm

input = Input(shape=(30,30,3))
output = make_RCNN(input,number_of_rcl=3,num_of_filter=3,filtersize=3,alpha=0.2, pool=True)

model = Model(input = input, output = output)
model.compile(optimizer='rmsprop', loss='binary_crossentropy')
model.summary()

@stale

stale bot commented Oct 15, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.

@stale stale bot added the stale label Oct 15, 2017
@stale stale bot closed this as completed Nov 14, 2017
@donniek

donniek commented Sep 26, 2020

Dear all,
This issue can be fixed: each value in the closure passed to FunctionType needs to be wrapped in a cell object.
Thanks.
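A sketch of what such a fix involves: closure entries must be real cell objects, which can be manufactured by capturing the value in a throwaway lambda (the trick from the StackOverflow post mentioned above; `make_cell` is an illustrative helper, not Keras API):

```python
import marshal
import types

def make_cell(value):
    # Capture `value` in a throwaway lambda so CPython builds a real cell.
    return (lambda: value).__closure__[0]

def outer(factor):
    return lambda x: x * factor

f = outer(0.1)
code = marshal.loads(marshal.dumps(f.__code__))

# Rebuilding with the raw float fails: closure entries must be cells,
# which is the "expected cell, found float" error from this thread.
try:
    types.FunctionType(code, globals(), "f", None, (0.1,))
except TypeError as e:
    print(e)

# Wrapping each dumped value in a cell restores a working function.
g = types.FunctionType(code, globals(), "f", None, (make_cell(0.1),))
print(g(3))
```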
