Using Evaluate with multiple inputs #552

Comments
@cynthia166 could you clean up the formatting of the above so it becomes more readable?

Hello Miko, I cleaned up the format.

Thank you so much :)

Sorry for not replying earlier. Have you looked at this example for multiple inputs: https://autonomio.github.io/talos/#/Examples_Multiple_Inputs

This will be handled in #582 so merging with that.
Hello, I was wondering if you could please help me.

Confirm the below:
- I have looked for an answer in the Docs: yes, I have done it.
- My Python version is 3.5 or higher: Python 3.8.5 and TensorFlow 2.4.1.
- I have searched through the Issues for a duplicate.
- I've tested that my Keras model works as a stand-alone: yes, it runs in Keras.

My Python is 3.8. I am trying to hypertune a deep autoencoder; in my model function I need to pass the X value, and I get the following error:
ValueError: Layer model_4 expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 470) dtype=float32>]
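The error says the model received a single tensor of shape (None, 470) where two inputs were expected. A minimal NumPy sketch of the fix, assuming the column split from the commented-out slicing in the code below (first 420 columns for the user-item matrix, the remaining 50 for content info; those widths are taken from the comments, not confirmed elsewhere):

```python
import numpy as np

# Hypothetical combined design matrix: 470 columns total, matching the
# shape (None, 470) in the error message.
X = np.random.rand(8, 470).astype("float32")

# Split per the commented-out slicing in the code below; the 420/50 split
# is an assumption taken from those comments.
users_items_matrix = X[:, 0:420]
content_info = X[:, 420:X.shape[1]]

# A two-input Keras model must receive a *list* of two arrays, e.g.
# model.fit([users_items_matrix, content_info], y, ...), not X itself.
print(users_items_matrix.shape, content_info.shape)
```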
Here is my code:
```python
from tensorflow.keras.layers import Input, Dense, Embedding, Flatten, Dropout, add
from tensorflow.keras.models import Model

def autoEncoder(x_train, y_train, x_val, y_val, params):
    '''
    Autoencoder for Collaborative Filter Model
    '''
    users_items_matrix, content_info = x_train
    # alternatively, slice a single combined matrix:
    # content_info = x_train[:, 420:x_train.shape[1]]
    # users_items_matrix = x_train[:, 0:420]

    # Input
    input_layer = Input(shape=(users_items_matrix.shape[1],), name='UserScore')
    input_content = Input(shape=(content_info.shape[1],), name='Itemcontent')

    # Encoder
    # -----------------------------
    enc = Dense(512, activation=params["activation"], name='EncLayer1')(input_layer)

    # Content information
    # Embedding turns positive integers (indexes) into dense vectors of fixed size.
    x_content = Embedding(100, 256, input_length=content_info.shape[1])(input_content)
    x_content = Flatten()(x_content)
    x_content = Dense(256, activation=params["activation"],
                      name='ItemLatentSpace')(x_content)

    # Latent space
    # -----------------------------
    # Dropout randomly sets input units to 0 with frequency `rate` at each step
    # during training, which helps prevent overfitting. Inputs not set to 0 are
    # scaled up by 1/(1 - rate) so that the sum over all inputs is unchanged.
    lat_space = Dense(256, activation=params["activation"], name='UserLatentSpace')(enc)
    lat_space = add([lat_space, x_content], name='LatentSpace')
    lat_space = Dropout(params["dropout"], name='Dropout')(lat_space)

    # Decoder
    # -----------------------------
    dec = Dense(512, activation=params["activation"], name='DecLayer1')(lat_space)

    # Output
    output_layer = Dense(users_items_matrix.shape[1], activation='linear',
                         name='UserScorePred')(dec)

    # This model maps an input to its reconstruction
    model = Model([input_layer, input_content], output_layer)
    model.compile(optimizer=params['optimizer'],
                  loss='mean_squared_error')
    model.summary()

    history = model.fit([users_items_matrix, content_info], y_train,
                        validation_data=(x_val, y_val),
                        batch_size=params['batch_size'],
                        epochs=params['epochs'],
                        verbose=0)
    return history, model
```
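The Dropout behaviour quoted above (inverted scaling by 1/(1 - rate)) can be sketched with plain NumPy; the array values and shapes here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.5
x = np.ones((4, 6), dtype="float32")

# Keep mask: each unit survives with probability (1 - rate).
mask = rng.random(x.shape) >= rate

# Inverted dropout: surviving units are scaled by 1/(1 - rate), so the
# expected sum over all inputs is unchanged.
dropped = np.where(mask, x / (1.0 - rate), 0.0)

# With rate=0.5, surviving entries become 2.0 and dropped entries 0.0.
print(dropped)
```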
```python
p = {'lr': (0.5, 5, 10),
     'hidden_layers': [0, 1, 2],
     'batch_size': (20, 30, 50),
     'epochs': [50],
     # 'times': [216, 300, 600],
     # 'neurons': [416, 600, 1200],
     'dropout': (0, 0.5, 5),
     'weight_regulizer': [None],
     'emb_output_dims': [None],
     # use string names so Keras can resolve the optimizer from params
     'optimizer': ["Adam", "Nadam", "RMSprop"],
     'activation': ["relu", "selu"],
     }
```
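As a rough sanity check of the parameter space above: assuming Talos treats list values as discrete choices and tuples as (min, max, steps) ranges (so each tuple expands to its last element's number of values), the full grid can be counted with a few lines of standard Python. The counting helper here is illustrative, not part of Talos:

```python
# Hypothetical counter for the size of the parameter grid above,
# assuming tuples are (min, max, steps) ranges and lists are choices.
p = {'lr': (0.5, 5, 10),
     'hidden_layers': [0, 1, 2],
     'batch_size': (20, 30, 50),
     'epochs': [50],
     'dropout': (0, 0.5, 5),
     'weight_regulizer': [None],
     'emb_output_dims': [None],
     'optimizer': ["Adam", "Nadam", "RMSprop"],
     'activation': ["relu", "selu"],
     }

def n_values(v):
    # tuples: last element is the step count; lists: their length
    return v[-1] if isinstance(v, tuple) else len(v)

grid_size = 1
for v in p.values():
    grid_size *= n_values(v)

print(grid_size)  # total permutations a full scan would cover
```

A grid this large is a good reason to downsample the scan rather than run it exhaustively.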
```python
t = ta.Scan(x=X,
            y=y,
            model=autoEncoder,
            # grid_downsample=1,
            params=p,
            val_split=0,
            experiment_name='im')
```
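Note that `val_split=0` leaves nothing for the `x_val`/`y_val` arguments that the model function's `validation_data` relies on. A shuffle-and-split of the kind such a parameter implies can be sketched with NumPy; this helper is an illustration only, not Talos's actual internals:

```python
import numpy as np

def train_val_split(x, y, val_split=0.3, seed=0):
    """Illustrative shuffle-and-split, not Talos's actual implementation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_val = int(len(x) * val_split)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return x[train_idx], y[train_idx], x[val_idx], y[val_idx]

x = np.arange(20, dtype="float32").reshape(10, 2)
y = np.arange(10, dtype="float32")
x_tr, y_tr, x_va, y_va = train_val_split(x, y, val_split=0.3)
print(x_tr.shape, x_va.shape)
```

With `val_split=0`, `n_val` here would be 0 and the validation arrays empty, which is why a nonzero split (or explicitly supplied validation data) is needed.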
Thank you very much.