Multiple Inputs #148
We added that some time ago. See the Merge layer: http://keras.io/layers/core/#merge Note that in concat mode, on GPU, you will have to use the latest version of Theano (get it from the repo, not pip), due to a bug with past Theano versions.
Not sure that this is what I wanted. Let's say my input contains two images. I would like one to be passed through CNN1 and the other through CNN2; then I can merge them using the Merge layer. But how can I use the library to handle the two different inputs? Basically, I would like to have more than one input, or to be able to split the input (using a split layer) into a few parts, so each sub-input can be passed to a different network.
That's what the Merge layer allows you to do. You can have two different networks, with different inputs (like two images), and merge their outputs into a single tensor.
So let's say I want to train such a model. My training data contains pairs of images (i1, i2). I want i1 to be passed through CNN1 and i2 through CNN2. How should I design the network, and how should I perform the training, so that CNN1 is applied only to i1 and CNN2 only to i2? Thank you.
The code snippet in the doc page I linked provides all the info you need. You will train your model with a list of inputs: model.fit([X_CNN1, X_CNN2], y)
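The thread uses the Keras 1 Merge API, which no longer exists; the same idea in the current functional API looks roughly like the sketch below. All names are illustrative, and Dense layers stand in for the two CNN branches to keep it short:

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

# Two small stand-in branches (Dense instead of full CNNs for brevity).
in1 = Input(shape=(16,))
in2 = Input(shape=(16,))
merged = concatenate([Dense(8, activation='relu')(in1),
                      Dense(8, activation='relu')(in2)])
out = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[in1, in2], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy')

# One array per input branch, passed as a list, in branch order.
X1 = np.random.rand(20, 16)
X2 = np.random.rand(20, 16)
y = np.random.randint(0, 2, size=(20, 1))
model.fit([X1, X2], y, epochs=1, verbose=0)
```

Each array in the list is fed only to its own branch, which is exactly the behavior asked about above.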
Thank you!
I had a similar requirement. What if I need to pass two inputs into the same node, but the two inputs aren't of the same dimension? Say one is an image, and the other is some hidden representation of another image in a lower dimension. I cannot concatenate them since they are of different dimensions. Is there any support for accessing two totally different inputs within a layer? (I am modifying the Keras source to add a new node type, LSTM2, but I only have access to one input, i.e. x.)
@karishmamalkan I need this feature too. Did you get any update on this?
@jwgu Hi, there was no method to pass multiple inputs to the RNN except to concatenate, dot, etc., as described by the Merge layer. But I found a workaround. If you want two inputs, both of which need to be multiplied by trainable weights, then you can use a Graph layer as follows: suppose you have two inputs x1 and x2 at each step of the RNN/LSTM. Your RNN function looks like: … then you can have a … and then you can pass this sequence into the RNN layer.
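The workaround amounts to concatenating the two streams along the feature axis at every timestep, so the RNN's single input kernel learns the equivalent of separate W1 and W2 blocks. A minimal NumPy sketch of the shape arithmetic (all names and sizes illustrative):

```python
import numpy as np

# Hypothetical shapes: a batch of 4 sequences, 10 timesteps each.
x1 = np.random.rand(4, 10, 16)  # first input stream, 16 features per step
x2 = np.random.rand(4, 10, 8)   # second input stream, 8 features per step

# Concatenate along the feature axis so each timestep carries both inputs.
merged = np.concatenate([x1, x2], axis=-1)
print(merged.shape)  # (4, 10, 24)
```

The merged sequence can then be fed to an ordinary RNN/LSTM layer; the input-to-hidden weight matrix of that layer effectively contains both W1 and W2 stacked side by side.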
Supposedly this Merge layer should support merging outputs from multiple (>2) sources, right?
Here is the code for model fit (the model was compiled beforehand): … But I get the following error: …
Is it possible to merge more than two outputs? If so, could someone tell me how I should change my code? Thank you!
It is possible to merge three tensors, basically just the way you describe but without the syntax errors. You are not sharing the code that you are actually using (the code you posted wouldn't even run!), and the code you are actually using presumably involves only two inputs. The following code runs fine:
from keras.models import Sequential
from keras.layers import Dense, Merge
left_branch = Sequential()
left_branch.add(Dense(32, input_dim=784))
middle_branch = Sequential()
middle_branch.add(Dense(32, input_dim=784))
right_branch = Sequential()
right_branch.add(Dense(32, input_dim=784))
merged = Merge([left_branch, middle_branch, right_branch], mode='concat')
final_model = Sequential()
final_model.add(merged)
final_model.add(Dense(10, activation='softmax'))
print(final_model.inputs)
Thank you very much @fchollet! But now I think I've figured out what the problem was: …
Then I say: …
So far, I thought I had constructed the two branches, with the left and middle branches having the same structure. So I merge them: …
Then I compile and train the model: …
I then printed the input shape of final_model and got: … So I assume a safe way of merging multiple sources is, for each source, to construct a sequential model from the very beginning to the end, and then merge them (after I did so, the code worked)?
@fchollet regarding your answer "You will train your model with a list of inputs: model.fit([X_CNN1, X_CNN2], y)": does model.fit accept multiple inputs for validation data? For example, is the following line legitimate? …
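It does: validation_data mirrors the structure of the training inputs, i.e. a tuple of (list-of-input-arrays, targets). A sketch with the modern functional API (all names illustrative):

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

in1, in2 = Input(shape=(4,)), Input(shape=(4,))
out = Dense(1)(concatenate([in1, in2]))
model = Model([in1, in2], out)
model.compile(optimizer='sgd', loss='mse')

X1, X2 = np.random.rand(10, 4), np.random.rand(10, 4)
y = np.random.rand(10, 1)
Xv1, Xv2 = np.random.rand(5, 4), np.random.rand(5, 4)
yv = np.random.rand(5, 1)

# Validation data mirrors the training structure: a list per input, then y.
history = model.fit([X1, X2], y,
                    validation_data=([Xv1, Xv2], yv),
                    epochs=1, verbose=0)
```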
@karishmamalkan Thanks for your tip on how to provide two different inputs to an RNN. However, in this way, a new set of weights is later applied to the merged input by the recurrent class, is that right?
Hi, sorry for hijacking this thread. I currently need to implement a layer that takes two inputs but has its own trainable parameters. For example, a linear-chain CRF layer takes tokens and tags as inputs, but it has its own trainable parameters (an emission matrix and a transition matrix). In this case, should I write a layer that inherits from Merge?
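One way to do this in current Keras, without inheriting from Merge: subclass Layer, accept a list of two tensors in call, and create the layer's own parameters in build. The bilinear scoring below is purely illustrative (it is not a CRF); it only demonstrates a two-input layer owning a trainable matrix:

```python
import tensorflow as tf

class PairScore(tf.keras.layers.Layer):
    """Hypothetical layer: takes two inputs and owns a trainable matrix,
    loosely in the spirit of a CRF layer owning its transition weights."""

    def build(self, input_shapes):
        # input_shapes is a list, one shape per input tensor.
        d1 = input_shapes[0][-1]
        d2 = input_shapes[1][-1]
        self.W = self.add_weight(name='W', shape=(d1, d2),
                                 initializer='glorot_uniform',
                                 trainable=True)

    def call(self, inputs):
        a, b = inputs  # the layer receives a list of two tensors
        # Bilinear score a^T W b per example, kept as shape (batch, 1).
        return tf.reduce_sum(tf.matmul(a, self.W) * b,
                             axis=-1, keepdims=True)

a = tf.keras.Input(shape=(5,))
b = tf.keras.Input(shape=(7,))
score = PairScore()([a, b])
model = tf.keras.Model([a, b], score)
```

The key point is that a custom layer called on a list of tensors sees all of them in call, while build gives it a place to register its own weights via add_weight.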
Hi benjaminklein,
@fchollet @benjaminklein How is the problem of the two CNNs solved in Keras 2? @fchollet: the link doesn't contain any code snippet anymore.
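In Keras 2 the Merge layer was replaced by the functional API plus merge functions such as concatenate. A sketch of the two-CNN setup (layer sizes are illustrative):

```python
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Flatten,
                                     Dense, concatenate)
from tensorflow.keras.models import Model

def branch(inp):
    # A tiny CNN branch; real models would be deeper.
    x = Conv2D(8, 3, activation='relu')(inp)
    x = MaxPooling2D()(x)
    return Flatten()(x)

in1 = Input(shape=(32, 32, 3))
in2 = Input(shape=(32, 32, 3))
merged = concatenate([branch(in1), branch(in2)])
out = Dense(10, activation='softmax')(merged)

model = Model(inputs=[in1, in2], outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')
```

Training then takes a list of two image arrays, one per branch, exactly as with the old Merge-based models.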
@fchollet: Can we merge layers with inputs of different sizes? For example, I would like to merge two LSTM layers, the first with input sequence shape (30, 2) and the other with input sequence shape (15, 1), and then pass the merged output to a second layer. Please let me know!
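Differing sequence lengths stop mattering if each LSTM returns only its final state, which has shape (batch, units); those fixed-size vectors can then be concatenated. A sketch under those assumptions (unit counts are illustrative):

```python
from tensorflow.keras.layers import Input, LSTM, Dense, concatenate
from tensorflow.keras.models import Model

seq_a = Input(shape=(30, 2))
seq_b = Input(shape=(15, 1))

# With return_sequences=False (the default), each LSTM emits only its
# final hidden state, shape (batch, units), regardless of sequence length.
ha = LSTM(32)(seq_a)
hb = LSTM(16)(seq_b)

merged = concatenate([ha, hb])  # shape (batch, 48)
out = Dense(1)(merged)          # the "second layer" fed by the merge
model = Model([seq_a, seq_b], out)
```

If you instead need to merge the full sequences, you would have to align them first (padding, pooling, or resampling), since concatenation requires matching timestep counts.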
I am trying to combine three models on passenger data; the code is below, and I am getting the following error: …
# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back):
    ...
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset
dataframe = pd.read_csv('/home/shivampanchal/PycharmProjects/WeatherPrediction/data/pass.csv')
# convert into datetime
dataframe['yy_mnh'] = pd.to_datetime(dataframe['Month'])
# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
# split into train and test sets
train_size = int(len(dataset) * 0.67)
# reshape into X=t and Y=t+1
look_back = 15
trainX, trainY = create_dataset(train, look_back)
# NN
# CNN
# LSTM
batch_size = 5
merged = Merge([model1, model2, model3], mode='concat')
final_model = Sequential()
print(final_model.inputs)
print(trainX_NN.shape)
final_model.fit([trainX_NN, trainX_CNN, trainX_LSTM], trainY, epochs=10, batch_size=batch_size, verbose=2)
This works well.
Perhaps adding a "data" parameter as an option to define "X" and "y" could make this even clearer and more consistent, so that the "training_data" dict could be used as the sole input.
I have two images: the first image's label is good, and the second image's label is bad. I want to pass both images at a time to a deep learning model for training. At test time I will have two unlabelled images, and I want to detect which one is good and which one is bad. Could you please tell me how to do it?
I built an architecture using merge and tried to fit the model: …
But I got the following error (traceback abridged; it passes through keras/legacy/interfaces.py, models.fit_generator, engine/training.fit_generator, _standardize_user_data, and _standardize_input_data):
AttributeError: 'DirectoryIterator' object has no attribute 'ndim'
Can anyone please tell me what's wrong?
Hi, I have a similar problem to this issue. I want to merge thermal and visible image features using deep learning, and I tried to train the model you explained, but got this error (traceback abridged; it passes through keras/engine/training.fit, _standardize_user_data, standardize_input_data, and standardize_single_array):
IndexError: tuple index out of range
My full code is as follows:
train_x1 = image
train_x2 = images1
input1 = Input(shape=(150, 150, 1))
input2 = Input(shape=(150, 150, 1))
from keras.layers.merge import concatenate
…
(lr=1e-4, decay=1e-6, momentum=0.9, nesterov=True)
# compile the model
Adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
… train_y2], …
Can someone please guide me on what I should do? I am new to this field, so I have no idea how to fix this.
Hi, I have a doubt. In my code, model.fit(x=[X_train_1, X_train_2], y=Y) gives the error that ndim is not applicable on a list. How do I rectify this?
@fchollet Hi, I have run into a problem. I want to merge the outputs of two different networks with different weights, where the weights are computed by another network. What should I do to accomplish this? Thank you for your help.
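One common pattern for this is a learned gate: a small network looks at both outputs and emits a weight in (0, 1), and the merge is the resulting convex combination. A NumPy sketch of the arithmetic (all names and sizes illustrative; in Keras the gate would be a Dense layer with a sigmoid activation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 32))   # output of network A
b = rng.normal(size=(4, 32))   # output of network B

# A tiny "gating network": one linear map over both outputs, then sigmoid.
gate_in = np.concatenate([a, b], axis=-1)   # (4, 64)
W = rng.normal(size=(64, 1)) * 0.1          # trainable in a real model
w = sigmoid(gate_in @ W)                    # per-example weight in (0, 1)

merged = w * a + (1.0 - w) * b              # weighted merge of the outputs
print(merged.shape)  # (4, 32)
```

Because the gate weight is produced by its own (trainable) parameters, the mixing proportion is learned end to end along with the two networks.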
If you are using a custom callback metric (in my case, F1-macro for multiclass classification), you may encounter a problem like this. Original setting with one input: …
Modified setting with two inputs: …
You can see the difference at … Hope this helps those who have encountered the same problem.
@fchollet How would I go about concatenating a 3-element array input with image inputs? Here is what I have:
rgb_img_input = Input(tensor=rgb_img)
Running this gives the error: …
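The usual approach is to flatten the image's convolutional features into a vector first, so both inputs are rank-2 and can be concatenated along the feature axis. A NumPy sketch of the shape arithmetic (sizes are illustrative):

```python
import numpy as np

batch = 2
img_feats = np.random.rand(batch, 8, 8, 16)  # e.g. a conv feature map
aux = np.random.rand(batch, 3)               # the 3-element auxiliary input

flat = img_feats.reshape(batch, -1)          # (2, 1024)
combined = np.concatenate([flat, aux], axis=-1)
print(combined.shape)  # (2, 1027)
```

In a Keras model the same thing is done with a Flatten layer on the image branch followed by a Concatenate layer joining it with the 3-element input.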
@scstu Is it compulsory to use …
When I tried …
Hi, Any plans to add support for multiple inputs?
For example - Having two images - each will be passed through a different CNN and the final representations of the two will be merged at the end.
Thank you!