How to use TimeDistributed if I have multiple inputs #3057
Have you found a solution yet?
```python
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Reshape, TimeDistributed, merge

num_inputs = 3
input_dim = 784
input_length = 20
output_dim = 32

# Shared inner model, applied to each packed input slice.
model = Sequential()
model.add(Dense(output_dim, input_dim=input_dim))

merged_input = Input((num_inputs, input_dim))
temps = [model(merged_input[:, x, :]) for x in range(num_inputs)]
merged = merge(temps, 'concat')
merged_model = Model(input=merged_input, output=merged)

# Pack the separate sequence inputs into one tensor along a new axis,
# then apply the multi-input model per timestep.
seq_inputs = [Input((input_length, input_dim)) for x in range(num_inputs)]
seq = [Reshape((input_length, 1, input_dim))(s) for s in seq_inputs]
seq = merge(seq, 'concat', concat_axis=2)
outputs = TimeDistributed(merged_model)(seq)
```
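The idea behind this workaround, as I read it, is to pack the per-timestep inputs into one tensor and slice the originals back out inside the wrapped model. The packing can be illustrated at the array level with NumPy (shapes follow the snippet above; the batch size of 4 is arbitrary):

```python
import numpy as np

num_inputs, input_length, input_dim = 3, 20, 784
batch = 4

# Three separate sequence inputs, each (batch, timesteps, features).
seqs = [np.random.rand(batch, input_length, input_dim) for _ in range(num_inputs)]

# Reshape each to (batch, timesteps, 1, features) and concatenate on axis 2,
# mirroring Reshape((input_length, 1, input_dim)) + merge(..., concat_axis=2).
packed = np.concatenate([s[:, :, None, :] for s in seqs], axis=2)
print(packed.shape)  # (4, 20, 3, 784)

# Inside the wrapped model, one timestep arrives as (batch, num_inputs, input_dim)
# and each original input is recovered by slicing along axis 1.
timestep = packed[:, 0]          # (4, 3, 784)
first_input = timestep[:, 0, :]  # (4, 784), equals seqs[0] at timestep 0
```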
@farizrahman4u
If you guys need it, I could add multi-input support to the TimeDistributed wrapper.
I think it is not necessary, as everything needed for this already exists. Also, this thread easily pops up when searching the Internet.
@farizrahman4u I believe the trick of merging multiple input tensors would not work if the shapes of the input tensors differ from each other. It would be nice to have native, robust support for using multiple input tensors with TimeDistributed.
They should have the same number of timesteps either way. |
@farizrahman4u I think that would be useful, and good for the sake of consistency.
Is multi-input support planned to be implemented? I'm currently packing multiple inputs into a single input, which is not a very good design.
@farizrahman4u With the Theano backend, I got `TypeError: ('Not a Keras tensor:', Subtensor{::, int64, ::}.0)` on the line

```python
temps = [model(merged_input[:, x, :]) for x in range(num_inputs)]
```

With the TensorFlow backend, I got `AttributeError: 'NoneType' object has no attribute 'inbound_nodes'` on the line

```python
merged_model = Model(input=merged_input, output=merged)
```

I am new to Keras and am using version 2.0.2.
Any update on multi-input support for TimeDistributed?
For the sake of completeness, `MAX_LENGTH` is different from `INFO_LENGTH` in general. Without the additional sentence-level information, TimeDistributed has a single input and everything seems to work fine. Is there any workaround to include multiple inputs?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed. |
I am using bucketing to group together batches of different lengths. This is time-series data where I have multiple time series of the same length (but variable length across batches) as inputs to the different layers at the beginning of the model. Thus the input shape for my input layer is (None, 1), because I only have one column of data per input. How can I apply @farizrahman4u's original solution without an input length? I've tried passing None as a dimension and get the following error: `ValueError: Tried to convert 'shape' to a tensor and failed. Error: None values not supported.`
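For context, bucketing here just means grouping samples of equal length into the same batch so that each batch has a fixed timestep dimension even though lengths vary across batches. A minimal sketch of that grouping (the helper name is made up for illustration):

```python
from collections import defaultdict
import numpy as np

def bucket_by_length(sequences):
    """Group variable-length one-column series into batches of equal length."""
    buckets = defaultdict(list)
    for seq in sequences:
        buckets[len(seq)].append(seq)
    # Each batch gets shape (n_samples, length, 1), matching an input of shape (None, 1).
    return {length: np.stack(items)[..., None] for length, items in buckets.items()}

series = [np.arange(5.0), np.arange(3.0), np.ones(5)]
batches = bucket_by_length(series)
print({k: v.shape for k, v in batches.items()})  # {5: (2, 5, 1), 3: (1, 3, 1)}
```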
If the multiple inputs to TimeDistributed have different shapes (say three inputs A, B, and C, where only A is a 5D tensor), I still have to pass them together to a custom_conv function through a Lambda layer, which is then wrapped by TimeDistributed. Would it be possible to support passing in a list of tensors with different shapes?
@karenyun Can you elaborate? How exactly do you pass one 5D tensor and two non-5D tensors?
Here is some simple code that shows the problem of trying to combine a multi-input model with an LSTM. It fails on the TimeDistributed line. Any ideas on how to fix it?
@iretiayo Have you found any solution to your problem? I'm currently looking at the exact same architectural problem. Like you, I have a 2D vector and an image as input, which are then fed to an LSTM. If you found a solution using a different approach, it would also be great if you could share it! My only idea for solving it is to use T input networks with shared weights, which are then used as a sequence to feed the LSTM layer.
@raharth Have you found any solution? |
Unfortunately I don't see your code. As far as I remember, I actually used shared weights to solve it, but it was a whole mess and really hacky. In the end I decided to build the same architecture in PyTorch, which is far more flexible and cleaner for an architecture like this. Since it doesn't compile the graph, it doesn't care what you did with a tensor before feeding it to a specific layer; you just need to make sure the shape matches what the layer expects.
@farizrahman4u Thanks a lot for the proposed solution. I tried a Lambda layer, but it seems the nested model can't be trained when it is embedded in a Lambda layer. Do you have any suggestions regarding this issue?
RepeatVector can help, e.g.:
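To spell out the RepeatVector idea: it tiles a fixed-size per-sample vector across the timestep axis, so it can be concatenated with a sequence input and fed through a single TimeDistributed or LSTM path. What it computes, sketched in NumPy (the dimensions here are arbitrary examples):

```python
import numpy as np

batch, timesteps, seq_dim, info_dim = 2, 20, 8, 5

seq = np.random.rand(batch, timesteps, seq_dim)  # per-timestep features
info = np.random.rand(batch, info_dim)           # one vector per sample

# RepeatVector(timesteps) turns (batch, info_dim) into (batch, timesteps, info_dim).
repeated = np.repeat(info[:, None, :], timesteps, axis=1)

# Concatenating on the feature axis yields a single sequence input.
combined = np.concatenate([seq, repeated], axis=-1)
print(combined.shape)  # (2, 20, 13)
```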
I was able to solve this problem using the
Note that `return_sequences` before the concat is False and that the
TimeDistributed works fine if there is only one input, as in the example at the bottom of the page. But when there are multiple inputs, TimeDistributed does not seem to work.
Say my model has 3 inputs:

```python
seq_inputs = [Input(shape=(TIME_STEPS, FEATURE_LENGTH)) for i in range(3)]
outputs = TimeDistributed(model)(seq_inputs)
```

The reported error is `TypeError: can only concatenate tuple (not "list") to tuple`.
So I changed the last line to

```python
outputs = TimeDistributed(model)(*seq_inputs)
```

but there is still an error: `TypeError: call() takes at most 3 arguments (4 given)`.

Below is my code:
```python
import pdb

from keras.models import Sequential, Model
from keras.layers import Input, Dense, merge, TimeDistributed

NUM_INPUTS = 3
TIME_STEPS = 20

# Inner multi-input model: a shared Dense layer applied to each input,
# with the results concatenated.
model = Sequential()
model.add(Dense(32, input_dim=784))  # note: 784 does not match the 32-dim inputs below
inputs = [Input(shape=(32,)) for i in range(NUM_INPUTS)]
temps = [model(x) for x in inputs]
merged = merge(temps, mode='concat')
merged_model = Model(input=inputs, output=merged)
merged_model(inputs)
pdb.set_trace()

# This is where it fails: TimeDistributed expects a single input tensor.
seq_inputs = [Input(shape=(TIME_STEPS, 32)) for i in range(NUM_INPUTS)]
outputs = TimeDistributed(merged_model)(*seq_inputs)
```
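Since TimeDistributed only accepts a single tensor here, one possible workaround (when the inputs share the timestep count, as they do in this example) is to merge the three sequences into one input before wrapping. At the data level that is just a feature-axis concatenation, sketched in NumPy (batch size 4 is arbitrary):

```python
import numpy as np

NUM_INPUTS, TIME_STEPS, FEATURES = 3, 20, 32
batch = 4

seqs = [np.random.rand(batch, TIME_STEPS, FEATURES) for _ in range(NUM_INPUTS)]

# One tensor of shape (batch, TIME_STEPS, NUM_INPUTS * FEATURES); the wrapped
# model can split it back into three 32-wide chunks per timestep.
merged_seq = np.concatenate(seqs, axis=-1)
print(merged_seq.shape)  # (4, 20, 96)

chunk = merged_seq[:, :, :FEATURES]  # recovers the first original sequence
```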