Are there slice layer and split layer in Keras? #890

Closed
guohengkai opened this issue Oct 25, 2015 · 21 comments

Comments

@guohengkai
Contributor

I need to share inputs and slice inputs for multiple output layers. Are there slice and split layers in Keras, like those in Caffe? Thanks.

@EderSantana
Contributor

Not yet, but you can try a Lambda layer like the one we talk about in #883.

@shyamupa

I don't understand how the Lambda layer helps in performing a slice/split operation. E.g., if I have a (2, 200, 200) tensor, how can I split it into two (200, 200) outputs? How can a Lambda layer generate multiple outputs?

@lathen
Contributor

lathen commented May 19, 2016

I searched for the same functionality and it wasn't quite obvious how to use the Lambda layer. You can do something like this (using the functional API) to slice out the first channel in x:

y = Lambda(lambda x: x[:, 0, :, :], output_shape=input_shape[2:])(x)  # the slice drops the channel dim, so output_shape is just the spatial dims

As I understand it, the Lambda layer can only generate one output, so you have to use multiple Lambdas to slice out all the channels in x. Note that you need to specify the output_shape explicitly.
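
For instance, a minimal sketch of that pattern, assuming a channels-first input of shape (2, 200, 200) as in @shyamupa's example (the shapes are illustrative):

from keras.layers import Input, Lambda

x = Input(shape=(2, 200, 200))
# one Lambda per channel; output_shape excludes the batch dimension
y0 = Lambda(lambda t: t[:, 0, :, :], output_shape=(200, 200))(x)
y1 = Lambda(lambda t: t[:, 1, :, :], output_shape=(200, 200))(x)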

@stale

stale bot commented May 23, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs, but feel free to re-open it if needed.

@stale stale bot closed this as completed Jun 22, 2017
@marc-moreaux

I'll drop this layer implementation here, as I got inspiration from @lathen.
It's a layer implementation of the slice:

def crop(dimension, start, end):
    # Crops (or slices) a Tensor on a given dimension from start to end
    # example: to crop a tensor to x[:, :, 5:10],
    # call crop(2, 5, 10), since you want to crop along dimension 2
    def func(x):
        if dimension == 0:
            return x[start: end]
        if dimension == 1:
            return x[:, start: end]
        if dimension == 2:
            return x[:, :, start: end]
        if dimension == 3:
            return x[:, :, :, start: end]
        if dimension == 4:
            return x[:, :, :, :, start: end]
    return Lambda(func)

To slice x as x[:, :, 5:10], just call:
x = crop(2, 5, 10)(x)

@rjpg

rjpg commented Dec 24, 2017

Hello, I want to feed several LSTMs (which take 2D input) with the output of a Conv2D layer, whose output is 3D: [#filters, width, height].

(In my case they are not images; they are multivariate time series, x = time steps, y = variables. The conv is for preprocessing and the LSTMs analyze the sequence.)

The first "image" from the first filter would feed the first LSTM, the second image (filter) would feed the second LSTM, and so on...

Like @shyamupa says:

I don't understand how the Lambda layer helps in performing a slice/split operation. E.g., if I have a (2, 200, 200) tensor, how can I split it into two (200, 200) outputs? How can a Lambda layer generate multiple outputs?

I am trying to feed this merge:
merged = Merge([lstm1, lstm2], mode='concat')

from a layer:
Conv2D(2, kernel_size=(3,3), padding="same")

Thanks
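
One way to wire this up is sketched below (untested against the exact setup above; the sizes, 200 time steps and 10 variables, and channels-last ordering are assumptions for illustration):

from keras.layers import Input, Conv2D, Lambda, LSTM, concatenate
from keras.models import Model

inp = Input(shape=(200, 10, 1))                             # (time steps, variables, 1)
conv = Conv2D(2, kernel_size=(3, 3), padding='same')(inp)  # -> (batch, 200, 10, 2)

# one Lambda per filter map; each slice is a (time_steps, variables) sequence
seq0 = Lambda(lambda t: t[:, :, :, 0], output_shape=(200, 10))(conv)
seq1 = Lambda(lambda t: t[:, :, :, 1], output_shape=(200, 10))(conv)

lstm1 = LSTM(32)(seq0)
lstm2 = LSTM(32)(seq1)
merged = concatenate([lstm1, lstm2])  # functional-API equivalent of Merge(..., mode='concat')
model = Model(inp, merged)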

@alejandrojapkin

I intend to create split layers based on a custom sorting mechanism. Any guide on how to implement that?
Here's the intended architecture:
[screenshot: the intended architecture]

@asafarevich

asafarevich commented Aug 6, 2018

@datascienceteam01 you can use tf.split

import tensorflow as tf
from keras.layers import Input, Dense, Lambda

inputs = Input([2048, 1])
split = Lambda( lambda x: tf.split(x,num_or_size_splits=2,axis=1))(inputs)
print ('0:', split[0].shape) # 0: (?, 1024, 1)
print ('1:',split[1].shape)  # 1: (?, 1024, 1)
# to use them
layer1 = Dense(5)(split[0])
layer2 = Dense(5)(split[1])

@Apizius

Apizius commented Aug 15, 2018

keras.backend.slice(x, start, size) or keras.backend.gather(reference, indices) might do the trick.
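
For gather, a minimal sketch of what it does, picking entries along axis 0 (the values are made up for illustration):

import numpy as np
from keras import backend as K

ref = K.constant(np.arange(12).reshape(4, 3))
idx = K.constant([0, 2], dtype='int32')
print(K.eval(K.gather(ref, idx)))  # rows 0 and 2 of ref, shape (2, 3)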

@mhaghighat

You could add a layer like this:

from keras.layers import Lambda
from keras.backend import slice
.
.
x = Lambda( lambda x: slice(x, START, SIZE))(x)

For example, if you want to eliminate the first element in the second dimension:
x = Lambda( lambda x: slice(x, (0, 1), (-1, -1)))(x)

Documentation pages:
keras.layers.Lambda
keras.backend.slice

@danFromTelAviv
Contributor

Is there a way to keep dims after slicing?
BTW, you can now slice most tensors like you would NumPy arrays, e.g. x[:5, :10], without any special Lambdas or anything like that...
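
On keeping dims: as in NumPy, slicing with a length-1 range instead of an integer index keeps the dimension. A small sketch with made-up shapes:

from keras.layers import Input, Lambda

x = Input(shape=(4, 200, 200))
dropped = Lambda(lambda t: t[:, 0])(x)    # (batch, 200, 200): dim removed
kept = Lambda(lambda t: t[:, 0:1])(x)     # (batch, 1, 200, 200): dim kept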

@davidavdav

If an entry in the third argument (size) is -1, then that dimension is sliced until the end. Any other integer indicates the length rather than the end index, which is what you might have expected from something called slice.
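
A minimal sketch of those semantics (the values are made up for illustration):

import numpy as np
from keras import backend as K

t = K.constant(np.arange(10).reshape(2, 5))
s = K.slice(t, (0, 1), (-1, 2))  # all rows; columns starting at 1, of length 2
print(K.eval(s).shape)           # (2, 2)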

@offchan42

offchan42 commented Dec 7, 2018

Why do we need to apply Lambda() to a sliced tensor like x[:, 5:10]? Why don't we just use the sliced tensor directly?
I tried the crop function written by @marc-moreaux, compared it to simple slicing, and they both give the same tensor output. I even tried connecting them to a Dense layer, and they both compile.
You can look at the image attached to see my simple test.
[screenshot: crop() vs. direct slicing test]
What am I missing here? Why do you need Lambda?
What is the difference?
PS: Ignore the "from keras import backend as K" line. It was old code.

@tomwright01

tomwright01 commented Dec 11, 2018

@Anton-Velikodnyy
I used this approach and was able to compile and fit the model. Now when I try to load the model, it fails with:
NameError: name 'tf' is not defined
Importing tensorflow as tf doesn't seem to help.

import tensorflow as tf
import keras
from keras.layers import Input, Dense, Lambda, Concatenate
from keras.models import Model

I = input(shape=(5,5))
splits = Lambda(lambda x: tf.split(x, num_or_size_splits=5, axis=2))(I)
densors = []
for idx in range(5):
    densors.append(Dense(1)(splits[idx])
X=Concatenate()(densors)
m = Model(inputs=I, outputs=X)
m.compile(optimizer='adam', loss='binary_crossentropy', metrics=['binary_accuracy'])
m.save('test_model.h5')

m_new = keras.models.load_model('test_model.h5')

@asafarevich

The code you pasted had several errors that made it not runnable. Please test a code chunk prior to pasting it. I fixed those, but I'm not sure that's what your implementation was doing.
Also, on GitHub, to make code span multiple lines, use 3 backticks instead of one.
I solved your error based on this link: https://github.com/keras-team/keras/issues/5298
I tried the above and it works with TF 1.11 and Keras 2.something.

However, there is a fundamental reason it broke: you were using the wrong Keras; the one you are looking for is in tensorflow. Below is the code for that, and it works with TF 1.11:

import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.layers import Input, Dense, Lambda, Concatenate
from tensorflow.keras.models import Model


I = Input(shape=(5,5))
splits = Lambda(lambda x: tf.split(x, num_or_size_splits=5, axis=2))(I)
densors = []
for idx in range(5):
    densors.append(Dense(1)(splits[idx]))
X=Concatenate()(densors)
m = Model(inputs=I, outputs=X)
m.compile(optimizer='adam', loss='binary_crossentropy', metrics=['binary_accuracy'])
m.save('test_model.h5')

m_new = keras.models.load_model('test_model.h5')

@tomwright01

@Anton-Velikodnyy
Thanks, I cleaned up the code. It was just a trivial but repeatable example. Adding custom_objects to the load_model function solved the problem.
m = keras.models.load_model('test_model.h5', custom_objects={"tf":tf})

@ParikhKadam

ParikhKadam commented Dec 31, 2018

@lathen @marc-moreaux I understood what you mean, but I am getting unexpected shapes when slicing. I didn't use the Lambda function or the crop function; the problem lies in simple slicing such as x[0] or x[0,:,:]. I don't know why it happens, so I'm looking for a solution here. Here is the problem.

When I try to slice a tensor of shape (2, None, None) into two halves on the first dimension, using x[0, :, :] (or simply x[0]) and x[1, :, :] (or simply x[1]), I get two tensors of shape (None, None, 1). I don't know where this extra dimension is coming from. Can you please help?

Here's the piece of code:

from keras import backend as K

def negative_avg_log_error(y_true, y_pred):

    def sum_of_log_probabilities(true_and_pred):
        y_true, y_pred_start, y_pred_end = true_and_pred

        print(K.int_shape(y_true))
        print(K.int_shape(y_pred_start))
        print(K.int_shape(y_pred_end))
        start_index = int(y_true[0])
        end_index = int(y_true[1])
        start_probability = y_pred_start[start_index]
        end_probability = y_pred_end[end_index]
        return K.log(start_probability) + K.log(end_probability)

    y_true = K.squeeze(y_true, axis=0)
    y_pred_start = y_pred[0]
    y_pred_end = y_pred[1]
    print(type(y_pred))
    print(K.int_shape(y_true))
    print(K.int_shape(y_pred))
    print(K.int_shape(y_pred_start))
    print(K.int_shape(y_pred_end))
    batch_probability_sum = K.map_fn(sum_of_log_probabilities, (y_true, y_pred_start, y_pred_end), dtype='float32')
    return -K.mean(batch_probability_sum, axis=0)

Here's the output:

<class 'tensorflow.python.framework.ops.Tensor'>
(None, None)
(2, None, None)
(None, None, 1)
(None, None, 1)
(None,)
(None, 1)
(None, 1)
[errors follow, due to this unexpected behaviour]

The problem:

y_pred_start = y_pred[0]
y_pred_end = y_pred[1]

print(K.int_shape(y_pred)) -> (2, None, None)
print(K.int_shape(y_pred_start)) -> (None, None, 1) instead of (None, None)
print(K.int_shape(y_pred_end)) -> (None, None, 1) instead of (None, None)

Thank you...

@ParikhKadam

Also, I posted it on the TensorFlow issue tracker. Only look at this comment, else you might get confused.

tensorflow/tensorflow#24519 (comment)

@offchan42

offchan42 commented Jan 2, 2019

I have edited @marc-moreaux's code to accept -1 as the dim argument. I now know that you need to use Keras layers for every operation, so Lambda is needed for this.

def Crop(dim, start, end, **kwargs):
    # Crops (or slices) a Tensor on a given dimension from start to end
    # example : to crop tensor x[:, :, 5:10]

    def func(x):
        dimension = dim
        if dimension == -1:
            dimension = len(x.shape) - 1
        if dimension == 0:
            return x[start:end]
        if dimension == 1:
            return x[:, start:end]
        if dimension == 2:
            return x[:, :, start:end]
        if dimension == 3:
            return x[:, :, :, start:end]
        if dimension == 4:
            return x[:, :, :, :, start:end]

    return Lambda(func, **kwargs)
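
Usage is the same as before; for example (assuming a channels-last tensor):

x = Crop(2, 5, 10)(x)   # equivalent to x[:, :, 5:10]
x = Crop(-1, 5, 10)(x)  # crops the last dimension, whatever its index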

@KIC

KIC commented Apr 6, 2020

I think one could also use TimeDistributed(Dense(1)) or not?

@rory-donovan-official

rory-donovan-official commented Nov 5, 2021

Why would you use Lambda? It is not portable....
