
[Feature request] unsquashing unsorted_segment_x #18311

Open
roya0045 opened this issue Apr 7, 2018 · 7 comments


roya0045 commented Apr 7, 2018

Have I written custom code? No
OS Platform and Distribution? Win10
TensorFlow installed from? pip
TensorFlow version? 1.7
Bazel version? N/A
CUDA/cuDNN version? N/A
GPU model and memory? N/A
Exact command to reproduce: unsorted_segment_sum

Describe the problem

Say you have 3 sequences of 12 values each (segment ids of shape (3,12)) with 5 segments, and an input of shape (None,12,3) transposed to (3,12,None). Using any unsorted_segment_x function, the output will be (5,None).

Would it be possible to have a function that does the same but still keeps the first dimension?
Something equivalent to:

```python
input.shape == (None, 12, 3)
segment.shape == (3, 12)

outputs = []
for ix, val in enumerate(tf.unstack(input)):
    outputs.append(tf.unsorted_segment_sum(val, segment[ix], 5))
output = tf.stack(outputs)
output.shape == (3, 5, None)
```

The use of such a thing would be to iterate over a weighted input without having to loop over the data or use reshape. I think this should be fairly straightforward to implement: don't concatenate all the sequences along the leading dimensions, and use a "keep_dims" flag, similar to reduce_sum, to trigger that behaviour.
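The requested behaviour can be sketched in plain NumPy (a hypothetical illustration, not TensorFlow code: batch size 3, 12 steps, 5 segments, with the `None` dimension stood in by 4):

```python
import numpy as np

def batched_segment_sum(data, segment_ids, num_segments):
    """Per-batch unsorted segment sum:
    data (B, S, N), segment_ids (B, S) -> output (B, num_segments, N).
    Explicit loop version, for illustration only."""
    B, S, N = data.shape
    out = np.zeros((B, num_segments, N), dtype=data.dtype)
    for b in range(B):
        for s in range(S):
            out[b, segment_ids[b, s]] += data[b, s]
    return out

data = np.arange(3 * 12 * 4, dtype=np.float64).reshape(3, 12, 4)  # None -> 4
seg = np.tile(np.arange(12) % 5, (3, 1))                          # 5 segments
result = batched_segment_sum(data, seg, 5)
print(result.shape)  # (3, 5, 4): the leading batch dimension is kept
```

Every input row lands in exactly one segment, so the total sum is preserved.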

@tensorflowbutler tensorflowbutler added the stat:awaiting response Status - Awaiting response from author label Apr 8, 2018
@tensorflowbutler (Member)

Thank you for your post. We noticed you have not filled out the following field in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.
Have I written custom code
OS Platform and Distribution
TensorFlow installed from
TensorFlow version
Bazel version
CUDA/cuDNN version
GPU model and memory
Exact command to reproduce


roya0045 commented Apr 8, 2018

Have I written custom code? No
OS Platform and Distribution? Win10
TensorFlow installed from? pip
TensorFlow version? 1.7
Bazel version? N/A
CUDA/cuDNN version? N/A
GPU model and memory? N/A
Exact command to reproduce: unsorted_segment_sum
(duplicate info for the bot)

@tensorflowbutler tensorflowbutler removed the stat:awaiting response Status - Awaiting response from author label Apr 9, 2018
@zhaoxin19

I want this feature too! It is strange that the segment ops don't have a keep_dims parameter.

@roya0045 (Author)

@zhaoxin19 There are some workarounds for this; a good idea might be to start a new thread about it, since @rmlarsen seems to be inactive. If you are in a rush and need some workarounds, I could help.

@zhaoxin19

@roya0045, thank you very much for your kind help. I am doing deep-model research and want to add this kind of operation to my new model. Luckily I've found the unsorted_segment_sum implementation, but it is a little different from what I want. I think adding a keep_dims parameter to this operation is a good idea. I'm not an expert in adding or changing operations in TensorFlow, so I would be very grateful if anyone could help.

@zhaoxin19

@roya0045 do you know how we can add this feature to TensorFlow? It seems we just need to change the tf.unsorted_segment_sum API to add a keep_dims parameter, as tf.reduce_sum has.
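One common workaround (a sketch under assumptions, not code from this thread): offset each batch's segment ids by `b * num_segments` so a single flat segment sum covers all batches, then reshape the result back. Here `np.add.at` stands in for `tf.unsorted_segment_sum`; the same offset-and-reshape trick works with the TensorFlow op.

```python
import numpy as np

def segment_sum_keepdims(data, segment_ids, num_segments):
    """Batched segment sum without a Python loop.
    data: (B, S, N), segment_ids: (B, S) -> (B, num_segments, N).
    np.add.at is the scatter-add stand-in for tf.unsorted_segment_sum."""
    B, S, N = data.shape
    offsets = np.arange(B)[:, None] * num_segments      # (B, 1)
    flat_ids = (segment_ids + offsets).reshape(-1)      # (B*S,)
    flat_data = data.reshape(B * S, N)
    flat_out = np.zeros((B * num_segments, N), dtype=data.dtype)
    np.add.at(flat_out, flat_ids, flat_data)            # scatter-add rows
    return flat_out.reshape(B, num_segments, N)

rng = np.random.default_rng(0)
data = rng.random((3, 12, 4))
seg = rng.integers(0, 5, size=(3, 12))
out = segment_sum_keepdims(data, seg, 5)
print(out.shape)  # (3, 5, 4)
```

Because the offsets make batch ids disjoint, this matches the per-batch loop exactly.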

@roya0045 (Author)

I'm sure it can be done, but I'm not sure how; I'm not familiar with C++ or the TensorFlow code base.

The workarounds I have found so far are implemented in my latest project.

The only issue is that I have not tested whether those 4 methods break backpropagation (get the weights first, train on random data, get the weights after, and subtract the initial weights to see whether training changed them).

I wanted to test them but haven't had much time. If you want, you could set up a minimal network with a for loop for each version, train it, and compare the weights to see whether those workarounds are valid; I'd be interested in the results.
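The gradient check described above can also be approximated numerically without training a network (a hypothetical sketch, again using `np.add.at` as the scatter-add stand-in): since each input row lands in exactly one segment, the gradient of the output's total sum with respect to every input element should be exactly 1, and finite differences can confirm that gradients flow through the op.

```python
import numpy as np

def segment_sum(data, seg, num_segments):
    """Flat unsorted segment sum via scatter-add."""
    out = np.zeros((num_segments,) + data.shape[1:], dtype=data.dtype)
    np.add.at(out, seg, data)
    return out

rng = np.random.default_rng(0)
data = rng.random((12, 4))
seg = rng.integers(0, 5, size=12)
eps = 1e-6
base = segment_sum(data, seg, 5).sum()

# Finite-difference gradient of sum(output) w.r.t. each input element.
grad = np.zeros_like(data)
for i in range(12):
    for j in range(4):
        bumped = data.copy()
        bumped[i, j] += eps
        grad[i, j] = (segment_sum(bumped, seg, 5).sum() - base) / eps

print(np.allclose(grad, 1.0, atol=1e-4))  # True: gradients pass through
```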

@mohantym mohantym removed their assignment Jul 13, 2022