
Scatter_nd doc not clear about concurrent updates #8102

Closed
aliosmanulusoy opened this issue Mar 5, 2017 · 8 comments
Labels
type:docs-bug Document issues

Comments


aliosmanulusoy commented Mar 5, 2017

The documentation for scatter_nd does not specify what happens when multiple updates reference the same location.

I've tested this using the following code:

import tensorflow as tf

indices = tf.constant([[4], [3], [1], [1]])  # index 1 appears twice
updates = tf.constant([9, 10, 11, 12])
shape = tf.constant([8])
scatter = tf.scatter_nd(indices, updates, shape)
with tf.Session() as sess:
    print(sess.run(scatter))

The resulting tensor is:

[0, 23, 0, 10, 9, 0, 0, 0]

So it seems that duplicate updates are added together. Is this the intended behavior? If so, it would be great to clarify this in the docs.

Thanks!

@aliosmanulusoy aliosmanulusoy changed the title Doc on scatter_nd not clear about concurrent updates Scatter_nd doc not clear about concurrent updates Mar 6, 2017
@prb12 prb12 added the type:docs-bug Document issues label Mar 6, 2017

drpngx commented Mar 6, 2017

The CPU/GPU implementation initializes to zero and uses add rather than assign.

That seems like a somewhat stronger promise than assign, under which the output would be

[0, 12, 0, 10, 9, 0, 0, 0] or [0, 11, 0, 10, 9, 0, 0, 0]

nondeterministically, or in an order that we define.
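For comparison, a minimal sketch of assign semantics using tf.scatter_update on a Variable (TF 1.x, same tensors as the first comment; untested, just to show the race):

import tensorflow as tf

# Assign-style scatter: the duplicate indices [1], [1] race, so the
# final value at index 1 is 11 or 12 depending on apply order.
ref = tf.Variable(tf.zeros([8], dtype=tf.int32))
assigned = tf.scatter_update(ref, [4, 3, 1, 1], [9, 10, 11, 12])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(assigned))  # e.g. [0 12 0 10 9 0 0 0]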

@rmlarsen for opinion


raingo commented Aug 9, 2017

@vrv the documentation for this function is still confusing.

WARNING: The order in which updates are applied is nondeterministic, so the output will be nondeterministic if indices contains duplicates.

Even if the order is nondeterministic, the summation is deterministic. Why not just promise that duplicates will be summed?


vrv commented Aug 9, 2017

I didn't write the code or the doc, so I'm not the best person to clarify this, but it may be because the order of operations matters for floating-point values, so the sum is still technically a nondeterministic result. See "catastrophic cancellation" for cases where addition order matters.
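For instance, a quick NumPy illustration of how summation order changes a float32 result:

import numpy as np

# In float32, 1e8 + 1 rounds back to 1e8, so the two orderings of
# the same three terms give different answers.
a = np.float32(1e8)
b = np.float32(1.0)
print((a + b) - a)  # 0.0
print((a - a) + b)  # 1.0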


vrv commented Aug 9, 2017

Also, I think the language implies all updates will be attempted in some order, but feel free to send a PR to update the doc to clarify that point, if you think it's useful.


raingo commented Aug 9, 2017

OK. Thanks.

@girving girving removed their assignment Aug 9, 2017
@teramototoya

Is there a way to change the add to an update (overwrite)?


csnemes2 commented Nov 2, 2017

If you are in the unpooling business:

@teramototoya, here is what I did as a hack: with https://www.tensorflow.org/api_docs/python/tf/unique_with_counts I counted the multiplicity of each entry in the indices and divided the tensor holding the values (the tensor named 'updates' in the first comment) by this count, so that when the add happens it undoes what the division did.

You can check this simple script: https://github.com/csnemes2/conv-net-viz/blob/master/ut_unpool.py
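Something like this sketch, using the tensors from the first comment (with updates made float so the division is meaningful):

import tensorflow as tf

indices = tf.constant([[4], [3], [1], [1]])
updates = tf.constant([9., 10., 11., 12.])
shape = tf.constant([8])

# Count how many times each index occurs.
flat = tf.reshape(indices, [-1])
_, idx_map, counts = tf.unique_with_counts(flat)
# Divide each update by its duplicate count, so the additive
# scatter averages duplicates instead of summing them.
per_update_count = tf.cast(tf.gather(counts, idx_map), updates.dtype)
scatter = tf.scatter_nd(indices, updates / per_update_count, shape)
with tf.Session() as sess:
    print(sess.run(scatter))  # index 1 holds (11 + 12) / 2 = 11.5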

If not, i.e. your problem is more general, then you have to somehow flatten your indices and then use tf.unique; see this post:
https://stackoverflow.com/questions/44117430/how-to-use-tf-scatter-nd-without-accumulation
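A rough sketch of that dedup route (picking the max within each duplicate group via tf.unsorted_segment_max is just one choice; keeping the "first" or "last" update would need extra bookkeeping):

import tensorflow as tf

indices = tf.constant([[4], [3], [1], [1]])
updates = tf.constant([9, 10, 11, 12])

flat = tf.reshape(indices, [-1])
unique_idx, seg_ids = tf.unique(flat)
# Reduce each group of duplicates to a single value, then scatter
# the deduplicated updates so nothing is accumulated.
picked = tf.unsorted_segment_max(updates, seg_ids, tf.size(unique_idx))
scatter = tf.scatter_nd(tf.expand_dims(unique_idx, 1), picked, [8])
with tf.Session() as sess:
    print(sess.run(scatter))  # [0, 12, 0, 10, 9, 0, 0, 0]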


teramototoya commented Nov 3, 2017

@csnemes2 I had tried the same kind of hack myself but could not get it to work. It worked well with your implementation. Thank you!
