# tensorflow/tensorflow

opened this Issue Sep 28, 2016 · 43 comments


### aselle commented Sep 28, 2016

NumPy-style advanced indexing is documented here: http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing. We currently support basic indexing (http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#basic-slicing-and-indexing) using StridedSlice. e.g.

```python
foo = ...  # some tensor
idx1 = [1, 2, 3]
idx2 = [3, 4, 5]
foo[idx1, idx2, 3]
```
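For readers unfamiliar with the distinction, here is a minimal NumPy sketch of the two indexing modes. The array shape is chosen arbitrarily for illustration; the names `foo`, `idx1`, `idx2` follow the example above:

```python
import numpy as np

foo = np.arange(4 * 6 * 7).reshape(4, 6, 7)

# Basic indexing (what StridedSlice covers): slices and single integers.
print(foo[1:3, ::2, 3].shape)   # (2, 3)

# Advanced indexing: integer arrays pick out arbitrary elements.
idx1 = [1, 2, 3]
idx2 = [3, 4, 5]
# Selects foo[1, 3, 3], foo[2, 4, 3], foo[3, 5, 3]
print(foo[idx1, idx2, 3])
```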


### aselle commented Sep 28, 2016

 Broken off of #206


### shoyer commented Sep 28, 2016

 I mentioned this in #206, but I wanted to remind again that mixed indexing with slices/arrays has some really strange/unpredictable behavior for the order of result axes in NumPy. So it might be better to hold off on implementing that in TensorFlow until we're sure we're doing it right.
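The axis-order oddity @shoyer refers to can be seen in plain NumPy (shapes chosen arbitrarily for illustration):

```python
import numpy as np

x = np.zeros((2, 3, 4, 5))

# Advanced indices separated by a slice: the broadcast index result
# is moved to the FRONT of the output shape.
print(x[np.array([0, 1]), :, np.array([0, 1]), :].shape)  # (2, 3, 5)

# Advanced indices adjacent: the result stays in place.
print(x[:, np.array([0, 1]), np.array([0, 1]), :].shape)  # (2, 2, 5)
```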

### aselle commented Sep 28, 2016

 Yes, I am aware of how the mixed advanced/basic indexing is super nonintuitive. I could break the mixed indexing into its own issue.

### yaroslavvb commented Sep 28, 2016

 idx1/idx2 in your example could be Tensor objects, right?

### Rob-Haslinger-Bose commented Oct 4, 2016

Does 0.10 currently support negative indexing, as in `X[:, -1]` for example?

### aselle commented Oct 11, 2016 • edited

@yaroslavvb: yes, `idx1` and `idx2` can be tensor objects. Even in what is already implemented for basic indexing, you can do

```python
a = tf.constant(3)
b = tf.constant(6)
foo[a:b]
```

@robsync: negative indices have a bug in 0.10 and work in 0.11rc0.
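For reference, a minimal NumPy sketch of the negative-indexing semantics that `X[:, -1]` is expected to follow (the array contents are arbitrary):

```python
import numpy as np

X = np.arange(6).reshape(2, 3)   # [[0 1 2], [3 4 5]]
print(X[:, -1])                  # last column of every row: [2 5]
print(X[-1])                     # last row: [3 4 5]
```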

### Fenugreek commented Oct 26, 2016

@aselle: I saw in #206, per your 7/12 comment, that lvalue basic indexing has been implemented. But what does that mean? Whenever I do any kind of lvalue indexing, e.g.

```python
foo = ...  # some tensor
foo[:3] = tf.zeros(3)
```

I get `TypeError: 'Tensor' object does not support item assignment.` Thanks.

### shoyer commented Oct 26, 2016

 @Fenugreek Tensors are immutable. But you can do this sort of thing if `foo` is a `tf.Variable` by writing `foo[:3].assign(tf.zeros(3))`

### danijar commented Oct 26, 2016

...and making sure that it gets executed:

```python
with tf.control_dependencies([foo[:3].assign(tf.zeros(3))]):
    foo = tf.identity(foo)
```

### Fenugreek commented Oct 26, 2016

Great, thanks @shoyer and @danijar. I am past that problem now, but have another error with the basic indexing:

```python
print pool.get_shape(), args.get_shape(), values.get_shape()
tf.assign(pool[args], values)
```

gives

```
(1024000,) (512000,) (512000,)
Traceback (most recent call last):
  ...
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1644, in _DelegateStridedSliceShape
    return common_shapes.call_cpp_shape_fn(op, input_tensors_needed=[1, 2, 3])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 596, in call_cpp_shape_fn
    raise ValueError(err.message)
ValueError: Shape must be rank 1 but is rank 2
```

There are no rank-2 shapes anywhere, so I am confused. The same error results even if I just do `return pool[args]`, so it has nothing to do with the assignment but with the basic indexing itself. Thanks again.

### charlienash commented Feb 17, 2017

I'm having a similar error to @Fenugreek:

```python
A = tf.constant([[1, 2], [3, 4], [5, 6]])
id_rows = tf.constant([0, 2])
A[id_rows, :]
```

gives an error:

```
ValueError: Shape must be rank 1 but is rank 2 for 'strided_slice' (op: 'StridedSlice') with input shapes: [3,2], [1,2], [1,2], [1].
```

This is using version 1.0.0. Thanks.
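For row selection specifically, `tf.gather(A, id_rows)` is a common workaround that avoids the strided-slice path; the TF call itself is offered only as a suggestion here. A NumPy sketch of the result it should produce, using the same names as the example above:

```python
import numpy as np

A = np.array([[1, 2], [3, 4], [5, 6]])
id_rows = np.array([0, 2])

# What A[id_rows, :] means: take rows 0 and 2.
print(A[id_rows, :])                 # [[1 2] [5 6]]
# np.take along axis 0 has the same semantics as tf.gather on axis 0.
print(np.take(A, id_rows, axis=0))
```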

### aselle commented Mar 8, 2017

You need to broadcast it manually, i.e. `values.get_shape()` must match `pool[args].get_shape()`. This is a limitation of the current implementation.
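In other words, the right-hand side must already have the indexed shape; nothing is broadcast for you. A NumPy sketch of the shape requirement, reusing the names from @Fenugreek's example (the sizes here are small stand-ins):

```python
import numpy as np

pool = np.zeros(8)
args = np.array([1, 3, 5])

# values must already have pool[args].shape, i.e. (3,). A scalar or a
# differently-shaped array would need to be tiled/expanded first.
values = np.array([7.0, 8.0, 9.0])
pool[args] = values
print(pool)   # [0. 7. 0. 8. 0. 9. 0. 0.]
```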

### Kublai-Jing commented Apr 4, 2017

 @aselle Can you elaborate more on how to manually broadcast to make it work ? Thanks.

### shoyer commented Apr 4, 2017

StackOverflow would be a great place to ask for tips on how to use broadcasting to make `gather_nd` work.


### cancan101 commented Jun 28, 2017

 I posted the issue to SO: https://stackoverflow.com/questions/44793286/slicing-tensorflow-tensor-with-tensor.

### cancan101 commented Jun 28, 2017

 @itsmeolivia why was the issue closed? It doesn't seem like the example in the OP works.

### itsmeolivia commented Jun 28, 2017

 This question is better asked on StackOverflow since it is not a bug or feature request. There is also a larger community that reads questions there. Thanks!

### cancan101 commented Jun 28, 2017

I'm asking why the ticket was closed, given that the OP was in fact a feature request. Was it implemented, or was it decided not to implement it? There is no info in the ticket as to why it was closed.

### brianwa84 commented Jul 6, 2017

@aselle You probably want to keep this open, right?


### gibiansky commented Aug 25, 2017

 Is there any plan to implement this in TensorFlow?

### fword commented Nov 13, 2017

 any progress?

### tensorflowbutler commented Dec 22, 2017

It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

### yifeif commented Feb 8, 2018

@aselle do you know if we have a plan to implement this? If not, should we mark this as contributions welcome?

### traveller59 commented Feb 9, 2018

Using gather to do advanced indexing needs much more code than NumPy/PyTorch and makes the code hard to read. Please implement this if possible. I have implemented a simple function to do some NumPy-style advanced indexing based on `tf.gather_nd` and `tf.transpose`.
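A minimal NumPy sketch of the `tf.gather_nd` semantics such a helper builds on (this `gather_nd` function is a hypothetical illustration, not @traveller59's code): the last axis of `indices` holds a coordinate into `params`, and the outer axes of `indices` shape the result.

```python
import numpy as np

def gather_nd(params, indices):
    # Split the trailing coordinate axis into one index array per
    # dimension of params, then apply NumPy advanced indexing.
    indices = np.asarray(indices)
    return params[tuple(np.moveaxis(indices, -1, 0))]

A = np.array([[1, 2], [3, 4], [5, 6]])
# Pick the elements at coordinates (0, 1) and (2, 0).
print(gather_nd(A, [[0, 1], [2, 0]]))   # [2 5]
```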


### aselle commented Apr 25, 2018

 @saxenasaurabh, could you take this issue over?


### tensorflowbutler commented May 10, 2018

 Please remove the assignee, as this issue is inviting external contributions. Otherwise, remove the `contributions welcome` label. Thank you.

### gokul-uf commented Jul 26, 2018

 @aselle Is there active development on this front or is using tf Eager the less painful way to go? Thanks!

### asimshankar commented Jul 26, 2018

 @gokul-uf - Nope, this is not being actively developed at this time. Not quite sure if using eager helps you either. You could easily convert a Tensor to numpy and use advanced indexing, but if you want to compute gradients through that indexing, that won't work.

### shoyer commented Jul 26, 2018

On a related note, it might be worth reviewing our proposal for how to change indexing behavior in NumPy moving forward: http://www.numpy.org/neps/nep-0021-advanced-indexing.html
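The core distinction that proposal draws, between "outer" and "vectorized" indexing, can be sketched with today's NumPy (array contents arbitrary):

```python
import numpy as np

A = np.arange(12).reshape(3, 4)
rows = np.array([0, 2])
cols = np.array([1, 3])

# "Vectorized" indexing (current fancy indexing): index arrays are
# broadcast together, selecting the pairs (0, 1) and (2, 3).
print(A[rows, cols])             # [1 11]

# "Outer" indexing: the cross product of rows and cols.
print(A[np.ix_(rows, cols)])     # [[1 3] [9 11]]
```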