Implement advanced indexing (and mixed basic/advanced) #4638

aselle opened this Issue Sep 28, 2016 · 10 comments



aselle commented Sep 28, 2016

NumPy-style advanced indexing is documented here.

We currently support basic indexing using StridedSlice:


foo = some tensor
foo[idx1, idx2, 3]
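For reference, a minimal NumPy sketch of the basic vs. advanced distinction being discussed (array contents chosen arbitrarily for illustration):

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)

# Basic indexing: integers and slices only; returns a view.
basic = x[0, 1:3, 2]          # shape (2,)

# Advanced indexing: integer arrays as indices; returns a copy.
# Picks out x[0, 2, 3] and x[1, 0, 3] element-wise.
idx1 = np.array([0, 1])
idx2 = np.array([2, 0])
advanced = x[idx1, idx2, 3]
```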

@aselle aselle added the enhancement label Sep 28, 2016
@aselle aselle self-assigned this Sep 28, 2016
aselle commented Sep 28, 2016

Broken off of #206

shoyer commented Sep 28, 2016

I mentioned this in #206, but I wanted to remind again that mixed indexing with slices/arrays has some really strange/unpredictable behavior for the order of result axes in NumPy. So it might be better to hold off on implementing that in TensorFlow until we're sure we're doing it right.
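The NumPy behavior referred to here can be shown in a small sketch: when the advanced (array) indices are separated by a slice, NumPy moves the broadcast index axis to the front of the result, which surprises most users.

```python
import numpy as np

x = np.zeros((10, 20, 30, 40))

# Advanced indices adjacent: the broadcast index axis stays in place,
# replacing axes 1 and 2.
a = x[:, [1, 2], [3, 4], :]
print(a.shape)  # (10, 2, 40)

# Advanced indices separated by a slice: the index axis jumps to the front.
b = x[:, [1, 2], :, [3, 4]]
print(b.shape)  # (2, 10, 30)
```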

aselle commented Sep 28, 2016

Yes, I am aware of how the mixed advanced/basic indexing is super nonintuitive. I could break the mixed indexing into its own issue.


idx1/idx2 in your example could be Tensor objects, right?

robsync commented Oct 4, 2016

does 0.10 currently support negative indexing, as in X[:,-1] for example?

aselle commented Oct 11, 2016 edited

@yaroslavvb: yes, idx1 and idx2 can be Tensor objects, even in what is already implemented for basic indexing.


@robsync: negative indices have a bug in 0.10 and work in 0.11rc0


@aselle: I saw in #206 according to your 7/12 comment that lvalue basic indexing has been implemented. But what does that mean? Because whenever I do any kind of lvalue indexing


foo = Some tensor
foo[:3] = tf.zeros(3)

I get TypeError: 'Tensor' object does not support item assignment.

shoyer commented Oct 26, 2016

@Fenugreek Tensors are immutable, but you can do this sort of thing if foo is a tf.Variable, by writing foo[:3].assign(tf.zeros(3)).

danijar commented Oct 26, 2016

...and making sure that it gets executed:

with tf.control_dependencies([foo[:3].assign(tf.zeros(3))]):
    foo = tf.identity(foo)

Great, thanks @shoyer and @danijar . I am past that problem now, but have another error with the basic indexing:

        print pool.get_shape(), args.get_shape(), values.get_shape()
        tf.assign(pool[args], values)


(1024000,) (512000,) (512000,)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/", line 1644, in _DelegateStridedSliceShape
    return common_shapes.call_cpp_shape_fn(op, input_tensors_needed=[1, 2, 3])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/", line 596, in call_cpp_shape_fn
    raise ValueError(err.message)
ValueError: Shape must be rank 1 but is rank 2

There are no rank-2 shapes anywhere, so I am confused. The same error results even if I just do return pool[args], so it has nothing to do with the assignment but with the indexing itself. Thanks again.
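For context, what pool[args] is attempting here is NumPy-style integer-array indexing (a gather, and on assignment a scatter), which is exactly the advanced indexing this issue tracks; __getitem__ at this point only implements strided slices, hence the shape error. A minimal NumPy sketch of the intended semantics (shapes shrunk for illustration; the pool/args/values names are taken from the comment above):

```python
import numpy as np

pool = np.zeros(8)                  # stands in for the (1024000,) tensor
args = np.array([1, 3, 5])          # integer index array, as in the report
values = np.array([10., 20., 30.])

gathered = pool[args]               # advanced indexing: same as np.take(pool, args)
pool[args] = values                 # scatter-assign into the selected positions
print(pool)                         # [ 0. 10.  0. 20.  0. 30.  0.  0.]
```

In TensorFlow of that era, the corresponding ops were tf.gather(pool, args) for the read and, for a Variable, tf.scatter_update for the write.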
