Implement advanced indexing (and mixed basic/advanced) #4638

Open
aselle opened this Issue Sep 28, 2016 · 10 comments


@aselle
Member
aselle commented Sep 28, 2016

NumPy-style advanced indexing is documented here: http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing

We currently support basic indexing (http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#basic-slicing-and-indexing) using StridedSlice.

e.g.

foo = ...  # some tensor
idx1 = [1, 2, 3]
idx2 = [3, 4, 5]
foo[idx1, idx2, 3]
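
For reference, this is roughly what the equivalent NumPy advanced indexing does (a sketch; foo's shape is assumed just for illustration):

    import numpy as np

    foo = np.arange(4 * 6 * 8).reshape(4, 6, 8)  # assumed shape, for illustration
    idx1 = [1, 2, 3]
    idx2 = [3, 4, 5]

    # The index lists (and the scalar 3) broadcast together, so this picks out
    # foo[1, 3, 3], foo[2, 4, 3], foo[3, 5, 3] and returns a shape-(3,) array.
    print(foo[idx1, idx2, 3])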

@aselle aselle added the enhancement label Sep 28, 2016
@aselle aselle self-assigned this Sep 28, 2016
@aselle
Member
aselle commented Sep 28, 2016

Broken off of #206

@shoyer
Member
shoyer commented Sep 28, 2016

I mentioned this in #206, but I wanted to remind again that mixed indexing with slices/arrays has some really strange/unpredictable behavior for the order of result axes in NumPy. So it might be better to hold off on implementing that in TensorFlow until we're sure we're doing it right.
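
For example (NumPy only; the shape is made up just to show the quirk): when the advanced indices are adjacent, their broadcast dimensions stay in place, but when they are separated by a slice those dimensions move to the front of the result:

    import numpy as np

    x = np.zeros((10, 20, 30, 40))
    rows = np.array([0, 1, 2])
    cols = np.array([5, 6, 7])

    # Advanced indices adjacent: broadcast dims stay where the indexed axes were.
    print(x[:, rows, cols, :].shape)   # (10, 3, 40)

    # Advanced indices separated by a slice: broadcast dims jump to the front.
    print(x[rows, :, cols, :].shape)   # (3, 20, 40)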

@aselle
Member
aselle commented Sep 28, 2016

Yes, I'm aware that mixed advanced/basic indexing is highly non-intuitive. I could break the mixed indexing out into its own issue.

@yaroslavvb
Contributor

idx1/idx2 in your example could be Tensor objects, right?

@robsync
robsync commented Oct 4, 2016

Does 0.10 currently support negative indexing, as in X[:, -1] for example?

@aselle
Member
aselle commented Oct 11, 2016 edited

@yaroslavvb: yes, idx1 and idx2 can be Tensor objects. Even in what is already implemented for basic indexing, you can do

a = tf.constant(3)
b = tf.constant(6)
foo[a:b]

@robsync: negative indices have a bug in 0.10 but work in 0.11rc0.
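
For example, something like this works in 0.11rc0 (a minimal sketch; evaluate the result with Session.run):

    import tensorflow as tf

    X = tf.constant([[1, 2, 3],
                     [4, 5, 6]])
    last_col = X[:, -1]   # -> [3, 6]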

@Fenugreek

@aselle: I saw in #206, per your 7/12 comment, that lvalue basic indexing has been implemented. But what does that mean? Whenever I do any kind of lvalue indexing,

e.g.

foo = ...  # some tensor
foo[:3] = tf.zeros(3)

I get TypeError: 'Tensor' object does not support item assignment.
Thanks.

@shoyer
Member
shoyer commented Oct 26, 2016

@Fenugreek Tensors are immutable. But you can do this sort of thing if foo is a tf.Variable by writing foo[:3].assign(tf.zeros(3))

@danijar
Contributor
danijar commented Oct 26, 2016

...and making sure that it gets executed:

with tf.control_dependencies([foo[:3].assign(tf.zeros(3))]):
    foo = tf.identity(foo)
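
Putting the two suggestions together, a minimal TF 1.x-style end-to-end sketch (the shape and values are assumptions, just for illustration) that runs the sliced assign explicitly and then reads the variable back:

    import tensorflow as tf

    foo = tf.Variable(tf.ones([10]))
    assign_op = foo[:3].assign(tf.zeros(3))   # sliced assign; only works on a tf.Variable

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(assign_op)                   # actually execute the assignment
        print(sess.run(foo))                  # first three entries 0.0, the rest 1.0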
@Fenugreek

Great, thanks @shoyer and @danijar. I'm past that problem now, but I've hit another error with basic indexing:

        print pool.get_shape(), args.get_shape(), values.get_shape()
        tf.assign(pool[args], values)

gives

(1024000,) (512000,) (512000,)
Traceback (most recent call last):
...
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1644, in _DelegateStridedSliceShape
    return common_shapes.call_cpp_shape_fn(op, input_tensors_needed=[1, 2, 3])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 596, in call_cpp_shape_fn
    raise ValueError(err.message)
ValueError: Shape must be rank 1 but is rank 2

There are no rank-2 shapes anywhere, so I am confused. The same error results even if I just return pool[args], so it has nothing to do with the assignment but with the basic indexing itself. Thanks again.
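
For what it's worth, pool[args] with a rank-1 index tensor is the advanced-indexing case this issue tracks; basic indexing via StridedSlice only handles scalars and slices, hence the shape error. Until advanced indexing lands, the existing gather/scatter ops cover the same read/write pattern. A sketch (shapes taken from the comment above; assuming pool is a tf.Variable and args holds integer indices):

    import tensorflow as tf

    pool = tf.Variable(tf.zeros([1024000]))
    args = tf.placeholder(tf.int32, [512000])
    values = tf.placeholder(tf.float32, [512000])

    gathered = tf.gather(pool, args)                   # read analogue of pool[args]
    update_op = tf.scatter_update(pool, args, values)  # write analogue of pool[args] = values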
