[feature request] numpy style shape sugar #586

Closed
DSLituiev opened this issue Dec 22, 2015 · 21 comments

@DSLituiev

I'd like to suggest a numpy-like read-only shape property for tf.Tensor, something like:

@property
def shape(self):
  return tuple(self.get_shape().as_list())
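
For comparison, a minimal sketch of what the proposed property would return next to the existing get_shape() (tf.placeholder is used here purely for illustration):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 3])
print(x.get_shape())                     # a TensorShape with an unknown first dimension
print(tuple(x.get_shape().as_list()))    # (None, 3) -- what the proposed .shape would return
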
@sherrym sherrym self-assigned this Dec 23, 2015
@mrry
Contributor

mrry commented Dec 28, 2015

I'd prefer not to have two ways to access (subtly different forms of) the same information. However, I wouldn't be opposed to changing Tensor.get_shape() to be a Tensor.shape property. Ideally you could use a (fully defined) TensorShape anywhere a list of integers is accepted. Would that work for your purposes?

@DSLituiev
Author

It is OK for me even as it is now; the question is what the optimal / general way would be ;)

I see two possible solutions, based on what you said and on whether the TensorShape object carries more than just a tuple of integers, i.e. whether it can be used to initialize other objects or be assigned to other variables:

(1) If TensorShape has practical significance as is: implement both, perhaps reimplementing get_shape as a property (called something like tshape, for tensor shape).

(2) If TensorShape is used only for internal convenience: change it to the property suggested above.

@sherrym sherrym removed their assignment Apr 18, 2016
@danijar
Contributor

danijar commented May 22, 2016

Are there any plans to change this? @DSLituiev's comment sounds very reasonable to me.

@mrry
Contributor

mrry commented May 22, 2016

There are no plans to change the use of TensorShape, and we're happy to accept suggestions/PRs that would make TensorShape usable wherever a list of integers would be.

@DSLituiev
Author

What is the use of TensorShape? Is it ever used downstream of get_shape()? As I understand it, it cannot even be fed into the shape parameter when initializing a tensor.

@DSLituiev
Author

Do you mean that you would oppose .shape returning a simple tuple because some downstream component relies on the TensorShape data structure carrying something beyond what the tuple contains?

@mrry
Contributor

mrry commented May 22, 2016

Tensors in TensorFlow can have dynamic shapes. TensorShape is used throughout the Python API to represent shapes that might have one or more unknown dimensions, or even an unknown rank; and to combine such shapes easily.

So, yes, I'd be opposed to having two slightly incompatible properties/methods on Tensor that convey the same information, but cannot be used the same way. If we add Tensor.shape, it should return TensorShape, and we should deprecate Tensor.get_shape() at the same time.

Which tensor initialization function doesn't support TensorShape objects as the shape argument? We should fix that.
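
As a rough sketch of the extra information a TensorShape carries beyond a tuple of integers (unknown dimensions, unknown rank, and merging of shapes):

import tensorflow as tf

partial = tf.TensorShape([None, 3])   # rank known, first dimension unknown
unknown = tf.TensorShape(None)        # rank itself unknown
print(partial.ndims, unknown.ndims)   # 2 None

# Shapes can be combined: merge_with checks compatibility and fills in unknowns.
merged = partial.merge_with(tf.TensorShape([5, None]))
print(merged.as_list())               # [5, 3]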

@danijar
Contributor

danijar commented May 23, 2016

@mrry I totally agree that we shouldn't have both .shape and .get_shape(). One problem I have with the current TensorShape is that it cannot be combined into new shapes easily; please see the example below. Also, I think it's worth switching to the shorter and Numpy-equivalent .shape now (rather than waiting and breaking more code later).

>>> import tensorflow as tf
>>> a = tf.placeholder(tf.float32, [None, None, None])
>>> b = tf.reshape(a, [a.get_shape()[0], 10, -1])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1092, in reshape
    name=name)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/op_def_library.py", line 411, in apply_op
    as_ref=input_arg.is_ref)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 566, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/constant_op.py", line 179, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/constant_op.py", line 162, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 332, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 272, in _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got Dimension(None) of type 'Dimension' instead.
>>> 
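
A sketch of one way around that particular error, using the dynamic shape tensor from tf.shape instead of the static TensorShape:

import tensorflow as tf

a = tf.placeholder(tf.float32, [None, None, None])
# tf.shape(a) is an int32 tensor evaluated at run time, so the unknown
# leading dimension is resolved when the graph runs.
b = tf.reshape(a, [tf.shape(a)[0], 10, -1])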

@DSLituiev
Author

This was probably the function. On the other hand, tf.placeholder accepts both TensorShape and integer tuples:

import numpy as np
import tensorflow as tf

y = tf.placeholder(np.dtype("float"), shape=(None, 1, 3))
y.get_shape()

# equivalent to
y = tf.placeholder(np.dtype("float"),
                   shape=tf.TensorShape([tf.Dimension(None), tf.Dimension(1), tf.Dimension(3)]))

@mrry, you must know better, but I have not yet come across an expression where I had to feed a TensorShape; a tuple (or sometimes a list) of integers always did the job.

@DSLituiev
Author

I mean: I understand that TensorShape is a convention for the Python API, but I have not seen a case where it is what the end user might need. Do you have documentation on how unknown rank is represented?
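
(For reference, unknown rank is represented by a TensorShape whose ndims is None; a quick sketch:)

import tensorflow as tf

# A placeholder created without a shape argument has an unknown rank.
x = tf.placeholder(tf.float32)
print(x.get_shape().ndims)          # None -- the rank itself is unknown
print(tf.TensorShape(None).ndims)   # None -- same thing, constructed directly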

@girving
Contributor

girving commented Jun 6, 2016

@mrry: Should this be marked contributions welcome, or do we still plan to do our own refactoring of .shape vs. .get_shape()?

@girving girving added the triaged label Jun 6, 2016
@mrry
Contributor

mrry commented Jun 6, 2016

We can't—or, at least, really shouldn't—accept a contribution on this until the internal refactoring is done. However, the internal refactoring is P3, so it might be some time before it rises to the top of the pile.

@girving
Contributor

girving commented Jun 6, 2016

Sounds good, let's leave it as is.

@danijar
Contributor

danijar commented Jul 4, 2016

@mrry Any news on the internal refactoring?

@aselle aselle removed the triaged label Jul 28, 2016
@alextp
Contributor

alextp commented Aug 15, 2016

@mrry friendly ping?

@mrry
Contributor

mrry commented Aug 15, 2016

This remains blocked on a low-priority internal cleanup. I'd be happy to hand it off to somebody who is looking for something to do, but—due to the nature of the conflict—they would need to be a Google employee.

@danijar
Contributor

danijar commented Aug 15, 2016

@mrry Maybe I can help? I'm currently interning at Google MTV, feel free to ping me.

@mrry
Contributor

mrry commented Aug 15, 2016

@danijar I'll be in touch!

@martinwicke
Member

What's the status of this? @yifeif may be able to help.

@danijar
Contributor

danijar commented Sep 17, 2016

That would be perfect. I don't have the time for it, unfortunately. Can you contact @mrry?

@mrry mrry assigned yifeif and unassigned mrry Oct 26, 2016
martinwicke pushed a commit to martinwicke/tensorflow that referenced this issue Dec 14, 2016
  per request from tensorflow#586
Change: 141922517
@drpngx
Contributor

drpngx commented Jan 24, 2017

Fixed by commit above.
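
For later readers, a short sketch of the behaviour after the fix, assuming Tensor.shape landed as a property returning the same TensorShape that get_shape() returns:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 3])
print(x.shape.as_list())   # [None, 3] -- same TensorShape information as get_shape()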

@drpngx drpngx closed this as completed Jan 24, 2017
tarasglek pushed a commit to tarasglek/tensorflow that referenced this issue Jun 20, 2017
Update cifar input following data change.