
Problems with TF_GraphGetTensorNumDims when applied to "Variable" operations #5106

Closed
juraj-bicikl opened this issue Oct 21, 2016 · 3 comments

@juraj-bicikl

For some reason, when applying TF_GraphGetTensorNumDims to a "Variable" operation,
the function returns -1, even though the shape and number of dimensions are
well defined.

A simple example:

#include <string>
#include <iostream>
#include "tensorflow/c/c_api.h"

int main() {

  // creating a TF_Graph
  TF_Graph* graph = TF_NewGraph();

  // creating a "Variable" operation
  TF_OperationDescription* opDesc = TF_NewOperation(graph, "Variable", "w");
  const int64_t dims[2] = {2, 2};
  TF_SetAttrShape(opDesc, "shape", dims, 2);
  TF_SetAttrType(opDesc, "dtype", TF_DOUBLE);

  TF_Status* status = TF_NewStatus();
  TF_Operation* w = TF_FinishOperation(opDesc, status);
  std::string finish_message = std::string(TF_Message(status));

  TF_Port w_port = {w, 0};
  int w_num_dims = TF_GraphGetTensorNumDims(graph, w_port, status);
  std::string num_dims_message = std::string(TF_Message(status));

  std::cout << "finish_message: " << finish_message << '\n';
  std::cout << "w_num_dims: " << w_num_dims << '\n';
  std::cout << "num_dims_message: " << num_dims_message << '\n';

  TF_DeleteGraph(graph);
  TF_DeleteStatus(status);

  return 0;
}

The program returns:

finish_message:
w_num_dims: -1
num_dims_message:

However, if I do the same for a "Placeholder" operation (by simply replacing
"Variable" with "Placeholder"), the returned number of dimensions is 2, as expected.

This looks like a bug to me. It might be related to the previous issue I reported
(#5059), since the inability to determine the shape of a tensor at a particular node propagates through the graph.

@asimshankar
Contributor

Thanks for the detailed report @juraj-bicikl. This particular case happens because the C++ shape-inference function for the Variable op explicitly returns an unknown shape (state_ops.cc). That does seem fishy to me; I'll investigate further and get back to you.
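
For context, a paraphrased sketch of what the registration in tensorflow/core/ops/state_ops.cc looked like at the time (not the exact upstream code; some attrs omitted). The op's shape function is wired to the generic unknown-shape inference helper instead of reading the "shape" attr:

  // Paraphrased sketch of the "Variable" op registration, not the exact code.
  REGISTER_OP("Variable")
      .Output("ref: Ref(dtype)")
      .Attr("shape: shape")
      .Attr("dtype: type")
      .SetIsStateful()
      // Shape inference deliberately reports an unknown shape, which is why
      // TF_GraphGetTensorNumDims() returns -1 even though "shape" is set.
      .SetShapeFn(shape_inference::UnknownShape);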

asimshankar self-assigned this Oct 21, 2016
@asimshankar
Contributor

Unfortunately, for historical reasons, the "Variable" op's shape function will not be able to distinguish between an unknown shape and a scalar shape (similar to the Placeholder op). Before the 1.0 release we are going to try to fix this up. However, in the meantime:

  • I'm going to try to fix this up for non-scalar shapes.
  • You might consider the workaround for Variable ops where you explicitly set the shape using something like the following (see the sketch after this list):
TF_GraphSetTensorShape(graph, w_port, dims, 2, status);
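
Applied to the program above, the workaround would look roughly like this (a sketch reusing graph, w_port, dims, and status from the original snippet):

  // Insert right after the `TF_Port w_port = {w, 0};` line, before the
  // TF_GraphGetTensorNumDims() call: explicitly record the known {2, 2}
  // shape for this output in the graph.
  TF_GraphSetTensorShape(graph, w_port, dims, 2, status);

  // With the shape recorded, TF_GraphGetTensorNumDims(graph, w_port, status)
  // should now return 2 (assuming `status` reports OK) instead of -1.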

@juraj-bicikl
Author

Thanks a lot!
I just tried using TF_GraphSetTensorShape, and it seems like a nice workaround.
