
tf.unstack did not work with tf 1.8 CudnnGRU tensors #22223

Closed
zheolong opened this issue Sep 12, 2018 · 5 comments
Assignees
Labels
stat:awaiting response Status - Awaiting response from author

Comments

@zheolong

zheolong commented Sep 12, 2018

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
    Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
$uname -r
3.10.0-327.el7.x86_64
  • Mobile device
Not mobile
  • TensorFlow installed from (source or binary):
    anaconda tf 1.8

  • TensorFlow version (use command below):

$conda list|grep tensor
tensorboard               1.8.0            py36hf484d3e_0
tensorflow                1.8.0                hb381393_0
tensorflow-base           1.8.0            py36h4df133c_0
tensorflow-gpu            1.8.0                h7b35bdc_0
  • Python version:
$python3.6 -V
Python 3.6.2 :: Continuum Analytics, Inc.
  • Bazel version (if compiling from source):
$bazel version
Build label: 0.4.5
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Thu Mar 16 12:19:38 2017 (1489666778)
Build timestamp: 1489666778
Build timestamp as int: 1489666778
  • CUDA/cuDNN version:
$conda list|grep -i cuda
cudatoolkit               8.0                           3    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
cudnn                     7.0.5                 cuda8.0_0
  • GPU model and memory:

== cat /etc/issue ===============================================
Linux rvab01298.sqa.ztt 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
VERSION="7.2 (Paladin)"
VERSION_ID="7.2"
Qihoo360_BUGZILLA_PRODUCT_VERSION=7.2
Qihoo360_SUPPORT_PRODUCT_VERSION=7.2

== are we in docker =============================================
No

== compiler =====================================================
c++ (GCC) 4.9.2
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


== uname -a =====================================================
Linux rvab01298.sqa.ztt 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

== check pips ===================================================
numpy (1.13.3)
protobuf (3.5.1)
tensorflow (1.8.0)

== check for virtualenv =========================================
False

== tensorflow import ============================================
tf.VERSION = 1.8.0
tf.GIT_VERSION = b'unknown'
tf.COMPILER_VERSION = b'unknown'
Sanity check: array([1], dtype=int32)

== env ==========================================================
LD_LIBRARY_PATH :/usr/local/mpc-0.8.1/lib:/usr/local/gmp-4.3.2/lib:/usr/local/mpfr-2.4.2/lib:/gruntdata/qihoo360/cuda/lib64
DYLD_LIBRARY_PATH is unset

== nvidia-smi ===================================================
Wed Sep 12 13:34:30 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.26                 Driver Version: 375.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K40m          On   | 0000:02:00.0     Off |                    0 |
| N/A   36C    P0    67W / 235W |   1161MiB / 11439MiB |     39%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K40m          On   | 0000:03:00.0     Off |                    0 |
| N/A   35C    P0    60W / 235W |     73MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     13950    C   bin/arks                                       868MiB |
|    0     27880    C   python3.6                                      288MiB |
|    1     27880    C   python3.6                                       71MiB |
+-----------------------------------------------------------------------------+

== cuda libs  ===================================================
/usr/local/cuda-8.0/doc/man/man7/libcudart.7
/usr/local/cuda-8.0/doc/man/man7/libcudart.so.7
/usr/local/cuda-8.0/lib64/libcudart_static.a
/usr/local/cuda-8.0/lib64/libcudart.so.8.0.61
/usr/local/cuda-7.5/doc/man/man7/libcudart.7
/usr/local/cuda-7.5/doc/man/man7/libcudart.so.7
/usr/local/cuda-7.5/lib64/libcudart.so.7.5.18
/usr/local/cuda-7.5/lib64/libcudart_static.a
/usr/local/cuda-7.5/lib/libcudart.so.7.5.18
/usr/local/cuda-7.5/lib/libcudart_static.a

Describe the problem

tf.unstack did not work as expected: it did not reduce a rank-R tensor to rank-(R-1) tensors.

Source code / logs

code:

#! /usr/bin/env python
# -*- coding: utf-8 -*-

import tensorflow as tf

rnn_model = tf.contrib.cudnn_rnn.CudnnGRU(
        num_layers=1,
        num_units=64,
        direction='unidirectional')
rnn_model.build([3, 1, 3])  # input shape: [max_time, batch_size, input_size]

inputs = [[[1, 1, 1], [1, 1, 1], [1, 1, 1]]]
inputs_tensor = tf.convert_to_tensor(inputs, dtype=tf.float32)
print(tf.shape(inputs_tensor))

rnn_out, rnn_state = rnn_model(inputs_tensor)
print("rnn_state: ", rnn_state)

# Expected: this drops the leading dimension of rnn_state.
rnn_layers = tf.unstack(rnn_state)
print("rnn_layers", rnn_layers)

Paste the code into a file demo.py, then run it from the Linux command line:

$ python3.6 demo.py

output:

Tensor("Shape:0", shape=(3,), dtype=int32)
rnn_state:  (<tf.Tensor 'cudnn_gru/CudnnRNN:1' shape=(1, ?, 64) dtype=float32>,)
rnn_layers [<tf.Tensor 'unstack:0' shape=(1, ?, 64) dtype=float32>]

The expected result is rnn_layers [<tf.Tensor 'unstack:0' shape=(?, 64) dtype=float32>], i.e. with the leading dimension removed.

@zheolong zheolong changed the title tf.unstack did not work with tf 1.8 CudnnGRU tensors 【Bug】tf.unstack did not work with tf 1.8 CudnnGRU tensors Sep 12, 2018
@tensorflowbutler tensorflowbutler added the stat:awaiting response Status - Awaiting response from author label Sep 12, 2018
@tensorflowbutler (Member)

Thank you for your post. We noticed you have not filled out the following fields in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.
Bazel version
Exact command to reproduce

@bignamehyp (Member)

rnn_state is a tuple, not a tensor.

Please debug this yourself before posting an issue on TensorFlow.

This question is better asked on Stack Overflow, since it is not a bug or feature request. There is also a larger community that reads questions there.

If you think we've misinterpreted a bug, please comment again with a clear explanation, as well as all of the information requested in the issue template. Thanks!
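
The tuple wrapping described above is the root cause: tf.unstack first converts its argument to a tensor, and converting a one-element tuple of tensors stacks it, adding a leading dimension that the unstack then removes again, so the shapes in the list come out unchanged. A minimal numpy sketch of the same semantics (shapes are hypothetical, with batch size 2 for illustration; np.stack plus list() mimics tf.unstack's convert-then-unpack behavior):

```python
import numpy as np

# rnn_state from CudnnGRU is a tuple: (state_tensor,), where
# state_tensor has shape (num_layers, batch, num_units).
state_tensor = np.ones((1, 2, 64), dtype=np.float32)  # hypothetical state
rnn_state = (state_tensor,)                            # what CudnnGRU returns

# Unstacking the tuple: the tuple is first stacked into shape (1, 1, 2, 64),
# then unpacked along axis 0, so each piece keeps the original (1, 2, 64).
wrong = list(np.stack(rnn_state))  # analogous to tf.unstack(rnn_state)
print(wrong[0].shape)              # (1, 2, 64) -- rank not reduced

# Unstacking the tensor inside the tuple reduces the rank as expected.
right = list(state_tensor)         # analogous to tf.unstack(rnn_state[0])
print(right[0].shape)              # (2, 64)
```

So the fix for the original script would be tf.unstack(rnn_state[0]), which yields tensors of shape (?, 64) as the reporter expected.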

@zheolong (Author)

zheolong commented Sep 13, 2018

@bignamehyp rnn_state is a tuple? But I printed it out as a 'Tensor'; moreover, the docs for tf.contrib.cudnn_rnn.CudnnGRU describe the return value as output tensor(s).

Stack Overflow question added, the link: stackoverflow issue

@zheolong (Author)

zheolong commented Sep 13, 2018

> Thank you for your post. We noticed you have not filled out the following fields in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.
> Bazel version
> Exact command to reproduce

OK, Bazel version and exact command added.

@zheolong (Author)

> @bignamehyp rnn_state is a tuple? But I printed it out as a 'Tensor'; moreover, the docs for tf.contrib.cudnn_rnn.CudnnGRU describe the return value as output tensor(s).
>
> stackoverflow issue added, the link: [stackoverflow issue](https://stackoverflow.com/questions/52306358/tf-unstack-did-not-work-with-tf-1-8-cudnngru-tens

> rnn_state is a tuple, not a tensor.
>
> Please debug this yourself before posting an issue on TensorFlow.
>
> This question is better asked on Stack Overflow, since it is not a bug or feature request. There is also a larger community that reads questions there.
>
> If you think we've misinterpreted a bug, please comment again with a clear explanation, as well as all of the information requested in the issue template. Thanks!

OK, got it done. Thanks!

@zheolong zheolong changed the title 【Bug】tf.unstack did not work with tf 1.8 CudnnGRU tensors tf.unstack did not work with tf 1.8 CudnnGRU tensors Sep 18, 2018