Branch 143639671 #6667

Merged Jan 5, 2017

Commits (69)
f73fe0e  Add a benchmark for TensorFlow RPC performance (Jan 3, 2017)
542e4dc  tfdbg doc: emphasize the new required BUILD dependency (caisq, Jan 3, 2017)
6a04c10  Fixed incorrect example for tf.while_loop(). (tensorflower-gardener, Jan 3, 2017)
1514d36  Fix `sample` shape hints and remove `sample_n`. (jvdillon, Jan 3, 2017)
ea579f1  Update generated Python Op docs. (tensorflower-gardener, Jan 3, 2017)
81d9a24  Update pylintrc. (tensorflower-gardener, Jan 3, 2017)
7f62ba6  Update documentation for parse_example vs parse_single_example. (tensorflower-gardener, Jan 3, 2017)
1055b6a  Handle non-tensor args for predictions and labels. (tensorflower-gardener, Jan 3, 2017)
93fa85e  Update generated Python Op docs. (tensorflower-gardener, Jan 3, 2017)
2add2f1  Pass Estimator model_dir to the model_fn. (tensorflower-gardener, Jan 3, 2017)
92b5d2f  Update generated Python Op docs. (tensorflower-gardener, Jan 3, 2017)
e24d017  Restrict weights rank to be the same as the broadcast target, to avoi… (tensorflower-gardener, Jan 3, 2017)
2c0fa4e  Remove unused FLAGS. (Jan 3, 2017)
ee1f819  Add control edge support to TensorId. (skye, Jan 3, 2017)
74edc58  Fix a bug in sparse_softmax_cross_entropy for weights of unspecified … (tensorflower-gardener, Jan 3, 2017)
a023d0b  Update generated Python Op docs. (tensorflower-gardener, Jan 3, 2017)
35f8b1f  Fix freeze_graph. (tensorflower-gardener, Jan 4, 2017)
109c03d  Android: add Timer utility class for measuring cpu and wall time. (andrewharp, Jan 4, 2017)
6ec984e  Adds the following new ops: (tensorflower-gardener, Jan 4, 2017)
d352573  Update ops-related pbtxt files. (tensorflower-gardener, Jan 4, 2017)
d4109cb  Optimize im2col section of quantized convolution (petewarden, Jan 4, 2017)
8a6014b  Fix usage of tensorflow namespace in graph_to_dot (petewarden, Jan 4, 2017)
7c63520  Update generated Python Op docs. (tensorflower-gardener, Jan 4, 2017)
6703501  Added Experiment integration tests with custom Estimator, linear/dnn/… (ispirmustafa, Jan 4, 2017)
271b7f3  tfdbg CLI: let list_tensors (lt) output display dump file size (caisq, Jan 4, 2017)
99125e7  tfdbg doc: minor fix re. command-line flags (caisq, Jan 4, 2017)
48da18f  Update generated Python Op docs. (tensorflower-gardener, Jan 4, 2017)
51e5d17  Convert tf.flags usage to argparse. Move use of FLAGS globals into m… (Jan 4, 2017)
97866c1  Automated rollback of change 143523842 (tensorflower-gardener, Jan 4, 2017)
e0ec343  LinearOperator (base class), prefer statically defined shape if avail… (langmore, Jan 4, 2017)
8cb009b  Updated description of CheckpointSaver. (tensorflower-gardener, Jan 4, 2017)
843974d  Fix typo (though --> through) in tf.placeholder_with_default(). (tensorflower-gardener, Jan 4, 2017)
8cfffcf  Update generated Python Op docs. (tensorflower-gardener, Jan 4, 2017)
e78b994  Update ops-related pbtxt files. (tensorflower-gardener, Jan 4, 2017)
ea29616  Fix parsing of Python command-line arguments in tests. (hawkinsp, Jan 4, 2017)
012800e  Change for internal compatibility. (tensorflower-gardener, Jan 4, 2017)
3b5f50d  Add support for byte-level native access for Android TensorFlow. (tensorflower-gardener, Jan 4, 2017)
1118de0  Mark gemmlowp result as initialized. (Jan 4, 2017)
bd97023  Switch tf-learn BaseEstimator.evaluate() to using evaluation.evaluate… (caisq, Jan 4, 2017)
4982c62  Add deprecation warnings to tf.neg and prepare for deprecation warnin… (aselle, Jan 4, 2017)
ffd7338  Update generated Python Op docs. (tensorflower-gardener, Jan 4, 2017)
354972d  Move SIMD feature warnings to the first use of intensive CPU computat… (petewarden, Jan 4, 2017)
2482564  Adds V2 versions of Queue and Reader ops using ResourceHandles. (tensorflower-gardener, Jan 4, 2017)
d7b1d0a  Update ops-related pbtxt files. (tensorflower-gardener, Jan 4, 2017)
e1eae19  Android: add debug-specific overlay for detection activity that can b… (andrewharp, Jan 4, 2017)
2522285  Allow fully dynamic batch/event overrides. (jvdillon, Jan 4, 2017)
fc8dd9f  Update generated Python Op docs. (tensorflower-gardener, Jan 4, 2017)
70d4f7e  LinearOperatorIdentity added to tensorflow/contrib/linalg/ (langmore, Jan 4, 2017)
8d072f5  Android: add support for object names in MultiboxTracker (andrewharp, Jan 4, 2017)
f74af69  Update generated Python Op docs. (tensorflower-gardener, Jan 4, 2017)
ddedae6  Make srcs and deps arguments to tf_cuda_cc_test build rule optional. (hawkinsp, Jan 4, 2017)
fcc319a  Remove unused FLAGS variable. (Jan 4, 2017)
2eb1604  Make Empty Op Stateful. (tensorflower-gardener, Jan 4, 2017)
de053cf  Update ops-related pbtxt files. (tensorflower-gardener, Jan 4, 2017)
73eff47  Update callers of array_ops.concat to call array_ops.concat_v2 instea… (tensorflower-gardener, Jan 4, 2017)
02d2385  Include stream_executor headers in pip package include directory. (Jan 4, 2017)
b17f1e2  Removing comments for investigation of the root cause of test toleran… (tensorflower-gardener, Jan 4, 2017)
eba10b7  Defer optimizer function run in linear classifier until apply gradien… (tensorflower-gardener, Jan 4, 2017)
e43b9d8  Remove a few unused functions. (lilao, Jan 5, 2017)
f6d47fa  Improved documentation for OpenCL setup (benoitsteiner, Jan 5, 2017)
a31acbe  Remove pending inputs from RunState of DirectSession::Run. (Jan 5, 2017)
37b430c  Moving FinalOpsHook into basic_session_run_hooks. (Jan 5, 2017)
bf00bcc  Provide multiple implementations of RPC requests on the feed path. (mrry, Jan 5, 2017)
1628abf  Fixing problem with restoring scope with partitioned variables. (Jan 5, 2017)
d954169  Update generated Python Op docs. (tensorflower-gardener, Jan 5, 2017)
333dc32  Change arg order for {softmax,sparse_softmax,sigmoid}_cross_entropy_w… (martinwicke, Jan 5, 2017)
b9b7b88  Update generated Python Op docs. (tensorflower-gardener, Jan 5, 2017)
7c97527  Make labeled_tensor use tf.contrib.nn.deprecated_flipped_* versions o… (martinwicke, Jan 5, 2017)
83a98cc  Merge commit for internal changes (caisq, Jan 5, 2017)

Files changed
@@ -93,14 +93,14 @@ public TensorFlowInferenceInterface() {

   // Methods for creating a native Tensor and filling it with values.
   public native void fillNodeFloat(String inputName, int[] dims, float[] values);

   public native void fillNodeInt(String inputName, int[] dims, int[] values);

   public native void fillNodeDouble(String inputName, int[] dims, double[] values);
+  public native void fillNodeByte(String inputName, int[] dims, byte[] values);

   public native void readNodeFloat(String outputName, float[] values);
   public native void readNodeInt(String outputName, int[] values);
   public native void readNodeDouble(String outputName, double[] values);
+  public native void readNodeByte(String outputName, byte[] values);

   /**
    * Canary method solely for determining if the tensorflow_inference native library should be

22 changes: 12 additions & 10 deletions tensorflow/contrib/android/jni/tensorflow_inference_jni.cc
@@ -272,7 +272,7 @@ JNIEXPORT jint JNICALL TENSORFLOW_METHOD(close)(JNIEnv* env, jobject thiz) {
 }

 // TODO(andrewharp): Use memcpy to fill/read nodes.
-#define FILL_NODE_METHOD(DTYPE, JAVA_DTYPE, TENSOR_DTYPE) \
+#define FILL_NODE_METHOD(DTYPE, JAVA_DTYPE, CTYPE, TENSOR_DTYPE) \
   FILL_NODE_SIGNATURE(DTYPE, JAVA_DTYPE) { \
     SessionVariables* vars = GetSessionVars(env, thiz); \
     jboolean iCopied = JNI_FALSE; \
@@ -284,7 +284,7 @@ JNIEXPORT jint JNICALL TENSORFLOW_METHOD(close)(JNIEnv* env, jobject thiz) {
     } \
     env->ReleaseIntArrayElements(dims, dim_vals, JNI_ABORT); \
     tensorflow::Tensor input_tensor(TENSOR_DTYPE, shape); \
-    auto tensor_mapped = input_tensor.flat<JAVA_DTYPE>(); \
+    auto tensor_mapped = input_tensor.flat<CTYPE>(); \
    j##JAVA_DTYPE* values = env->Get##DTYPE##ArrayElements(arr, &iCopied); \
    j##JAVA_DTYPE* value_ptr = values; \
    const int array_size = env->GetArrayLength(arr); \
@@ -300,14 +300,14 @@ JNIEXPORT jint JNICALL TENSORFLOW_METHOD(close)(JNIEnv* env, jobject thiz) {
     vars->input_tensors[input_name] = input_pair; \
   }

-#define READ_NODE_METHOD(DTYPE, JAVA_DTYPE) \
+#define READ_NODE_METHOD(DTYPE, JAVA_DTYPE, CTYPE) \
   READ_NODE_SIGNATURE(DTYPE, JAVA_DTYPE) { \
     SessionVariables* vars = GetSessionVars(env, thiz); \
     Tensor* t = GetTensor(env, thiz, node_name_jstring); \
     if (t == nullptr) { \
       return -1; \
     } \
-    auto tensor_mapped = t->flat<JAVA_DTYPE>(); \
+    auto tensor_mapped = t->flat<CTYPE>(); \
     jboolean iCopied = JNI_FALSE; \
     j##JAVA_DTYPE* values = env->Get##DTYPE##ArrayElements(arr, &iCopied); \
     j##JAVA_DTYPE* value_ptr = values; \
@@ -320,10 +320,12 @@ JNIEXPORT jint JNICALL TENSORFLOW_METHOD(close)(JNIEnv* env, jobject thiz) {
     return 0; \
   }

-FILL_NODE_METHOD(Float, float, tensorflow::DT_FLOAT)
-FILL_NODE_METHOD(Int, int, tensorflow::DT_INT32)
-FILL_NODE_METHOD(Double, double, tensorflow::DT_DOUBLE)
+FILL_NODE_METHOD(Float, float, float, tensorflow::DT_FLOAT)
+FILL_NODE_METHOD(Int, int, int, tensorflow::DT_INT32)
+FILL_NODE_METHOD(Double, double, double, tensorflow::DT_DOUBLE)
+FILL_NODE_METHOD(Byte, byte, uint8_t, tensorflow::DT_UINT8)

-READ_NODE_METHOD(Float, float)
-READ_NODE_METHOD(Int, int)
-READ_NODE_METHOD(Double, double)
+READ_NODE_METHOD(Float, float, float)
+READ_NODE_METHOD(Int, int, int)
+READ_NODE_METHOD(Double, double, double)
+READ_NODE_METHOD(Byte, byte, uint8_t)

2 changes: 2 additions & 0 deletions tensorflow/contrib/android/jni/tensorflow_inference_jni.h
@@ -59,10 +59,12 @@ JNIEXPORT jint JNICALL TENSORFLOW_METHOD(close)(JNIEnv* env, jobject thiz);
 FILL_NODE_SIGNATURE(Float, float);
 FILL_NODE_SIGNATURE(Int, int);
 FILL_NODE_SIGNATURE(Double, double);
+FILL_NODE_SIGNATURE(Byte, byte);

 READ_NODE_SIGNATURE(Float, float);
 READ_NODE_SIGNATURE(Int, int);
 READ_NODE_SIGNATURE(Double, double);
+READ_NODE_SIGNATURE(Byte, byte);

 #ifdef __cplusplus
 }  // extern "C"

@@ -132,7 +132,7 @@ def test_mc_estimate_of_normal_mean_and_variance_is_correct_vs_analytic(self):
     with self.test_session():
       p = distributions.Normal(mu=[1.0, -1.0], sigma=[0.3, 0.5])
       # Compute E_p[X] and E_p[X^2].
-      z = p.sample_n(n=n)
+      z = p.sample(n, seed=42)
       e_x = monte_carlo.expectation(lambda x: x, p, z=z, seed=42)
       e_x2 = monte_carlo.expectation(math_ops.square, p, z=z, seed=0)
       var = e_x2 - math_ops.square(e_x)
@@ -161,7 +161,7 @@ def test_raises_if_both_z_and_n_are_none(self):
   def test_raises_if_both_z_and_n_are_not_none(self):
     with self.test_session():
       dist = distributions.Normal(mu=0., sigma=1.)
-      z = dist.sample_n(n=1)
+      z = dist.sample(seed=42)
       n = 1
       seed = None
       with self.assertRaisesRegexp(ValueError, 'exactly one'):
@@ -179,7 +179,7 @@ def test_returns_n_samples_if_n_provided(self):
   def test_returns_z_if_z_provided(self):
     with self.test_session():
       dist = distributions.Normal(mu=0., sigma=1.)
-      z = dist.sample_n(n=10)
+      z = dist.sample(10, seed=42)
       n = None
       seed = None
       z = monte_carlo._get_samples(dist, z, n, seed)
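
These tests lean on the basic Monte Carlo identity E_p[f(X)] ~ (1/n) * sum_i f(z_i) for z_i drawn from p. A minimal sketch of the same pattern outside the test harness, assuming the contrib-era module paths shown in the file headers here (the math_ops import path is an assumption) and an arbitrary sample count:

from tensorflow.contrib import distributions
from tensorflow.contrib.bayesflow.python.ops import monte_carlo
from tensorflow.python.ops import math_ops

# Draw one shared batch of samples, then reuse it for several estimates.
p = distributions.Normal(mu=[1.0, -1.0], sigma=[0.3, 0.5])
z = p.sample(10000, seed=42)
e_x = monte_carlo.expectation(lambda x: x, p, z=z, seed=42)      # approx. mu
e_x2 = monte_carlo.expectation(math_ops.square, p, z=z, seed=0)  # approx. mu^2 + sigma^2
var = e_x2 - math_ops.square(e_x)                                # approx. sigma^2
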
6 changes: 3 additions & 3 deletions tensorflow/contrib/bayesflow/python/ops/entropy.py
@@ -143,7 +143,7 @@ def elbo_ratio(log_p,
       shape broadcastable to `q.batch_shape`.
       For example, `log_p` works "just like" `q.log_prob`.
     q: `tf.contrib.distributions.Distribution`.
-    z: `Tensor` of samples from `q`, produced by `q.sample_n`.
+    z: `Tensor` of samples from `q`, produced by `q.sample(n)` for some `n`.
     n: Integer `Tensor`. Number of samples to generate if `z` is not provided.
     seed: Python integer to seed the random number generator.
     form: Either `ELBOForms.analytic_entropy` (use formula for entropy of `q`)
@@ -193,7 +193,7 @@ def entropy_shannon(p,

   Args:
     p: `tf.contrib.distributions.Distribution`
-    z: `Tensor` of samples from `p`, produced by `p.sample_n(n)` for some `n`.
+    z: `Tensor` of samples from `p`, produced by `p.sample(n)` for some `n`.
     n: Integer `Tensor`. Number of samples to generate if `z` is not provided.
     seed: Python integer to seed the random number generator.
     form: Either `ELBOForms.analytic_entropy` (use formula for entropy of `q`)
@@ -326,7 +326,7 @@ def renyi_ratio(log_p, q, alpha, z=None, n=None, seed=None, name='renyi_ratio'):
       `float64` `dtype` recommended.
       `log_p` and `q` should be supported on the same set.
     alpha: `Tensor` with shape `q.batch_shape` and values not equal to 1.
-    z: `Tensor` of samples from `q`, produced by `q.sample_n`.
+    z: `Tensor` of samples from `q`, produced by `q.sample` for some `n`.
     n: Integer `Tensor`. The number of samples to use if `z` is not provided.
       Note that this can be highly biased for small `n`, see docstring.
     seed: Python integer to seed the random number generator.
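
All three docstring updates above encode the same calling contract: pass pre-drawn samples via `z`, or let the op draw `n` samples itself. A short sketch of both paths, a non-authoritative example assuming the module path from the file header above and arbitrary values:

from tensorflow.contrib import distributions
from tensorflow.contrib.bayesflow.python.ops import entropy

q = distributions.Normal(mu=0., sigma=1.)

# Either hand in samples you already drew...
z = q.sample(1000, seed=42)
h_from_z = entropy.entropy_shannon(q, z=z)
# ...or let entropy_shannon draw n samples internally.
h_from_n = entropy.entropy_shannon(q, n=1000, seed=42)
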
8 changes: 4 additions & 4 deletions tensorflow/contrib/bayesflow/python/ops/monte_carlo.py
@@ -118,7 +118,7 @@ def expectation_importance_sampler(f,
       `tf.contrib.distributions.Distribution`.
       `float64` `dtype` recommended.
       `log_p` and `q` should be supported on the same set.
-    z: `Tensor` of samples from `q`, produced by `q.sample_n`.
+    z: `Tensor` of samples from `q`, produced by `q.sample` for some `n`.
     n: Integer `Tensor`. Number of samples to generate if `z` is not provided.
     seed: Python integer to seed the random number generator.
     name: A name to give this `Op`.
@@ -195,7 +195,7 @@ def expectation_importance_sampler_logspace(
       `tf.contrib.distributions.Distribution`.
       `float64` `dtype` recommended.
       `log_p` and `q` should be supported on the same set.
-    z: `Tensor` of samples from `q`, produced by `q.sample_n`.
+    z: `Tensor` of samples from `q`, produced by `q.sample` for some `n`.
     n: Integer `Tensor`. Number of samples to generate if `z` is not provided.
     seed: Python integer to seed the random number generator.
     name: A name to give this `Op`.
@@ -254,7 +254,7 @@ def expectation(f, p, z=None, n=None, seed=None, name='expectation'):
   Args:
     f: Callable mapping samples from `p` to `Tensors`.
     p: `tf.contrib.distributions.Distribution`.
-    z: `Tensor` of samples from `p`, produced by `p.sample_n`.
+    z: `Tensor` of samples from `p`, produced by `p.sample` for some `n`.
     n: Integer `Tensor`. Number of samples to generate if `z` is not provided.
     seed: Python integer to seed the random number generator.
     name: A name to give this `Op`.
@@ -314,6 +314,6 @@ def _get_samples(dist, z, n, seed):
         'Must specify exactly one of arguments "n" and "z". Found: '
         'n = %s, z = %s' % (n, z))
   if n is not None:
-    return dist.sample_n(n=n, seed=seed)
+    return dist.sample(n, seed=seed)
   else:
     return ops.convert_to_tensor(z, name='z')
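
`_get_samples` is the single switch point for the `sample_n` removal: exactly one of `n` and `z` may be set, and the `n` branch now routes through `dist.sample`. A minimal sketch of the resulting behavior (values arbitrary; `_get_samples` is a private helper, shown here only to illustrate the contract):

from tensorflow.contrib import distributions
from tensorflow.contrib.bayesflow.python.ops import monte_carlo

dist = distributions.Normal(mu=0., sigma=1.)

z1 = monte_carlo._get_samples(dist, None, 10, 42)    # draws dist.sample(10, seed=42)
z2 = monte_carlo._get_samples(dist, z1, None, None)  # passes z1 through unchanged
# Passing both (or neither) of n and z raises
# ValueError: Must specify exactly one of arguments "n" and "z".
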
@@ -18,22 +18,30 @@

 from tensorflow.contrib import distributions
 from tensorflow.python.framework import dtypes
 from tensorflow.python.framework import ops
+from tensorflow.python.framework import tensor_shape
 from tensorflow.python.framework import tensor_util
 from tensorflow.python.ops import array_ops
 from tensorflow.python.ops import random_ops
 from tensorflow.python.platform import test

-dists = distributions
+ds = distributions


 class DistributionTest(test.TestCase):

   def testParamShapesAndFromParams(self):
     classes = [
-        dists.Normal, dists.Bernoulli, dists.Beta, dists.Chi2,
-        dists.Exponential, dists.Gamma, dists.InverseGamma, dists.Laplace,
-        dists.StudentT, dists.Uniform
+        ds.Normal,
+        ds.Bernoulli,
+        ds.Beta,
+        ds.Chi2,
+        ds.Exponential,
+        ds.Gamma,
+        ds.InverseGamma,
+        ds.Laplace,
+        ds.StudentT,
+        ds.Uniform,
     ]

     sample_shapes = [(), (10,), (10, 20, 30)]
@@ -55,15 +63,15 @@ def testCopyExtraArgs(self):
     with self.test_session():
       # Note: we cannot easily test all distributions since each requires
       # different initialization arguments. We therefore spot test a few.
-      normal = dists.Normal(mu=1., sigma=2., validate_args=True)
+      normal = ds.Normal(mu=1., sigma=2., validate_args=True)
       self.assertEqual(normal.parameters, normal.copy().parameters)
-      wishart = dists.WishartFull(
-          df=2, scale=[[1., 2], [2, 5]], validate_args=True)
+      wishart = ds.WishartFull(df=2, scale=[[1., 2], [2, 5]],
+                               validate_args=True)
       self.assertEqual(wishart.parameters, wishart.copy().parameters)

   def testCopyOverride(self):
     with self.test_session():
-      normal = dists.Normal(mu=1., sigma=2., validate_args=True)
+      normal = ds.Normal(mu=1., sigma=2., validate_args=True)
       normal_copy = normal.copy(validate_args=False)
       base_params = normal.parameters.copy()
       copy_params = normal.copy(validate_args=False).parameters.copy()
@@ -76,21 +84,21 @@ def testIsScalar(self):
       mu = 1.
       sigma = 2.

-      normal = dists.Normal(mu, sigma, validate_args=True)
-      self.assertTrue(tensor_util.constant_value(normal.is_scalar_event))
-      self.assertTrue(tensor_util.constant_value(normal.is_scalar_batch))
+      normal = ds.Normal(mu, sigma, validate_args=True)
+      self.assertTrue(tensor_util.constant_value(normal.is_scalar_event()))
+      self.assertTrue(tensor_util.constant_value(normal.is_scalar_batch()))

-      normal = dists.Normal([mu], [sigma], validate_args=True)
-      self.assertTrue(tensor_util.constant_value(normal.is_scalar_event))
-      self.assertFalse(tensor_util.constant_value(normal.is_scalar_batch))
+      normal = ds.Normal([mu], [sigma], validate_args=True)
+      self.assertTrue(tensor_util.constant_value(normal.is_scalar_event()))
+      self.assertFalse(tensor_util.constant_value(normal.is_scalar_batch()))

-      mvn = dists.MultivariateNormalDiag([mu], [sigma], validate_args=True)
-      self.assertFalse(tensor_util.constant_value(mvn.is_scalar_event))
-      self.assertTrue(tensor_util.constant_value(mvn.is_scalar_batch))
+      mvn = ds.MultivariateNormalDiag([mu], [sigma], validate_args=True)
+      self.assertFalse(tensor_util.constant_value(mvn.is_scalar_event()))
+      self.assertTrue(tensor_util.constant_value(mvn.is_scalar_batch()))

-      mvn = dists.MultivariateNormalDiag([[mu]], [[sigma]], validate_args=True)
-      self.assertFalse(tensor_util.constant_value(mvn.is_scalar_event))
-      self.assertFalse(tensor_util.constant_value(mvn.is_scalar_batch))
+      mvn = ds.MultivariateNormalDiag([[mu]], [[sigma]], validate_args=True)
+      self.assertFalse(tensor_util.constant_value(mvn.is_scalar_event()))
+      self.assertFalse(tensor_util.constant_value(mvn.is_scalar_batch()))

       # We now test every codepath within the underlying is_scalar_helper
       # function.
@@ -117,6 +125,65 @@ def testIsScalar(self):
       self.assertTrue(is_scalar.eval(feed_dict={x: 1}))
       self.assertFalse(is_scalar.eval(feed_dict={x: [1]}))

+  def testSampleShapeHints(self):
+    class _FakeDistribution(ds.Distribution):
+      """Fake Distribution for testing _set_sample_static_shape."""
+
+      def __init__(self, batch_shape=None, event_shape=None):
+        self._static_batch_shape = tensor_shape.TensorShape(batch_shape)
+        self._static_event_shape = tensor_shape.TensorShape(event_shape)
+        super(_FakeDistribution, self).__init__(
+            dtype=dtypes.float32,
+            is_continuous=False,
+            is_reparameterized=False,
+            validate_args=True,
+            allow_nan_stats=True,
+            name="DummyDistribution")
+
+      def _get_batch_shape(self):
+        return self._static_batch_shape
+
+      def _get_event_shape(self):
+        return self._static_event_shape
+
+    with self.test_session():
+      # Make a new session since we're playing with static shapes. [And below.]
+      x = array_ops.placeholder(dtype=dtypes.float32)
+      dist = _FakeDistribution(batch_shape=[2, 3], event_shape=[5])
+      sample_shape = ops.convert_to_tensor([6, 7], dtype=dtypes.int32)
+      y = dist._set_sample_static_shape(x, sample_shape)
+      # We use as_list since TensorShape comparison does not work correctly for
+      # unknown values, ie, Dimension(None).
+      self.assertAllEqual([6, 7, 2, 3, 5], y.get_shape().as_list())
+
+    with self.test_session():
+      x = array_ops.placeholder(dtype=dtypes.float32)
+      dist = _FakeDistribution(batch_shape=[None, 3], event_shape=[5])
+      sample_shape = ops.convert_to_tensor([6, 7], dtype=dtypes.int32)
+      y = dist._set_sample_static_shape(x, sample_shape)
+      self.assertAllEqual([6, 7, None, 3, 5], y.get_shape().as_list())
+
+    with self.test_session():
+      x = array_ops.placeholder(dtype=dtypes.float32)
+      dist = _FakeDistribution(batch_shape=[None, 3], event_shape=[None])
+      sample_shape = ops.convert_to_tensor([6, 7], dtype=dtypes.int32)
+      y = dist._set_sample_static_shape(x, sample_shape)
+      self.assertAllEqual([6, 7, None, 3, None], y.get_shape().as_list())
+
+    with self.test_session():
+      x = array_ops.placeholder(dtype=dtypes.float32)
+      dist = _FakeDistribution(batch_shape=None, event_shape=None)
+      sample_shape = ops.convert_to_tensor([6, 7], dtype=dtypes.int32)
+      y = dist._set_sample_static_shape(x, sample_shape)
+      self.assertTrue(y.get_shape().ndims is None)
+
+    with self.test_session():
+      x = array_ops.placeholder(dtype=dtypes.float32)
+      dist = _FakeDistribution(batch_shape=[None, 3], event_shape=None)
+      sample_shape = ops.convert_to_tensor([6, 7], dtype=dtypes.int32)
+      y = dist._set_sample_static_shape(x, sample_shape)
+      self.assertTrue(y.get_shape().ndims is None)
+

 if __name__ == "__main__":
   test.main()
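
testSampleShapeHints pins down the shape rule that the `sample` rework relies on: a draw's static shape is the concatenation `sample_shape + batch_shape + event_shape`, with unknown dimensions kept as `None` rather than guessed. A standalone sketch of that bookkeeping, using only the public `TensorShape` API and shape values taken from the test above:

from tensorflow.python.framework import tensor_shape

sample_shape = tensor_shape.TensorShape([6, 7])
batch_shape = tensor_shape.TensorShape([2, 3])
event_shape = tensor_shape.TensorShape([5])

# Fully known shapes concatenate into a fully known static hint.
full = sample_shape.concatenate(batch_shape).concatenate(event_shape)
assert full.as_list() == [6, 7, 2, 3, 5]

# Unknown dimensions propagate as None instead of being guessed.
partial = sample_shape.concatenate(
    tensor_shape.TensorShape([None, 3])).concatenate(event_shape)
assert partial.as_list() == [6, 7, None, 3, 5]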