
Branch 184622482 #16792

Merged 49 commits on Feb 6, 2018
Commits
a42450a
[tf-signal] Fix exception when input shape is unknown in mfccs_from_l…
rryan Feb 3, 2018
de6037d
[XLA] Assign mandatory constraints in a DFS order and non-mandatory co…
blakehechtman Feb 4, 2018
fc09f65
Avoid retaining two copies of each constant in `ConstantOp`.
mrry Feb 4, 2018
1de2843
Minor fixes to the get started doc.
Feb 4, 2018
3cf771c
Support for quantized LSTM models.
tensorflower-gardener Feb 5, 2018
ab06a9c
Fixed sequence_mask behavior on unknown shape.
tensorflower-gardener Feb 5, 2018
c73d035
mini documentation fix
tensorflower-gardener Feb 5, 2018
6a822c3
Expand the activity analysis to composite names.
tensorflower-gardener Feb 5, 2018
95ed84d
Internal Change
tensorflower-gardener Feb 5, 2018
2385176
Get control_flow_ops.py ready to support de/serializing nested contro…
skye Feb 5, 2018
83a2e03
Enable aggressive identity node pruning in dependency optimizer.
tensorflower-gardener Feb 5, 2018
dcffcef
Clarify that tf.contrib.image.rotate angles are counterclockwise.
ringw Feb 5, 2018
5c631db
[XLA] add Conditional to the local Python XLA client.
tensorflower-gardener Feb 5, 2018
2fde0f2
Changing the link to point to new android job.
Feb 5, 2018
a9034ba
Make flat_transforms_to_matrices and matrices_to_flat_transforms publ…
ringw Feb 5, 2018
bccd80c
Proper reallocation of dynamic tensors.
tensorflower-gardener Feb 5, 2018
1696308
Automated g4 rollback of changelist 184323369
jvdillon Feb 5, 2018
13fb1cf
Support parsing from text and fused op in contrib.
Feb 5, 2018
2eacd72
Serialize the evaluation of the AssignAdd nodes to make the result more
benoitsteiner Feb 5, 2018
11167ab
[TF:XLA] Making constant folding deterministic.
yunxing Feb 5, 2018
fc8d9c3
Bug fix: Don't dereference nullptr in OpKernelContext::input_alloc_at…
tensorflower-gardener Feb 5, 2018
d0904cb
contrib/rnn: Fix #16703
asimshankar Feb 5, 2018
f8f921c
Fixes issue where external control dependencies in while loops are dr…
alextp Feb 5, 2018
b3360e0
[XLA] Add tests for Clamp of S32 and U32 vectors with broadcasted sca…
tensorflower-gardener Feb 5, 2018
1bbfc0c
[tf.data] Fix use-after-free bug when closing down an input pipeline.
mrry Feb 5, 2018
473bc35
[TF:XLA] Implement GatherNd.
hawkinsp Feb 5, 2018
d53202b
[XLA] Fix documentation for Clamp.
tensorflower-gardener Feb 5, 2018
1c762f7
Backward pass implementation for fusion optimizer.
tensorflower-gardener Feb 5, 2018
3374d3a
Automated g4 rollback of changelist 184573795
alextp Feb 5, 2018
85344e3
Fix CBLAS Conv reference implementation in TFLite.
miaout17 Feb 5, 2018
1b19bc9
Add logging to diagnose device properties parsing problem in Grappler.
yacoder Feb 5, 2018
c8674c8
Verify tflite model in TFLite Java API
tensorflower-gardener Feb 5, 2018
2074a56
Adding TensorSpec to represent the specification of Tensors.
sguada Feb 5, 2018
395550b
Assign total_loss in order not to crash if training loop exits early.
tensorflower-gardener Feb 5, 2018
2271f0f
Make fold batch norm code use OneofPattern and rearrange functions to…
Feb 5, 2018
5476489
[XLA] Sink layout sensitivity from CSE into HloInstruction::Identical…
tensorflower-gardener Feb 5, 2018
8fe70dc
"frame_name" attr must be altered when importing/exporting MetaGraphD…
skye Feb 5, 2018
aea9333
Remove makefile build dependency on all_opensource_files, as part of …
yifeif Feb 6, 2018
0375ffc
Add filepaths to test_local support.
Feb 6, 2018
dcefe9b
Shard linear operator tests.
tensorflower-gardener Feb 6, 2018
179795c
Support negative axis in concatenation
tensorflower-gardener Feb 6, 2018
fa99ec4
[XLA:GPU] Split IrEmitter{Unn,N}ested out of ir_emitter.h.
Feb 6, 2018
238bae4
Misc cleanups and tweaks:
Feb 6, 2018
78af5c7
Correctly treat "devices=/gpu:0" argument of replicate_model_fn.
isaprykin Feb 6, 2018
f92d4e8
[XLA] Add HloBindings::ToString().
Feb 6, 2018
54dd9c9
Cleanup markdown errors in `Bijector`.
jvdillon Feb 6, 2018
0913833
Added the ability to query the amount of RAM available
benoitsteiner Feb 6, 2018
1a0b637
Export align_corners to TF Lite
tensorflower-gardener Feb 6, 2018
a1ad45e
Merge commit for internal changes
Feb 6, 2018
12 changes: 12 additions & 0 deletions tensorflow/compiler/tests/BUILD
@@ -665,6 +665,18 @@ tf_xla_py_test(
],
)

tf_xla_py_test(
name = "gather_nd_op_test",
size = "medium",
srcs = ["gather_nd_op_test.py"],
deps = [
":xla_test",
"//tensorflow/python:array_ops",
"//tensorflow/python:framework_for_generated_wrappers",
"//tensorflow/python:platform_test",
],
)

cuda_py_test(
name = "xla_device_test",
size = "small",
147 changes: 147 additions & 0 deletions tensorflow/compiler/tests/gather_nd_op_test.py
@@ -0,0 +1,147 @@
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for tensorflow.ops.tf.gather_nd."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np

from tensorflow.compiler.tests.xla_test import XLATestCase
from tensorflow.python.framework import errors
from tensorflow.python.ops import array_ops
from tensorflow.python.platform import test


class GatherNdTest(XLATestCase):

def _runGather(self, params, indices):
with self.test_session():
paramsp = array_ops.placeholder(params.dtype)
indicesp = array_ops.placeholder(indices.dtype)
with self.test_scope():
gather_nd_t = array_ops.gather_nd(paramsp, indicesp)
feed_dict = {paramsp: params, indicesp: indices}
return gather_nd_t.eval(feed_dict=feed_dict)

def testSimpleDtype(self):
for dtype in self.numeric_types:
self.assertAllEqual(
np.array([7, 7, 8], dtype=dtype),
self._runGather(
np.array([8, 1, 2, 3, 7, 5], dtype=dtype),
np.array([[4], [4], [0]], np.int32)))

def testEmptyIndicesAndParamsOKButJustEmptyParamsFails(self):
with self.test_session():
params = np.ones((3, 3), dtype=np.float32)

indices_empty = np.empty((0, 2), dtype=np.int32)
gather_nd_ok_val = self._runGather(params, indices_empty)
self.assertAllClose(np.empty((0,), dtype=np.float32), gather_nd_ok_val)

indices_empty = np.empty((0, 1), dtype=np.int32)
gather_nd_ok_val = self._runGather(params, indices_empty)
self.assertAllClose(np.empty((0, 3), dtype=np.float32), gather_nd_ok_val)

params_empty = np.empty((0, 3), dtype=np.float32)
indices_empty = np.empty((0, 2), dtype=np.int32)
gather_nd_ok_val = self._runGather(params_empty, indices_empty)
self.assertAllClose(np.empty((0,), dtype=np.float32), gather_nd_ok_val)

params_empty = np.empty((0, 3), dtype=np.float32)
indices_nonempty = np.zeros((1, 2), dtype=np.int32)
with self.assertRaisesWithPredicateMatch(
errors.InvalidArgumentError, r"Gather dimension 0 is of size zero"):
self._runGather(params_empty, indices_nonempty)

def testIndexScalar(self):
params = np.array(
[[-8, -1, -2, -3, -7, -5], [8, 1, 2, 3, 7, 5]], dtype=np.float32).T
indices = np.array([4, 1], dtype=np.int32)
gather_nd_val = self._runGather(params, indices)
self.assertAllEqual(np.array(7), gather_nd_val)

def testParamsRankLargerThanIndexIndexScalarSlices(self):
params = np.array(
[[-8, -1, -2, -3, -7, -5], [8, 1, 2, 3, 7, 5]], dtype=np.float32).T
indices = np.array(
[
4,
], dtype=np.int32)
gather_nd_val = self._runGather(params, indices)
self.assertAllEqual(np.array([-7, 7]), gather_nd_val)

def testParamsRankLargerThanIndexSlices(self):
params = np.array(
[[-8, -1, -2, -3, -7, -5], [8, 1, 2, 3, 7, 5]], dtype=np.float32).T
indices = np.array([[4], [4], [0]], np.int32)
gather_nd_val = self._runGather(params, indices)
self.assertAllEqual(np.array([[-7, 7], [-7, 7], [-8, 8]]), gather_nd_val)

def testHigherRankParamsLargerThanIndexSlices(self):
params = np.array(
[[[-8, -1, -2, -3, -7, -5], [8, 1, 2, 3, 7, 5]],
[[-80, -10, -20, -30, -70, -50], [80, 10, 20, 30, 70, 50]]],
dtype=np.float32).T
indices = np.array([[4], [4], [0]], np.int32)
gather_nd_val = self._runGather(params, indices)
self.assertAllEqual(params[[4, 4, 0]], gather_nd_val)

def testEmptyIndicesLastRankMeansCopyEntireTensor(self):
params = np.array(
[[[-8, -1, -2, -3, -7, -5], [8, 1, 2, 3, 7, 5]],
[[-80, -10, -20, -30, -70, -50], [80, 10, 20, 30, 70, 50]]],
dtype=np.float32).T
indices = np.array([[], []], dtype=np.int32) # Size (2, 0)
gather_nd_val = self._runGather(params, indices)
self.assertAllEqual(
np.vstack((params[np.newaxis, :], params[np.newaxis, :])),
gather_nd_val)

def testHigherRankParamsAndIndicesLargerThanIndexSlices(self):
params = np.array(
[[[-8, -1, -2, -3, -7, -5], [8, 1, 2, 3, 7, 5]],
[[-80, -10, -20, -30, -70, -50], [80, 10, 20, 30, 70, 50]]],
dtype=np.float32).T
indices = np.array([[[3], [2], [1]], [[4], [4], [0]]], np.int32)
gather_nd_val = self._runGather(params, indices)
self.assertAllEqual(params[[3, 2, 1, 4, 4, 0]].reshape(2, 3, 2, 2),
gather_nd_val)

def testHigherRankParams(self):
shape = (10, 20, 5, 1, 17)
params = np.random.rand(*shape).astype(np.float32)
indices = np.vstack(
[np.random.randint(0, s, size=2000, dtype=np.int32) for s in shape]).T
gather_nd_val = self._runGather(params, indices)

expected = params[tuple(indices.T)]
self.assertAllEqual(expected, gather_nd_val)

def testHigherRankParamsAndIndices(self):
shape = (10, 20, 5, 1, 17)
params = np.random.rand(*shape).astype(np.float32)
indices = np.vstack(
[np.random.randint(0, s, size=2000, dtype=np.int32) for s in shape]).T
indices_reshaped = indices.reshape([10, 10, 20, 5])
gather_nd_val = self._runGather(params, indices_reshaped)
expected = params[tuple(indices.T)]
self.assertAllEqual(expected.reshape([10, 10, 20]), gather_nd_val)


if __name__ == "__main__":
test.main()
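The semantics these tests exercise can be sketched with a small NumPy reference model of `tf.gather_nd`. This is a hypothetical helper written for illustration (`gather_nd_ref` is not part of this change or of TensorFlow); it assumes non-empty indices and in-range values, and mirrors the rule the tests check: the last axis of `indices` indexes into the leading axes of `params`, and any remaining axes of `params` come back as slices.

```python
import numpy as np


def gather_nd_ref(params, indices):
    """Illustrative NumPy model of tf.gather_nd semantics.

    The last axis of `indices` selects coordinates into the leading
    axes of `params`; trailing axes of `params` are returned whole.
    """
    params = np.asarray(params)
    indices = np.asarray(indices)
    outer_shape = indices.shape[:-1]   # shape of the batch of lookups
    index_depth = indices.shape[-1]    # how many leading axes we index
    flat = indices.reshape(-1, index_depth)
    # Gather one (possibly multi-dimensional) slice per index row.
    gathered = np.stack([params[tuple(idx)] for idx in flat])
    return gathered.reshape(outer_shape + params.shape[index_depth:])


# Mirrors testSimpleDtype: rank-1 params, index depth 1.
params = np.array([8, 1, 2, 3, 7, 5])
print(gather_nd_ref(params, [[4], [4], [0]]))  # [7 7 8]
```

With index depth 2 on a rank-2 `params`, each index row picks out a single scalar, matching `testIndexScalar` above; with depth 1 it returns whole rows, matching the "slices" tests.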
1 change: 0 additions & 1 deletion tensorflow/compiler/tf2xla/kernels/BUILD
@@ -87,7 +87,6 @@ tf_kernel_library(
"variable_ops.cc",
],
hdrs = [
"gather_op.h",
"index_ops.h",
"shape_util.h",
],