Merged
74 commits
8244e6e
Update ops-related pbtxt files.
tensorflower-gardener Aug 26, 2016
2a73e37
Update generated Python Op docs.
tensorflower-gardener Aug 26, 2016
26475fe
Added the ability to view memory usage for a tensorflow graph execution.
tensorflower-gardener Aug 26, 2016
59a92c2
Make the logplot button hover instead of change
RenatoUtsch Aug 26, 2016
2cbcc31
Added one_hot_column for putting sparse features into deep models (al…
Aug 26, 2016
7bd1eea
Revert changes from https://github.com/tensorflow/tensorflow/pull/405…
tensorflower-gardener Aug 26, 2016
ba98c6b
tuner_experiment catches NanLossDuringTrainingError to report infeasi…
tensorflower-gardener Aug 27, 2016
f2f582b
Optimized the gradients of the sqrt, rsqrt, and inv functions
benoitsteiner Aug 27, 2016
4f353f0
Add support of weights in metrics in _get_eval_op of Estimator.
tensorflower-gardener Aug 27, 2016
32374a7
Refactor the panes to a single codebase
RenatoUtsch Aug 27, 2016
45336c4
Better backwards compatibility for graphs that use functions.
tensorflower-gardener Aug 27, 2016
1d8ccf9
Update ops-related pbtxt files.
tensorflower-gardener Aug 27, 2016
597a6eb
Update generated Python Op docs.
tensorflower-gardener Aug 27, 2016
53ed135
Better information for when the weights in the checkpoint have a size…
tensorflower-gardener Aug 27, 2016
f2cc30e
Make tf/core:stream_executor_headers_lib public for custom op creation.
tensorflower-gardener Aug 27, 2016
dd3a8b0
Change default for run_config's save_checkpoint_sec.
martinwicke Aug 27, 2016
43e04b0
Update generated Python Op docs.
tensorflower-gardener Aug 27, 2016
962dafe
Adding Cudnn RNN support.
zheng-xq Aug 27, 2016
f409f4f
Add a new sequential update which is enabled by default to moving_ave…
tensorflower-gardener Aug 27, 2016
8177edd
Support sparse jobs for TensorFlow gRPC servers.
mrry Aug 27, 2016
885a97a
Update generated Python Op docs.
tensorflower-gardener Aug 27, 2016
22c13e3
Switch all ' to ".
tensorflower-gardener Aug 27, 2016
07562be
Refactor tf.learn export functions to make them use the same input_fn…
theweiho Aug 27, 2016
676caae
Update generated Python Op docs.
tensorflower-gardener Aug 27, 2016
532d1c8
Implement Beta.log_cdf and Beta.cdf.
tensorflower-gardener Aug 28, 2016
cf35735
Improve support for FunctionDefs using NodeDef.
tensorflower-gardener Aug 28, 2016
aeac274
Extend softmax and logsoftmax to make them work on an arbitrary dimen…
Aug 28, 2016
268751f
Update generated Python Op docs.
tensorflower-gardener Aug 28, 2016
1df4522
Add documentation and test to make it clear that users can call get s…
tensorflower-gardener Aug 29, 2016
d210c60
Re-arranged coordinator.join call in MonitoredSession. Now there is o…
ispirmustafa Aug 29, 2016
d4d3846
Fix failures due to Tensor's __bool__ override.
tensorflower-gardener Aug 29, 2016
325bc6e
Automated rollback of change 131342536
tensorflower-gardener Aug 29, 2016
61ab066
Fix typo in CrossedColumn example
tensorflower-gardener Aug 30, 2016
090b230
Allows one_hot_column to be used in create_feature_spec_for_parsing.
tensorflower-gardener Aug 30, 2016
7d60da4
Move vz-data-summary into TensorBoard.
RNabel Aug 30, 2016
c07399c
Refactor LinearClassifier implementation from inheritance to composit…
philstahlfeld Aug 30, 2016
efb5fce
Update generated Python Op docs.
tensorflower-gardener Aug 30, 2016
8b431f0
tfdbg: Validate debug dumps using partition graphs
caisq Aug 30, 2016
bec216a
check_ops for greater and greater_equal
goat000 Aug 30, 2016
f9100c1
Update generated Python Op docs.
tensorflower-gardener Aug 30, 2016
a7fc9f5
Enable C++ shape function for math_ops.py and array_ops.cc shape func…
tensorflower-gardener Aug 30, 2016
168e912
Tutorial on using input_fn to build customized input pipelines in
tensorflower-gardener Aug 30, 2016
bc08f2a
LinearClassifier metrics default to class predictions.
philstahlfeld Aug 30, 2016
ca450d2
Expand tests for DNNClassifier before refactoring.
tensorflower-gardener Aug 30, 2016
c17b7d5
Use np.maximum in gradient_checker. Previously, max was used, but ma…
langmore Aug 30, 2016
5532f08
Change tensor_forest shape functions to delegate to the C++ shape fun…
tensorflower-gardener Aug 30, 2016
660d7e5
Delegate to C++ shape inference functions for several ops in
tensorflower-gardener Aug 30, 2016
a5e2411
Internal change only.
RenatoUtsch Aug 30, 2016
79d8721
Fix incorrect docstring comment.
tensorflower-gardener Aug 30, 2016
8b667b7
Adds fractional_max_pool and fractional_avg_pool ops. Fixes #2953.
tensorflower-gardener Aug 30, 2016
a15f569
Fix link to linear/overview.md in tf.contrib.learn Quickstart
tensorflower-gardener Aug 30, 2016
431b8d5
Update ops-related pbtxt files.
tensorflower-gardener Aug 30, 2016
521a4f6
Update generated Python Op docs.
tensorflower-gardener Aug 30, 2016
01d291f
Remember when the readahead buffer reaches EOF.
rinugun Aug 30, 2016
197ab58
TensorBoard: Use commonjs for Typescript module resolution.
Aug 30, 2016
f61f2e5
Proxy LinearClassifier.config to internal estimator's config.
philstahlfeld Aug 30, 2016
d9df4bb
Reuse top-level name scope amongst distribution methods
tensorflower-gardener Aug 30, 2016
d1f21e6
Update generated Python Op docs.
tensorflower-gardener Aug 30, 2016
3fb9963
Stop having the vz-histogram-timeseries modify its own svg width/height.
Aug 30, 2016
c22b465
TensorBoard: Fix a bug in FireFox with chart expansion.
Aug 30, 2016
16dfd2c
By default make tensorflow tests try to use a gpu, if accessible.
gunan Aug 30, 2016
f7197d2
Make sure the metric's keywords are not None before using. Partial's …
tensorflower-gardener Aug 30, 2016
db0533e
Add more docstrings for Experiment.
martinwicke Aug 30, 2016
d9275cd
Automated rollback of change 131768856
gunan Aug 31, 2016
9d64678
Adding a fake bias/empty column to SDCA models
tensorflower-gardener Aug 31, 2016
3e8c4fd
Don't establish contexts on gpus not on visible_device_list.
Aug 31, 2016
d1e2047
Queues and SQSS raise Cancelled instead of Aborted on Enqueue when cl…
ebrevdo Aug 31, 2016
79c41d2
Update generated Python Op docs.
tensorflower-gardener Aug 31, 2016
c47a337
This CL implements the following behavior for stop_gradient in while …
yuanbyu Aug 31, 2016
4f7a434
special_math module added to bayesflow. Implement ndtr, log_ndtr, which
langmore Aug 31, 2016
7e7e0d6
Add new QueueRunner optional argument: queue_closed_exception_types.
ebrevdo Aug 31, 2016
64e6b7f
Update generated Python Op docs.
tensorflower-gardener Aug 31, 2016
70401bd
Adding support for text output.
tensorflower-gardener Aug 31, 2016
555d3e5
Merge commit for internal changes
caisq Aug 31, 2016
2 changes: 2 additions & 0 deletions tensorflow/BUILD
Original file line number Diff line number Diff line change
@@ -87,6 +87,7 @@ filegroup(
"//tensorflow/contrib:all_files",
"//tensorflow/contrib/bayesflow:all_files",
"//tensorflow/contrib/copy_graph:all_files",
"//tensorflow/contrib/cudnn_rnn:all_files",
"//tensorflow/contrib/distributions:all_files",
"//tensorflow/contrib/factorization:all_files",
"//tensorflow/contrib/factorization/kernels:all_files",
@@ -156,6 +157,7 @@ filegroup(
"//tensorflow/tensorboard/app:all_files",
"//tensorflow/tensorboard/backend:all_files",
"//tensorflow/tensorboard/components:all_files",
"//tensorflow/tensorboard/components/vz-data-summary:all_files",
"//tensorflow/tensorboard/lib:all_files",
"//tensorflow/tensorboard/lib/python:all_files",
"//tensorflow/tensorboard/scripts:all_files",
1 change: 1 addition & 0 deletions tensorflow/contrib/BUILD
@@ -15,6 +15,7 @@ py_library(
deps = [
"//tensorflow/contrib/bayesflow:bayesflow_py",
"//tensorflow/contrib/copy_graph:copy_graph_py",
"//tensorflow/contrib/cudnn_rnn:cudnn_rnn_py",
"//tensorflow/contrib/distributions:distributions_py",
"//tensorflow/contrib/factorization:factorization_py",
"//tensorflow/contrib/ffmpeg:ffmpeg_ops_py",
1 change: 1 addition & 0 deletions tensorflow/contrib/__init__.py
@@ -21,6 +21,7 @@
# Add projects here, they will show up under tf.contrib.
from tensorflow.contrib import bayesflow
from tensorflow.contrib import copy_graph
from tensorflow.contrib import cudnn_rnn
from tensorflow.contrib import distributions
from tensorflow.contrib import factorization
from tensorflow.contrib import framework
11 changes: 11 additions & 0 deletions tensorflow/contrib/bayesflow/BUILD
@@ -39,6 +39,17 @@ cuda_py_test(
],
)

cuda_py_test(
name = "special_math_test",
size = "medium",
srcs = ["python/kernel_tests/special_math_test.py"],
additional_deps = [
":bayesflow_py",
"//tensorflow/python:framework_test_lib",
"//tensorflow/python:platform_test",
],
)

cuda_py_test(
name = "stochastic_graph_test",
size = "small",
1 change: 1 addition & 0 deletions tensorflow/contrib/bayesflow/__init__.py
@@ -23,6 +23,7 @@
# pylint: disable=unused-import,wildcard-import,line-too-long
from tensorflow.contrib.bayesflow.python.ops import entropy
from tensorflow.contrib.bayesflow.python.ops import monte_carlo
from tensorflow.contrib.bayesflow.python.ops import special_math
from tensorflow.contrib.bayesflow.python.ops import stochastic_gradient_estimators
from tensorflow.contrib.bayesflow.python.ops import stochastic_graph
from tensorflow.contrib.bayesflow.python.ops import variational_inference
196 changes: 196 additions & 0 deletions tensorflow/contrib/bayesflow/python/kernel_tests/special_math_test.py
@@ -0,0 +1,196 @@
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for Special Math Ops."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import collections

import numpy as np
from scipy import special
import tensorflow as tf

sm = tf.contrib.bayesflow.special_math


def _check_strictly_increasing(array_1d):
diff = np.diff(array_1d)
np.testing.assert_array_less(0, diff)


def _make_grid(dtype, grid_spec):
"""Returns a uniform grid + noise, reshaped to shape argument."""
rng = np.random.RandomState(0)
num_points = np.prod(grid_spec.shape)
grid = np.linspace(
grid_spec.min, grid_spec.max, num=num_points).astype(dtype)
grid_spacing = (grid_spec.max - grid_spec.min) / num_points
grid += 0.1 * grid_spacing * rng.randn(*grid.shape)
# More useful if it's sorted (e.g. for testing monotonicity, or debugging).
grid = np.sort(grid)
return np.reshape(grid, grid_spec.shape)


GridSpec = collections.namedtuple("GridSpec", ["min", "max", "shape"])


ErrorSpec = collections.namedtuple("ErrorSpec", ["rtol", "atol"])


class NdtrTest(tf.test.TestCase):
_use_log = False
# Grid min/max chosen to ensure 0 < cdf(x) < 1.
_grid32 = GridSpec(min=-12.9, max=5., shape=[100])
_grid64 = GridSpec(min=-37.5, max=8., shape=[100])
_error32 = ErrorSpec(rtol=1e-4, atol=0.)
_error64 = ErrorSpec(rtol=1e-6, atol=0.)

def _test_grid(self, dtype, grid_spec, error_spec):
if self._use_log:
self._test_grid_log(dtype, grid_spec, error_spec)
else:
self._test_grid_no_log(dtype, grid_spec, error_spec)

def _test_grid_log(self, dtype, grid_spec, error_spec):
with self.test_session():
grid = _make_grid(dtype, grid_spec)
actual = sm.log_ndtr(grid).eval()

# Basic tests.
self.assertTrue(np.isfinite(actual).all())
# On the grid, -inf < log_cdf(x) < 0. In this case, we should be able
# to use a huge grid because we have used tricks to escape numerical
# difficulties.
self.assertTrue((actual < 0).all())
_check_strictly_increasing(actual)

# Versus scipy.
expected = special.log_ndtr(grid)
# Scipy prematurely goes to zero at some places that we don't. So don't
# include these in the comparison.
self.assertAllClose(expected.astype(np.float64)[expected < 0],
actual.astype(np.float64)[expected < 0],
rtol=error_spec.rtol, atol=error_spec.atol)

def _test_grid_no_log(self, dtype, grid_spec, error_spec):
with self.test_session():
grid = _make_grid(dtype, grid_spec)
actual = sm.ndtr(grid).eval()

# Basic tests.
self.assertTrue(np.isfinite(actual).all())
# On the grid, 0 < cdf(x) < 1. The grid cannot contain everything due
# to numerical limitations of cdf.
self.assertTrue((actual > 0).all())
self.assertTrue((actual < 1).all())
_check_strictly_increasing(actual)

# Versus scipy.
expected = special.ndtr(grid)
# The grid was chosen so that 0 < cdf(x) < 1, so scipy does not
# underflow here and the full arrays can be compared directly.
self.assertAllClose(expected.astype(np.float64),
actual.astype(np.float64),
rtol=error_spec.rtol, atol=error_spec.atol)

def test_float32(self):
self._test_grid(np.float32, self._grid32, self._error32)

def test_float64(self):
self._test_grid(np.float64, self._grid64, self._error64)


class LogNdtrTestLower(NdtrTest):
_use_log = True
_grid32 = GridSpec(min=-100., max=sm.LOGNDTR_FLOAT32_LOWER, shape=[100])
_grid64 = GridSpec(min=-100., max=sm.LOGNDTR_FLOAT64_LOWER, shape=[100])
_error32 = ErrorSpec(rtol=1e-4, atol=0.)
_error64 = ErrorSpec(rtol=1e-4, atol=0.)


# The errors are quite large when the input is > 6 or so. Also,
# scipy.special.log_ndtr becomes zero very early (before 10) because
# ndtr rounds to 1 there. We approximate log(1 + epsilon) as epsilon and
# avoid this issue.
class LogNdtrTestMid(NdtrTest):
_use_log = True
_grid32 = GridSpec(
min=sm.LOGNDTR_FLOAT32_LOWER,
max=sm.LOGNDTR_FLOAT32_UPPER,
shape=[100])
_grid64 = GridSpec(
min=sm.LOGNDTR_FLOAT64_LOWER,
max=sm.LOGNDTR_FLOAT64_UPPER,
shape=[100])
# Differences show up as soon as we're in the tail, so add some atol.
_error32 = ErrorSpec(rtol=0.1, atol=1e-7)
_error64 = ErrorSpec(rtol=0.1, atol=1e-7)


class LogNdtrTestUpper(NdtrTest):
_use_log = True
_grid32 = GridSpec(
min=sm.LOGNDTR_FLOAT32_UPPER,
max=12., # Beyond this, log_cdf(x) may be zero.
shape=[100])
_grid64 = GridSpec(
min=sm.LOGNDTR_FLOAT64_UPPER,
max=35., # Beyond this, log_cdf(x) may be zero.
shape=[100])
_error32 = ErrorSpec(rtol=1e-6, atol=1e-14)
_error64 = ErrorSpec(rtol=1e-6, atol=1e-14)


class NdtrGradientTest(tf.test.TestCase):
_use_log = False
_grid = GridSpec(min=-100., max=100., shape=[1, 2, 3, 8])

def _test_grads_are_positive(self, dtype, grid_spec):
grid = tf.convert_to_tensor(_make_grid(dtype, grid_spec))
with self.test_session():
output = (sm.log_ndtr(grid) if self._use_log
else sm.ndtr(grid))

# If there are N points in the grid,
# grad_eval.shape = (N, N), with grad_eval[i, j] the partial derivative of
# the ith output point w.r.t. the jth grid point. We only expect the
# diagonal to be nonzero.
grad_eval, _ = tf.test.compute_gradient(
grid, grid_spec.shape, output, grid_spec.shape)
grad_eval = np.diag(grad_eval)

# Check for NaN separately in order to get informative failures.
self.assertFalse(np.isnan(grad_eval).any())
self.assertTrue((grad_eval > 0).all())
self.assertTrue(np.isfinite(grad_eval).all())

def test_float32(self):
self._test_grads_are_positive(np.float32, self._grid)

def test_float64(self):
self._test_grads_are_positive(np.float64, self._grid)


class LogNdtrGradientTest(NdtrGradientTest):
_use_log = True
_grid = GridSpec(min=-100., max=100., shape=[1, 2, 3, 8])
_error32 = ErrorSpec(rtol=1e-4, atol=0)
_error64 = ErrorSpec(rtol=1e-7, atol=0)


if __name__ == "__main__":
tf.test.main()
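As background for the tests above (this sketch is not part of the PR): `ndtr` is the standard normal CDF, which can be written via the complementary error function. The `normal_cdf` helper name below is hypothetical; the example shows the float64 underflow in the lower tail that motivates a separate, numerically stable `log_ndtr`.

```python
# Background sketch, not from the PR: the standard normal CDF that ndtr
# computes, expressed through the complementary error function.
import math

def normal_cdf(x):
    # Phi(x) = 0.5 * erfc(-x / sqrt(2)) -- a standard identity.
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

# By symmetry, Phi(0) = 0.5.
print(normal_cdf(0.0))

# Deep in the lower tail, float64 underflows to exactly 0.0, so a naive
# log(normal_cdf(x)) would give -inf. This is the numerical difficulty
# that a dedicated log_ndtr avoids by working in log space in the tail.
print(normal_cdf(-40.0))
```

This also explains why the test grids above are bounded below (e.g. `min=-37.5` for float64): beyond that, the plain CDF is no longer representable, and only `log_ndtr` remains meaningful.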
Loading