Upstream changes from internal #2028

Merged: 54 commits, Apr 20, 2016

Commits
ce6a995
Add paper-dialog as a dependency for tf-tensorboard.
teamdandelion Apr 15, 2016
4cc8426
Adds adjoint attribute to matrix inverse to make it possible to compu…
Apr 15, 2016
ed30375
tensorflow: add mutex.try_lock
Apr 15, 2016
1c06948
Add autoreload logic to TensorBoard, but do not wire it in.
teamdandelion Apr 15, 2016
11be20e
Update ops-related pbtxt files.
Apr 15, 2016
e13205f
Update generated Python Op docs.
Apr 15, 2016
50889d5
Add documentation to variables tutorial to point to inspect_checkpoint.
Apr 15, 2016
43f1c26
Add TensorBoard auto-reload behavior.
teamdandelion Apr 15, 2016
5a61179
The behavior of outer_dimensions was different than before when the s…
Apr 15, 2016
9f37988
Ignore variables without gradients when plotting gradient histograms.
Apr 15, 2016
ea632fc
Fix crash in BFC allocator when failing to allocate memory from an al…
Apr 15, 2016
26fdfb8
Autogenerated Change: Release TensorBoard at TAG: 16
teamdandelion Apr 15, 2016
c883803
Make the ops_compatibility framework usable outside tensorflow/core.
josh11b Apr 16, 2016
ea6b59a
Start the collected queue runners automatically when creating a new s…
Apr 16, 2016
9eb5d42
Update generated Python Op docs.
Apr 16, 2016
78eba87
In tools/proto_text, move comment describing API to the header.
Apr 16, 2016
73d026e
Do not add large constants (> 10M) to the graph while constant folding.
keveman Apr 16, 2016
81be721
This would enable shape inference for higher order functions such as …
yuanbyu Apr 16, 2016
aed2450
Implement SparseTensor + (dense) Tensor.
concretevitamin Apr 16, 2016
4e3f022
Update ops-related pbtxt files.
Apr 16, 2016
b0f28df
Update generated Python Op docs.
Apr 16, 2016
f88d0cb
Add shape information to dense gradients.
mrry Apr 16, 2016
8db39c9
Starting special_math_ops.py. Made for math Ops that have more depen…
Apr 17, 2016
177fd83
Update generated Python Op docs.
Apr 17, 2016
0dfcea4
Added quantized type testing to tensor creation.
petewarden Apr 17, 2016
3f27cff
Optimized implementation of depthwise conv backprop filter for CPU.
Apr 18, 2016
517d3af
Parallelize MaxPool across batch dimension.
Apr 18, 2016
3c280f6
Added a format for saving an inference graph that can be memmapped an…
Apr 18, 2016
449ecb5
Fix the gradient for functions when its output has no dependencies on
Apr 18, 2016
c2d9cb1
Fixes and enhancements for contrib/tensorforest:
Apr 18, 2016
ec89b0c
Adding several loss functions including cosine_distance_loss, log_los…
Apr 18, 2016
3402f51
Fix for IndexedSlices gradient accumulation in while loop.
Apr 18, 2016
a0d14f0
Adding DirichletMultinomial class to contrib/distributions/
Apr 18, 2016
b92d7cf
Make the init_op optional and add an optional init_fn.
Apr 18, 2016
319f898
Update generated Python Op docs.
Apr 18, 2016
e7f6361
Expanding the scope filter in get_collection to support a full regex …
Apr 18, 2016
511634a
Update generated Python Op docs.
Apr 18, 2016
975094c
Make native depthwise Conv2d the default.
Apr 18, 2016
e07c48d
Fix for opensource break due to new losses code.
Apr 18, 2016
32e2bd1
Update TensorFlow README
teamdandelion Apr 18, 2016
2092fb4
Fixing additional silent int64->32 conversion errors/warnings.
Apr 18, 2016
5c9e7d3
Have StatSummarizer print OP types as well as name.
andrewharp Apr 18, 2016
ad21225
Refactoring/unification of weights & features within sdca_ops.cc.
Apr 18, 2016
b3b58fc
Introduce tf.sparse_reduce_sum() and a CPU kernel.
concretevitamin Apr 19, 2016
fc432e3
Update ops-related pbtxt files.
Apr 19, 2016
5c9bc51
Merge changes from github.
ilblackdragon Apr 19, 2016
b442bc6
Update ops-related pbtxt files.
Apr 19, 2016
c018da6
Update generated Python Op docs.
Apr 19, 2016
7e44d24
Internal change.
ilblackdragon Apr 19, 2016
8baf2f3
Minor improvements of shape inference for tensor array.
yuanbyu Apr 19, 2016
1150ce5
Fix bugs in tools/proto_text:
Apr 19, 2016
913b6e2
Make tf.diag_part work with inputs of unknown shape.
Apr 19, 2016
f242aed
Merge; fix formatting conflicts
martinwicke Apr 19, 2016
d70c765
Remove duplicate Cholesky gradient registration
martinwicke Apr 20, 2016
8 changes: 5 additions & 3 deletions README.md
@@ -12,16 +12,18 @@ data flow graphs. Nodes in the graph represent mathematical operations, while
the graph edges represent the multidimensional data arrays (tensors) that flow
between them. This flexible architecture lets you deploy computation to one
or more CPUs or GPUs in a desktop, server, or mobile device without rewriting
-code. TensorFlow was originally developed by researchers and engineers
+code. TensorFlow also includes TensorBoard, a data visualization toolkit.
+
+TensorFlow was originally developed by researchers and engineers
working on the Google Brain team within Google's Machine Intelligence research
organization for the purposes of conducting machine learning and deep neural
networks research. The system is general enough to be applicable in a wide
variety of other domains, as well.

-**If you'd like to contribute to tensorflow, be sure to review the [contribution
+**If you'd like to contribute to TensorFlow, be sure to review the [contribution
guidelines](CONTRIBUTING.md).**

-**We use [github issues](https://github.com/tensorflow/tensorflow/issues) for
+**We use [GitHub issues](https://github.com/tensorflow/tensorflow/issues) for
tracking requests and bugs, but please see
[Community](tensorflow/g3doc/resources/index.md#community) for general questions
and discussion.**
24 changes: 19 additions & 5 deletions WORKSPACE
@@ -103,7 +103,7 @@ new_git_repository(
name = "iron_collapse",
build_file = "bower.BUILD",
remote = "https://github.com/polymerelements/iron-collapse.git",
tag = "v1.0.6",
tag = "v1.0.8",
)

new_git_repository(
@@ -159,7 +159,7 @@ new_git_repository(
name = "iron_input",
build_file = "bower.BUILD",
remote = "https://github.com/polymerelements/iron-input.git",
tag = "v1.0.9",
tag = "1.0.10",
)

new_git_repository(
@@ -187,7 +187,7 @@ new_git_repository(
name = "iron_overlay_behavior",
build_file = "bower.BUILD",
remote = "https://github.com/polymerelements/iron-overlay-behavior.git",
tag = "v1.6.2",
tag = "v1.6.3",
)

new_git_repository(
@@ -253,6 +253,20 @@ new_git_repository(
tag = "v1.1.3",
)

new_git_repository(
name = "paper_dialog",
build_file = "bower.BUILD",
remote = "https://github.com/polymerelements/paper-dialog.git",
tag = "v1.0.4",
)

new_git_repository(
name = "paper_dialog_behavior",
build_file = "bower.BUILD",
remote = "https://github.com/polymerelements/paper-dialog-behavior.git",
tag = "v1.2.5",
)

new_git_repository(
name = "paper_dropdown_menu",
build_file = "bower.BUILD",
@@ -320,7 +334,7 @@ new_git_repository(
name = "paper_radio_button",
build_file = "bower.BUILD",
remote = "https://github.com/polymerelements/paper-radio-button.git",
tag = "v1.1.1",
tag = "v1.1.2",
)

new_git_repository(
@@ -404,5 +418,5 @@ new_git_repository(
name = "webcomponentsjs",
build_file = "bower.BUILD",
remote = "https://github.com/polymer/webcomponentsjs.git",
tag = "v0.7.21",
tag = "v0.7.22",
)
18 changes: 18 additions & 0 deletions bower.BUILD
@@ -348,6 +348,24 @@ filegroup(
],
)

filegroup(
name = "paper_dialog",
srcs = [
"index.html",
"paper-dialog.html",
],
)

filegroup(
name = "paper_dialog_behavior",
srcs = [
"index.html",
"paper-dialog-behavior.html",
"paper-dialog-common.css",
"paper-dialog-shared-styles.html",
],
)

filegroup(
name = "paper_dropdown_menu",
srcs = [
3 changes: 2 additions & 1 deletion tensorflow/BUILD
@@ -77,12 +77,12 @@ filegroup(
"//tensorflow/contrib/distributions:all_files",
"//tensorflow/contrib/framework:all_files",
"//tensorflow/contrib/layers:all_files",
"//tensorflow/contrib/learn:all_files",
"//tensorflow/contrib/linear_optimizer:all_files",
"//tensorflow/contrib/linear_optimizer/kernels:all_files",
"//tensorflow/contrib/lookup:all_files",
"//tensorflow/contrib/losses:all_files",
"//tensorflow/contrib/metrics:all_files",
"//tensorflow/contrib/skflow:all_files",
"//tensorflow/contrib/tensor_forest:all_files",
"//tensorflow/contrib/testing:all_files",
"//tensorflow/contrib/util:all_files",
@@ -97,6 +97,7 @@ filegroup(
"//tensorflow/examples/how_tos/reading_data:all_files",
"//tensorflow/examples/image_retraining:all_files",
"//tensorflow/examples/label_image:all_files",
"//tensorflow/examples/skflow:all_files",
"//tensorflow/examples/tutorials/mnist:all_files",
"//tensorflow/examples/tutorials/word2vec:all_files",
"//tensorflow/g3doc/how_tos/adding_an_op:all_files",
12 changes: 11 additions & 1 deletion tensorflow/contrib/distributions/BUILD
@@ -1,5 +1,5 @@
# Description:
-# Contains ops to train linear models on top of TensorFlow.
+# Contains ops for statistical distributions (with pdf, cdf, sample, etc...).
# APIs here are meant to evolve over time.

licenses(["notice"]) # Apache 2.0
@@ -16,6 +16,16 @@ py_library(
srcs_version = "PY2AND3",
)

cuda_py_tests(
name = "dirichlet_multinomial_test",
srcs = ["python/kernel_tests/dirichlet_multinomial_test.py"],
additional_deps = [
":distributions_py",
"//tensorflow/python:framework_test_lib",
"//tensorflow/python:platform_test",
],
)

cuda_py_tests(
name = "gaussian_test",
size = "small",
2 changes: 1 addition & 1 deletion tensorflow/contrib/distributions/__init__.py
@@ -23,6 +23,6 @@

# pylint: disable=unused-import,wildcard-import, line-too-long
from tensorflow.contrib.distributions.python.ops import gaussian_conjugate_posteriors
+from tensorflow.contrib.distributions.python.ops.dirichlet_multinomial import *
from tensorflow.contrib.distributions.python.ops.gaussian import *
# from tensorflow.contrib.distributions.python.ops.dirichlet import * # pylint: disable=line-too-long
-# from tensorflow.contrib.distributions.python.ops.dirichlet_multinomial import * # pylint: disable=line-too-long
192 changes: 192 additions & 0 deletions tensorflow/contrib/distributions/python/kernel_tests/dirichlet_multinomial_test.py
@@ -0,0 +1,192 @@
# Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow as tf


class DirichletMultinomialTest(tf.test.TestCase):

  def test_num_classes(self):
    with self.test_session():
      for num_classes in range(3):
        alpha = np.random.rand(3, num_classes)
        dist = tf.contrib.distributions.DirichletMultinomial(alpha)
        self.assertEqual([], dist.num_classes.get_shape())
        self.assertEqual(num_classes, dist.num_classes.eval())

  def test_alpha_property(self):
    alpha = np.array([[1., 2, 3]])
    with self.test_session():
      dist = tf.contrib.distributions.DirichletMultinomial(alpha)
      self.assertEqual([1, 3], dist.alpha.get_shape())
      self.assertAllClose(alpha, dist.alpha.eval())

  def test_empty_alpha_and_empty_counts_returns_empty(self):
    with self.test_session():
      alpha = [[]]
      counts = [[]]
      dist = tf.contrib.distributions.DirichletMultinomial(alpha)
      self.assertAllEqual([], dist.pmf(counts).eval())
      self.assertAllEqual([0], dist.pmf(counts).get_shape())
      self.assertAllEqual([], dist.log_pmf(counts).eval())
      self.assertAllEqual([0], dist.log_pmf(counts).get_shape())
      self.assertAllEqual([[]], dist.mean.eval())
      self.assertAllEqual([1, 0], dist.mean.get_shape())
      self.assertAllEqual(0, dist.num_classes.eval())
      self.assertAllEqual([], dist.num_classes.get_shape())

  def test_pmf_both_zero_batches(self):
    # The probabilities of one vote falling into class k is the mean for class
    # k.
    with self.test_session():
      # Both zero-batches. No broadcast
      alpha = [1., 2]
      counts = [1, 0.]
      dist = tf.contrib.distributions.DirichletMultinomial(alpha)
      pmf = dist.pmf(counts)
      self.assertAllClose(1 / 3., pmf.eval())
      self.assertEqual((), pmf.get_shape())

  def test_pmf_alpha_stretched_in_broadcast_when_same_rank(self):
    # The probabilities of one vote falling into class k is the mean for class
    # k.
    with self.test_session():
      alpha = [[1., 2]]
      counts = [[1, 0.], [0, 1.]]
      dist = tf.contrib.distributions.DirichletMultinomial(alpha)
      pmf = dist.pmf(counts)
      self.assertAllClose([1 / 3., 2 / 3.], pmf.eval())
      self.assertEqual((2), pmf.get_shape())

  def test_pmf_alpha_stretched_in_broadcast_when_lower_rank(self):
    # The probabilities of one vote falling into class k is the mean for class
    # k.
    with self.test_session():
      alpha = [1., 2]
      counts = [[1, 0.], [0, 1.]]
      pmf = tf.contrib.distributions.DirichletMultinomial(alpha).pmf(counts)
      self.assertAllClose([1 / 3., 2 / 3.], pmf.eval())
      self.assertEqual((2), pmf.get_shape())

  def test_pmf_counts_stretched_in_broadcast_when_same_rank(self):
    # The probabilities of one vote falling into class k is the mean for class
    # k.
    with self.test_session():
      alpha = [[1., 2], [2., 3]]
      counts = [[1, 0.]]
      pmf = tf.contrib.distributions.DirichletMultinomial(alpha).pmf(counts)
      self.assertAllClose([1 / 3., 2 / 5.], pmf.eval())
      self.assertEqual((2), pmf.get_shape())

  def test_pmf_counts_stretched_in_broadcast_when_lower_rank(self):
    # The probabilities of one vote falling into class k is the mean for class
    # k.
    with self.test_session():
      alpha = [[1., 2], [2., 3]]
      counts = [1, 0.]
      pmf = tf.contrib.distributions.DirichletMultinomial(alpha).pmf(counts)
      self.assertAllClose([1 / 3., 2 / 5.], pmf.eval())
      self.assertEqual((2), pmf.get_shape())

  def test_pmf_for_one_vote_is_the_mean_with_one_record_input(self):
    # The probabilities of one vote falling into class k is the mean for class
    # k.
    alpha = [1., 2, 3]
    with self.test_session():
      for class_num in range(3):
        counts = np.zeros((3), dtype=np.float32)
        counts[class_num] = 1.0
        dist = tf.contrib.distributions.DirichletMultinomial(alpha)
        mean = dist.mean.eval()
        pmf = dist.pmf(counts).eval()

        self.assertAllClose(mean[class_num], pmf)
        self.assertTupleEqual((3,), mean.shape)
        self.assertTupleEqual((), pmf.shape)

  def test_zero_counts_results_in_pmf_equal_to_one(self):
    # There is only one way for zero items to be selected, and this happens with
    # probability 1.
    alpha = [5, 0.5]
    counts = [0., 0.]
    with self.test_session():
      dist = tf.contrib.distributions.DirichletMultinomial(alpha)
      pmf = dist.pmf(counts)
      self.assertAllClose(1.0, pmf.eval())
      self.assertEqual((), pmf.get_shape())

  def test_large_tau_gives_precise_probabilities(self):
    # If tau is large, we are doing coin flips with probability mu.
    mu = np.array([0.1, 0.1, 0.8], dtype=np.float32)
    tau = np.array([100.], dtype=np.float32)
    alpha = tau * mu

    # One (three sided) coin flip. Prob[coin 3] = 0.8.
    # Note that since it was one flip, value of tau didn't matter.
    counts = [0., 0, 1]
    with self.test_session():
      dist = tf.contrib.distributions.DirichletMultinomial(alpha)
      pmf = dist.pmf(counts)
      self.assertAllClose(0.8, pmf.eval(), atol=1e-4)
      self.assertEqual((), pmf.get_shape())

    # Two (three sided) coin flips. Prob[coin 3] = 0.8.
    counts = [0., 0, 2]
    with self.test_session():
      dist = tf.contrib.distributions.DirichletMultinomial(alpha)
      pmf = dist.pmf(counts)
      self.assertAllClose(0.8**2, pmf.eval(), atol=1e-2)
      self.assertEqual((), pmf.get_shape())

    # Three (three sided) coin flips.
    counts = [1., 0, 2]
    with self.test_session():
      dist = tf.contrib.distributions.DirichletMultinomial(alpha)
      pmf = dist.pmf(counts)
      self.assertAllClose(3 * 0.1 * 0.8 * 0.8, pmf.eval(), atol=1e-2)
      self.assertEqual((), pmf.get_shape())

  def test_small_tau_prefers_correlated_results(self):
    # If tau is small, then correlation between draws is large, so draws that
    # are both of the same class are more likely.
    mu = np.array([0.5, 0.5], dtype=np.float32)
    tau = np.array([0.1], dtype=np.float32)
    alpha = tau * mu

    # If there is only one draw, it is still a coin flip, even with small tau.
    counts = [1, 0.]
    with self.test_session():
      dist = tf.contrib.distributions.DirichletMultinomial(alpha)
      pmf = dist.pmf(counts)
      self.assertAllClose(0.5, pmf.eval())
      self.assertEqual((), pmf.get_shape())

    # If there are two draws, it is much more likely that they are the same.
    counts_same = [2, 0.]
    counts_different = [1, 1.]
    with self.test_session():
      dist = tf.contrib.distributions.DirichletMultinomial(alpha)
      pmf_same = dist.pmf(counts_same)
      pmf_different = dist.pmf(counts_different)
      self.assertLess(5 * pmf_different.eval(), pmf_same.eval())
      self.assertEqual((), pmf_same.get_shape())


if __name__ == '__main__':
  tf.test.main()
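
For orientation, a minimal usage sketch of the new class follows. It relies only on the API surface exercised by the tests above (a constructor taking alpha, plus the mean, num_classes, pmf, and log_pmf accessors) and assumes the 2016-era tf.contrib.distributions module introduced in this PR, which later TensorFlow releases removed; it is not part of the diff itself.

# Usage sketch for DirichletMultinomial as exercised by the tests above.
import numpy as np
import tensorflow as tf

alpha = np.array([1., 2., 3.], dtype=np.float32)   # concentration parameters
counts = np.array([0., 0., 1.], dtype=np.float32)  # a single draw, in class 2

dist = tf.contrib.distributions.DirichletMultinomial(alpha)

with tf.Session() as sess:
  mean, pmf, k = sess.run([dist.mean, dist.pmf(counts), dist.num_classes])

# With one draw, pmf(counts) equals the mean of the drawn class,
# alpha_2 / sum(alpha) = 3 / 6 = 0.5, matching the tests above.
print(mean)  # approx. [0.167, 0.333, 0.5]
print(pmf)   # approx. 0.5
print(k)     # 3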