Updated link to use HTTPS #10998

Merged: 1 commit merged into tensorflow:master on Jun 22, 2017

Conversation

mbrickn (Contributor) commented Jun 22, 2017

Howdy!

I just updated a link to use https instead of http.

Thanks! ^ _ ^

@tensorflow-jenkins (Collaborator) commented:

Can one of the admins verify this patch?

@gunan gunan merged commit 2336cdf into tensorflow:master Jun 22, 2017
@mbrickn mbrickn deleted the patch-1 branch June 23, 2017 19:25
drpngx pushed a commit that referenced this pull request Jun 28, 2017
* Properly handle ops that don't have a CPU kernel

PiperOrigin-RevId: 159655906

* Selected BUILD cleanup in tensorflow/contrib/...

PiperOrigin-RevId: 159673079

* Remove redundant `get` calls on smart pointers

PiperOrigin-RevId: 159675809

* PiperOrigin-RevId: 159698321

* Migrate kernels to boosted_trees.

PiperOrigin-RevId: 159698656

* Fix a bug in the memory optimizer when two inputs to a node are both recomputed

PiperOrigin-RevId: 159700457

* Fixed memory leak that can be triggered by a failed node evaluation

PiperOrigin-RevId: 159707380

* Updates get_started tutorial.

PiperOrigin-RevId: 159709158

* [XLA] Remove unused factory in local_service

PiperOrigin-RevId: 159712806

* Fix typo in docstring

PiperOrigin-RevId: 159714414

* Migrate ops for new version of TensorForest.

PiperOrigin-RevId: 159718610

* Added parameterized tests to reduce window tests.

PiperOrigin-RevId: 159721784

* Use C API to implement Operation.device property

PiperOrigin-RevId: 159723490

* Several Estimator changes:
- support configurable input_fn calling in Estimator subclasses.
- pass params and config to the input_fn.
- allow callables for model_fn and input_fn.

PiperOrigin-RevId: 159725554
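
To make the calling convention above concrete, here is a minimal, framework-free sketch of how an input_fn can optionally receive `params` and `config`; the helper names (`call_input_fn`, `my_input_fn`) are illustrative, not the actual Estimator internals.

```python
import inspect

def call_input_fn(input_fn, params, config):
    # Inspect the input_fn's signature and pass only the arguments it accepts.
    accepted = inspect.signature(input_fn).parameters
    kwargs = {}
    if "params" in accepted:
        kwargs["params"] = params
    if "config" in accepted:
        kwargs["config"] = config
    return input_fn(**kwargs)

def my_input_fn(params, config):
    # A real input_fn would build and return (features, labels) tensors.
    return {"batch_size": params["batch_size"], "model_dir": config["model_dir"]}

print(call_input_fn(my_input_fn,
                    params={"batch_size": 64},
                    config={"model_dir": "/tmp/model"}))
```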

* Fixed the scalar output for shard api when outputs_from_all_shards=True.

PiperOrigin-RevId: 159726444

* Automated g4 rollback of changelist 159718610

PiperOrigin-RevId: 159728380

* Adding missing deps to targets in llvm.BUILD. This was only working in non-sandboxed builds.

PiperOrigin-RevId: 159729295

* [XLA:HLO] Move sequence functions from hlo_ordering.h to hlo_scheduling.h.

This is required for upcoming changes to convert the sequence creation functions
(and HeapSimulator and BufferAssignment) over to using the new
Hlo{Dataflow,Alias}Analysis.

It's required because otherwise there's a dependency cycle:

Hlo{Dataflow,Alias}Analysis depends on HloOrdering
CreateMemoryMinimizingSequence will depend on Hlo{Dataflow,Alias}Analysis

There's already a cycle here, if both HloOrdering and
CreateMemoryMinimizingSequence are in the same file.  Also note that:

MinimumMemoryForSequence depends on HeapSimulator
HeapSimulator will depend on Hlo{Dataflow,Alias}Analysis
Hlo{Dataflow,Alias}Analysis depends on HloOrdering

Splitting out the sequence functions resolves the cycle.

Refactoring only; no functional changes.

PiperOrigin-RevId: 159731836

* [XLA:HLO] Split Hlo{Value,Buffer} out of Hlo{Dataflow,Alias}Analysis.

This will make dependencies cleaner for upcoming CLs that will convert
HeapSimulator and HloOrdering to use the new analyses.

No change in functionality.

PiperOrigin-RevId: 159737265

* Internal change

PiperOrigin-RevId: 159738215

* Suggest people need to do some build environment ./configur'ing.

Fixes #4279

PiperOrigin-RevId: 159738412

* Rewrite SameDefinedShape function in ShapeRefiner

PiperOrigin-RevId: 159745894

* [XLA] Remove xla_cpu_*_eigen flags from CPU backends.

These flags are currently de-facto unused; parallelism should be controlled
through the cpu_parallel backend. For configuring Eigen, if needed, the options
should be piped more directly to the code.

PiperOrigin-RevId: 159746509

* Updates layers tutorial and corresponding example.

PiperOrigin-RevId: 159749528

* Further BUILD cleanup

PiperOrigin-RevId: 159749869

* Use more efficient squared_difference

PiperOrigin-RevId: 159751209

* Add log_step_count_steps to RunConfig and allow it to flow to the MonitoredSession.

PiperOrigin-RevId: 159753935
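
A hedged usage sketch (assuming the TF 1.x `tf.estimator.RunConfig` API) of setting the new knob:

```python
import tensorflow as tf

# Log the step counter every 500 steps instead of the default;
# MonitoredSession-based training picks this value up from RunConfig.
config = tf.estimator.RunConfig(log_step_count_steps=500)
print(config.log_step_count_steps)
```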

* [XLA] Remove xla_hlo_test_generate_hlo_graph, which is now redundant.

PiperOrigin-RevId: 159755688

* Do not use SSE4.1 instructions on Android builds.

PiperOrigin-RevId: 159756104

* Add nonpublic helper `tf.distributions.util.tridiag` op.

PiperOrigin-RevId: 159757904

* [XLA] Remove dead "in-client" code.
Remove Service::runs_in_client_process_ field and its dead user. This was
previously used by the "InProcess" methods which have been replaced with
the LocalClient API.

PiperOrigin-RevId: 159759455

* [tf contrib seq2seq] Add monotonic attention mechanisms

* Add monotonic_attention and safe_cumprod helper functions.
* Add _BaseMonotonicAttentionMechanism base class.
* Add BahdanauMonotonicAttention and LuongMonotonicAttention classes.

These attention mechanisms are proposed in
Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, Douglas Eck,
"Online and Linear-Time Attention by Enforcing Monotonic Alignments."
ICML 2017.  https://arxiv.org/abs/1704.00784

PiperOrigin-RevId: 159760073
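
A hedged usage sketch (TF 1.x `tf.contrib.seq2seq`; batch size, time steps, and unit counts are illustrative) of wiring one of the new monotonic mechanisms into an AttentionWrapper:

```python
import tensorflow as tf

memory = tf.random_normal([8, 20, 128])  # [batch, max_time, encoder_depth]
attention = tf.contrib.seq2seq.BahdanauMonotonicAttention(
    num_units=128, memory=memory)
cell = tf.contrib.rnn.LSTMCell(128)
attn_cell = tf.contrib.seq2seq.AttentionWrapper(cell, attention)
```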

* Add ability for argmax to output int32 indices.  Default remains int64.

Change is made in a backwards and forward compatible manner, since
we add a new attribute with a default that remains the same, and
simply register a few new kernels.

PiperOrigin-RevId: 159761347
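
A brief sketch of the new attribute in use (TF 1.x Python API); the default stays `tf.int64`, so existing graphs are unchanged:

```python
import tensorflow as tf

x = tf.constant([[1.0, 3.0, 2.0]])
idx_default = tf.argmax(x, axis=1)                       # dtype: int64 (default)
idx_int32 = tf.argmax(x, axis=1, output_type=tf.int32)   # dtype: int32
```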

* Automated g4 rollback of changelist 159746509

PiperOrigin-RevId: 159763112

* Raise ValueError if invalid dtype for random_uniform.

PiperOrigin-RevId: 159764956

* Internal change.

PiperOrigin-RevId: 159769520

* Support zero shapes for random_poisson. This matches random_uniform.

PiperOrigin-RevId: 159771215

* Blacklist the quantized ops since they have too many issues (incorrect shape
functions, memory corruptions, ...)

PiperOrigin-RevId: 159772801

* Fixed the shape functions of the QuantizedAdd and QuantizedMul ops

PiperOrigin-RevId: 159772841

* Switch from assigning namedtuple.__new__.__defaults__ to overwriting __new__.

Assigning __defaults__ relies on an implementation detail of CPython, confuses
type checkers (and developers :)), and is error-prone since it doesn't make the
relationship between parameter names and default values explicit.
This CL switches to overloading __new__ instead.

PiperOrigin-RevId: 159773922
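
A plain-Python illustration of the two patterns being compared (the class names here are made up for the example):

```python
import collections

# Fragile pattern: set defaults by assigning to __new__.__defaults__.
PointA = collections.namedtuple("PointA", ["x", "y"])
PointA.__new__.__defaults__ = (0, 0)

# Clearer pattern: subclass and override __new__ so the mapping from
# parameter names to default values is explicit.
class PointB(collections.namedtuple("PointB", ["x", "y"])):
    def __new__(cls, x=0, y=0):
        return super(PointB, cls).__new__(cls, x, y)

print(PointA(), PointB(y=3))  # PointA(x=0, y=0) PointB(x=0, y=3)
```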

* Made sure that we can call the constant folding code twice safely.

PiperOrigin-RevId: 159781607

* Added batch_matmul op dependence to android_extended_ops

PiperOrigin-RevId: 159787178

* Fixes a TODO in head_test.

PiperOrigin-RevId: 159789178

* When configuring per-session thread pools, allow
a pool to be a global pool. This allows a division
between large and small pools, without needing to make a
new pool for each session.

PiperOrigin-RevId: 159789678
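
A hedged configuration sketch, assuming the `ThreadPoolOptionProto.global_name` field described by this change: sessions that name the same global pool share it rather than each creating their own.

```python
import tensorflow as tf

config = tf.ConfigProto()
pool = config.session_inter_op_thread_pool.add()
pool.num_threads = 4
pool.global_name = "shared_small_pool"  # sessions using this name share one pool

sess_a = tf.Session(config=config)
sess_b = tf.Session(config=config)  # reuses the same named global pool
```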

* Add a multi-head TensorForest estimator.

PiperOrigin-RevId: 159820487

* Have RestoreV2's shape fn set all outputs to unknown shape.

PiperOrigin-RevId: 159835723

* VectorExponential added to distributions.

PiperOrigin-RevId: 159840822

* Fold as many nodes as possible instead of giving up if there is any error.

PiperOrigin-RevId: 159841935

* Removed deprecated summary usage from estimators.
Made name_space usage consistent.

PiperOrigin-RevId: 159846928

* Adding missing license notice to toolchain build files

PiperOrigin-RevId: 159847551

* [XLA] Remove unused flags and move debugging flag to debug options.

PiperOrigin-RevId: 159849759

* Fixes some docstrings in feature_column.

PiperOrigin-RevId: 159850619

* TpuEstimator: Replicate the input_fn to the worker CPU for each shard.

The batch size is configured as follows:
The user may specify a global batch size in their hyperparameters. If the 'batch_size' field is set, then we convert the global batch size into a per-shard batch size by dividing by num_shards before running their input_fn.

PiperOrigin-RevId: 159851773
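
A minimal sketch of that batch-size convention (the divisibility check is an assumption made for the example, not a documented constraint):

```python
def per_shard_batch_size(global_batch_size, num_shards):
    # The global 'batch_size' hyperparameter is divided across shards
    # before each shard's input_fn runs.
    assert global_batch_size % num_shards == 0, "global batch must divide evenly"
    return global_batch_size // num_shards

print(per_shard_batch_size(1024, num_shards=8))  # -> 128
```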

* Modify beam search decoder to use symbolic shape for vocab size if the static shape is not present.

PiperOrigin-RevId: 159852297

* Generalize cluster initialization to span multiple mini-batches if necessary.

PiperOrigin-RevId: 159852557

* Use a single threaded session for SDCALinearRegressorTest to
avoid incorrect threading test failures (tsan).

PiperOrigin-RevId: 159852818
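
For reference, a single-threaded session of the kind the test relies on can be requested like this (a sketch using the standard ConfigProto threading fields):

```python
import tensorflow as tf

config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1)
sess = tf.Session(config=config)  # ops are scheduled on a single thread
```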

* Migrate ops for new version of TensorForest.

PiperOrigin-RevId: 159852889

* Replaced constant inputs with variables to ensure most of the graph doesn't get
optimized away

PiperOrigin-RevId: 159853171

* For candidate sampling, add facility to colocate the logit computation with the sharded embeddings.

PiperOrigin-RevId: 159854706

* Added a utility to create parsing spec for regressors (canned estimator)

PiperOrigin-RevId: 159855254

* Fix cuda_kernel_helper_test. std::numeric_limits<int32>::max() doesn't pass, so
I didn't use that.

PiperOrigin-RevId: 159869169

* In tfcompile, prune nodes that are not reachable from the fetches before
building the Graph. This allows loading a graph that contains ops not
needed for the compiled binary.

PiperOrigin-RevId: 159869692

* Fix bugs related to distributions over integers.

- Ensure that the max number of categories does not exceed largest integer-form float.
- Make dtype inference consistent between Categorical and Multinomial
distributions.
- Improve documentation to better reflect that the Categorical
distribution is analogous to `argmax{OneHotCategorical}` (itself being
identical to `argmax{Multinomial(p,n=1)}`) but not Multinomial.
- Fix validation_args Heisenberg uncertainty: only validation logic should live under self.validate_args. E.g., validate_args=True would sometimes imply `x=floor(x)` which changes behavior thus making debugging impossible because enabling validation *changes* values.
- Corrected `Geometric` swapping of `validate_args` and `allow_nan_stats` default values.

Fixes #10149

PiperOrigin-RevId: 159872532

* Make HloModule clonable

This CL makes HloModule clonable, which is necessary when we want to run the same compilation twice with the same input.

PiperOrigin-RevId: 159874256

* Internal change.

PiperOrigin-RevId: 159876942

* Implement alternative `monte_carlo.expectation_v2`. This function implements
the reparameterization and score-gradient tricks and does not depend on
tf.Distribution like inputs.

PiperOrigin-RevId: 159877923

* In SE_ASSIGN_OR_RETURN change ConsumeValueOrDie to the preferred std::move ValueOrDie.

PiperOrigin-RevId: 159879754

* If rank is unknown, do not add output shapes to transpose nodes.

PiperOrigin-RevId: 159879840

* Move sparse_fill_empty_rows to new, *significantly* faster, C++ kernel for everyone.

Also fix a bug in the C++ op when the input ST has 0 elements.

PiperOrigin-RevId: 159880044
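
A hedged usage sketch of the Python op that now dispatches to the faster C++ kernel:

```python
import tensorflow as tf

sp = tf.SparseTensor(indices=[[0, 0], [2, 1]], values=[1, 2], dense_shape=[4, 3])
filled, empty_row_indicator = tf.sparse_fill_empty_rows(sp, default_value=0)
# Rows 1 and 3 were empty; each now holds a single `default_value` entry,
# and `empty_row_indicator` marks them True.
```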

* Add support of label_keys to DebugClassifier

PiperOrigin-RevId: 159883986

* Register devices under their legacy names

Because some higher level APIs continue to use the legacy name format,
when using ClusterSpec propagation, we need to ensure that we register
the devices under their legacy names as well as their canonical names.

PiperOrigin-RevId: 159885777

* [BatchNorm] Minor fixes to TF doc

PiperOrigin-RevId: 159886125

Generating TBAA metadata causes LLVM to miscompile after
https://reviews.llvm.org/rL305938. Disable TBAA (to stop the miscompiles)
while we fix the root issue.

PiperOrigin-RevId: 159895736

* Improve score-trick to be a valid Csiszar f-Divergence yet numerically stable.

PiperOrigin-RevId: 159896013

* Support advisor in all places (Command line, APIs)
Add expensive operation checker

PiperOrigin-RevId: 159897279

* Added canned estimators to Tensorflow library. List of added estimators:
* DNNClassifier
* DNNRegressor
* LinearClassifier
* LinearRegressor
* DNNLinearCombinedClassifier
* DNNLinearCombinedRegressor

PiperOrigin-RevId: 159898954

* Aligned how model-fns handle params among linear/dnn/combined estimators.

PiperOrigin-RevId: 159899925

* Fixed cmake tests.

PiperOrigin-RevId: 159901417

* [XLA:CPU] Add VLOGs to cpu_compiler.cc

PiperOrigin-RevId: 159902919

* Make occurrence (op run times and op definition) selectable
in all views to address the loop problem.

When a node is in a loop, its execution times are accumulated and its run
times increase.

PiperOrigin-RevId: 159912429

* [XLA] Small error message improvement in binop shape inference.

PiperOrigin-RevId: 159920109

* Follow upstream API change from r306058.

PiperOrigin-RevId: 159938416

* [TF:XLA] Update LLVM to upstream revision r306085.

PiperOrigin-RevId: 159946562

* [XLA] Remove unused xla_cpu flag and move another to DebugOptions.

PiperOrigin-RevId: 159952124

* Updates linear.md tutorial

PiperOrigin-RevId: 159956867

* Add TraceMe instrumentation of RunStep in GRPC distributed runtime.
A unique ID is added to each RunStep call that allows the client and server
events to be correlated.

PiperOrigin-RevId: 159956950

* [XLA] Add general F32 implementation for ReducePrecision operation.

This only tests with parameter inputs (which is needed to ensure we actually test on GPUs as well as CPUs); there's no point in separately testing with constants.

PiperOrigin-RevId: 159961430

* Java: NativeLibrary: Fix URL in error message.

And add some detail.
Inspired by #11015

PiperOrigin-RevId: 159962478

* Increase rtol for util_test.

PiperOrigin-RevId: 159971136

* Re-enable IR dumping for the sequential CPU backend.

PiperOrigin-RevId: 159974126

* tfdbg: a few minor fixes and improvements

* Let DumpingDebugWrapperSession and DumpingDebugHook create session_root if it doesn't exist
* Add README.md to tensorflow/python/debug
* Add section "Debugging Keras Models with TFDBG" in debugger.md

PiperOrigin-RevId: 159976070

* Add a None check for save_path when restoring checkpoints: if something is wrong in tf.train.latest_checkpoint, it will often return None, and it's nice to have a common-sense check in restore for this. This way log.error says what has happened.

PiperOrigin-RevId: 159979481

* Don't crash if a metagraph fails to load.

PiperOrigin-RevId: 159981628

* Prepare to not include node_def.proto.h in node_def_util.h

The goal is to make kernels mostly independent of proto headers, which will let
us lock down our .so imports.  This CL makes a bunch of .cc files
either include node_def.proto.h themselves or not need the definition of
NodeDef; a second CL will make node_def_util.h not include node_def.proto.h.

RELNOTES: n/a
PiperOrigin-RevId: 159982117

* Add a few diagnostic flags to help narrow down issues with the LLVM
backends.

PiperOrigin-RevId: 159982441

* Updated wide-n-deep tutorial code to use core version of estimators and feature-columns.

PiperOrigin-RevId: 159984663

* Modify ControlFlowContext to also respect import_scope in 'values_' and keys of 'external_values_'

PiperOrigin-RevId: 159985290

* Add item's graph to partition_graphs in virtual cluster's run method.
Put node op name in timeline_label instead of node_name.

PiperOrigin-RevId: 159986583

* Use short-proto for logging purposes.

A short proto will be output on a single log line, making it
easier for certain automated tools to handle.

PiperOrigin-RevId: 159994005

* Sinh, ArcSinh, Cosh, LogCosh functions added to distributions/python/ops/trig.
Care is taken to ensure a fair bit of stability.

PiperOrigin-RevId: 159995514

* Updates some examples in examples/learn.

PiperOrigin-RevId: 159996397

* Add kernel tests for boosted_trees.

PiperOrigin-RevId: 160002696

* Avoid doing unnecessary work in the OptimizeGraph() function whenever possible

PiperOrigin-RevId: 160003173

* Use std::shared_ptr instead of core::RefCounted for Node::Properties

Also changes Node::Properties to a struct and removes underscores from public member variables. This change should make it easier to work with Properties moving forward as the refcount will be automatically updated.

PiperOrigin-RevId: 160003281

* Make the CPU compiler dump optimized IR along with the unoptimized IR.

PiperOrigin-RevId: 160005257

* Disable flaky run_metadata_test.

PiperOrigin-RevId: 160015399

* BUILD cleanup in tensorflow/tools/...

PiperOrigin-RevId: 160018623

* SinhArcSinh bijector added.

This two-parameter diffeomorphism from R --> R allows for skewness and fatter
or thinner tails.  See docstring and also
http://oro.open.ac.uk/22510/1/sinhasinh.pdf

PiperOrigin-RevId: 160019380

* Avoid hardcoded names for temporary files in tests.

These tests (and examples that are run as tests) were using hardcoded names for
temporary files.  This failed when multiple copies of these tests were run in
parallel, or even successively by different users, where the second run could
not overwrite files left by the first.

This change uses the TEST_TMPDIR environment variable used by bazel's test
runner to choose a temporary directory.   If that directory is not set,
/tmp is used, as before.

PiperOrigin-RevId: 160026924
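
The pattern described above looks roughly like this in a test (a sketch; the fallback to /tmp mirrors the prior behavior):

```python
import os
import tempfile

base = os.environ.get("TEST_TMPDIR", "/tmp")   # bazel's test runner sets TEST_TMPDIR
workdir = tempfile.mkdtemp(dir=base)           # unique dir, safe for parallel runs
print(workdir)
```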

* Fix multinomial doc-string: the input arg logits expects log-probabilities, not log-odds.

PiperOrigin-RevId: 160036709

* Made TensorFlow documentation on LSTMs slightly more accurate.

PiperOrigin-RevId: 160047054

* Follow LLVM/ORC upstream API change in r306166.

PiperOrigin-RevId: 160108102

* Move resampler from sonnet to contrib.

PiperOrigin-RevId: 160134565

* [TPUEstimator] Make input_fn invoked properly with eval on CPU.

PiperOrigin-RevId: 160151890

* Deletes iris_val_based_early_stopping example, which uses deprecated ValidationMonitor.

PiperOrigin-RevId: 160154863

* [XLA] Move HLO dumping flags from service_flags to debug_options_flags

This also removes the duplication in the xla_generate_hlo_graph flag.

This CL also moves the actual dumping logic from Executable to the
hlo_graph_dumper namespace, where it belongs; this is in preparation for
removing the hlo_dumper callback altogether, since it isn't serving any role
beyond what a direct call to hlo_graph_dumper would have (b/62872831 has more
details).

PiperOrigin-RevId: 160154869

* Fix missing variable unref

Direct leak of 56 byte(s) in 1 object(s) allocated from:
    #0 0xf5ee272 in operator new(unsigned long) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0xf5ee272)
    #1 0x1b51394c in tensorflow::AssignVariableOp<Eigen::ThreadPoolDevice, float>::Compute(tensorflow::OpKernelContext*)::'lambda'(tensorflow::Var**)::operator()(tensorflow::Var**) const (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1b51394c)
    #2 0x1b5136c0 in std::_Function_handler<tensorflow::Status (tensorflow::Var**), tensorflow::AssignVariableOp<Eigen::ThreadPoolDevice, float>::Compute(tensorflow::OpKernelContext*)::'lambda'(tensorflow::Var**)>::_M_invoke(std::_Any_data const&, tensorflow::Var**) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1b5136c0)
    #3 0x1b50b289 in std::function<tensorflow::Status (tensorflow::Var**)>::operator()(tensorflow::Var**) const (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1b50b289)
    #4 0x1b50af88 in tensorflow::Status tensorflow::ResourceMgr::LookupOrCreate<tensorflow::Var>(basic_string<char, std::char_traits<char>, std::allocator<char> > const&, basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tensorflow::Var**, std::function<tensorflow::Status (tensorflow::Var**)>) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1b50af88)
    #5 0x1b50ac10 in tensorflow::Status tensorflow::LookupOrCreateResource<tensorflow::Var>(tensorflow::OpKernelContext*, tensorflow::ResourceHandle const&, tensorflow::Var**, std::function<tensorflow::Status (tensorflow::Var**)>) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1b50ac10)
    #6 0x1b512f1e in tensorflow::AssignVariableOp<Eigen::ThreadPoolDevice, float>::Compute(tensorflow::OpKernelContext*) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1b512f1e)
    #7 0x1d1881c7 in tensorflow::ThreadPoolDevice::Compute(tensorflow::OpKernel*, tensorflow::OpKernelContext*) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1d1881c7)
    #8 0xf96e0fe in tensorflow::KernelAndDevice::Run(std::vector<tensorflow::Tensor, std::allocator<tensorflow::Tensor> >*, std::vector<tensorflow::Tensor, std::allocator<tensorflow::Tensor> >*) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0xf96e0fe)
    #9 0xf94f9c8 in TFE_Execute (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0xf94f9c8)
    #10 0xf94356d in TFE_Py_Execute(TFE_Context*, int, char const*, tensorflow::gtl::InlinedVector<TFE_TensorHandle*, 4>*, _object*, tensorflow::gtl::InlinedVector<TFE_TensorHandle*, 2>*, TF_Status*) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0xf94356d)

PiperOrigin-RevId: 160160101

* Simplify strided_slice's shape handling

Now that TensorShape and PartialTensorShape share memory representations, there's no need for an abstract class that makes TensorShape and TensorShapeProto look the same.

RELNOTES: n/a
PiperOrigin-RevId: 160161618

* Added a tool to report the static information that can be extracted from a TF model.

PiperOrigin-RevId: 160162256

* Properly handle RefEnter, RefExit and RefNextIteration nodes.

PiperOrigin-RevId: 160162338

* Switch tfprof to use proto3

PiperOrigin-RevId: 160163483

* Fixes to cuda_config.h.

PiperOrigin-RevId: 160168545

* Update ops-related pbtxt files.

PiperOrigin-RevId: 160171187

* Adds notes to prevent overfitting for Experiment continuous_train_and_eval.

PiperOrigin-RevId: 160172692

* Go: Update generated wrapper functions for TensorFlow ops.

PiperOrigin-RevId: 160172985

* Merge changes from github.
END_PUBLIC

Note: this CL will break builds.  cl/159887762 to follow to fix all the breakages.

---
Commit 2336cdf authored by Maxwell Paul Brickner<mbrickn@users.noreply.github.com>
Committed by gunan<gunan@google.com>:
Updated link to use HTTPS (#10998)

Howdy!

I just updated a link to use https instead of http.

Thanks!
---
Commit ad0892d authored by Luke Iwanski<luke@codeplay.com>
Committed by Luke Iwanski<luke@codeplay.com>:
[OpenCL] Fixes run_metadata_test for SYCL

 This test is designed to test CUDA specific behavior

---
Commit 6b37a07 authored by Todd Wang<toddwang@gmail.com>
Committed by GitHub<noreply@github.com>:
Update comments
---
Commit 1699d90 authored by John Lawson<john@codeplay.com>
Committed by Luke Iwanski<luke@codeplay.com>:
[OpenCL] Fixes CUDA specific test run on SYCL (#56)

The testBadParentValuesOnGPU should only be run on CUDA devices, as the
test checks for particular CUDA behaviour. We don't actually provide a
SYCL kernel for GatherTree and so it's not a problem that the tests
don't target SYCL.
---
Commit 3c19462 authored by myPrecious<Moriadry@users.noreply.github.com>
Committed by Shanqing Cai<cais@google.com>:
Java API to get the size of specified input list of operations. (#10865)

* Java API to get the size of specified input list of operations

* remove unnecessary explanation to avoid bringing a new term to users.

---
Commit e911c74 authored by Luke Iwanski<luke@codeplay.com>
Committed by Luke Iwanski<luke@codeplay.com>:
[OpenCL] REGISTER -> REGISTER6

---
Commit fbf6c4c authored by superryanguo<superryanguo@gmail.com>
Committed by superryanguo<superryanguo@gmail.com>:
Simplify the Quickstart section; using the weblink is better

---
Commit 72e2918 authored by Taehoon Lee<taehoonlee@snu.ac.kr>
Committed by Taehoon Lee<taehoonlee@snu.ac.kr>:
Fix typos

---
Commit 90c4406 authored by Rishabh Patel<patelrishabh@users.noreply.github.com>
Committed by GitHub<noreply@github.com>:
Correct the learning rate as per the code snippet
---
Commit 03da611 authored by Todd Wang<toddwang@gmail.com>
Committed by GitHub<noreply@github.com>:
Update ir_array.cc
---
Commit 2df6cd3 authored by Todd Wang<toddwang@gmail.com>
Committed by GitHub<noreply@github.com>:
Another try
---
Commit af0cbac authored by Luke Iwanski<luke@codeplay.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
[OpenCL] Transpose to go through Eigen (#10321)

---
Commit fc73610 authored by Luke Iwanski<luke@codeplay.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
[OpenCL] Registers RGBToHSV and HSVToRGB (#91) (#10848)

* [OpenCL] Added RGBToHSV and HSVToRGB

* Aligning '\'
---
Commit 832894e authored by Luke Iwanski<luke@codeplay.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
[OpenCL] Registers AdjustContrastv2 (#10949)

* [OpenCL] Registers AdjustContrastv2 (#93)

* [OpenCL] Extended adjust_contrast_op_benchmark_test for OpenCL (#96)

* [OpenCL] Extended adjust_contrast_op_benchmark_test for OpenCL

* simplified to #ifndef

* Changed to "#if GOOGLE_CUDA"

* Update adjust_contrast_op_benchmark_test.cc

* Added comments

---
Commit cb4c2f8 authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Make TransferBufferToInFeed not virtual so it compiles.

---
Commit e89f04d authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Fix calling Literal member functions.

---
Commit 15a8df7 authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Fix mac build
clone from meheff's change:
[XLA] Change return type of DeviceAssignment::Deserialize to fix build
breakage on mac.
The mac build had the following error:

error: incomplete type 'xla::DeviceAssignment' used in type trait
expression

This was due to a static method returning a StatusOr<DeviceAssignment>
inside of the definition of DeviceAssignment.

---
Commit a54d43f authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Replace LiteralUtil to Literal in compiler/plugin/executor

---
Commit 88a6bb8 authored by Guenther Schmuelling<guschmue@microsoft.com>
Committed by Guenther Schmuelling<guschmue@microsoft.com>:
expand inline for debug builds to limit number of symbols

---
Commit 62fb49d authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Fix visibility error for contrib/remote_fused_graph/pylib/BUILD.

---
Commit 4c75252 authored by Mark Neumann<markn@allenai.org>
Committed by Mark Neumann<markn@allenai.org>:
fix initial test values to avoid numerical instability

---
Commit b58d983 authored by sj6077<epik03sj@gmail.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
Fixes of AutoParallel bug (#10368)

* Fix the bug that auto_parallel could replicate variable snapshot name

* Use NodeName in grappler:utils instead of substr, convert variables->variable_def of grappler item

* remove variable_def from grappler item, exclude snapshot nodes from dont_replicate_nodes in auto_parallel

---
Commit a286b7d authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Make debug_test slice integer.

---
Commit 97fcfdf authored by Toby Boyd<tobyboyd@google.com>
Committed by GitHub<noreply@github.com>:
Fixed path to seq2seq.py and minor formatting
---
Commit 63c1bef authored by Anish Shah<shah.anish07@gmail.com>
Committed by Anish Shah<shah.anish07@gmail.com>:
Improve docs for tf.nn.depthwise_conv2d_native

---
Commit 8d42202 authored by Yong Tang<yong.tang.github@outlook.com>
Committed by Yong Tang<yong.tang.github@outlook.com>:
Fix mismatched delete in mkl_tfconv_op.cc

This fix fixes mismatched new[]-delete in mkl_tfconv_op.cc

(the file went through clang-format so there are some additional
changes)

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>

---
Commit 26301bd authored by Danny Goodman<goodman.danny@gmail.com>
Committed by Danny Goodman<goodman.danny@gmail.com>:
fix error format

---
Commit b3f33ad authored by Yao Zhang<yaozhang@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Make changes to prepare for the fused option of batch norm to be set to None (None means using fused batch norm if possible).

PiperOrigin-RevId: 159649743

---
Commit a4a4698 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Add tests for select ops and while loops that produce tuples that contain predicates.

PiperOrigin-RevId: 159645900

---
Commit 980d3f2 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Use C API to implement Operation.name property

This name property is used in many existing tests including those that
already run with C API enabled (math_ops_test, framework_ops_test,
session_test, session_partial_run_test, math_ops_test_gpu, etc).

PiperOrigin-RevId: 159645767

---
Commit 26239c7 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Previously we didn't have an implementation of BatchNormInference and BatchNormTraining, which gives a linker error if anyone ever tries to call that. A dummy implementation is friendlier than a linker error.

PiperOrigin-RevId: 159645612

---
Commit f671c5c authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
BEGIN_PUBLIC
Automated g4 rollback of changelist 159570549

PiperOrigin-RevId: 160182040

* Update ops-related pbtxt files.

PiperOrigin-RevId: 160183349

* Merge changes from github followup.

PiperOrigin-RevId: 160183498

* Automated g4 rollback of changelist 160183498

PiperOrigin-RevId: 160189134

* Automated g4 rollback of changelist 160182040

PiperOrigin-RevId: 160190881

* [XLA] Disallow fuse X into Y if there are paths from X to Y which don't fuse

Just because X can fuse into all of its consumers does not mean that those
consumers can fuse into anything. Depending on the structure of the graph, this
can either result in no performance win at all or, in the case of recurrent
networks, a big performance deficit.

PiperOrigin-RevId: 160194058

* First draft of Tensors segment of the programmer's guide.

PiperOrigin-RevId: 160196550

* First draft of variables unit of programmer's guide.

PiperOrigin-RevId: 160196566

* Make xla::Literal moveable.

PiperOrigin-RevId: 160197273

* Automated g4 rollback of changelist 159897279

PiperOrigin-RevId: 160198598

* Updates text_classification example.

PiperOrigin-RevId: 160200457

* Fix backward compatibility test broken by rollback.

PiperOrigin-RevId: 160222187

* Support advisor in all places (Command line, APIs)
Add expensive operation checker

PiperOrigin-RevId: 160222348

* [XLA] Simplify the fusion heuristic

We had two different aspects of the fusion heuristic:
- Don't fuse a producer into a consumer if there exists a path from the
  producer to the consumer which cannot be fused.
- Don't fuse a producer into a consumer if any consumer of the producer cannot
  fuse.

These can be combined into one, simpler, heuristic.

PiperOrigin-RevId: 160222771

* Automated g4 rollback of changelist 160196566

PiperOrigin-RevId: 160222930

* Automated g4 rollback of changelist 160196550

PiperOrigin-RevId: 160222942

* Lets the HParam parser also accept True and False as inputs, since that's how python prints booleans.

PiperOrigin-RevId: 160234658
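
A hedged sketch (TF 1.x `tf.contrib.training.HParams`) of the parsing behavior this enables:

```python
import tensorflow as tf

hparams = tf.contrib.training.HParams(use_dropout=False, learning_rate=0.1)
# "True"/"False" now parse as booleans, matching how Python prints them.
hparams.parse("use_dropout=True,learning_rate=0.01")
print(hparams.use_dropout, hparams.learning_rate)  # True 0.01
```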

* Automated g4 rollback of changelist 155070869

PiperOrigin-RevId: 160249526

* [TF:XLA] Inline the sigmoid operation instead of mapping it elementwise.

PiperOrigin-RevId: 160274436

* Make sure all convolution tests are testing non-trivial cases, i.e. where not all inputs are 0, leading to an all-0 output, which masks most possible bugs.
We do not check-fail on 0-sized dimensions as tests for these special cases
exist.

PiperOrigin-RevId: 160274593

* Explicitly use "dns" URI scheme when using DNS names or literal IP
addresses with gRPC.  This avoids problems in environments in which the
default URI scheme is something other than "dns".

PiperOrigin-RevId: 160276862
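
An illustrative helper (hypothetical, not a TF API) showing the addressing convention: prefix the explicit `dns` scheme unless the target already carries one.

```python
def grpc_target(host_port):
    # e.g. "worker0.example.com:2222" -> "dns:///worker0.example.com:2222"
    return host_port if "://" in host_port else "dns:///" + host_port

print(grpc_target("worker0.example.com:2222"))
print(grpc_target("dns:///10.0.0.5:2222"))  # already schemed; left as-is
```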

* Add RWSE (root weighted squared error) to the WALS estimator.

PiperOrigin-RevId: 160276937

* Don't include node_def.proto.h in node_def_util.h

The goal is to make kernels mostly independent of proto headers, which will let us lock down our .so imports.

RELNOTES: n/a
PiperOrigin-RevId: 160278032

* [XLA] Add tuple support to Literal::CreateFromShape.

PiperOrigin-RevId: 160278561

* Updates some more examples in examples/learn.

PiperOrigin-RevId: 160278757

* Automated g4 rollback of changelist 160278032

PiperOrigin-RevId: 160280961

* Fixed the bug that Estimator does not make deepcopy of params in constructor

PiperOrigin-RevId: 160281247

* Clean out the config and params in TPUEstimator.

PiperOrigin-RevId: 160281507

* [XLA] Remove the "hlo dumper" parameter of xla::Compiler and its piping.

This dumper is no longer necessary since the restructuring of HLO dumping and
the addition of MaybeDumpHloModule which heeds to the right flags. The
remaining bits didn't have additional functionality, but constituted a lot of
boilerplate that has to be propagated throughout the backends.

PiperOrigin-RevId: 160281798

* [TF:XLA] Refactor the sigmoid op as a rescaled tanh.

PiperOrigin-RevId: 160282472

* Fix uninitialized values in TensorForest code.

PiperOrigin-RevId: 160284420

* [TF:XLA] Update Tensorflow LLVM release to upstream r306370.

Fix broken XLA build.

PiperOrigin-RevId: 160284588

* tfdbg example: fix --tensor_size issue in debug_fibonacci

PiperOrigin-RevId: 160290541

* [SE] ThenConvolveWithAlgorithm vlogs algorithm configs.

PiperOrigin-RevId: 160292762

* Fix documentation of Estimator class (invalid quotes).

PiperOrigin-RevId: 160292803

* Shrink the test size to avoid OOM error on old GPUs.

PiperOrigin-RevId: 160292834

* [TF:XLA] Reject operators with resource outputs on CPU and GPU devices.

We were checking for resource inputs but not resource outputs, which led to accidental fusion of some TensorArray ops on CPU and GPU.

PiperOrigin-RevId: 160294302

* Add a functionality of remote fused graph transformation to fuse graphs by op type

PiperOrigin-RevId: 160300039

* Cudnn compatible LSTMCell and LSTMBlockCell

PiperOrigin-RevId: 160300668

* [XLA] Remove "operand" argument from HandleReducePrecision.

PiperOrigin-RevId: 160301461

* Added more reduce window tests.

PiperOrigin-RevId: 160301509

* Updates more text classification examples in examples/learn.

PiperOrigin-RevId: 160305131

* Use C API to implement Operation._output_types

This change first converts the _output_types member to a property and
then implements it using C API if it is enabled.

PiperOrigin-RevId: 160306227

* Add more tests for BatchNormTraining.
RELNOTES: n/a

PiperOrigin-RevId: 160307959

* Update path to print_selective_registration_header.py in comment

PiperOrigin-RevId: 160308173

* Migrate TensorForest v4 python to contrib.

PiperOrigin-RevId: 160308805

* Automated g4 rollback of changelist 159454657

PiperOrigin-RevId: 160314706

* TESTFIX:  distributions:trig_test wasn't passing in ASAN mode.

PiperOrigin-RevId: 160315597

* tfdbg doc: fixes and improvements

PiperOrigin-RevId: 160318411

* Add a time estimation to HloCostAnalysis and represent properties as a map so that adding more properties will be easier, e.g. in a sub-class.

PiperOrigin-RevId: 160318494

* tfdbg: revert dns:/// prefix in gRPC mode

PiperOrigin-RevId: 160319348

* Moves TensorCApi from c_api.cc to c_api_internal.h, where it can be used
by other code that requires access to the underlying TensorBuffers.

PiperOrigin-RevId: 160323362

* Readd the new tensors and variables documents, with tests passing.

PiperOrigin-RevId: 160324191

* Make ResourceHandle not be a proto

I'm trying to make core/kernels independent of protos.  Currently the dtype ResourceHandle is itself a proto.  After this CL, ResourceHandle is a normal C++ type which gets converted to/from ResourceHandleProto at (de)serialization time.

RELNOTES: n/a
PiperOrigin-RevId: 160329002

* Minor cleanup: remove unused dependencies and inclusions

PiperOrigin-RevId: 160334030

* Add name_scopes to mnist_deep.py for a cleaner graph layout.

PiperOrigin-RevId: 160338775

* Add note about `tf.test.mock` to docs for `tf.test`

PiperOrigin-RevId: 160338811

* Internal change.

PiperOrigin-RevId: 160339087

* Fix bugs in ScatterNd and add ScatterNdNonAliasingAdd.

tf.scatter_nd_non_aliasing_add acts similarly to tf.scatter_nd_add but
works on non-ref objects (i.e., Tensors -- not Variables).  This means
it has a gradient with respect to the primary input as well as the
updates.  It does its best to avoid making extra copies of the input.

PiperOrigin-RevId: 160339328
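
A hedged usage sketch of the new op (TF 1.x Python API):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0, 4.0])
indices = tf.constant([[0], [2]])
updates = tf.constant([10.0, 20.0])
# Unlike tf.scatter_nd_add, the first argument is an ordinary Tensor,
# so the op has a gradient with respect to `x` as well as `updates`.
y = tf.scatter_nd_non_aliasing_add(x, indices, updates)
# y == [11.0, 2.0, 23.0, 4.0]
```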

* Update ops-related pbtxt files.

PiperOrigin-RevId: 160340888

* Add checkpoint conversion for models that use the attention mechanism implemented in tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py.

PiperOrigin-RevId: 160340994

* Go: Update generated wrapper functions for TensorFlow ops.

PiperOrigin-RevId: 160341769

* Merge changes from github.

PiperOrigin-RevId: 160344052

* Update ops-related pbtxt files.

PiperOrigin-RevId: 160346151

* Load py_test in tensorflow/contrib/boosted_trees/BUILD to fix pip test
visibility failures.

* Disable boosted_trees tests on mac while they are being debugged.
allenlavoie pushed a commit to allenlavoie/tensorflow that referenced this pull request Jul 15, 2017
Howdy!

I just updated a link to use https instead of http.

Thanks!
allenlavoie pushed a commit to allenlavoie/tensorflow that referenced this pull request Jul 15, 2017
* Properly handle ops that don't have a CPU kernel

PiperOrigin-RevId: 159655906

* Selected BUILD cleanup in tensorflow/contrib/...

PiperOrigin-RevId: 159673079

* Remove redundant `get` calls on smart pointers

PiperOrigin-RevId: 159675809

* PiperOrigin-RevId: 159698321

* Migrate kernels to boosted_trees.

PiperOrigin-RevId: 159698656

* Fix a bug in the memory optimizer when two inputs to a node are both recomputed

PiperOrigin-RevId: 159700457

* Fixed memory leak that can be triggered by a failed node evaluation

PiperOrigin-RevId: 159707380

* Updates get_started tutorial.

PiperOrigin-RevId: 159709158

* [XLA] Remove unused factory in local_service

PiperOrigin-RevId: 159712806

* Fix typo in docstring

PiperOrigin-RevId: 159714414

* Migrate ops for new version of TensorForest.

PiperOrigin-RevId: 159718610

* Added parameterized tests to reduce window tests.

PiperOrigin-RevId: 159721784

* Use C API to implement Operation.device property

PiperOrigin-RevId: 159723490

* Several Estimator changes:
- support configurable input_fn calling in Estimator subclasses.
- pass params and config to the input_fn.
- allow callables for model_fn and input_fn.

PiperOrigin-RevId: 159725554

* Fixed the scalar output for shard api when outputs_from_all_shards=True.

PiperOrigin-RevId: 159726444

* Automated g4 rollback of changelist 159718610

PiperOrigin-RevId: 159728380

* Adding missing deps to targets in llvm.BUILD. This was only working in non-sandboxed builds.

PiperOrigin-RevId: 159729295

* [XLA:HLO] Move sequence functions from hlo_ordering.h to hlo_scheduling.h.

This is required for upcoming changes to convert the sequence creation functions
(and HeapSimulator and BufferAssignment) over to using the new
Hlo{Dataflow,Alias}Analysis.

It's required because otherwise there's a dependency cycle:

Hlo{Dataflow,Alias}Analysis depends on HloOrdering
CreateMemoryMinimizingSequence will depend on Hlo{Dataflow,Alias}Analysis

There's already a cycle here, if both HloOrdering and
CreateMemoryMinimizingSequence are in the same file.  Also note that:

MinimumMemoryForSequence depends on HeapSimulator
HeapSimulator will depend on Hlo{Dataflow,Alias}Analysis
Hlo{Dataflow,Alias}Analysis depends on HloOrdering

Splitting out the sequence functions resolves the cycle.

Refactoring only; no functional changes.

PiperOrigin-RevId: 159731836

* [XLA:HLO] Split Hlo{Value,Buffer} out of Hlo{Dataflow,Alias}Analysis.

This will make dependencies cleaner for upcoming CLs that will convert
HeapSimulator and HloOrdering to use the new analyses.

No change in functionality.

PiperOrigin-RevId: 159737265

* Internal change

PiperOrigin-RevId: 159738215

* Suggest people need to do some build environment ./configur'ing.

Fixes tensorflow#4279

PiperOrigin-RevId: 159738412

* Rewrite SameDefinedShape function in ShapeRefiner

PiperOrigin-RevId: 159745894

* [XLA] Remove xla_cpu_*_eigen flags from CPU backends.

These flags are currently de-facto unused; parallelism should be controlled
through the cpu_parallel backend. For configuring Eigen, if needed, the options
should be piped more directly to the code.

PiperOrigin-RevId: 159746509

* Updates layers tutorial and corresponding example.

PiperOrigin-RevId: 159749528

* Further BUILD cleanup

PiperOrigin-RevId: 159749869

* Use more efficient squared_difference

PiperOrigin-RevId: 159751209

* Add log_step_count_steps to RunConfig and allow it to flow to the MonitoredSession.

PiperOrigin-RevId: 159753935

* [XLA] Remove xla_hlo_test_generate_hlo_graph, which is now redundant.

PiperOrigin-RevId: 159755688

* Do not use SSE4.1 instructions on Android builds.

PiperOrigin-RevId: 159756104

* Add nonpublic helper `tf.distributions.util.tridiag` op.

PiperOrigin-RevId: 159757904

* [XLA] Remove dead "in-client" code.
Remove Service::runs_in_client_process_ field and it's dead user. This was
previously used by the "InProcess" methods which have been replaced with
the LocalClient API.

PiperOrigin-RevId: 159759455

* [tf contrib seq2seq] Add monotonic attention mechanisms

* Add monotonic_attention and safe_cumprod helper functions.
* Add _BaseMonotonicAttentionMechanism base class.
* Add BahdanauMonotonicAttention and LuongMonotonicAttention classes.

These attention mechanisms are proposed in
Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, Douglas Eck,
"Online and Linear-Time Attention by Enforcing Monotonic Alignments."
ICML 2017.  https://arxiv.org/abs/1704.00784

PiperOrigin-RevId: 159760073

* Add ability for argmax to output int32 indices.  Default remains int64.

Change is made in a backwards and forward compatible manner, since
we add a new attribute with a default that remains the same, and
simply register a few new kernels.

PiperOrigin-RevId: 159761347

* Automated g4 rollback of changelist 159746509

PiperOrigin-RevId: 159763112

* Raise ValueError if invalid dtype for random_uniform.

PiperOrigin-RevId: 159764956

* Internal change.

PiperOrigin-RevId: 159769520

* Support zero shapes for random_poisson. This matches random_uniform.

PiperOrigin-RevId: 159771215

* Blacklist the quantized ops since they have too many issues (incorrect shape
functions, memory corruptions, ...)

PiperOrigin-RevId: 159772801

* Fixed the shape functions of the QuantizedAdd and QuantizedMul ops

PiperOrigin-RevId: 159772841

* Switch from assigning namedtuple.__new__.__defaults__ to overwriting __new__.

Assigning __defaults__ relies on an implementation detail of CPython, confuses
type checkers (and developers :)), and is error-prone since it doesn't make the
relationship between parameter names and default values explicit.
This CL switches to overloading __new__ instead.

PiperOrigin-RevId: 159773922

* Made sure that we can call the constant folding code twice safely.

PiperOrigin-RevId: 159781607

* Added batch_matmul op dependence to android_extended_ops

PiperOrigin-RevId: 159787178

* Fixes a TODO in head_test.

PiperOrigin-RevId: 159789178

* When configuring per-session thread pools, allow
a pool to be a global pool. This allows a division
between large and small pools, without needing to make
new pool for each session.

PiperOrigin-RevId: 159789678

* Add a multi-head TensorForest estimator.

PiperOrigin-RevId: 159820487

* Have RestoreV2's shape fn set all outputs to unknown shape.

PiperOrigin-RevId: 159835723

* VectorExponential added to distributions.

PiperOrigin-RevId: 159840822

* Fold as many nodes as possible instead of giving up if there is any error.

PiperOrigin-RevId: 159841935

* Removed deprecated summary usage from estimators.
Made name_space usage consistent.

PiperOrigin-RevId: 159846928

* Adding missing license notice to toolchain build files

PiperOrigin-RevId: 159847551

* [XLA] Remove unused flags and move debugging flag to debug options.

PiperOrigin-RevId: 159849759

* Fixes some docstrings in feature_column.

PiperOrigin-RevId: 159850619

* TpuEstimator: Replicate the input_fn to the worker CPU for each shard.

The batch size is configured as follows:
The user may specify a global batch size in their hyperparameters. If the 'batch_size' field is set, then we convert the global batch size into a per-shard batch size by dividing by num_shards before running their input_fn.

PiperOrigin-RevId: 159851773

* Modify beam search decoder to use symbolic shape for vocab size if the static shape is not present.

PiperOrigin-RevId: 159852297

* Generalize cluster initialization to span multiple mini-batches if necessary.

PiperOrigin-RevId: 159852557

* Use a single threaded session for SDCALinearRegressorTest to
avoid incorrect threading test failures (tsan).

PiperOrigin-RevId: 159852818

* Migrate ops for new version of TensorForest.

PiperOrigin-RevId: 159852889

* Replaced constant inputs with variables to ensure most of the graph doesn't get
optimized away

PiperOrigin-RevId: 159853171

* For candidate sampling, add facility to colocate the logit computation with the sharded embeddings.

PiperOrigin-RevId: 159854706

* Added a utility to create parsing spec for regressors (canned estimator)

PiperOrigin-RevId: 159855254

* Fix cuda_kernel_helper_test. std::numeric_limits<int32>::max() doesn't pass, so
I didn't use that.

PiperOrigin-RevId: 159869169

* In tfcompile, prune nodes that are not reachable from the fetches before
building the Graph. This allows loading a graph that contains ops not
needed for the compiled binary.

PiperOrigin-RevId: 159869692

* Fix bugs related to distributions over integers.

- Ensure that the max number of categories does not exceed largest integer-form float.
- Make dtype inference consistent between Categorical and Multinomial
distributions.
- Improve documentation to better reflect that the Categorical
distribution is analogous to `argmax{OneHotCategorical}` (itself being
identical to `argmax{Multinomial(p,n=1)}` but not Multinomial.
- Fix validation_args Heisenberg uncertainty: only validation logic should live under self.validate_args. E.g., validate_args=True would sometimes imply `x=floor(x)` which changes behavior thus making debugging impossible because enabling validation *changes* values.
- Corrected `Geometric` swapping of validate_args` and `allow_nan_stats` default-values.

Fixes tensorflow#10149

PiperOrigin-RevId: 159872532

* Make HloModule clonable

This CL makes HloModule clonable, which is necessary when we want to run the same compilation twice with the same input.

PiperOrigin-RevId: 159874256

* Internal change.

PiperOrigin-RevId: 159876942

* Implement alternative `monte_carlo.expectation_v2`. This function implements
the reparameterization and score-gradient tricks and does not depend on
tf.Distribution like inputs.

PiperOrigin-RevId: 159877923

* In SE_ASSIGN_OR_RETURN change ConsumeValueOrDie to the preferred std::move ValueOrDie.

PiperOrigin-RevId: 159879754

* If rank is unknown, do not add output shapes to transpose nodes.

PiperOrigin-RevId: 159879840

* Move sparse_fill_empty_rows to new, *significantly* faster, C++ kernel for everyone.

Also fix a bug in the C++ op when the input ST has 0 elements.

PiperOrigin-RevId: 159880044

* Add support of label_keys to DebugClassifier

PiperOrigin-RevId: 159883986

* Register devices under their legacy names

Because some higher level APIs continue to use the legacy name format,
when using ClusterSpec propagation, we need to ensure that we register
the devices under their legacy names as well as their canonical names.

PiperOrigin-RevId: 159885777

* [BatchNorm] Minor fixes to TF doc

PiperOrigin-RevId: 159886125

* Generating TBAA metadata causes the LLVM to miscompile after
https://reviews.llvm.org/rL305938).  Disable TBAA (to stop the miscompiles)
while we fix the root issue.

PiperOrigin-RevId: 159895736

* Improve score-trick to be a valid Csiszar f-Divergence yet numerically stable.

PiperOrigin-RevId: 159896013

* Support advisor in all places (Command line, APIs)
Add expensive operation checker

PiperOrigin-RevId: 159897279

* Added canned estimators to Tensorflow library. List of added estimators:
* DNNClassifier
* DNNRegressor
* LinearClassifer
* LinearRegressor
* DNNLinearCombinedClassifier
* DNNLinearCombinedRegressor

PiperOrigin-RevId: 159898954

* Alligned how model-fns handled params among linear/dnn/combined estimators.

PiperOrigin-RevId: 159899925

* Fixed cmake tests.

PiperOrigin-RevId: 159901417

* [XLA:CPU] Add VLOGs to cpu_compiler.cc

PiperOrigin-RevId: 159902919

* Make occurence (op run times and op definition) selectable
in all views to address the loop problem.

When a node is in loop, its execution times are accumulated, its run times
will increase.

PiperOrigin-RevId: 159912429

* [XLA] Small error message improvement in binop shape inference.

PiperOrigin-RevId: 159920109

* Follow upstream API change from r306058.

PiperOrigin-RevId: 159938416

* [TF:XLA] Update LLVM to upstream revision r306085.

PiperOrigin-RevId: 159946562

* [XLA] Remove unused xla_cpu flag and move another to DebugOptions.

PiperOrigin-RevId: 159952124

* Updates linear.md tutorial

PiperOrigin-RevId: 159956867

* Add TraceMe instrumentation of RunStep in GRPC distributed runtime.
A unique ID is added to each RunStep call that allows the client and server
events to be correlated.

PiperOrigin-RevId: 159956950

* [XLA] Add general F32 implementation for ReducePrecision operation.

This only tests with parameter inputs (which is needed to ensure we actually test on GPUs as well as CPUs); there's no point in separately testing with constants.

PiperOrigin-RevId: 159961430

* Java: NativeLibrary: Fix URL in error message.

And add some detail.
Inspired by tensorflow#11015

PiperOrigin-RevId: 159962478

* Increase rtol for util_test.

PiperOrigin-RevId: 159971136

* Re-enable IR dumping for the sequential CPU backend.

PiperOrigin-RevId: 159974126

* tfdbg: a few minor fixes and improvements

* Let DumpingDebugWrapperSession and DumpingDebugHook create session_root if it doesn't exist
* Add README.md to tensorflow/python/debug
* Add section "Debugging Keras Models with TFDBG" in debugger.md

PiperOrigin-RevId: 159976070

* Add None check for save_path when restoring checkpoints as if something is wrong in tf.train.latest_checkpoint, it will often return None and it's nice to have a common sense check in restore for this. This way log.error says what has happened.

PiperOrigin-RevId: 159979481

* Don't crash if a metagraph fails to load.

PiperOrigin-RevId: 159981628

* Prepare to not include node_def.proto.h in node_def_util.h

The goal is to make kernels mostly independent of proto headers, which will let
us lock down our .so imports.  This CL makes a bunch of .cc files
either include node_def.proto.h themselves or not need the definition of
NodeDef; a second CL will make node_def_util.h not include node_def.proto.h.

RELNOTES: n/a
PiperOrigin-RevId: 159982117

* Add a few diagnostic flags to help narrow down issues with the LLVM
backends.

PiperOrigin-RevId: 159982441

* Updated wide-n-deep tutorial code to use core version of estimators and feature-columns.

PiperOrigin-RevId: 159984663

* Modify ControlFlowContext to also respect import_scope in 'values_' and keys of 'external_values_'

PiperOrigin-RevId: 159985290

* Add item's graph to partition_graphs in virtual cluster's run method.
Put node op name in timeline_label instead of node_name.

PiperOrigin-RevId: 159986583

* Use short-proto for logging purposes.

A short proto will be output on a single log line, making it
easier for certain automated tools to handle.

PiperOrigin-RevId: 159994005

* Sinh, ArcSinh, Cosh, LogCosh functions added to distributions/python/ops/trig.
Care is taken to ensure a fair bit of stability.

PiperOrigin-RevId: 159995514

* Updates some examples in examples/learn.

PiperOrigin-RevId: 159996397

* Add kernel tests for boosted_trees.

PiperOrigin-RevId: 160002696

* Avoid doing unecessary work in the OptimizeGraph() function whenever possible

PiperOrigin-RevId: 160003173

* Use std::shared_ptr instead of core::RefCounted for Node::Properties

Also changes Node::Properties to a struct and removes underscores from public member variables. This change should make it easier to work with Properties moving forward as the refcount will be automatically updated.

PiperOrigin-RevId: 160003281

* Make the CPU compiler dump optimized IR along with the unoptimized IR.

PiperOrigin-RevId: 160005257

* Disable flaky run_metadata_test.

PiperOrigin-RevId: 160015399

* BUILD cleanup in tensorflow/tools/...

PiperOrigin-RevId: 160018623

* SinhArcSinh bijector added.

This two-parameter diffeomorphism from R --> R allows for skewness and fatter
or thinner tails.  See docstring and also
http://oro.open.ac.uk/22510/1/sinhasinh.pdf

PiperOrigin-RevId: 160019380

* Avoid hardcoded names for temporary files in tests.

These tests (and examples that are run as tests) were using hardcoded names for
temporary files.  This failed when multiple copies of these tests were run in
parallel, or even successively by different users, where the second run could
not overwrite files left by the first.

This change uses the TEST_TMPDIR environment variable used by bazel's test
runner to choose a temporary directory.   If that directory is not set,
/tmp is used, as before.

PiperOrigin-RevId: 160026924

* Fix multinomial doc-string, input arg logits expects to log-probabilities and not log-odds.

PiperOrigin-RevId: 160036709

* Made TensorFlow documentation on LSTMs slightly more accurate.

PiperOrigin-RevId: 160047054

* Follow LLVM/ORC upstream API change in r306166.

PiperOrigin-RevId: 160108102

* Move resampler from sonnet to contrib.

PiperOrigin-RevId: 160134565

* [TPUEstimator] Make input_fn invoked properly with eval on CPU.

PiperOrigin-RevId: 160151890

* Deletes iris_val_based_early_stopping example, which uses deprecated ValidationMonitor.

PiperOrigin-RevId: 160154863

* [XLA] Move HLO dumping flags from service_flags to debug_options_flags

This also removes the duplication in the xla_generate_hlo_graph flag.

This CL also moves the actual dumping logic from Executable to the
hlo_graph_dumper namespace, where it belongs; this is in preparation for
removing the hlo_dumper callback altogether, since it isn't serving any role
beyond what a direct call to hlo_graph_dumper would have (b/62872831 has more
details).

PiperOrigin-RevId: 160154869

* Fix missing variable unref

Direct leak of 56 byte(s) in 1 object(s) allocated from:
    #0 0xf5ee272 in operator new(unsigned long) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0xf5ee272)
    #1 0x1b51394c in tensorflow::AssignVariableOp<Eigen::ThreadPoolDevice, float>::Compute(tensorflow::OpKernelContext*)::'lambda'(tensorflow::Var**)::operator()(tensorflow::Var**) const (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1b51394c)
    #2 0x1b5136c0 in std::_Function_handler<tensorflow::Status (tensorflow::Var**), tensorflow::AssignVariableOp<Eigen::ThreadPoolDevice, float>::Compute(tensorflow::OpKernelContext*)::'lambda'(tensorflow::Var**)>::_M_invoke(std::_Any_data const&, tensorflow::Var**) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1b5136c0)
    #3 0x1b50b289 in std::function<tensorflow::Status (tensorflow::Var**)>::operator()(tensorflow::Var**) const (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1b50b289)
    #4 0x1b50af88 in tensorflow::Status tensorflow::ResourceMgr::LookupOrCreate<tensorflow::Var>(basic_string<char, std::char_traits<char>, std::allocator<char> > const&, basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tensorflow::Var**, std::function<tensorflow::Status (tensorflow::Var**)>) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1b50af88)
    #5 0x1b50ac10 in tensorflow::Status tensorflow::LookupOrCreateResource<tensorflow::Var>(tensorflow::OpKernelContext*, tensorflow::ResourceHandle const&, tensorflow::Var**, std::function<tensorflow::Status (tensorflow::Var**)>) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1b50ac10)
    #6 0x1b512f1e in tensorflow::AssignVariableOp<Eigen::ThreadPoolDevice, float>::Compute(tensorflow::OpKernelContext*) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1b512f1e)
    #7 0x1d1881c7 in tensorflow::ThreadPoolDevice::Compute(tensorflow::OpKernel*, tensorflow::OpKernelContext*) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0x1d1881c7)
    #8 0xf96e0fe in tensorflow::KernelAndDevice::Run(std::vector<tensorflow::Tensor, std::allocator<tensorflow::Tensor> >*, std::vector<tensorflow::Tensor, std::allocator<tensorflow::Tensor> >*) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0xf96e0fe)
    #9 0xf94f9c8 in TFE_Execute (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0xf94f9c8)
    #10 0xf94356d in TFE_Py_Execute(TFE_Context*, int, char const*, tensorflow::gtl::InlinedVector<TFE_TensorHandle*, 4>*, _object*, tensorflow::gtl::InlinedVector<TFE_TensorHandle*, 2>*, TF_Status*) (/build/cas/5d2/5d2be3b530580573ff7269adcab7cbac+0xf94356d)

PiperOrigin-RevId: 160160101

* Simplify strided_slice's shape handling

Now that TensorShape and PartialTensorShape share memory representations, there's no need for an abstract class that makes TensorShape and TensorShapeProto look the same.

RELNOTES: n/a
PiperOrigin-RevId: 160161618

* Added a tool to report the static information that can be extracted from a TF model.

PiperOrigin-RevId: 160162256

* Properly handle RefEnter, RefExit and RefNextIteration nodes.

PiperOrigin-RevId: 160162338

* Switch tfprof to use proto3

PiperOrigin-RevId: 160163483

* Fixes to cuda_config.h.

PiperOrigin-RevId: 160168545

* Update ops-related pbtxt files.

PiperOrigin-RevId: 160171187

* Adds notes to prevent overfitting for Experiment continuous_train_and_eval.

PiperOrigin-RevId: 160172692

* Go: Update generated wrapper functions for TensorFlow ops.

PiperOrigin-RevId: 160172985

* Merge changes from github.
END_PUBLIC

Note: this CL will break builds.  cl/159887762 to follow to fix all the breakages.

---
Commit 2336cdf authored by Maxwell Paul Brickner<mbrickn@users.noreply.github.com>
Committed by gunan<gunan@google.com>:
Updated link to use HTTPS (tensorflow#10998)

Howdy!

I just updated a link to use https instead of http.

Thanks!
---
Commit ad0892d authored by Luke Iwanski<luke@codeplay.com>
Committed by Luke Iwanski<luke@codeplay.com>:
[OpenCL] Fixes run_metadata_test for SYCL

 This test is designed to test CUDA specific behavior

---
Commit 6b37a07 authored by Todd Wang<toddwang@gmail.com>
Committed by GitHub<noreply@github.com>:
Update comments
---
Commit 1699d90 authored by John Lawson<john@codeplay.com>
Committed by Luke Iwanski<luke@codeplay.com>:
[OpenCL] Fixes CUDA specific test run on SYCL (tensorflow#56)

The testBadParentValuesOnGPU should only be run on CUDA devices, as the
test checks for particular CUDA behaviour. We don't actually provide a
SYCL kernel for GatherTree and so it's not a problem that the tests
don't target SYCL.
---
Commit 3c19462 authored by myPrecious<Moriadry@users.noreply.github.com>
Committed by Shanqing Cai<cais@google.com>:
Java API to get the size of specified input list of operations. (tensorflow#10865)

* Java API to get the size of specified input list of operations

* remove unnecessary explain to avoid bring a new term to users.

---
Commit e911c74 authored by Luke Iwanski<luke@codeplay.com>
Committed by Luke Iwanski<luke@codeplay.com>:
[OpenCL] REGISTER -> REGISTER6

---
Commit fbf6c4c authored by superryanguo<superryanguo@gmail.com>
Committed by superryanguo<superryanguo@gmail.com>:
Simplify the Quickstart section; using the weblink is better

---
Commit 72e2918 authored by Taehoon Lee<taehoonlee@snu.ac.kr>
Committed by Taehoon Lee<taehoonlee@snu.ac.kr>:
Fix typos

---
Commit 90c4406 authored by Rishabh Patel<patelrishabh@users.noreply.github.com>
Committed by GitHub<noreply@github.com>:
Correct the learning rate as per the code snippet
---
Commit 03da611 authored by Todd Wang<toddwang@gmail.com>
Committed by GitHub<noreply@github.com>:
Update ir_array.cc
---
Commit 2df6cd3 authored by Todd Wang<toddwang@gmail.com>
Committed by GitHub<noreply@github.com>:
Another try
---
Commit af0cbac authored by Luke Iwanski<luke@codeplay.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
[OpenCL] Transpose to go through Eigen (tensorflow#10321)

---
Commit fc73610 authored by Luke Iwanski<luke@codeplay.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
[OpenCL] Registers RGBToHSV and HSVToRGB (tensorflow#91) (tensorflow#10848)

* [OpenCL] Added RGBToHSV and HSVToRGB

* Aligning '\'
---
Commit 832894e authored by Luke Iwanski<luke@codeplay.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
[OpenCL] Registers AdjustContrastv2 (tensorflow#10949)

* [OpenCL] Registers AdjustContrastv2 (tensorflow#93)

* [OpenCL] Extended adjust_contrast_op_benchmark_test for OpenCL (tensorflow#96)

* [OpenCL] Extended adjust_contrast_op_benchmark_test for OpenCL

* simplified to #ifndef

* Changed to "#if GOOGLE_CUDA"

* Update adjust_contrast_op_benchmark_test.cc

* Added comments

---
Commit cb4c2f8 authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Make TransferBufferToInFeed not virtual so it compiles.

---
Commit e89f04d authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Fix calling Literal member functions.

---
Commit 15a8df7 authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Fix mac build
clone from meheff's change:
[XLA] Change return type of DeviceAssignment::Deserialize to fix build
breakage on mac.
The mac build had the following error:

error: incomplete type 'xla::DeviceAssignment' used in type trait
expression

This was due to a static method returning a StatusOr<DeviceAssignment>
inside of the definition of DeviceAssignment.

---
Commit a54d43f authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Replace LiteralUtil to Literal in compiler/plugin/executor

---
Commit 88a6bb8 authored by Guenther Schmuelling<guschmue@microsoft.com>
Committed by Guenther Schmuelling<guschmue@microsoft.com>:
expand inline for debug builds to limit number of symbols

---
Commit 62fb49d authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Fix visibility error for contrib/remote_fused_graph/pylib/BUILD.

---
Commit 4c75252 authored by Mark Neumann<markn@allenai.org>
Committed by Mark Neumann<markn@allenai.org>:
fix initial test values to avoid numerical instability

---
Commit b58d983 authored by sj6077<epik03sj@gmail.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
Fixes of AutoParallel bug (tensorflow#10368)

* Fix the bug that auto_parallel could replicate variable snapshot name

* Use NodeName in grappler:utils instead of substr, convert variables->variable_def of grappler item

* remove variable_def from grappler item, exclude snapshot nodes from dont_replicate_nodes in auto_parallel

---
Commit a286b7d authored by Yifei Feng<yifeif@google.com>
Committed by Yifei Feng<yifeif@google.com>:
Make debug_test slice integer.

---
Commit 97fcfdf authored by Toby Boyd<tobyboyd@google.com>
Committed by GitHub<noreply@github.com>:
Fixed path to seq2seq.py and minor formatting
---
Commit 63c1bef authored by Anish Shah<shah.anish07@gmail.com>
Committed by Anish Shah<shah.anish07@gmail.com>:
Improve docs for tf.nn.depthwise_conv2d_native

---
Commit 8d42202 authored by Yong Tang<yong.tang.github@outlook.com>
Committed by Yong Tang<yong.tang.github@outlook.com>:
Fix mismatched delete in mkl_tfconv_op.cc

This fix fixes mismatched new[]-delete in mkl_tfconv_op.cc

(the file went through clang-format so there are some additional
changes)

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>

---
Commit 26301bd authored by Danny Goodman<goodman.danny@gmail.com>
Committed by Danny Goodman<goodman.danny@gmail.com>:
fix error format

---
Commit b3f33ad authored by Yao Zhang<yaozhang@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Make changes to prepare for the fused option of batch norm to be set to None (None means using fused batch norm if possible).
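
A tiny sketch of the intended three-valued semantics; the function below is illustrative, not the real implementation:

    def resolve_fused(fused, fused_supported):
      # fused=None means "use the fused kernel whenever the layer supports it";
      # True/False remain explicit overrides.
      return fused_supported if fused is None else fused

    print(resolve_fused(None, True))   # True  -> use fused batch norm
    print(resolve_fused(False, True))  # False -> explicitly disabled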

PiperOrigin-RevId: 159649743

---
Commit a4a4698 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Add tests for select ops and while loops that produce tuples that contain predicates.

PiperOrigin-RevId: 159645900

---
Commit 980d3f2 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Use C API to implement Operation.name property

This name property is used in many existing tests including those that
already run with C API enabled (math_ops_test, framework_ops_test,
session_test, session_partial_run_test, math_ops_test_gpu, etc).

PiperOrigin-RevId: 159645767

---
Commit 26239c7 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Previously we didn't have an implementation of BatchNormInference and BatchNormTraining, which caused a linker error if anyone ever tried to call them. A dummy implementation is friendlier than a linker error.

PiperOrigin-RevId: 159645612

---
Commit f671c5c authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
BEGIN_PUBLIC
Automated g4 rollback of changelist 159570549

PiperOrigin-RevId: 160182040

* Update ops-related pbtxt files.

PiperOrigin-RevId: 160183349

* Merge changes from github followup.

PiperOrigin-RevId: 160183498

* Automated g4 rollback of changelist 160183498

PiperOrigin-RevId: 160189134

* Automated g4 rollback of changelist 160182040

PiperOrigin-RevId: 160190881

* [XLA] Disallow fuse X into Y if there are paths from X to Y which don't fuse

Just because X can fuse into all of its consumers does not mean that those
consumers can fuse into anything. Depending on the structure of the graph, this
can either result in no performance win at all or, in the case of recurrent
networks, a big performance deficit.

PiperOrigin-RevId: 160194058

* First draft of Tensors segment of the programmer's guide.

PiperOrigin-RevId: 160196550

* First draft of variables unit of programmer's guide.

PiperOrigin-RevId: 160196566

* Make xla::Literal moveable.

PiperOrigin-RevId: 160197273

* Automated g4 rollback of changelist 159897279

PiperOrigin-RevId: 160198598

* Updates text_classification example.

PiperOrigin-RevId: 160200457

* Fix backward compatibility test broken by rollback.

PiperOrigin-RevId: 160222187

* Support advisor in all places (Command line, APIs)
Add expensive operation checker

PiperOrigin-RevId: 160222348

* [XLA] Simplify the fusion heuristic

We had two different aspects of the fusion heuristic:
- Don't fuse a producer into a consumer if there exists a path from the
  producer to the consumer which cannot be fused.
- Don't fuse a producer into a consumer if any consumer of the producer cannot
  fuse.

These can be combined into one, simpler, heuristic.

PiperOrigin-RevId: 160222771

* Automated g4 rollback of changelist 160196566

PiperOrigin-RevId: 160222930

* Automated g4 rollback of changelist 160196550

PiperOrigin-RevId: 160222942

* Lets the HParams parser also accept True and False as inputs, since that's how Python prints booleans.
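
A hedged example of the behavior this enables, written against the contrib HParams API of this era (module path and default values are assumptions):

    import tensorflow as tf

    hparams = tf.contrib.training.HParams(use_dropout=False, learning_rate=0.1)
    # 'True'/'False' as printed by Python are now accepted, not just 'true'/'false'.
    hparams.parse("use_dropout=True,learning_rate=0.05")
    print(hparams.use_dropout, hparams.learning_rate)  # True 0.05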

PiperOrigin-RevId: 160234658

* Automated g4 rollback of changelist 155070869

PiperOrigin-RevId: 160249526

* [TF:XLA] Inline the sigmoid operation instead of mapping it elementwise.

PiperOrigin-RevId: 160274436

* Make sure all convolution tests are testing non-trivial cases, i.e. cases where not all inputs are 0; an all-0 output masks most possible bugs.
We do not check-fail on 0-sized dimensions, as tests for these special cases exist.

PiperOrigin-RevId: 160274593

* Explicitly use "dns" URI scheme when using DNS names or literal IP
addresses with gRPC.  This avoids problems in environments in which the
default URI scheme is something other than "dns".
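
For illustration only, here is what spelling the scheme out looks like for a generic gRPC client; the host and port are placeholders, not TensorFlow internals:

    import grpc

    # "dns:///" pins gRPC to its DNS resolver even when the environment's
    # default resolver scheme has been overridden.
    channel = grpc.insecure_channel("dns:///worker0.example.com:2222")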

PiperOrigin-RevId: 160276862

* Add RWSE (root weighted squared error) to the WALS estimator.
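
For reference, one natural reading of the metric (hedged: the exact weighting used by the estimator is not spelled out in this log) is

    RWSE = \sqrt{ \frac{\sum_{(i,j)} w_{ij} (A_{ij} - \hat{A}_{ij})^2}{\sum_{(i,j)} w_{ij}} }

i.e. a weighted mean squared reconstruction error under a square root.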

PiperOrigin-RevId: 160276937

* Don't include node_def.proto.h in node_def_util.h

The goal is to make kernels mostly independent of proto headers, which will let us lock down our .so imports.

RELNOTES: n/a
PiperOrigin-RevId: 160278032

* [XLA] Add tuple support to Literal::CreateFromShape.

PiperOrigin-RevId: 160278561

* Updates some more examples in examples/learn.

PiperOrigin-RevId: 160278757

* Automated g4 rollback of changelist 160278032

PiperOrigin-RevId: 160280961

* Fixed a bug where Estimator did not make a deep copy of params in its constructor.
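
A minimal sketch of the fix, with class and attribute names kept illustrative:

    import copy

    class Estimator(object):
      def __init__(self, model_fn, params=None):
        self._model_fn = model_fn
        # Deep-copy so later mutation of the caller's dict cannot silently
        # change the estimator's hyperparameters.
        self._params = copy.deepcopy(params) if params is not None else {}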

PiperOrigin-RevId: 160281247

* Clean out the config and params in TPUEstimator.

PiperOrigin-RevId: 160281507

* [XLA] Remove the "hlo dumper" parameter of xla::Compiler and its piping.

This dumper is no longer necessary since the restructuring of HLO dumping and
the addition of MaybeDumpHloModule which heeds to the right flags. The
remaining bits didn't have additional functionality, but constituted a lot of
boilerplate that has to be propagated throughout the backends.

PiperOrigin-RevId: 160281798

* [TF:XLA] Refactor the sigmoid op as a rescaled tanh.
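
The standard identity presumably behind the rescaling:

    \sigma(x) = \frac{1}{1 + e^{-x}} = \frac{1}{2}\left(\tanh\left(\frac{x}{2}\right) + 1\right)

so a sigmoid can be emitted as a scaled and shifted tanh.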

PiperOrigin-RevId: 160282472

* Fix uninitialized values in TensorForest code.

PiperOrigin-RevId: 160284420

* [TF:XLA] Update Tensorflow LLVM release to upstream r306370.

Fix broken XLA build.

PiperOrigin-RevId: 160284588

* tfdbg example: fix --tensor_size issue in debug_fibonacci

PiperOrigin-RevId: 160290541

* [SE] ThenConvolveWithAlgorithm vlogs algorithm configs.

PiperOrigin-RevId: 160292762

* Fix documentation of Estimator class (invalid quotes).

PiperOrigin-RevId: 160292803

* Shrink the test size to avoid OOM error on old GPUs.

PiperOrigin-RevId: 160292834

* [TF:XLA] Reject operators with resource outputs on CPU and GPU devices.

We were checking for resource inputs but not resource outputs, which led to accidental fusion of some TensorArray ops on CPU and GPU.

PiperOrigin-RevId: 160294302

* Add a functionality of remote fused graph transformation to fuse graphs by op type

PiperOrigin-RevId: 160300039

* Cudnn compatible LSTMCell and LSTMBlockCell

PiperOrigin-RevId: 160300668

* [XLA] Remove "operand" argument from HandleReducePrecision.

PiperOrigin-RevId: 160301461

* Added more reduce window tests.

PiperOrigin-RevId: 160301509

* Updates more text classification examples in examples/learn.

PiperOrigin-RevId: 160305131

* Use C API to implement Operation._output_types

This change first converts the _output_types member to a property and
then implements it using C API if it is enabled.
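
A rough sketch of the property-based switch described above; the toggle and the C-API-backed path are stand-ins, not the real TensorFlow internals:

    class Operation(object):
      _use_c_api = False  # stand-in for the real feature toggle

      def __init__(self, output_types):
        self._output_types_py = list(output_types)

      @property
      def _output_types(self):
        if self._use_c_api:
          return self._output_types_from_c_api()  # stand-in for the C API path
        return self._output_types_py

      def _output_types_from_c_api(self):
        raise NotImplementedError("not modeled in this sketch")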

PiperOrigin-RevId: 160306227

* Add more tests for BatchNormTraining.
RELNOTES: n/a

PiperOrigin-RevId: 160307959

* Update path to print_selective_registration_header.py in comment

PiperOrigin-RevId: 160308173

* Migrate TensorForest v4 python to contrib.

PiperOrigin-RevId: 160308805

* Automated g4 rollback of changelist 159454657

PiperOrigin-RevId: 160314706

* TESTFIX:  distributions:trig_test wasn't passing in ASAN mode.

PiperOrigin-RevId: 160315597

* tfdbg doc: fixes and improvements

PiperOrigin-RevId: 160318411

* Add a time estimation to HloCostAnalysis and represent properties as a map so that adding more properties will be easier, e.g. in a sub-class.

PiperOrigin-RevId: 160318494

* tfdbg: revert dns:/// prefix in gRPC mode

PiperOrigin-RevId: 160319348

* Moves TensorCApi from c_api.cc to c_api_internal.h, where it can be used
by other code that requires access to the underlying TensorBuffers.

PiperOrigin-RevId: 160323362

* Readd the new tensors and variables documents, with tests passing.

PiperOrigin-RevId: 160324191

* Make ResourceHandle not be a proto

I'm trying to make core/kernels independent of protos.  Currently the dtype ResourceHandle is itself a proto.  After this CL, ResourceHandle is a normal C++ type which gets converted to/from ResourceHandleProto at (de)serialization time.

RELNOTES: n/a
PiperOrigin-RevId: 160329002

* Minor cleanup: remove unused dependencies and inclusions

PiperOrigin-RevId: 160334030

* Add name_scopes to mnist_deep.py for a cleaner graph layout.
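
The kind of grouping being added, sketched with a hypothetical first convolution layer (the actual scopes in mnist_deep.py may differ):

    import tensorflow as tf

    with tf.name_scope('conv1'):
      # Ops created here are grouped under a single "conv1" node in
      # TensorBoard's graph view.
      weights = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1), name='weights')
      biases = tf.Variable(tf.constant(0.1, shape=[32]), name='biases')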

PiperOrigin-RevId: 160338775

* Add note about `tf.test.mock` to docs for `tf.test`

PiperOrigin-RevId: 160338811

* Internal change.

PiperOrigin-RevId: 160339087

* Fix bugs in ScatterNd and add ScatterNdNonAliasingAdd.

tf.scatter_nd_non_aliasing_add acts similarly to tf.scatter_nd_add but
works on non-ref objects (i.e., Tensors -- not Variables).  This means
it has a gradient with respect to the primary input as well as the
updates.  It does its best to avoid making extra copies of the input.
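
A small example against the TF 1.x API, with values chosen purely for illustration:

    import tensorflow as tf

    x = tf.constant([1.0, 2.0, 3.0, 4.0])
    indices = tf.constant([[0], [2]])
    updates = tf.constant([10.0, 20.0])
    # Unlike tf.scatter_nd_add, x is a Tensor (not a Variable), and gradients
    # flow to both x and updates.
    y = tf.scatter_nd_non_aliasing_add(x, indices, updates)

    with tf.Session() as sess:
      print(sess.run(y))  # [11.  2. 23.  4.]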

PiperOrigin-RevId: 160339328

* Update ops-related pbtxt files.

PiperOrigin-RevId: 160340888

* Add checkpoint conversion for models that use the attention mechanism implemented in tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py.

PiperOrigin-RevId: 160340994

* Go: Update generated wrapper functions for TensorFlow ops.

PiperOrigin-RevId: 160341769

* Merge changes from github.

PiperOrigin-RevId: 160344052

* Update ops-related pbtxt files.

PiperOrigin-RevId: 160346151

* Load py_test in tensorflow/contrib/boosted_trees/BUILD to fix pip test
visibility failures.

* Disable boosted_trees tests on mac while they are being debugged.