Merge pull request tensorflow#1505 from ROCmSoftwarePlatform/develop-upstream-sync-211206

Develop upstream sync 211206
deven-amd committed Dec 6, 2021
2 parents 191d2c7 + b3d2b2f commit a1c5181
Showing 788 changed files with 31,934 additions and 11,151 deletions.
12 changes: 12 additions & 0 deletions .bazelrc
@@ -594,6 +594,12 @@ build:release_cpu_linux --config=avx_linux
build:release_cpu_linux --crosstool_top="@ubuntu18.04-gcc7_manylinux2010-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain"
test:release_cpu_linux --test_env=LD_LIBRARY_PATH

# manylinux2014 config for cpu
build:release_cpu_linux_manylinux2014 --config=release_base
build:release_cpu_linux_manylinux2014 --config=avx_linux
build:release_cpu_linux_manylinux2014 --crosstool_top="@ubuntu18.04-gcc8_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain"
test:release_cpu_linux_manylinux2014 --test_env=LD_LIBRARY_PATH

build:release_cpu_macos --config=release_base
build:release_cpu_macos --config=avx_linux

@@ -616,6 +622,12 @@ build:release_gpu_linux_11_4 --action_env=TF_CUDA_VERSION="11.4"
build:release_gpu_linux_11_4 --action_env=TF_CUDNN_VERSION="8.2"
build:release_gpu_linux_11_4 --crosstool_top=@ubuntu18.04-gcc7_manylinux2010-cuda11.4-cudnn8.2-tensorrt7.2_config_cuda//crosstool:toolchain

# manylinux2014 config for gpu
build:release_gpu_linux_manylinux2014 --config=release_gpu_linux
build:release_gpu_linux_manylinux2014 --action_env=GCC_HOST_COMPILER_PATH="/dt8/usr/bin/gcc"
build:release_gpu_linux_manylinux2014 --crosstool_top=@ubuntu18.04-gcc8_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain


build:release_cpu_windows --config=release_base
build:release_cpu_windows --config=avx_win
build:release_cpu_windows --define=no_tensorflow_py_deps=true
2 changes: 1 addition & 1 deletion .bazelversion
@@ -1 +1 @@
3.7.2
4.2.1
50 changes: 0 additions & 50 deletions ACKNOWLEDGMENTS

This file was deleted.

83 changes: 25 additions & 58 deletions LICENSE
@@ -200,31 +200,27 @@
See the License for the specific language governing permissions and
limitations under the License.

------------------
Files: third_party/compute_library/...

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

------------------
Files: ACKNOWLEDGEMENTS
## Some of TensorFlow's code is derived from Caffe, which is subject to the following copyright notice:

COPYRIGHT

All contributions by the University of California:

Copyright (c) 2014, The Regents of the University of California (Regents)
All rights reserved.

All other contributions:

Copyright (c) 2014, the respective contributors
All rights reserved.

Caffe uses a shared copyright model: each contributor holds copyright over
their contributions to Caffe. The project versioning records all such
contribution and copyright details. If a contributor wants to further mark
their specific copyright on a particular contribution, they should indicate
their copyright solely in the commit message of the change when it is
committed.

LICENSE

Redistribution and use in source and binary forms, with or without
@@ -248,37 +244,8 @@ modification, are permitted provided that the following conditions are met:
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

------------------
Files: third_party/hexagon
CONTRIBUTION AGREEMENT

Copyright (c) 2016-2019, The Linux Foundation. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted (subject to the limitations in the
disclaimer below) provided that the following conditions are met:

* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.

* Neither the name of The Linux Foundation nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.

NO EXPRESS OR IMPLIED LICENSES TO ANY PARTY'S PATENT RIGHTS ARE
GRANTED BY THIS LICENSE. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT
HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
By contributing to the BVLC/caffe repository through pull-request, comment,
or otherwise, the contributor releases their content to the
license and copyright terms herein.
23 changes: 20 additions & 3 deletions RELEASE.md
@@ -16,9 +16,12 @@
# Major Features and Improvements

* `tf.lite`:
* Where operation support is added for these data types
'int32/uint32/int8/uint8/int64'
* Add builtin support for `Bucketize` op on CPU.
* Added TFLite builtin op support for the following TF ops:
* `tf.raw_ops.Bucketize` op on CPU.
* `tf.where` op for data types `tf.int32`/`tf.uint32`/`tf.int8`/`tf.uint8`/`tf.int64`.
* `tf.random.normal` op for output data type `tf.float32` on CPU.
* `tf.random.uniform` op for output data type `tf.float32` on CPU.
* `tf.random.categorical` op for output data type `tf.int64` on CPU.
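For illustration only (not part of this commit's diff), a minimal conversion sketch that exercises one of the newly supported builtins; the boundaries and tensor shapes are arbitrary, and exact converter behavior may differ by version:

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec([8], tf.float32)])
def bucketize(x):
  # Bucketize is one of the ops that now lowers to a TFLite builtin on CPU.
  return tf.raw_ops.Bucketize(input=x, boundaries=[0.0, 10.0, 100.0])

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [bucketize.get_concrete_function()])
tflite_model = converter.convert()  # Bucketize should no longer need Flex/custom ops.
```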
* `tensorflow.experimental.tensorrt`:

* `conversion_params` is now deprecated inside `TrtGraphConverterV2` in
@@ -29,6 +32,16 @@
`.save()` function inside `TrtGraphConverterV2`. When `False`, the
`.save()` function won't save any TRT engines that have been built. When
`True` (default), the original behavior is preserved.
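For context on the `TrtGraphConverterV2` notes above, a hedged usage sketch (paths are placeholders, and the new boolean argument to `.save()` is only named in the folded part of the hunk, so it is not spelled out here):

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Assumes a SavedModel already exists at this placeholder path.
converter = trt.TrtGraphConverterV2(input_saved_model_dir="/tmp/saved_model")
converter.convert()                     # build the TRT-converted function
converter.save("/tmp/trt_saved_model")  # the note above adds a flag here controlling
                                        # whether built TRT engines are written out
```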
* `tf.tpu.experimental.embedding`:
* `tf.tpu.experimental.embedding.FeatureConfig` now takes an additional
argument `output_shape` which can specify the shape of the output
activation for the feature.
* `tf.tpu.experimental.embedding.TPUEmbedding` now matches the behavior of
`tf.tpu.experimental.embedding.serving_embedding_lookup`: both accept dense and
sparse tensors of arbitrary rank. For ragged tensors the input must still be
rank 2, but the activations can now be rank 2 or higher by specifying the
output shape in the feature config or via the build method.
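To make the `output_shape` addition concrete, a minimal sketch (the configuration values are illustrative only, and the surrounding `TPUEmbedding`/optimizer setup is omitted):

```python
import tensorflow as tf

table = tf.tpu.experimental.embedding.TableConfig(
    vocabulary_size=1000,  # illustrative values
    dim=8)

# The new `output_shape` argument specifies the shape of the output activation
# for this feature, per the release note above.
feature = tf.tpu.experimental.embedding.FeatureConfig(
    table=table,
    output_shape=[16, 4])
```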

* <INSERT MAJOR FEATURE HERE, USING MARKDOWN SYNTAX>

@@ -42,6 +55,9 @@
* `tf.data`:
* The `parallel_batch` optimization is now enabled by default unless disabled
by users; it parallelizes the copying of batch elements.
* Added the ability for `TensorSliceDataset` to identify and handle inputs
that are files. This enables creating hermetic SavedModels when using
datasets created from files.
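As an illustration of the `TensorSliceDataset` note above (file names below are placeholders, not from this commit):

```python
import tensorflow as tf

# from_tensor_slices builds a TensorSliceDataset; per the note above it can now
# recognize that its inputs are files, which helps produce hermetic SavedModels.
files = tf.data.Dataset.from_tensor_slices(
    ["data/part-00000.tfrecord", "data/part-00001.tfrecord"])  # placeholder paths
records = tf.data.TFRecordDataset(files)

# Copying of batch elements is parallelized by the now-default `parallel_batch`
# optimization; no explicit opt-in should be needed.
batches = records.batch(32)
```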

* `tf.lite`:
* GPU
@@ -161,6 +177,7 @@ This release contains contributions from many people at Google, as well as:
* `tf.lite`:
* Add experimental API `experimental_from_jax` to support conversion from Jax models to TensorFlow Lite.
* Support uint32 data type for cast op.
* Support int8 data type for cast op.
* Add experimental quantization debugger `tf.lite.QuantizationDebugger`
* Add lite.experimental.authoring.compatible API
* A Python decorator to provide a way to check TFLite compatibility
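A small sketch of the `lite.experimental.authoring.compatible` decorator mentioned above; `tf.cosh` is used only as an example of a TF op without a TFLite builtin, and exact diagnostics may vary by version:

```python
import tensorflow as tf

@tf.lite.experimental.authoring.compatible
@tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
def f(x):
  # tf.cosh has no TFLite builtin, so the decorator flags a compatibility issue.
  return tf.cosh(x)

f(tf.constant([0.0, 1.0]))  # logs TFLite compatibility warnings for Cosh
```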
2 changes: 1 addition & 1 deletion configure.py
@@ -45,7 +45,7 @@
_TF_WORKSPACE_ROOT = ''
_TF_BAZELRC = ''
_TF_CURRENT_BAZEL_VERSION = None
_TF_MIN_BAZEL_VERSION = '3.7.2'
_TF_MIN_BAZEL_VERSION = '4.2.1'
_TF_MAX_BAZEL_VERSION = '4.99.0'

NCCL_LIB_PATHS = [
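A hypothetical sketch of how min/max bounds like those above can gate a Bazel binary; this is illustrative only and not the actual `configure.py` implementation:

```python
def bazel_version_ok(version, min_version="4.2.1", max_version="4.99.0"):
  """Returns True if `version` falls within the supported Bazel range."""
  def as_tuple(v):
    return tuple(int(part) for part in v.split("."))
  return as_tuple(min_version) <= as_tuple(version) <= as_tuple(max_version)

assert bazel_version_ok("4.2.1")
assert not bazel_version_ok("3.7.2")  # below the new minimum
```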
1 change: 0 additions & 1 deletion tensorflow/BUILD
@@ -46,7 +46,6 @@ licenses(["notice"])

exports_files([
"LICENSE",
"ACKNOWLEDGMENTS",
# The leakr files are used by //third_party/cloud_tpu and
# //third_party/tensorboard/google:copybara_config_test.
"leakr_badwords.dic",
2 changes: 1 addition & 1 deletion tensorflow/c/eager/abstract_context.h
@@ -42,7 +42,7 @@ class AbstractContext {
// Release any underlying resources, including the interface object.
//
// WARNING: The destructor of this class is marked as protected to disallow
// clients from directly destroying this object since it may manage it's own
// clients from directly destroying this object since it may manage its own
// clients from directly destroying this object since it may manage its own
// lifetime through ref counting. Thus clients MUST call Release() in order to
// destroy an instance of this class.
virtual void Release() = 0;
6 changes: 3 additions & 3 deletions tensorflow/c/eager/c_api.h
@@ -119,7 +119,7 @@ TF_CAPI_EXPORT extern TFE_ContextDevicePlacementPolicy
TFE_ContextGetDevicePlacementPolicy(TFE_Context* ctx);

// A tensorflow.ServerDef specifies remote workers (in addition to the current
// workers name). Operations created on this context can then be executed on
// workers name). Operations created in this context can then be executed on
// any of these remote workers by setting an appropriate device.
//
// If the following is set, all servers identified by the
@@ -134,7 +134,7 @@ TF_CAPI_EXPORT extern void TFE_ContextSetServerDef(TFE_Context* ctx,
//
// Like a TF_Tensor, a TFE_TensorHandle refers to a tensor with a value, shape,
// type etc. Unlike a TF_Tensor, a TFE_TensorHandle may refer to such tensors
// placed in memory of different devices or remote address spaces.
// placed in the memory of different devices or remote address spaces.
typedef struct TFE_TensorHandle TFE_TensorHandle;

TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_NewTensorHandle(const TF_Tensor* t,
@@ -442,7 +442,7 @@ TF_CAPI_EXPORT extern void TFE_ContextStartStep(TFE_Context* ctx);

// Ends a step. When there is no active step (that is, every started step has
// been ended) step containers will be cleared. Note: it is not safe to call
// TFE_ContextEndStep while ops which rely on the step container may be running.
// TFE_ContextEndStep while ops that rely on the step container may be running.
TF_CAPI_EXPORT extern void TFE_ContextEndStep(TFE_Context* ctx);

#ifdef __cplusplus
2 changes: 1 addition & 1 deletion tensorflow/c/eager/c_api_distributed_test.cc
@@ -161,7 +161,7 @@ void TestFunctionWithPackedInput(const bool remote) {
TFE_TensorHandle* h1 = TestVariable(ctx, 2.0, task2_name);
TFE_TensorHandle* h2 = TestVariable(ctx, 3.0, task0_name);

// Add a sync point in order to make sure that variables have been initialized
// Add a sync point to make sure that variables have been initialized
// before the function execution starts.
TFE_ContextAsyncWait(ctx, status);
EXPECT_EQ(TF_OK, TF_GetCode(status)) << TF_Message(status);
24 changes: 12 additions & 12 deletions tensorflow/c/eager/c_api_experimental.h
@@ -140,7 +140,7 @@ TFE_MonitoringGetCellIntGauge2(TFE_MonitoringIntGauge2* gauge,
typedef struct TFE_MonitoringStringGaugeCell TFE_MonitoringStringGaugeCell;
TF_CAPI_EXPORT extern void TFE_MonitoringStringGaugeCellSet(
TFE_MonitoringStringGaugeCell* cell, const char* value);
// Retrieves the string value and saves it in buffer.
// Retrieves the string value and saves it in the buffer.
TF_CAPI_EXPORT extern const void TFE_MonitoringStringGaugeCellValue(
TFE_MonitoringStringGaugeCell* cell, TF_Buffer* buf);

Expand Down Expand Up @@ -248,7 +248,7 @@ TF_CAPI_EXPORT extern void TFE_MonitoringSamplerCellAdd(
TFE_MonitoringSamplerCell* cell, double value);

// Retrieves the current value of the cell. The return value is a HistogramProto
// saved in buffer.
// saved in the buffer.
TF_CAPI_EXPORT extern void TFE_MonitoringSamplerCellValue(
TFE_MonitoringSamplerCell* cell, TF_Buffer* buf);

Expand Down Expand Up @@ -353,7 +353,7 @@ TF_CAPI_EXPORT extern bool TFE_ExecutorIsAsync(TFE_Executor*);
TF_CAPI_EXPORT extern void TFE_ExecutorWaitForAllPendingNodes(
TFE_Executor*, TF_Status* status);

// When an error happens, any pending operations are discarded and newly issued
// When an error happens, any pending operations are discarded, and newly issued
// ops return an error. This call clears the error state and re-enables
// execution of newly issued ops.
//
@@ -362,12 +362,12 @@ TF_CAPI_EXPORT extern void TFE_ExecutorWaitForAllPendingNodes(
// TODO(agarwal): mark the affected handles and raise errors if they are used.
TF_CAPI_EXPORT extern void TFE_ExecutorClearError(TFE_Executor*);

// Sets a custom Executor for current thread. All nodes created by this thread
// will be added to this Executor. It will override current executor.
// Sets a custom Executor for the current thread. All nodes created by this
// thread will be added to this Executor. It will override the current executor.
TF_CAPI_EXPORT extern void TFE_ContextSetExecutorForThread(TFE_Context*,
TFE_Executor*);

// Returns the Executor for current thread.
// Returns the Executor for the current thread.
TF_CAPI_EXPORT extern TFE_Executor* TFE_ContextGetExecutorForThread(
TFE_Context*);

@@ -376,7 +376,7 @@ TF_CAPI_EXPORT extern TFE_Executor* TFE_ContextGetExecutorForThread(

// Update an existing context with a new set of servers defined in a ServerDef
// proto. Servers can be added to and removed from the list of remote workers
// in the context. New set of servers identified by the ServerDef must be up
// in the context. A New set of servers identified by the ServerDef must be up
// when the context is updated.
//
// This API is for experimental usage and may be subject to change.
@@ -527,8 +527,8 @@ typedef struct TFE_CustomDevice {
// names of wrapped devices.
//
// There are currently no graph semantics implemented for registered custom
// devices, so executing tf.functions which contain operations placed on custom
// devices will fail.
// devices, so executing tf.functions which contain operations placed on the
// custom devices will fail.
//
// `device_name` must not name an existing physical or custom device. It must
// follow the format:
@@ -646,8 +646,8 @@ TF_CAPI_EXPORT extern int TFE_TensorHandleDeviceID(TFE_TensorHandle* h,
TF_Status* status);

// Returns the status for the tensor handle. In TFRT, a tensor handle can carry
// error info if error happens. If so, status will be set with the error info.
// If not, status will be set as OK.
// error info if error happens. If so, the status will be set with the error
// info. If not, status will be set as OK.
TF_CAPI_EXPORT extern void TFE_TensorHandleGetStatus(TFE_TensorHandle* h,
TF_Status* status);

Expand All @@ -673,7 +673,7 @@ TF_CAPI_EXPORT extern void TFE_SetLogicalCpuDevices(TFE_Context* ctx,
// setting the same key will lead to errors.
//
// Note that the key-values are only expected to be used for cluster
// configuration data, and should not be used for storing large amount of data
// configuration data, and should not be used for storing a large amount of data
// or being accessed very frequently.
TF_CAPI_EXPORT extern void TFE_InsertConfigKeyValue(TFE_Context* ctx,
const char* key,
