NFC - minor spelling tweaks in documents #37852

Merged 1 commit on Mar 25, 2020

2 changes: 1 addition & 1 deletion tensorflow/compiler/mlir/lite/ir/tfl_ops.td
@@ -3397,7 +3397,7 @@ def TFL_BidirectionalSequenceLSTMOp :
let summary = "Bidirectional sequence lstm operator";

let description = [{
- Bidirectional lstm is essentiallay two lstms, one running forward & the
+ Bidirectional lstm is essentially two lstms, one running forward & the
other running backward. And the output is the concatenation of the two
lstms.
}];
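
For readers unfamiliar with the op, the description above can be illustrated with a minimal Python sketch (an illustration only, not part of this diff; it uses the tf.keras API rather than the TFLite op itself):

```python
import tensorflow as tf

# Toy input: [batch, time, features].
inputs = tf.random.normal([1, 10, 8])

# Two LSTMs, one running forward and one backward, outputs concatenated.
bidi = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(16, return_sequences=True),
    merge_mode="concat")

outputs = bidi(inputs)
print(outputs.shape)  # (1, 10, 32): 16 forward units + 16 backward units
```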
2 changes: 1 addition & 1 deletion tensorflow/compiler/mlir/xla/ir/hlo_client_ops.td
@@ -51,7 +51,7 @@ class HLOClient_Op<string mnemonic, list<OpTrait> traits> :
// broadcasting (via the broadcast_dimensions attribute) and implicit degenerate
// shape broadcasting.
//
- // These have 1:1 correspondance with same-named ops in the xla_hlo dialect;
+ // These have 1:1 correspondence with same-named ops in the xla_hlo dialect;
// however, those operations do not support broadcasting.
//
// See:
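
As an aside on the broadcasting the comment above refers to, explicit broadcast dimensions can be sketched in NumPy as follows (an illustration under assumed XLA-like semantics, not code from this repository):

```python
import numpy as np

def broadcast_in_dim(operand, result_shape, broadcast_dimensions):
    # Place operand dimension i at result dimension broadcast_dimensions[i],
    # leave the remaining dimensions as size 1, then let degenerate-shape
    # broadcasting expand everything to result_shape.
    shape = [1] * len(result_shape)
    for operand_dim, result_dim in enumerate(broadcast_dimensions):
        shape[result_dim] = operand.shape[operand_dim]
    return np.broadcast_to(operand.reshape(shape), result_shape)

lhs = np.ones((2, 3))                              # rank-2 operand
rhs = np.arange(3.0)                               # rank-1 operand
out = lhs + broadcast_in_dim(rhs, lhs.shape, [1])  # rhs maps to result dim 1
```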
@@ -382,7 +382,7 @@ class createIotaOp<string dim>: NativeCodeCall<
def createConvertOp: NativeCodeCall<
"CreateConvertOp(&($_builder), $0.getOwner()->getLoc(), $1, $2)">;

- // Performs a substitution of MatrixBandPartOp for XLA HLO ops. Psuedocode is
+ // Performs a substitution of MatrixBandPartOp for XLA HLO ops. Pseudocode is
// shown below, given a tensor `input` with k dimensions [I, J, K, ..., M, N]
// and two integers, `num_lower` and `num_upper`:
//
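
The pseudocode itself is collapsed in this view. As a hedged aside, the documented band-part semantics (as in tf.linalg.band_part) can be sketched in NumPy like this; it is an illustration, not the pseudocode from the file:

```python
import numpy as np

def band_part(x, num_lower, num_upper):
    # Keep x[..., m, n] when (num_lower < 0 or m - n <= num_lower)
    # and (num_upper < 0 or n - m <= num_upper); zero it out otherwise.
    m = np.arange(x.shape[-2])[:, None]
    n = np.arange(x.shape[-1])[None, :]
    in_band = ((num_lower < 0) | (m - n <= num_lower)) & \
              ((num_upper < 0) | (n - m <= num_upper))
    return np.where(in_band, x, np.zeros_like(x))

x = np.arange(16.0).reshape(4, 4)
print(band_part(x, 1, 1))  # main diagonal plus one band below and one above
```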
@@ -454,14 +454,14 @@ def : Pat<(TF_ConstOp:$res ElementsAttr:$value), (HLO_ConstOp $value),
// TODO(hinsu): Make these patterns to TF to TF lowering. Relu6 lowering will
// require HLO canonicalization of min and max on a tensor to ClampOp.

- // TODO(hinsu): Lower unsinged and quantized types after supporting
+ // TODO(hinsu): Lower unsigned and quantized types after supporting
// them in GetScalarOfType.
def : Pat<(TF_ReluOp AnyRankedTensor:$input),
(HLO_MaxOp (HLO_ConstOp:$zero (GetScalarOfType<0> $input)), $input,
(BinBroadcastDimensions $zero, $input)),
[(TF_SintOrFpTensor $input)]>;

- // TODO(hinsu): Lower unsinged and quantized types after supporting
+ // TODO(hinsu): Lower unsigned and quantized types after supporting
// them in GetScalarOfType.
def : Pat<(TF_Relu6Op AnyRankedTensor:$input),
(HLO_ClampOp (HLO_ConstOp (GetScalarOfType<0> $input)), $input,
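
The intent of the two patterns above (truncated in this view) is easier to see numerically; here is a NumPy sketch of what the lowered max/clamp compute (an illustration only, not generated code):

```python
import numpy as np

x = np.array([-3.0, 0.5, 4.0, 9.0])

relu = np.maximum(0.0, x)     # Relu lowers to max(zero_scalar, input)
relu6 = np.clip(x, 0.0, 6.0)  # Relu6 lowers to clamp(zero, input, six)
print(relu, relu6)
```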
2 changes: 1 addition & 1 deletion tensorflow/lite/delegates/xnnpack/README.md
@@ -40,7 +40,7 @@ interpreter->Invoke()

...

- // IMPORTANT: release the interpreter before destroing the delegate
+ // IMPORTANT: release the interpreter before destroying the delegate
interpreter.reset();
TfLiteXNNPackDelegateDelete(xnnpack_delegate);
```
2 changes: 1 addition & 1 deletion tensorflow/lite/experimental/ruy/profiler/README.md
@@ -133,7 +133,7 @@ But also the following advantages:
The philosophy underlying this profiler is that software performance depends on
software engineers profiling often, and a key factor limiting that in practice
is the difficulty or cumbersome aspects of profiling with more serious profilers
- such as Linux's "perf", espectially in embedded/mobile development: multiple
+ such as Linux's "perf", especially in embedded/mobile development: multiple
command lines are involved to copy symbol files to devices, retrieve profile
data from the device, etc. In that context, it is useful to make profiling as
easy as benchmarking, even on embedded targets, even if the price to pay for
4 changes: 2 additions & 2 deletions tensorflow/lite/g3doc/convert/python_api.md
@@ -171,7 +171,7 @@ TensorFlow Lite metadata provides a standard for model descriptions. The
metadata is an important source of knowledge about what the model does and its
input / output information. This makes it easier for other developers to
understand the best practices and for code generators to create platform
- specific wrapper code. For more infomation, please refer to the
+ specific wrapper code. For more information, please refer to the
[TensorFlow Lite Metadata](metadata.md) section.
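
As a hedged example of reading such metadata back, the sketch below assumes the separate tflite-support package and a hypothetical model path; it is not part of the page above:

```python
# pip install tflite-support
from tflite_support import metadata

displayer = metadata.MetadataDisplayer.with_model_file("model_with_metadata.tflite")
print(displayer.get_metadata_json())  # model description and input/output info as JSON
```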

## Installing TensorFlow <a name="versioning"></a>
@@ -192,7 +192,7 @@ either install the nightly build with
[Docker](https://www.tensorflow.org/install/docker), or
[build the pip package from source](https://www.tensorflow.org/install/source).

- ### Custom ops in the experimenal new converter
+ ### Custom ops in the experimental new converter

There is a behavior change in how models containing
[custom ops](https://www.tensorflow.org/lite/guide/ops_custom) (those for which
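
For context, enabling custom ops in the converter typically looks like the sketch below (a minimal example with a hypothetical SavedModel path, not taken from the page above):

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # hypothetical path
converter.allow_custom_ops = True  # keep ops the converter cannot lower as custom ops
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```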
2 changes: 1 addition & 1 deletion tensorflow/lite/g3doc/performance/best_practices.md
@@ -52,7 +52,7 @@ operator is executed. Check out our

Model optimization aims to create smaller models that are generally faster and
more energy efficient, so that they can be deployed on mobile devices. There are
- multiple optimization techniques suppored by TensorFlow Lite, such as
+ multiple optimization techniques supported by TensorFlow Lite, such as
quantization.

Check out our [model optimization docs](model_optimization.md) for details.
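
As one concrete example of the optimization mentioned above, post-training dynamic-range quantization via the converter looks roughly like this (a minimal sketch with a hypothetical SavedModel path, not part of this diff):

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_quant_model = converter.convert()
```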
2 changes: 1 addition & 1 deletion tensorflow/lite/micro/examples/micro_speech/README.md
@@ -420,7 +420,7 @@ using [ARM Mbed](https://github.com/ARMmbed/mbed-cli).
```
mbed compile --target K66F --toolchain GCC_ARM --profile release
```
- 8. For some mbed compliers, you may get compile error in mbed_rtc_time.cpp.
+ 8. For some mbed compilers, you may get compile error in mbed_rtc_time.cpp.
Go to `mbed-os/platform/mbed_rtc_time.h` and comment line 32 and line 37:

```
2 changes: 1 addition & 1 deletion tensorflow/lite/micro/examples/person_detection/README.md
@@ -202,7 +202,7 @@ The next steps assume that the

* The `IDF_PATH` environment variable is set
* `idf.py` and Xtensa-esp32 tools (e.g. `xtensa-esp32-elf-gcc`) are in `$PATH`
- * `esp32-camera` should be downloaded in `comopnents/` dir of example as
+ * `esp32-camera` should be downloaded in `components/` dir of example as
explained in `Building the example`(below)

### Generate the examples
@@ -16,7 +16,7 @@ The next steps assume that the
[IDF environment variables are set](https://docs.espressif.com/projects/esp-idf/en/latest/get-started/index.html#step-4-set-up-the-environment-variables) :
* The `IDF_PATH` environment variable is set. * `idf.py` and Xtensa-esp32 tools
(e.g., `xtensa-esp32-elf-gcc`) are in `$PATH`. * `esp32-camera` should be
- downloaded in `comopnents/` dir of example as explained in `Build the
+ downloaded in `components/` dir of example as explained in `Build the
example`(below)

## Build the example
4 changes: 2 additions & 2 deletions tensorflow/lite/tools/benchmark/android/README.md
@@ -37,7 +37,7 @@ bazel build -c opt \
adb install -r -d -g bazel-bin/tensorflow/lite/tools/benchmark/android/benchmark_model.apk
```
Note: Make sure to install with "-g" option to grant the permission for reading
- extenal storage.
+ external storage.

(3) Push the compute graph that you need to test.

@@ -119,6 +119,6 @@ a trace file,
between tracing formats and
[create](https://developer.android.com/topic/performance/tracing/on-device#create-html-report)
an HTML report.
- Note that, the catured tracing file format is either in Perfetto format or in
+ Note that, the captured tracing file format is either in Perfetto format or in
Systrace format depending on the Android version of your device. Select the
appropriate method to handle the generated file.
2 changes: 1 addition & 1 deletion tensorflow/tools/ci_build/README.md
@@ -83,7 +83,7 @@ this UI, to see the logs for a failed build:

* Submit special pull request (PR) comment to trigger CI: **bot:mlx:test**
* Test session is run automatically.
- * Test results and artefacts (log files) are reported via PR comments
+ * Test results and artifacts (log files) are reported via PR comments

##### CI Steps
