[lite/micro] fix various typos. #28593

Merged
merged 1 commit on Jun 27, 2019
2 changes: 1 addition & 1 deletion tensorflow/lite/experimental/micro/README.md

```diff
@@ -615,7 +615,7 @@ As mentioned above, the one function you will need to implement for a completely
 new platform is debug logging. If your device is just a variation on an existing
 platform you may be able to reuse code that's already been written. To
 understand what's available, begin with the default reference implementation at
-[tensorflow/lite/experimental/micro/debug_log.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/debug_log.cc]),
+[tensorflow/lite/experimental/micro/debug_log.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/debug_log.cc),
 which uses fprintf and stderr. If your platform has this level of support for
 the C standard library in its toolchain, then you can just reuse this.
 Otherwise, you'll need to do some research into how your platform and device can
```
```diff
@@ -5,7 +5,7 @@
 * **hanning.cc**: Precomputed
   [Hann window](https://en.wikipedia.org/wiki/Hann_function) for use in the
   preprocessor. This file is created in ../create_constants.py
-* **hanning.h**: Header file fro hanning.cc
+* **hanning.h**: Header file for hanning.cc
 * **preprocessor.cc**: CMSIS version of the preprocessor
 * **sin_1k.cc**: A 1 kHZ sinusoid used for comparing the CMSIS preprocessor
   with the Micro-Lite fixed_point preprocessor
```
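For context on the precomputed table touched above: a Hann window of length N is w[k] = 0.5 · (1 − cos(2πk / (N − 1))). The actual generator lives in create_constants.py, which is not shown in this diff, so the sketch below is only an illustration of how such constants could be produced; the window length of 480 samples and the int16 scaling are assumptions, not values taken from that script.

```python
import math

def hann_window(n_samples):
    """Compute a symmetric Hann window: w[k] = 0.5 * (1 - cos(2*pi*k / (N - 1)))."""
    return [0.5 * (1.0 - math.cos(2.0 * math.pi * k / (n_samples - 1)))
            for k in range(n_samples)]

def to_int16(values):
    """Scale [0, 1] floats into int16 range, as a fixed-point kernel might expect."""
    return [int(round(v * 32767)) for v in values]

# 480 samples (30 ms at 16 kHz) is an assumption for illustration only.
window = to_int16(hann_window(480))
```

The window is zero at both ends and symmetric about its center, which is what makes it useful for smoothing audio frame boundaries in the preprocessor.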
```diff
@@ -48,7 +48,7 @@ def to_h(_, varname, directory=''):
   xstr += '#include <cstdint>\n\n'
   xstr += 'extern const int g_{}_size;\n'.format(varname)
   xstr += 'extern const int16_t g_{}[];\n\n'.format(varname)
-  xstr += '#endif'
+  xstr += '#endif // {}{}_H_'.format(tf_prepend, varname.upper())

   with open(directory + varname + '.h', 'w') as f:
     f.write(xstr)
```
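The fixed line above makes the generated header close its include guard with a matching comment, the usual C++ style. A self-contained sketch of that pattern follows; the guard_prefix default stands in for the script's tf_prepend value, which is not shown in this excerpt, so it is an assumption.

```python
def make_header(varname, guard_prefix='TENSORFLOW_LITE_EXPERIMENTAL_MICRO_'):
    """Emit a C++ header declaring an int16_t array with a commented #endif.

    guard_prefix is a stand-in for the real script's tf_prepend value
    (an assumption here, not taken from the diff).
    """
    guard = '{}{}_H_'.format(guard_prefix, varname.upper())
    xstr = '#ifndef {}\n#define {}\n\n'.format(guard, guard)
    xstr += '#include <cstdint>\n\n'
    xstr += 'extern const int g_{}_size;\n'.format(varname)
    xstr += 'extern const int16_t g_{}[];\n\n'.format(varname)
    xstr += '#endif // {}'.format(guard)
    return xstr
```

With the comment in place, the closing `#endif` names the guard it terminates, which helps when headers are long or nested.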
````diff
@@ -251,7 +251,7 @@ tensorflow/examples/speech_commands:train -- \
 --wanted_words="yes,no" --silence_percentage=25 --unknown_percentage=25 --quantize=1
 ```

-After build is over follow the rest of the instrucitons from this tutorial. And
+After build is over follow the rest of the instructions from this tutorial. And
 finally do not forget to remove the instance when training is done:

 ```
````
```diff
@@ -36,7 +36,7 @@ TfLiteStatus GetAudioSamples(tflite::ErrorReporter* error_reporter,
 // Returns the time that audio data was last captured in milliseconds. There's
 // no contract about what time zero represents, the accuracy, or the granularity
 // of the result. Subsequent calls will generally not return a lower value, but
-// even that's not guaranteed if there's an overflow wraparound.
+// even that's not guaranteed if there's an overflow wraparound.
 // The reference implementation of this function just returns a constantly
 // incrementing value for each call, since it would need a non-portable platform
 // call to access time information. For real applications, you'll need to write
```
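The comment block above describes a loose contract for the audio timestamp: values generally do not decrease, zero means nothing in particular, and the reference version simply counts up on each call. The real interface is the C++ header; the Python sketch below only illustrates that contract, and the class and method names are hypothetical.

```python
class FakeAudioClock:
    """Mimics the reference behavior described above: each query returns a
    value one millisecond later than the last, with no claim about what
    time zero represents."""

    def __init__(self):
        self._latest_ms = 0

    def latest_audio_timestamp(self):
        # A real platform would read a hardware clock here; this stub just
        # increments, which trivially satisfies the nondecreasing contract.
        self._latest_ms += 1
        return self._latest_ms
```

A real implementation backed by a hardware counter would still need to consider overflow, which is exactly the wraparound caveat the comment hedges on.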
6 changes: 1 addition & 5 deletions tensorflow/lite/experimental/micro/tools/make/Makefile

```diff
@@ -150,17 +150,13 @@ ARDUINO_PROJECT_FILES := \

 ALL_PROJECT_TARGETS :=

-KEIL_PROJECT_FILES := \
-  README_KEIL.md \
-  keil_project.uvprojx
-
 include $(MAKEFILE_DIR)/third_party_downloads.inc
 THIRD_PARTY_DOWNLOADS :=
 $(eval $(call add_third_party_download,$(GEMMLOWP_URL),$(GEMMLOWP_MD5),gemmlowp,))
 $(eval $(call add_third_party_download,$(FLATBUFFERS_URL),$(FLATBUFFERS_MD5),flatbuffers,))

 # These target-specific makefiles should modify or replace options like
-# CXXFLAGS or LIBS to work for a specific targetted architecture. All logic
+# CXXFLAGS or LIBS to work for a specific targeted architecture. All logic
 # based on platforms or architectures should happen within these files, to
 # keep this main makefile focused on the sources and dependencies.
 include $(wildcard $(MAKEFILE_DIR)/targets/*_makefile.inc)
```
2 changes: 1 addition & 1 deletion third_party/flatbuffers/build_defs.bzl

```diff
@@ -37,7 +37,7 @@ def flatbuffer_library_public(
       flatc_args: Optional, list of additional arguments to pass to flatc.
       reflection_name: Optional, if set this will generate the flatbuffer
         reflection binaries for the schemas.
-      reflection_visiblity: The visibility of the generated reflection Fileset.
+      reflection_visibility: The visibility of the generated reflection Fileset.
       output_to_bindir: Passed to genrule for output to bin directory.
     """
     include_paths_cmd = ["-I %s" % (s) for s in include_paths]
```
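For context, a hypothetical BUILD-file invocation of this Starlark macro. Only flatc_args and reflection_name appear in the docstring excerpt above; every other argument here (target name, file names, srcs, outs, language_flag) is an assumption for illustration, not the macro's confirmed signature.

```python
# Hypothetical usage sketch -- arguments not shown in the docstring
# above are assumptions.
flatbuffer_library_public(
    name = "model_schema",
    srcs = ["model.fbs"],
    outs = ["model_generated.h"],
    language_flag = "--cpp",
    flatc_args = ["--gen-object-api"],
    reflection_name = "model_schema_reflection",
)
```

Setting reflection_name asks the rule to also emit the reflection binaries the docstring mentions, whose visibility is what the fixed reflection_visibility parameter controls.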