Use official emsdk bazel toolchain #4769

Merged
merged 30 commits on Mar 30, 2021

Commits
53ca3fe
Compile a simple hello world example using emsdk's bazel toolchain
mattsoulanille Feb 25, 2021
35761df
xnnpack hello world not working with emsdk toolchain
mattsoulanille Feb 25, 2021
2e719ca
Remove unused variables
mattsoulanille Feb 26, 2021
9785982
WASM simd compiling locally with -copt='-msimd128'
mattsoulanille Mar 1, 2021
e9530cf
Add msimd128 copt when simd is true
mattsoulanille Mar 1, 2021
6438cd4
Compile wasm bundles from the same cc target
mattsoulanille Mar 1, 2021
602e2c5
Use a remote repo instead of local for patched XNNPACK
mattsoulanille Mar 1, 2021
b434002
Enable incompatible_strict_action_env
mattsoulanille Mar 2, 2021
3eb59d4
Enable remote cache in CI
mattsoulanille Mar 2, 2021
ee64690
Cleanup and format
mattsoulanille Mar 2, 2021
927ec38
Apply buildifier lints
mattsoulanille Mar 2, 2021
30b0ac9
Remove hello world test cc files
mattsoulanille Mar 2, 2021
f9dc556
Remove old emscripten toolchain
mattsoulanille Mar 2, 2021
d2ae7b6
Update dockerfiles
mattsoulanille Mar 2, 2021
a81863f
Add trailing slash to shell command
mattsoulanille Mar 2, 2021
f643328
Revert "Add trailing slash to shell command"
mattsoulanille Mar 2, 2021
7dfdb99
Revert "Update dockerfiles"
mattsoulanille Mar 2, 2021
98ec22a
Merge branch 'master' into wasm_toolchain_2.0.14
mattsoulanille Mar 3, 2021
e09b523
Use local spawn strategy for bazel build performance
mattsoulanille Mar 3, 2021
a01ee3c
Update XNNPACK
mattsoulanille Mar 4, 2021
fab5bc6
Merge branch 'master' into wasm_toolchain_2.0.14
mattsoulanille Mar 4, 2021
95e0a46
Stop accidentally building twice
mattsoulanille Mar 4, 2021
d81c077
Merge remote-tracking branch 'upstream/master' into wasm_toolchain_2.…
mattsoulanille Mar 9, 2021
ca4f269
'remote_http_cache' -> 'remote_cache'
mattsoulanille Mar 9, 2021
5ef1bc8
Merge branch 'master' into wasm_toolchain_2.0.14
mattsoulanille Mar 9, 2021
5a4f3bc
Fix typo
mattsoulanille Mar 10, 2021
df6f4db
Merge branch 'wasm_toolchain_2.0.14' of github.com:mattsoulanille/tfj…
mattsoulanille Mar 10, 2021
4c19957
Update emscripten toolchain
mattsoulanille Mar 29, 2021
dc69450
Merge branch 'master' into wasm_toolchain_2.0.14
mattsoulanille Mar 29, 2021
dde7a90
Remove unused variables
mattsoulanille Mar 29, 2021
43 changes: 28 additions & 15 deletions .bazelrc
@@ -6,22 +6,35 @@
# editor's search path.
build --symlink_prefix=dist/

# Use our custom-configured c++ toolchain.
build:wasm --crosstool_top=//toolchain:emscripten
# These compile flags are active no matter which build mode we are in
# (dbg vs opt). For flags specific to build mode, see cc_toolchain_config.bzl.
build --cxxopt="-std=c++11"
build --cxxopt="-fno-rtti"
build --cxxopt="-fno-exceptions"
build --cxxopt="-fomit-frame-pointer"

# Use --cpu as a differentiator.
build:wasm --cpu=wasm
# Remote cache config. Users can add credentials in their .bazelrc.user files.
build:remote --remote_cache=https://storage.googleapis.com/bazel-remote-cache-tfjs

# Use the default C++ toolchain to build the tools used during the build.
build:wasm --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
# Config for Google Cloud continuous integration that uses default credentials.
build:ci --config=remote
build:ci --google_default_credentials

# These compile flags are active no matter which build mode we are in
# (dbg vs opt). For flags specific to build mode, see cc_toolchain_config.bzl.
build:wasm --cxxopt="-std=c++11"
build:wasm --cxxopt="-fno-rtti"
build:wasm --cxxopt="-fno-exceptions"
build:wasm --cxxopt="-fomit-frame-pointer"
# This flag is needed to prevent the bazel cache from being invalidated when
# running bazel via `yarn bazel`.
# See https://github.com/angular/angular/issues/27514.
build --incompatible_strict_action_env
run --incompatible_strict_action_env

# Don't use a sandboxed build since it hurts performance.
# When sandboxed, the wasm backend takes hours to build on a 32 core machine.
build --spawn_strategy=worker,local

# Disable sandbox environment because emsdk caches files by writing to
# home directory.
build:wasm --spawn_strategy=local
# Load any settings specific to the current user.
# .bazelrc.user should appear in .gitignore so that settings are not shared with
# team members. This needs to be last statement in this config, as the user
# configuration should be able to overwrite flags from this file.
# See https://docs.bazel.build/versions/master/best-practices.html#bazelrc
# (Note that we use .bazelrc.user so the file appears next to .bazelrc in
# directory listing, rather than user.bazelrc as suggested in the Bazel docs).
try-import %workspace%/.bazelrc.user
5 changes: 4 additions & 1 deletion .gitignore
@@ -59,4 +59,7 @@ tfjs-backend-wasm/wasm-out/*.js
tfjs-backend-wasm/wasm-out/*.wasm
yalc.lock
yarn-error.log
cloudbuild_generated.yml
cloudbuild_generated.yml

# User-specific .bazelrc
.bazelrc.user
22 changes: 16 additions & 6 deletions WORKSPACE
@@ -12,18 +12,28 @@ yarn_install(
yarn_lock = "//:yarn.lock",
)

# Make all files under $HOME/emsdk/* visible to the toolchain. The files are
# available as external/emsdk/emsdk/*
load("//toolchain:cc_toolchain_config.bzl", "emsdk_configure")
emsdk_configure(name = "emsdk")
# Emscripten toolchain
http_archive(
name = "emsdk",
strip_prefix = "emsdk-c1589b55641787d55d53e883852035beea9aec3f/bazel",
url = "https://github.com/emscripten-core/emsdk/archive/c1589b55641787d55d53e883852035beea9aec3f.tar.gz",
sha256 = "7a58a9996b113d3e0675df30b5f17e28aa47de2e684a844f05394fe2f6f12e8e",
)

load("@emsdk//:deps.bzl", emsdk_deps = "deps")
emsdk_deps()

load("@emsdk//:emscripten_deps.bzl", emsdk_emscripten_deps = "emscripten_deps")
emsdk_emscripten_deps()


load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
# XNNPACK is used for fast vectorized wasm operations
git_repository(
name = "xnnpack",
commit = "55d53a4e7079d38e90acd75dd9e4f9e781d2da35",
commit = "3bfbdaf00211b313b143af39279bb6bf1f7effc0",
remote = "https://github.com/google/XNNPACK.git",
shallow_since = "1614036677 -0800",
shallow_since = "1617056836 -0700",
)

# The libraries below are transitive dependencies of XNNPACK that we need to
3 changes: 1 addition & 2 deletions tfjs-backend-wasm/scripts/build-ci.sh
@@ -17,5 +17,4 @@
set -e

yarn tsc

./scripts/build-wasm.sh
BAZEL_REMOTE="--config=ci" ./scripts/build-wasm.sh
31 changes: 17 additions & 14 deletions tfjs-backend-wasm/scripts/build-wasm.sh
@@ -24,25 +24,28 @@ set -e
set -x

# Default build.
yarn bazel build -c opt //tfjs-backend-wasm/src/cc:tfjs-backend-wasm.js --config=wasm
yarn bazel build $BAZEL_REMOTE -c opt //tfjs-backend-wasm/src/cc:tfjs-backend-wasm
# The typescript code and karma config expect the output of emscripten to be in
# wasm-out/ so we copy the bazel output there.
cp -f ../../dist/bin/tfjs-backend-wasm/src/cc/tfjs-backend-wasm.js \
../../dist/bin/tfjs-backend-wasm/src/cc/tfjs-backend-wasm.wasm \
cp -f ../../dist/bin/tfjs-backend-wasm/src/cc/tfjs-backend-wasm/tfjs-backend-wasm.js \
../../dist/bin/tfjs-backend-wasm/src/cc/tfjs-backend-wasm/tfjs-backend-wasm.wasm \
../wasm-out/

if [[ "$1" != "--dev" ]]; then
# SIMD build.
yarn bazel build -c opt //tfjs-backend-wasm/src/cc:tfjs-backend-wasm-simd.js --config=wasm --copt="-msimd128"
cp -f ../../dist/bin/tfjs-backend-wasm/src/cc/tfjs-backend-wasm-simd.wasm \
../wasm-out/

# Threaded + SIMD build.
yarn bazel build -c opt //tfjs-backend-wasm/src/cc:tfjs-backend-wasm-threaded-simd.js --config=wasm --copt="-pthread" --copt="-msimd128"
cp -f ../../dist/bin/tfjs-backend-wasm/src/cc/tfjs-backend-wasm-threaded-simd.js \
../../dist/bin/tfjs-backend-wasm/src/cc/tfjs-backend-wasm-threaded-simd.worker.js \
../../dist/bin/tfjs-backend-wasm/src/cc/tfjs-backend-wasm-threaded-simd.wasm \
../wasm-out/
# SIMD and threaded + SIMD builds.
yarn bazel build $BAZEL_REMOTE -c opt --copt="-msimd128" //tfjs-backend-wasm/src/cc:tfjs-backend-wasm-simd \
//tfjs-backend-wasm/src/cc:tfjs-backend-wasm-threaded-simd
# Copy SIMD
cp -f ../../dist/bin/tfjs-backend-wasm/src/cc/tfjs-backend-wasm-simd/tfjs-backend-wasm.wasm \
../wasm-out/tfjs-backend-wasm-simd.wasm

# Copy threaded
cp -f ../../dist/bin/tfjs-backend-wasm/src/cc/tfjs-backend-wasm-threaded-simd/tfjs-backend-wasm.js \
../wasm-out/tfjs-backend-wasm-threaded-simd.js
cp -f ../../dist/bin/tfjs-backend-wasm/src/cc/tfjs-backend-wasm-threaded-simd/tfjs-backend-wasm.worker.js \
../wasm-out/tfjs-backend-wasm-threaded-simd.worker.js
cp -f ../../dist/bin/tfjs-backend-wasm/src/cc/tfjs-backend-wasm-threaded-simd/tfjs-backend-wasm.wasm \
../wasm-out/tfjs-backend-wasm-threaded-simd.wasm

node ./create-worker-module.js
node ./patch-threaded-simd-module.js
1 change: 1 addition & 0 deletions tfjs-backend-wasm/scripts/create-worker-module.js
@@ -26,5 +26,6 @@ const BASE_PATH = '../wasm-out/';
const WORKER_PATH = `${BASE_PATH}tfjs-backend-wasm-threaded-simd.worker.js`;

const workerContents = fs.readFileSync(WORKER_PATH, "utf8");
fs.chmodSync(WORKER_PATH, 0o644);
fs.writeFileSync(`${WORKER_PATH}`,
`export const wasmWorkerContents = '${workerContents.trim()}';`);
61 changes: 15 additions & 46 deletions tfjs-backend-wasm/src/cc/BUILD
@@ -1,3 +1,4 @@
load("@emsdk//emscripten_toolchain:wasm_rules.bzl", "wasm_cc_binary")
load(":build_defs.bzl", "tfjs_cc_library", "tfjs_unit_test")

# Emscripten produces a much larger wasm bundle unless the cc_binary has srcs
@@ -36,54 +37,22 @@ cc_binary(
],
)

# This build rule generates tfjs-backend-wasm-simd.{js,wasm}.
#
# We only need the .wasm file, not the .js file, because it will be loaded by
# the tfjs-backend-wasm.js file (generated from the previous build rule).
#
# See scripts/build-wasm.sh where we only copy the .wasm file to wasm-out.
cc_binary(
name = "tfjs-backend-wasm-simd.js",
srcs = ["backend.cc"] + KERNELS_WITH_KEEPALIVE,
linkopts = [
"-s ALLOW_MEMORY_GROWTH=1",
"-s DEFAULT_LIBRARY_FUNCS_TO_INCLUDE=[]",
"-s DISABLE_EXCEPTION_CATCHING=1",
"-s FILESYSTEM=0",
"-s EXIT_RUNTIME=0",
"-s EXPORTED_FUNCTIONS='[\"_malloc\", \"_free\"]'",
"-s EXTRA_EXPORTED_RUNTIME_METHODS='[\"cwrap\"]'",
"-s MALLOC=emmalloc",
],
deps = [
":all_kernels",
":backend",
],
wasm_cc_binary(
name = "tfjs-backend-wasm",
cc_target = ":tfjs-backend-wasm.js",
)

# This build rule generates tfjs-backend-wasm-threaded-simd.{js,wasm} and
# tfjs-backend-wasm-threaded-simd.worker.js.
cc_binary(
name = "tfjs-backend-wasm-threaded-simd.js",
srcs = ["backend.cc"] + KERNELS_WITH_KEEPALIVE,
linkopts = [
"-s ALLOW_MEMORY_GROWTH=1",
"-s DEFAULT_LIBRARY_FUNCS_TO_INCLUDE=[]",
"-s DISABLE_EXCEPTION_CATCHING=1",
"-s FILESYSTEM=0",
"-s EXIT_RUNTIME=0",
"-s EXPORTED_FUNCTIONS='[\"_malloc\", \"_free\"]'",
"-s EXTRA_EXPORTED_RUNTIME_METHODS='[\"cwrap\"]'",
"-s MODULARIZE=1",
"-s EXPORT_NAME=WasmBackendModuleThreadedSimd",
"-s MALLOC=emmalloc",
"-s USE_PTHREADS=1",
"-s PROXY_TO_PTHREAD=1",
],
deps = [
":all_kernels",
":backend",
],
wasm_cc_binary(
name = "tfjs-backend-wasm-simd",
cc_target = ":tfjs-backend-wasm.js",
simd = True,
)

wasm_cc_binary(
name = "tfjs-backend-wasm-threaded-simd",
cc_target = ":tfjs-backend-wasm.js",
simd = True,
threads = "emscripten",
)

test_suite(
1 change: 0 additions & 1 deletion tfjs-backend-wasm/src/cc/binary.cc
@@ -61,7 +61,6 @@ void binary_xnn_f32(const size_t a_id, const size_t* a_shape_ptr,
} else {
binary_op = cache_result->second;
}
const size_t batch_size = out_info.size;
xnn_status status =
setup_op(binary_op, a_shape_len, a_shape_ptr, b_shape_len, b_shape_ptr,
a_buf, b_buf, out_buf, tfjs::backend::threadpool);
1 change: 0 additions & 1 deletion tfjs-backend-wasm/src/cc/kernels/All.cc
@@ -33,7 +33,6 @@ void All(const size_t x_id, const size_t reduce_size, const size_t out_id) {
auto& out_info = backend::get_tensor_info_out(out_id);

const bool* x_buf = x_info.b();
const size_t x_size = x_info.size;

bool* out_buf = out_info.b_write();
const size_t out_size = out_info.size;
1 change: 0 additions & 1 deletion tfjs-backend-wasm/src/cc/kernels/Any.cc
@@ -33,7 +33,6 @@ void Any(const size_t x_id, const size_t reduce_size, const size_t out_id) {
auto& out_info = backend::get_tensor_info_out(out_id);

const bool* x_buf = x_info.b();
const size_t x_size = x_info.size;

bool* out_buf = out_info.b_write();
const size_t out_size = out_info.size;
4 changes: 0 additions & 4 deletions tfjs-backend-wasm/src/cc/kernels/CropAndResize.cc
@@ -91,16 +91,12 @@ void CropAndResize(size_t images_id, size_t boxes_id, size_t box_ind_id,
auto& out_info = backend::get_tensor_info_out(out_id);

const float* images_buf = images_info.f32();
const size_t images_size = images_info.size;

const float* boxes_buf = boxes_info.f32();
const size_t boxes_size = boxes_info.size;

const int* box_ind_buf = box_ind_info.i32();
const size_t box_ind_size = box_ind_info.size;

float* out_buf = out_info.f32_write();
const size_t out_size = out_info.size;

const size_t batch = images_shape[0];
const size_t image_height = images_shape[1];
1 change: 0 additions & 1 deletion tfjs-backend-wasm/src/cc/kernels/FloorDiv.cc
@@ -35,7 +35,6 @@ void FloorDiv(const size_t a_id, const size_t* a_shape_ptr,
const size_t a_shape_len, const size_t b_id,
const size_t* b_shape_ptr, const size_t b_shape_len,
const DType dtype, const size_t out_id) {
auto& a_info = backend::get_tensor_info(a_id);
switch (dtype) {
case DType::float32:
binary_f32(a_id, b_id, out_id,
1 change: 0 additions & 1 deletion tfjs-backend-wasm/src/cc/kernels/Max.cc
@@ -33,7 +33,6 @@ void Max(const size_t x_id, const size_t reduce_size, const size_t out_id) {
auto& out_info = backend::get_tensor_info_out(out_id);

const float* x_buf = x_info.f32();
const size_t x_size = x_info.size;

float* out_buf = out_info.f32_write();
const size_t out_size = out_info.size;
2 changes: 0 additions & 2 deletions tfjs-backend-wasm/src/cc/kernels/Mean.cc
@@ -34,15 +34,13 @@ void Mean(const size_t x_id, const size_t reduce_size, const size_t out_id) {
auto& out_info = backend::get_tensor_info_out(out_id);

const float* x_buf = x_info.f32();
const size_t x_size = x_info.size;

float* out_buf = out_info.f32_write();
const size_t out_size = out_info.size;

const float* x_offset = x_buf;

for (size_t i = 0; i < out_size; ++i) {
const size_t offset = i * reduce_size;
float sum = 0;

const float* x_iter_end = x_offset + reduce_size;
1 change: 0 additions & 1 deletion tfjs-backend-wasm/src/cc/kernels/Min.cc
@@ -33,7 +33,6 @@ void Min(const size_t x_id, const size_t reduce_size, const size_t out_id) {
auto& out_info = backend::get_tensor_info_out(out_id);

const float* x_buf = x_info.f32();
const size_t x_size = x_info.size;

float* out_buf = out_info.f32_write();
const size_t out_size = out_info.size;
7 changes: 6 additions & 1 deletion tfjs-backend-wasm/src/cc/kernels/Pow.cc
@@ -27,8 +27,13 @@ template <class T>
inline T power(T a, T b) {
return pow(a, b);
}

inline bool power_bool(bool a, bool b) {
return static_cast<bool>(pow(static_cast<float>(a), static_cast<float>(b)));
}
} // namespace


namespace tfjs {
namespace wasm {
// We use C-style API to interface with Javascript.
@@ -48,7 +53,7 @@ void Pow(const size_t a_id, const size_t* a_shape_ptr, const size_t a_shape_len,
binary_i32(a_id, b_id, out_id, power<int32_t>);
break;
case DType::boolean:
binary_bool(a_id, b_id, out_id, power<bool>);
binary_bool(a_id, b_id, out_id, power_bool);
break;
default:
util::warn("Pow for tensor ids %d and %d failed. Unknown dtype %d", a_id,
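The Pow.cc change swaps `power<bool>` for a dedicated `power_bool`. Our reading — an inference, not stated in the PR — is that instantiating the `pow`-based template with `bool` relies on implicit bool/floating-point conversions that the stricter emsdk toolchain rejects or warns about, so the new helper makes both casts explicit. A self-contained copy of the helper as it appears in the diff:

```cpp
#include <cmath>

// Boolean specialization added in Pow.cc: evaluate pow() in float and cast
// the result back to bool explicitly (nonzero -> true, zero -> false).
inline bool power_bool(bool a, bool b) {
  return static_cast<bool>(std::pow(static_cast<float>(a),
                                    static_cast<float>(b)));
}
```

Note that `power_bool(false, false)` is true, since `pow(0, 0)` is defined as 1.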
2 changes: 0 additions & 2 deletions tfjs-backend-wasm/src/cc/kernels/Prod.cc
@@ -29,15 +29,13 @@ void prod(const size_t x_id, const size_t reduce_size, const size_t out_id) {
auto& out_info = tfjs::backend::get_tensor_info_out(out_id);

const T* x_buf = reinterpret_cast<const T *>(x_info.memory_offset);
const size_t x_size = x_info.size;

T* out_buf = reinterpret_cast<T *>(out_info.memory_offset);
const size_t out_size = out_info.size;

const T* x_offset = x_buf;

for (size_t i = 0; i < out_size; ++i) {
const size_t offset = i * reduce_size;
T product = 1;

const T* x_iter_end = x_offset + reduce_size;
2 changes: 0 additions & 2 deletions tfjs-backend-wasm/src/cc/kernels/Sum.cc
@@ -34,15 +34,13 @@ void Sum(const size_t x_id, const size_t reduce_size, const size_t out_id) {
auto& out_info = backend::get_tensor_info_out(out_id);

const float* x_buf = x_info.f32();
const size_t x_size = x_info.size;

float* out_buf = out_info.f32_write();
const size_t out_size = out_info.size;

const float* x_offset = x_buf;

for (size_t i = 0; i < out_size; ++i) {
const size_t offset = i * reduce_size;
float sum = 0;

const float* x_iter_end = x_offset + reduce_size;
2 changes: 1 addition & 1 deletion tfjs-backend-wasm/src/cc/non_max_suppression_impl.cc
@@ -116,7 +116,7 @@ const NonMaxSuppressionResult* non_max_suppression_impl(
std::vector<int32_t> selected_indices;
std::vector<float> selected_scores;
Candidate candidate;
float iou, original_score;
float original_score;

while (selected_indices.size() < max_out_size &&
!candidate_priority_queue.empty()) {
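The non_max_suppression_impl.cc hunk drops an unused `iou` local from the greedy selection loop, which pops the highest-scoring candidate until `max_out_size` boxes are selected or the queue empties. The sketch below reconstructs that loop; only the `while` condition and identifiers like `selected_indices` and `candidate_priority_queue` come from the diff, while `Candidate`, `compute_iou`, the box layout `[y1, x1, y2, x2]`, and the function signature are simplified stand-ins for the real implementation:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <queue>
#include <vector>

struct Candidate {
  int32_t index;
  float score;
};

// Intersection-over-union of two axis-aligned boxes stored as [y1, x1, y2, x2].
inline float compute_iou(const float* a, const float* b) {
  const float ymin = std::max(a[0], b[0]), xmin = std::max(a[1], b[1]);
  const float ymax = std::min(a[2], b[2]), xmax = std::min(a[3], b[3]);
  const float inter =
      std::max(0.0f, ymax - ymin) * std::max(0.0f, xmax - xmin);
  const float area_a = (a[2] - a[0]) * (a[3] - a[1]);
  const float area_b = (b[2] - b[0]) * (b[3] - b[1]);
  const float un = area_a + area_b - inter;
  return un <= 0.0f ? 0.0f : inter / un;
}

std::vector<int32_t> non_max_suppression(const float* boxes,
                                         const std::vector<float>& scores,
                                         size_t max_out_size,
                                         float iou_threshold) {
  auto cmp = [](const Candidate& a, const Candidate& b) {
    return a.score < b.score;  // max-heap: highest score on top
  };
  std::priority_queue<Candidate, std::vector<Candidate>, decltype(cmp)>
      candidate_priority_queue(cmp);
  for (size_t i = 0; i < scores.size(); ++i) {
    candidate_priority_queue.push({static_cast<int32_t>(i), scores[i]});
  }

  std::vector<int32_t> selected_indices;
  while (selected_indices.size() < max_out_size &&
         !candidate_priority_queue.empty()) {
    Candidate candidate = candidate_priority_queue.top();
    candidate_priority_queue.pop();
    // Suppress the candidate if it overlaps a selected box too strongly.
    bool suppressed = false;
    for (int32_t sel : selected_indices) {
      if (compute_iou(boxes + 4 * candidate.index, boxes + 4 * sel) >
          iou_threshold) {
        suppressed = true;
        break;
      }
    }
    if (!suppressed) selected_indices.push_back(candidate.index);
  }
  return selected_indices;
}
```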
11 changes: 2 additions & 9 deletions tfjs-backend-wasm/yarn.lock
@@ -984,18 +984,11 @@

"@tensorflow/tfjs-backend-cpu@link:../tfjs-backend-cpu":
version "0.0.0"
dependencies:
"@types/seedrandom" "2.4.27"
seedrandom "2.4.3"
uid ""

"@tensorflow/tfjs-core@link:../tfjs-core":
version "0.0.0"
dependencies:
"@types/offscreencanvas" "~2019.3.0"
"@types/seedrandom" "2.4.27"
"@types/webgl-ext" "0.0.30"
node-fetch "~2.6.1"
seedrandom "2.4.3"
uid ""

"@types/emscripten@~0.0.34":
version "0.0.34"