Node.js (JavaScript) Wrapper API #37
Comments
+1!
+1
👍
+1
Just what I was searching for. 👍 As quoted from the official website http://www.tensorflow.org/
I am new to this whole SWIG thing, but I searched around and found this: http://www.swig.org/Doc3.0/Javascript.html. I'm not really sure how this works. Do we need to write a SWIG interface file specifically for JavaScript, or is it auto-generated when running some commands, or is somebody already working on this (that would be awesome)?
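For what it's worth, SWIG does not auto-generate the interface file; you write a small `.i` file describing what to wrap, and SWIG generates the JavaScript wrapper code from it. A minimal sketch (the module name and the idea of wrapping the whole C API header are illustrative, not an agreed-upon design):

```swig
// tensorflow.i — hypothetical SWIG interface file for a Node.js target.
// Generate the wrapper with e.g.: swig -c++ -javascript -node tensorflow.i
%module tensorflow

%{
// Headers copied verbatim into the generated wrapper code.
#include "tensorflow/c/c_api.h"
%}

// Ask SWIG to generate wrappers for everything declared in the header.
%include "tensorflow/c/c_api.h"
```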
+1 👍
+1
+1
+1
+1
👍
👍
+1! Just starting out on one, but I'm new to writing a Node.js addon. Checking out the SWIG interface files to see if they're going to be helpful, or if I should just use the C++ API.
+1
+1
+1
+1
This is something the core TensorFlow team is unlikely to tackle in the near future, so if you want to contribute it, please go ahead! I would recommend circulating a proposed implementation on the discuss mailing list early on, so that a consensus about where such an API might live (in repo / off repo / in the 'contrib' directory) can be reached ahead of time.
Anyone up to write a NodeJS library? 👍
👍
@tngan The slack channel is private, however I was able to join with the herokuapp link. 👍
We hope more developers will join the discussion and contribute. We now have a Slack channel named nodejs (see #31), and a GitHub repository, node-tensorflow, has been reserved. Thanks @Foorack!
I am willing to contribute. Thanks for the initiative, guys!
@miguelalche Glad to see you're interested! Please join the Slack channel and someone will add you to the repository. ^^
I look forward to contributing (especially along with #132)!
+1 |
Hooray for node! Let's do this. |
propelml.org - Looks interesting. I've not used it, but it's GPU-enabled and runs both in the browser and on Node.
@7ammer propelml.org looks rather promising. Thanks for sharing this with us ;-)
Because NodeJS is fast! ;D
If an ambitious member of the community wants the glory of solving this problem, and having it merged into the TensorFlow contrib codebase, here are some tips on how I would do it. Please note I'm not going to do this.

You can add Node to workspace.bzl just like TensorBoard did in js.bzl:

```python
load("@io_bazel_rules_closure//closure:defs.bzl", "filegroup_external")

filegroup_external(
    name = "org_nodejs",
    # MIT with portions licensed:
    # - MIT
    # - Old MIT
    # - 2-Clause-BSD
    # - 3-Clause-BSD
    # - ISC
    # - Unicode
    # - zlib
    # - Artistic 2.0
    licenses = ["notice"],
    sha256_urls_extract_macos = {
        "910395e1e98fb351c62b5702a9deef22aaecf05d6df1d7edc283337542207f3f": [
            "https://mirror.bazel.build/nodejs.org/dist/v6.9.1/node-v6.9.1-darwin-x64.tar.xz",
            "http://nodejs.org/dist/v6.9.1/node-v6.9.1-darwin-x64.tar.xz",
        ],
    },
    sha256_urls_windows = {
        "1914bfb950be8d576ce9e49c8a0e51c9f2402560fe3c19093e69bc1306a56e9e": [
            "https://mirror.bazel.build/raw.githubusercontent.com/nodejs/node/v6.9.1/LICENSE",
            "https://raw.githubusercontent.com/nodejs/node/v6.9.1/LICENSE",
        ],
        "513923b0490ebb7466a56483a62595814ed9d036d6f35476debb0cd606bec526": [
            "https://mirror.bazel.build/nodejs.org/dist/v6.9.1/win-x64/node.exe",
            "http://nodejs.org/dist/v6.9.1/win-x64/node.exe",
        ],
        "3951aefa4afd6fb836ab06468b1fc2a69fa75bd66ec2f5a0e08c4e32547681e3": [
            "https://mirror.bazel.build/nodejs.org/dist/v6.9.1/win-x64/node.lib",
            "http://nodejs.org/dist/v6.9.1/win-x64/node.lib",
        ],
    },
    sha256_urls_extract = {
        "d4eb161e4715e11bbef816a6c577974271e2bddae9cf008744627676ff00036a": [
            "https://mirror.bazel.build/nodejs.org/dist/v6.9.1/node-v6.9.1-linux-x64.tar.xz",
            "http://nodejs.org/dist/v6.9.1/node-v6.9.1-linux-x64.tar.xz",
        ],
    },
    strip_prefix = {
        "node-v6.9.1-darwin-x64.tar.xz": "node-v6.9.1-darwin-x64",
        "node-v6.9.1-linux-x64.tar.xz": "node-v6.9.1-linux-x64",
    },
    executable = [
        "node",
        "node.exe",
    ],
    default_visibility = ["//tensorflow/contrib/node:__subpackages__"],
)
```

Now let's say you have a Node program, e.g. tsc.js, which you want to turn into something you can run:

```python
def node_binary(name, srcs, data=None, visibility=None, testonly=None, **kwargs):
    native.sh_binary(
        name = name,
        srcs = [name + ".sh"],
        # Guard against the default data=None before concatenating.
        data = srcs + (data or []) + ["@org_nodejs"],
        testonly = testonly,
        visibility = visibility,
        **kwargs
    )
    native.genrule(
        name = name + "_sh",
        srcs = [srcs[0]],
        outs = [name + ".sh"],
        cmd = "cat >$@ <<'EOF'\n" +
              "#!/bin/bash\n" +
              "NODE=external/org_nodejs/bin/node\n" +
              "if [[ -e external/org_nodejs/node.exe ]]; then\n" +
              "  NODE=external/org_nodejs/node.exe\n" +
              "fi\n" +
              "exec $${NODE} $(location " + srcs[0] + ") \"$$@\"\n" +
              "EOF",
        executable = True,
        testonly = testonly,
        visibility = ["//visibility:private"],
    )
```

Now for the fun part. I would write a single .js file (even if it had to be 30,000 lines long like tex.web) with zero dependencies other than the Node standard library. The inputs to this program would be ops.pbtxt and all the other pbtxt files in api_def/base_api. The output of this program would be exactly one gigantic C++ file that talks to the TensorFlow C API and the Node C++ Addon API, based on this example:

```python
load("//tensorflow/contrib/node:defs.bzl", "node_binary")
load("@domain_registry//java/google/registry/builddefs:zip_file.bzl", "zip_file")

node_binary(
    name = "generate",
    srcs = ["generate.js"],
    data = [
        "//tensorflow/core:ops/ops.pbtxt",
        "//tensorflow/core/api_def:base_api_def",
    ],
)

genrule(
    name = "api",
    srcs = [
        "//tensorflow/core:ops/ops.pbtxt",
        "//tensorflow/core/api_def:base_api_def",
    ],
    cmd = "$(location :generate) $(location api.cc) $(SRCS)",
    outs = ["api.cc"],
    tools = [":generate"],
)

zip_file(
    name = "tfnode",
    srcs = [
        "package.json",
        "README.md",
        "api.cc",
        "binding.gyp",
        "tfnode.js",
    ],
    mappings = {"tensorflow/contrib/node": "package"},
)
```

If I wrote this (which I won't) it would be a barebones direct mapping of the TensorFlow API definitions. Then I would encourage our friends in the community to veneer the library. There's a diversity of visions out there on friendly, modern, high-level, idiomatic JS and ML APIs, each catering to different use cases. However, they could all share this binding in common. Please note there are examples of where we already generate language bindings. See tensorflow/go/genop/main.go and tensorflow/go/op/generate.go for inspiration.
Looks like the TensorFlow team is making this a top priority now: https://js.tensorflow.org/faq/ |
We might want to move this discussion to here: tensorflow/tfjs#36 Progress on Node.js bindings to the C API will be tracked at that issue. |
As an update to this issue: we have open-sourced the Node.js binding for TFJS: https://github.com/tensorflow/tfjs-node. We are working hard on getting a proper NPM build and will release it soon!
I will close this issue. Please track tensorflow/tfjs and tensorflow/tfjs-node for further updates. |
Related and possibly of interest: I managed to get TF running in the browser via WebAssembly. See https://humantoanimal.com for a demo; I will be providing more details in the future.
@nuchi, so did you compile the necessary TensorFlow code from the C API to WebAssembly? Or are you using TensorFlow.js? |
@lastmjs I explain in more detail in the link I provided. Short version: I added WebAssembly as an XLA compilation target. I did not use TensorFlow.js in any way.
@nuchi Great work! I also know of another WebAssembly project on TensorFlow, here:
Glad to see that there's official progress on this. I'd love to have fast, parallel GPU compute power at my fingertips with the ease and composability of JS. I started working on a NodeJS binding for TensorFlow a while ago, but I haven't had much free time to devote to it lately. The concept is similar to @jart's suggested approach. I had three goals in mind for the project:

1. Don't require building or installing TensorFlow. Instead, it should download and use the pre-built, multi-platform Python binaries and download any needed source files on the fly.
2. Don't require a complete C++ or JS reproduction or abstraction of the API. Instead, it should provide a complete 1-to-1 interface with the C API, providing convenient JS abstractions as much as possible.
3. Don't maintain the C API bindings by hand. Instead, it should use a SWIG script to map the core data structures between TensorFlow/stdc++/V8/Node, and the rest will follow.

I got pretty far along with this, but last I remember I was having issues with TF_Session-related segfaults. Right now it's just collecting dust, so if someone wants to jump in and help with this I'd gladly accept PRs.
Closing as this is resolved |
Because JavaScript is Awesome