Node.js (JavaScript) Wrapper API #37

Closed
keon opened this issue Nov 9, 2015 · 246 comments
Labels
stat:contribution welcome Status - Contributions welcome

Comments

@keon

keon commented Nov 9, 2015

Because JavaScript is Awesome

@keon keon changed the title JavaScript (Node.js) Wrapper JavaScript (Node.js) Wrapper API Nov 9, 2015
@ZECTBynmo

+1!

@dhritzkiv

+1

2 similar comments
@fiws

fiws commented Nov 9, 2015

👍

@mikealche

+1

@jagandecapri

Just what I was searching for. 👍

As quoted from the official website http://www.tensorflow.org/

we’re hoping to entice you to contribute SWIG interfaces to your favorite language -- be it Go, Java, Lua, Javascript, or R.

I am new to this whole SWIG thing, but I searched around and found this: http://www.swig.org/Doc3.0/Javascript.html

Not really sure how this works. Do we need to write a SWIG interface file specifically for JavaScript, is it auto-generated by running some commands, or is somebody already working on this (that would be awesome)?
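For what it's worth, a minimal SWIG interface for the C API might look roughly like the sketch below — the module name and header path are my assumptions, and a real binding would also need typemaps to convert between `TF_Tensor*` and V8 values:

```swig
// Hypothetical tensorflow.i -- a minimal SWIG interface wrapping the
// TensorFlow C API for the JavaScript (v8) target. Illustrative only;
// a usable binding needs typemaps for tensors, buffers, and statuses.
%module tensorflow
%{
#include "tensorflow/c/c_api.h"
%}
// Expose everything declared in the C API header as-is.
%include "tensorflow/c/c_api.h"
```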

@imgntn

imgntn commented Nov 10, 2015

+1 👍

@prathamesh7pute

+1

5 similar comments
@chenzxcvb

+1

@dmitriykharchenko

+1

@marcbaetica

+1

@lowe0292

👍

@mattkosoy

👍

@nikhilk

nikhilk commented Nov 10, 2015

+1!

Just starting out on one, but I'm new to writing a Node.js addon. I'm checking out the SWIG interface files to see if they're going to be helpful, or if I should just use the C++ API.

@vnglst

vnglst commented Nov 10, 2015

+1

3 similar comments
@jalona

jalona commented Nov 10, 2015

+1

@Foorack

Foorack commented Nov 10, 2015

+1

@tngan

tngan commented Nov 10, 2015

+1

@vincentvanhoucke
Contributor

This is something the core TensorFlow team is unlikely to tackle in the near future, so if you want to contribute it, please go ahead! I would recommend circulating a proposed implementation on the discuss mailing list early on, so that a consensus about where such an API might live (in repo / off repo / in the 'contrib' directory) can be reached ahead of time.

@Foorack

Foorack commented Nov 11, 2015

Anyone up to write a NodeJS library? 👍
I think an official NodeJS API would be better, but a community one would be just as (if not more) interesting in my opinion. I know there are multiple ways of approaching this, but I strongly recommend node-gyp for performance. I will gladly contribute in any way I can; however, this is not something I can do alone. It would be best if a few other people were interested as well, especially someone with C++ knowledge.
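For anyone new to node-gyp, a minimal binding.gyp for such an addon might look like the sketch below — every file name and path here is a placeholder assumption, not an agreed project layout:

```python
# Hypothetical binding.gyp for a TensorFlow addon built with node-gyp.
# Sources, include paths, and the library location are illustrative.
{
  "targets": [
    {
      "target_name": "tensorflow",
      "sources": [ "src/tensorflow.cc" ],
      "include_dirs": [ "deps/libtensorflow/include" ],
      "library_dirs": [ "deps/libtensorflow/lib" ],
      "libraries": [ "-ltensorflow" ]
    }
  ]
}
```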

@augbog

augbog commented Nov 11, 2015

👍

@tngan

tngan commented Nov 11, 2015

@Foorack I am willing to contribute it if some people are interested as well. Is it possible to move the discussion to a slack channel ? (see #31)

@Foorack

Foorack commented Nov 11, 2015

@tngan The slack channel is private, however I was able to join with the herokuapp link. 👍

@tngan

tngan commented Nov 11, 2015

We hope more developers will join the discussion and contribute. We now have a Slack channel named nodejs (see #31), and a GitHub repository node-tensorflow has been reserved. Thanks @Foorack!

@marcbaetica

I am willing to contribute. Thanks for the initiative guys!

@Foorack

Foorack commented Nov 11, 2015

@mikealche Glad to see you're interested! Please join the Slack channel and someone will add you to the repository. ^^

@cauerego

I look forward to contributing (especially along with #132)!!

@gpresland

+1

@mpj

mpj commented Dec 1, 2015

Hooray for node! Let's do this.

@7immer

7immer commented Mar 1, 2018

propelml.org looks interesting. I've not used it, but it's GPU-enabled and runs both in the browser and on Node.

@thefill

thefill commented Mar 6, 2018

@7immer propelml.org looks rather promising. Thanks for sharing this with us ;-)

@troyam

troyam commented Mar 24, 2018

Because NodeJS is fast! ;D

@jart
Contributor

jart commented Mar 24, 2018

If an ambitious member of the community wants the glory of solving this problem, and having it merged into the TensorFlow contrib codebase, here are some tips on how I would do it. Please note I'm not going to do this.

You can add Node to workspace.bzl, just like TensorBoard did in js.bzl.
Please note TensorFlow cannot depend on rules_nodejs.

load("@io_bazel_rules_closure//closure:defs.bzl", "filegroup_external")

filegroup_external(
    name = "org_nodejs",
    # MIT with portions licensed:
    # - MIT
    # - Old MIT
    # - 2-Clause-BSD
    # - 3-Clause-BSD
    # - ISC
    # - Unicode
    # - zlib
    # - Artistic 2.0
    licenses = ["notice"],
    sha256_urls_extract_macos = {
        "910395e1e98fb351c62b5702a9deef22aaecf05d6df1d7edc283337542207f3f": [
            "https://mirror.bazel.build/nodejs.org/dist/v6.9.1/node-v6.9.1-darwin-x64.tar.xz",
            "http://nodejs.org/dist/v6.9.1/node-v6.9.1-darwin-x64.tar.xz",
        ],
    },
    sha256_urls_windows = {
        "1914bfb950be8d576ce9e49c8a0e51c9f2402560fe3c19093e69bc1306a56e9e": [
            "https://mirror.bazel.build/raw.githubusercontent.com/nodejs/node/v6.9.1/LICENSE",
            "https://raw.githubusercontent.com/nodejs/node/v6.9.1/LICENSE",
        ],
        "513923b0490ebb7466a56483a62595814ed9d036d6f35476debb0cd606bec526": [
            "https://mirror.bazel.build/nodejs.org/dist/v6.9.1/win-x64/node.exe",
            "http://nodejs.org/dist/v6.9.1/win-x64/node.exe",
        ],
        "3951aefa4afd6fb836ab06468b1fc2a69fa75bd66ec2f5a0e08c4e32547681e3": [
            "https://mirror.bazel.build/nodejs.org/dist/v6.9.1/win-x64/node.lib",
            "http://nodejs.org/dist/v6.9.1/win-x64/node.lib",
        ],
    },
    sha256_urls_extract = {
        "d4eb161e4715e11bbef816a6c577974271e2bddae9cf008744627676ff00036a": [
            "https://mirror.bazel.build/nodejs.org/dist/v6.9.1/node-v6.9.1-linux-x64.tar.xz",
            "http://nodejs.org/dist/v6.9.1/node-v6.9.1-linux-x64.tar.xz",
        ],
    },
    strip_prefix = {
        "node-v6.9.1-darwin-x64.tar.xz": "node-v6.9.1-darwin-x64",
        "node-v6.9.1-linux-x64.tar.xz": "node-v6.9.1-linux-x64",
    },
    executable = [
        "node",
        "node.exe",
    ],
    default_visibility = ["//tensorflow/contrib/node:__subpackages__"],
)

Now let's say you have a Node program, e.g. tsc.js, that you want to turn into something you can bazel run //tensorflow/contrib/node:generate. One quick way to do this in Bazel is by defining a macro in tensorflow/contrib/node/defs.bzl:

def node_binary(name, srcs, data=None, visibility=None, testonly=None, **kwargs):
  native.sh_binary(
      name = name,
      srcs = [name + ".sh"],
      data = srcs + (data or []) + ["@org_nodejs"],  # tolerate data=None
      testonly = testonly,
      visibility = visibility,
      **kwargs
  )
  
  native.genrule(
      name = name + "_sh",
      srcs = [srcs[0]],
      outs = [name + ".sh"],
      cmd = "cat >$@ <<'EOF'\n" +
            "#!/bin/bash\n" +
            "NODE=external/org_nodejs/bin/node\n" +
            "if [[ -e external/org_nodejs/node.exe ]]; then\n" +
            "  NODE=external/org_nodejs/node.exe\n" +
            "fi\n" +
            "exec $${NODE} $(location " + srcs[0] + ") \"$$@\"\n" +
            "EOF",
      executable = True,
      testonly = testonly,
      visibility = ["//visibility:private"],
  )

Now for the fun part. I would write a single .js file (even if it had to be 30,000 lines long, like tex.web) with zero dependencies other than the Node standard library. The inputs to this program would be ops.pbtxt and all the other pbtxt files in api_def/base_api. The output would be exactly one gigantic C++ file that talks to the TensorFlow C API and the Node C++ Addon API, based on this example.

load("//tensorflow/contrib/node:defs.bzl", "node_binary")
load("@domain_registry//java/google/registry/builddefs:zip_file.bzl", "zip_file")

node_binary(
    name = "generate",
    srcs = ["generate.js"],
    data = [
        "//tensorflow/core:ops/ops.pbtxt",
        "//tensorflow/core/api_def:base_api_def",
    ],
)

genrule(
    name = "api",
    srcs = [
        "//tensorflow/core:ops/ops.pbtxt",
        "//tensorflow/core/api_def:base_api_def",
    ],
    cmd = "$(location :generate) $(location api.cc) $(SRCS)",
    outs = ["api.cc"],
    tools = [":generate"],
)

zip_file(
    name = "tfnode",
    srcs = [
        "package.json",
        "README.md",
        "api.cc",
        "binding.gyp",
        "tfnode.js",
    ],
    mappings = {"tensorflow/contrib/node": "package"},
)

Then you bazel build //tensorflow/contrib/node:tfnode.zip and bam, you've got your NodeJS project all bundled and ready for distribution to places like NPM.

If I wrote this (which I won't) it would be a barebones direct mapping of the TensorFlow API definitions. Then I would encourage our friends in the community to veneer the library. There's a diversity of visions out there on friendly modern high-level idiomatic JS and ML APIs, each catering to different use cases. However they could all share this binding in common.

Please note there are examples of where we already generate language bindings. See tensorflow/go/genop/main.go and tensorflow/go/op/generate.go for inspiration.
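To make the shape of such a generator concrete, here's a hedged sketch of its very first step in plain Node — the line scan and the emitted N-API stub are illustrative assumptions, not the real ops.pbtxt schema handling, which would need a proper protobuf text-format parser:

```javascript
// Sketch: scan an ops.pbtxt-style text for op names, then emit a C++
// N-API stub per op. Handles only the simple common layout where
// `name:` is the first field inside each `op {` block.
function extractOpNames(pbtxt) {
  const names = [];
  const lines = pbtxt.split('\n');
  for (let i = 0; i < lines.length - 1; i++) {
    if (lines[i].trim() === 'op {') {
      const m = /name:\s*"([^"]+)"/.exec(lines[i + 1]);
      if (m) names.push(m[1]);
    }
  }
  return names;
}

// Emit a hypothetical wrapper stub; a real body would build the op
// through the TensorFlow C API (TF_NewOperation and friends).
function emitStub(opName) {
  return [
    `Napi::Value Op_${opName}(const Napi::CallbackInfo& info) {`,
    `  // TODO: map JS arguments onto TF_NewOperation("${opName}", ...)`,
    `}`,
  ].join('\n');
}

const sample = 'op {\n  name: "MatMul"\n}\nop {\n  name: "Add"\n}\n';
console.log(extractOpNames(sample).map(emitStub).join('\n\n'));
```

The generated api.cc would then be compiled by node-gyp on the user's machine, keeping the published package free of a build-time Bazel dependency.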

@lastmjs

lastmjs commented Mar 30, 2018

Looks like the TensorFlow team is making this a top priority now: https://js.tensorflow.org/faq/

@lastmjs

lastmjs commented Mar 31, 2018

We might want to move this discussion to here: tensorflow/tfjs#36

Progress on Node.js bindings to the C API will be tracked at that issue.

@nkreeger
Contributor

As an update to this issue: we have open-sourced the Node.js binding for TFJS: https://github.com/tensorflow/tfjs-node

We are working hard at getting a proper NPM build and will release it soon!

@martinwicke
Member

I will close this issue. Please track tensorflow/tfjs and tensorflow/tfjs-node for further updates.

@nuchi
Contributor

nuchi commented May 8, 2018

Related and possibly of interest: I managed to get TF running in the browser via WebAssembly. See https://humantoanimal.com for a demo; I will be providing more details in the future.

@lastmjs

lastmjs commented May 8, 2018

@nuchi, so did you compile the necessary TensorFlow code from the C API to WebAssembly? Or are you using TensorFlow.js?

@nuchi
Contributor

nuchi commented May 8, 2018

@lastmjs I explain in more detail in the link I provided. Short version: I added WebAssembly as an XLA compilation target. I did not use TensorFlow.js in any way.

@huan
Contributor

huan commented May 9, 2018

@nuchi Great work! And I know of another WebAssembly effort to compile TensorFlow for the browser, here:
https://medium.com/@tomasreimers/compiling-tensorflow-for-the-browser-f3387b8e1e1c

@rchipka

rchipka commented Jun 7, 2018

Glad to see that there's official progress on this. I'd love to have fast, parallel GPU compute power at my fingertips with the ease and composability of JS.

I started working on a NodeJS binding for TensorFlow a while ago, but I haven't had much free time to devote to it lately.

The concept is similar to @jart's suggested approach.

I had three goals in mind for the project:

1. Don't require building or installing tensorflow

Instead, it should download and use the pre-built, multi-platform Python binaries and fetch any needed source files on the fly.

2. Don't require a complete C++ or JS reproduction or abstraction of the API

Instead, it should provide a complete 1-to-1 interface with the C API, adding convenient JS abstractions where possible.

3. Don't maintain the C API bindings by hand

Instead, it should use a SWIG script to map the core data structures between TensorFlow/stdc++/V8/Node, and the rest will follow.
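Goal 1 is mostly URL plumbing. As a hedged sketch, one way to pick a pre-built archive without building from source is to derive its URL from platform and version — the pattern below mirrors the public libtensorflow release layout, which is an assumption worth double-checking against the actual release buckets:

```javascript
// Sketch of goal 1: derive the URL of a pre-built TensorFlow C
// library archive from platform/arch/version, so no local build of
// TensorFlow is required. URL scheme is illustrative.
function libtensorflowUrl({ os, arch, version, gpu = false }) {
  const variant = gpu ? 'gpu' : 'cpu';
  return 'https://storage.googleapis.com/tensorflow/libtensorflow/' +
         `libtensorflow-${variant}-${os}-${arch}-${version}.tar.gz`;
}

console.log(libtensorflowUrl({ os: 'linux', arch: 'x86_64', version: '1.8.0' }));
```

An install script would download and extract this archive into the package directory, then point node-gyp at the bundled headers and shared library.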


I got pretty far along with this, but last I remember I was having issues with TF_Session-related segfaults.

Right now it's just collecting dust, so if someone wants to jump in and help with this I'd gladly accept PRs.

@wt-huang

wt-huang commented Nov 9, 2018

Closing as this is resolved.

pooyadavoodi pushed a commit to pooyadavoodi/tensorflow that referenced this issue Oct 16, 2019
Add use_explicit_batch parameter available in OpConverterParams and other places

cjolivier01 pushed a commit to Cerebras/tensorflow that referenced this issue Dec 6, 2019
Enable FFT operations on the GPU for ROCm
keithm-xmos pushed a commit to xmos/tensorflow that referenced this issue Feb 1, 2021