Running in a web worker #102

Open
bkazi opened this Issue Apr 5, 2018 · 10 comments

bkazi commented Apr 5, 2018

Are there any plans to make tfjs able to run in a web worker using OffscreenCanvas? Is it already possible? (Sorry if I'm still stuck in the deeplearn.js days.)

Would it be possible to do so now by manually creating a GPGPUContext and using it in the backend somehow?

nsthorat (Collaborator) commented Apr 5, 2018

OffscreenCanvas is definitely something we want to support; we haven't prioritized it because it's still experimental.

If you want to send us a PR, we'd happily accept it. Just make sure you do feature testing inside environment.ts.
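
For reference, a minimal sketch of what such a feature test might look like (the function name and exact check here are illustrative, not existing tfjs code; the real test would live alongside the others in environment.ts):

function isOffscreenCanvasWebGLEnabled() {
  // Hypothetical check: can we get a WebGL context from an OffscreenCanvas?
  if (typeof OffscreenCanvas === 'undefined') {
    return false;
  }
  try {
    const canvas = new OffscreenCanvas(1, 1);
    return canvas.getContext('webgl') != null;
  } catch (e) {
    return false;
  }
}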

easadler pushed a commit to easadler/tfjs that referenced this issue Apr 12, 2018

Add RMSProp and Adagrad optimizers. (tensorflow#102)

prijindal commented Apr 25, 2018

If you mean simply training a model in a web worker, it seems that is already possible.

Giorat commented May 23, 2018

Is it possible to run model prediction in a web worker? Or is there a way to predict without freezing the browser UI? Right now, GIFs and everything else on the page stand still until the prediction is completed.

Is there a workaround? @prijindal, is only training possible inside a web worker?

nsthorat (Collaborator) commented May 23, 2018

If the page is freezing because of WebGL, you won't really get much from a web worker.

Have you tried sprinkling await tf.nextFrame() between calls to TF.js and using .data() instead of .dataSync()?
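
For example, a minimal sketch of that pattern (model and inputs stand in for your own code):

// Run predictions without monopolizing the UI thread.
async function predictAll(model, inputs) {
  const results = [];
  for (const input of inputs) {
    const output = model.predict(input);
    // .data() resolves asynchronously; .dataSync() would block the thread.
    results.push(await output.data());
    output.dispose();
    // Yield so the browser can repaint between predictions.
    await tf.nextFrame();
  }
  return results;
}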

prijindal commented May 26, 2018

@Giorat Yes, it is possible to do prediction inside a web worker as well.
So far I couldn't find anything in tfjs that specifically requires a DOM environment, so I'm guessing the whole library should work inside a web worker.

brannondorsey commented Jun 5, 2018

@Giorat and @prijindal, tfjs does indeed work in web workers as of today, but unfortunately only using the cpu backend, not webgl.

Supporting the webgl backend inside web workers seems like a huge advantage if it can be managed with the OffscreenCanvas API. As of today, it seems to me that there are two options for running tfjs with the webgl backend:

  1. Train/infer on small batch sizes, being careful to await tf.nextFrame() so as not to block the main UI thread (see the sketch below).
  2. Ignore tf.nextFrame() and run your tfjs operations with no requestAnimationFrame() throttling at all (web devs & users will hate you for this).

tf.nextFrame() (which uses requestAnimationFrame() underneath) is an interesting solution to a problem unique to doing ML in the browser, one that batch-processing ML frameworks in Python/C++ don't suffer from. Without web workers, tfjs will always have to share the main UI thread, and as a result its operations will always be throttled and slowed down. As I see it, support for the webgl backend inside web workers would be a huge step toward making tfjs a first-class citizen among ML frameworks. I don't mean to say it isn't amazing as is, but without "multithreaded" support that can also leverage WebGL, tfjs will always be bound to a throttling API that was developed for video games and animation, not for the kind of batch processing standard in machine learning.

Have there been any recent movements or a road map to add support for the WebGL backend in web workers?
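
For concreteness, here's a sketch of option 1 applied to training (getBatch and numBatches are illustrative placeholders, not tfjs API):

// Train in small chunks, handing control back to the UI between batches.
async function trainWithoutFreezing(model, getBatch, numBatches) {
  for (let i = 0; i < numBatches; i++) {
    const { xs, ys } = getBatch(i);
    await model.fit(xs, ys, { batchSize: 32, epochs: 1 });
    xs.dispose();
    ys.dispose();
    // Let the browser repaint before the next batch.
    await tf.nextFrame();
  }
}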

oeway commented Jun 29, 2018

I think it's definitely necessary to support web workers; having cpu and webgl computation running in web workers would be awesome.
The situation I am facing right now is this: I have web workers doing preprocessing on data that will then be passed to tfjs for training, and I've encountered two related issues:

  1. In a web worker, I just tried the CDN build of tfjs:
importScripts("https://cdn.jsdelivr.net/npm/@tensorflow/tfjs")

And I get Error: Script error., which doesn't even give me the chance to set the backend to cpu.
@brannondorsey how did you get tfjs to work in a web worker with the cpu backend?

  2. I tried to see whether tensors can be sent through postMessage, so I ran code with tfjs in a sandboxed iframe. When I create a tensor and send it with postMessage, I get this object {isDisposedInternal: false, size: 4, shape: Array(2), dtype: "float32", strides: Array(1), …}, but it can't be used in tfjs; the data is lost.
    Would it be possible to send tensors through postMessage?
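
One possible workaround (a sketch, not an official API): the tensor object itself doesn't survive structured cloning, but its underlying values do. Download them to a typed array, post that together with the shape, and rebuild the tensor on the receiving side:

// Sender: extract raw values and shape; transfer the buffer to avoid a copy.
async function sendTensor(tensor, worker) {
  const values = await tensor.data(); // e.g. a Float32Array
  worker.postMessage({ values, shape: tensor.shape }, [values.buffer]);
}

// Receiver: reconstruct an equivalent tensor from the plain data.
onmessage = (e) => {
  const { values, shape } = e.data;
  const tensor = tf.tensor(values, shape);
  // ... use `tensor` here ...
};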

@akofman referenced this issue Jul 16, 2018: Webworker #47 (open)

sandipdeveloper commented Jul 24, 2018

Hi all,

Is there sample code for using tfjs in a web worker? I have tried something like this, but it does not work:

importScripts("https://cdn.jsdelivr.net/npm/@tensorflow/tfjs")
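
For what it's worth, a minimal sketch along those lines (the pinned version number is illustrative; the cpu backend is forced because webgl isn't available in workers):

// worker.js
importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.12.0');
tf.setBackend('cpu'); // no webgl in a worker without OffscreenCanvas

onmessage = async (e) => {
  // A trivial op to confirm tfjs runs in this context.
  const squared = tf.tensor1d([1, 2, 3]).square();
  postMessage(await squared.data());
};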

brannondorsey commented Aug 10, 2018

It looks like the OffscreenCanvas API will be supported by default in the upcoming Chrome 69 release and beyond. Once this occurs, what would be the necessary steps to get WebGL support in a web worker with tfjs?
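
Presumably step one is simply owning a GL context inside the worker; an exploratory sketch (how tfjs would then consume this context is exactly the open question):

// worker.js — with OffscreenCanvas, a worker can create a WebGL context
// without any DOM canvas.
const offscreen = new OffscreenCanvas(256, 256);
const gl = offscreen.getContext('webgl');
postMessage(gl
  ? `WebGL in worker: ${gl.getParameter(gl.VERSION)}`
  : 'OffscreenCanvas present, but no WebGL context');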

woudsma commented Aug 20, 2018

It's too bad that the OffscreenCanvas API isn't available in most browsers yet, or anytime soon.
Loading a small MobileNetV2 model (~1 MB) converted with tensorflowjs_converter and running predict adds noticeable UI lag when using a single thread.

@sandipdeveloper @oeway This is my very hacky solution so far. (I'm using React, so don't mind the WorkerProxy workaround.) The model should work inside a Web Worker by setting tf.setBackend('cpu').

Getting a prediction using a MobileNetV2_0.50_224 model takes ~18 seconds on my MacBook Pro (compared to ~50 ms using the default webgl backend). The UI lag is gone, though.
Ideas/improvements are greatly appreciated!

Link to the steps I took to retrain the model.

ModelWorker.js

/* eslint-disable */
export default function ModelWorker() {
  // Alias the worker global as `window` for libraries that expect one.
  this.window = this
  importScripts('https://cdn.jsdelivr.net/npm/setimmediate@1.0.5/setImmediate.min.js')
  importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core')
  // Keep a reference to tfjs-core before the full tfjs build replaces `tf`.
  this.tfc = this.tf
  importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.10.3')
  importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter')

  // Merge both builds into a single `tf` object.
  this.tf = _objectSpread(this.tf, this.tfc) // this.tf = { ...this.tf, ...this.tfc }
  // webgl isn't available in a worker, so force the cpu backend.
  tf.setBackend('cpu')

  onmessage = async (e) => {
    postMessage('(worker) Loading model')
    const { MODEL_URL, WEIGHTS_URL, IMAGE_SIZE } = e.data
    const model = await tf.loadFrozenModel(MODEL_URL, WEIGHTS_URL)
    postMessage('(worker) Model loaded')
    const input = tf.zeros([1, IMAGE_SIZE, IMAGE_SIZE, 3])
    const t0 = performance.now()
    postMessage('(worker) Predicting..')
    await model.predict({ Placeholder: input })
    postMessage(`(worker) Prediction took ${(performance.now() - t0).toFixed(1)} ms`)
  }

  // ES6 polyfills
  function _defineProperty(obj, key, value) {
    return key in obj
      ? Object.defineProperty(obj, key, {
        value,
        enumerable: true,
        configurable: true,
        writable: true,
      })
      : obj[key] = value
  }

  function _objectSpread(target) {
    for (let i = 1; i < arguments.length; i += 1) {
      const source = arguments[i] != null ? arguments[i] : {}
      let ownKeys = Object.keys(source)
      if (typeof Object.getOwnPropertySymbols === 'function') {
        ownKeys = ownKeys.concat(Object
          .getOwnPropertySymbols(source)
          .filter(sym => Object.getOwnPropertyDescriptor(source, sym).enumerable))
      }
      ownKeys.forEach(key => _defineProperty(target, key, source[key]))
    }
    return target
  }
}

WorkerProxy.js

export default class WorkerProxy {
  constructor(worker) {
    const code = worker.toString()
    const src = code.substring(code.indexOf('{') + 1, code.lastIndexOf('}'))
    const blob = new Blob([src], { type: 'application/javascript' })
    return new Worker(URL.createObjectURL(blob))
  }
}

(this should be easier with a future version of react-scripts..)

SomeComponent.js

import WorkerProxy from './WorkerProxy'
import ModelWorker from './ModelWorker'

const ASSETS_URL = `${window.location.origin}/assets`
const MODEL_URL = `${ASSETS_URL}/model/tensorflowjs_model.pb`
const WEIGHTS_URL = `${ASSETS_URL}/model/weights_manifest.json`
const LABELS_URL = `${ASSETS_URL}/model/labels.json`
const IMAGE_SIZE = 224

if (window.Worker) {
  const worker = new WorkerProxy(ModelWorker)
  worker.addEventListener('message', e => console.log(e.data))
  worker.postMessage({ MODEL_URL, WEIGHTS_URL, IMAGE_SIZE })
  // Load labels, etc.
}

...

It would be nice if there were a version of something like webgl-worker that could be used with TensorFlow.js.

As adoption of the OffscreenCanvas API will take time (longer than I can afford), all suggestions for possible workarounds are very welcome!
