
Browser support (discussion) #3

Open
mishushakov opened this issue Dec 6, 2020 · 10 comments

Comments

@mishushakov
Contributor

mishushakov commented Dec 6, 2020

Hey there,
I was able to load the model successfully in the browser using TensorFlow.js.

[Screenshot 2020-12-06 at 15 18 29]

To convert the models you can use the TensorFlow.js converter:

pip install tensorflowjs
tensorflowjs_converter --input_format keras models/ts9x/ts9.h5 models/ts9/lite

This issue is a backlog for inference and (possibly) training of models in the browser.

@mishushakov mishushakov changed the title Browser capabilities (discussion) Browser support (discussion) Dec 6, 2020
@GuitarML
Owner

GuitarML commented Dec 8, 2020

Very cool! I'll need to read up more on how that would work, but the fact that it loads is a good start. Also, the Colab notebook seems to work fairly well for training with the split_data param I added; I need to test that more, though.

@mishushakov
Contributor Author

Sharing some great progress on the browser runtime:

  1. Loaded the models (and weights) into a webpage

  2. Modified predict.py to save the X_ordered tensor as a JSON file (for an explanation see Prepare data with tensorflow? #5)

    X_ordered = tf.gather(X, indices)

    Note: due to the large file size I was only able to export one second of the audio (65 megabytes)

  3. Loaded the tensor into TensorFlow.js and was able to run the prediction;
    the first one is the input tensor and the second one is the output tensor:

    [Screenshot 2020-12-08 at 04 55 29]
  4. Loaded the resulting float32 data using Audacity's "Import Raw" function

    [Screenshot 2020-12-08 at 06 06 48]
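As a plain-JavaScript sketch of what the `tf.gather(X, indices)` step in predict.py produces (no tfjs required): GuitarLSTM-style models predict each output sample from a window of preceding input samples, so the exported tensor is a stack of overlapping windows. The function name and window layout below are illustrative assumptions, not code from the actual repo.

```javascript
// Build overlapping input windows of `inputSize` samples, one window per
// prediction step. This mirrors what gathering X with sliding indices does.
// (Illustrative sketch; names are not from the GuitarLSTM source.)
function makeWindows(audio, inputSize) {
  const windows = [];
  for (let i = 0; i + inputSize <= audio.length; i++) {
    windows.push(Array.from(audio.slice(i, i + inputSize)));
  }
  return windows; // shape: [numWindows, inputSize]
}
```

Since every input sample gets repeated roughly `inputSize` times across windows, a JSON export of this tensor blows up quickly, which would explain the 65 MB for just one second of audio.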

The prediction ran slowly on my 13-inch MacBook Pro (iGPU), but I will run a test on a PC with a discrete GPU.
I'd love to share the code once I figure out how to save the bytes as WAV; I tried just writing them into a .wav file, but that didn't work.
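One way to turn the raw float32 output into a playable file is to prepend a standard RIFF/WAV header; writing the bare samples with no header is likely why the direct write didn't work. Below is a minimal Node.js sketch assuming mono, 44.1 kHz, IEEE-float samples (the function name and defaults are mine, not from the repo).

```javascript
// Wrap raw float32 samples in a 44-byte WAV (RIFF) header so the model
// output can be opened directly, without Audacity's "Import Raw".
// Assumes mono, IEEE-float (format code 3), 32 bits per sample.
function floatToWav(samples, sampleRate = 44100) {
  const dataSize = samples.length * 4;
  const buf = Buffer.alloc(44 + dataSize);
  buf.write('RIFF', 0);
  buf.writeUInt32LE(36 + dataSize, 4);   // RIFF chunk size
  buf.write('WAVE', 8);
  buf.write('fmt ', 12);
  buf.writeUInt32LE(16, 16);             // fmt chunk size
  buf.writeUInt16LE(3, 20);              // format 3 = IEEE float
  buf.writeUInt16LE(1, 22);              // channels: mono
  buf.writeUInt32LE(sampleRate, 24);
  buf.writeUInt32LE(sampleRate * 4, 28); // byte rate
  buf.writeUInt16LE(4, 32);              // block align
  buf.writeUInt16LE(32, 34);             // bits per sample
  buf.write('data', 36);
  buf.writeUInt32LE(dataSize, 40);
  for (let i = 0; i < samples.length; i++) {
    buf.writeFloatLE(samples[i], 44 + i * 4);
  }
  return buf;
}

// e.g. require('fs').writeFileSync('predict.wav', floatToWav(outputData));
```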

In the zip file you'll find in.wav and predict.wav (again, generated through a browser).

Demo.zip

@mishushakov
Contributor Author

I actually spent the whole night getting that to work, but it was totally worth it.

@GuitarML
Owner

GuitarML commented Dec 8, 2020

Great stuff! Yeah, once you get on a roll it's hard to stop. Get some sleep!

@mishushakov
Contributor Author

mishushakov commented Dec 8, 2020

I kept my promise and made the web browser version possible.
Try it out yourself: https://mishushakov.github.com/GuitarLSTM-browser

Get the example tensor here (it's so big I had to use Git LFS): https://github.com/mishushakov/GuitarLSTM-browser/raw/master/samples/tensor.json
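For anyone poking at that tensor.json by hand: assuming the export is a plain nested JSON array of floats (an assumption based on the predict.py change described earlier, not verified against the file), it can be flattened into a typed array before rebuilding a tfjs tensor. A dependency-free sketch:

```javascript
// Recursively flatten a nested JSON array of numbers into a Float32Array,
// suitable for handing to something like tf.tensor(flat, shape) later.
// (Shape and file layout are assumptions; check the actual export.)
function flattenTensor(nested) {
  const flat = [];
  (function walk(x) {
    if (Array.isArray(x)) x.forEach(walk);
    else flat.push(x);
  })(nested);
  return Float32Array.from(flat);
}
```

A binary format (or tfjs's own sharded weight files) would be far smaller than JSON, which is probably the longer-term fix for the Git LFS problem.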

If you're interested in how it all works, feel free to fork the repo: https://github.com/mishushakov/GuitarLSTM-browser

Thank you everyone for your help in making this possible.
To me this is just the beginning.

😃

@mishushakov
Contributor Author

So cool, it even works on my iPhone lol

@GuitarML
Owner

GuitarML commented Dec 8, 2020

@mishushakov Can't wait to try it out! Nice work!

@mishushakov
Contributor Author

On the training side it is possible to convert the TF.js models back to Keras (H5).
This means we can train models in the browser and load them into the plugin:

tensorflowjs_converter --input_format=tfjs_layers_model --output_format=keras models/ts9/model.json models/ts9/ts9.h5

Taken from https://github.com/tensorflow/tfjs/tree/master/tfjs-converter#javascript-to-python

@GuitarML
Owner

I think so, but for the plugin I was planning on using the JSON format for the models anyway. H5 is just another data format; as long as all the weights are in the JSON file we can make it work. And we can work together to make sure the models trained in the browser can be loaded into the future plugin.

H5 is nice because it compresses the data, but I think JSON is the better option because it's readable in any text editor, and I'm more familiar with loading JSON in C++.

@mishushakov
Contributor Author

mishushakov commented Dec 13, 2020

@GuitarML I'm not sure that's a good idea.
Give SavedModel a try.

Once you have that, you can load the model in every version of TensorFlow (you read that right: TensorFlow, not just Keras).
In C++ specifically you can use the TensorFlow C++ API or the TensorFlow Lite C++ API.

If that still isn't enough, you could use tensorflow-onnx to convert your model to ONNX and load it with ONNX Runtime.

I've actually been wondering whether we can make both networks run in your plugin.
One solution would be adding WaveNet to the TensorFlow version;
the other would be converting both to ONNX.
