Doodle Recognition PWA with TensorFlow.js

This is an example PWA (Progressive Web App) that uses TensorFlow.js to perform doodle recognition.

It loads a pre-trained CNN model that was converted to TensorFlow.js format with tfjs-converter. The training code is shared at: https://github.com/maru-labo/doodle (this repository is actually a temporary copy of the example included in that repository).
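As a rough idea of what happens under the hood, here is a minimal sketch of loading and running such a converted model in the browser with a recent TensorFlow.js release. It is not the actual src/index.js (older TensorFlow.js versions used tf.loadFrozenModel instead of tf.loadGraphModel), and the model path, normalization, and the 'probabilities' node name are assumptions taken from the conversion notes below.

import * as tf from '@tensorflow/tfjs';

async function classifyDoodle(canvas) {
  // Load the converted model files (e.g. the ones under public/model).
  const model = await tf.loadGraphModel('model/model.json');

  // Turn the canvas content into a 1 x 28 x 28 x 1 float tensor, matching
  // the input shape reported by the SavedModel CLI below.
  // (How the pixels should be scaled depends on how the model was trained.)
  const input = tf.tidy(() =>
    tf.browser.fromPixels(canvas, 1)
      .resizeBilinear([28, 28])
      .toFloat()
      .div(255)
      .expandDims(0));

  // Run the graph and read the 'probabilities' output node.
  const output = model.execute(input, 'probabilities');
  const probabilities = await output.data();
  tf.dispose([input, output]);
  return probabilities;  // Float32Array of per-class probabilities
}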

It runs on reasonably recent versions of Chrome, Safari, Firefox, Mobile Safari, and Android Chrome (but not on Edge or IE). Safari and Mobile Safari have a start-up issue: they take a long time to become ready, but work fine once started.

You can try it out at: https://tfjs-doodle-recognition-pwa.netlify.com/

Screenshot

Building and Running

The following commands will start a web server on localhost:8080 and open a browser page with the demo.

cd tfjs-doodle-recognition-pwa
yarn        # Installs dependencies.
yarn start  # Starts a web server and opens a page. Also watches for changes.

After running yarn build, the public directory holds the deployable files. Note that these files need to be served over HTTPS to enable the PWA features (unless they are served from localhost).

Notes on Debugging

Because this is a PWA, the files involved are cached under Service Worker control (this is separate from the ordinary browser cache). Take care to ensure that each debug run actually picks up the latest file changes. Please refer to the document "Debugging Service Workers" for details, including other aspects of debugging PWAs and Service Workers.
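When you want to be certain you are starting from a clean state, the Service Worker registration and its caches can also be cleared from the DevTools console. The snippet below is a generic sketch, not part of this repository; the Application panel in Chrome DevTools exposes the same actions through its UI.

// Paste into the DevTools console on the app's page:
// unregister all Service Workers and delete all Cache Storage entries.
navigator.serviceWorker.getRegistrations()
  .then(regs => Promise.all(regs.map(reg => reg.unregister())));
caches.keys()
  .then(keys => Promise.all(keys.map(key => caches.delete(key))));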

Updating Pre-trained Model Data

For convenience, this example includes pre-trained model files under public/model (the original model comes from this .tar.gz file). The following steps illustrate how to update those model files:

  1. Obtain the SavedModel files of a newly trained model.
  2. Find out the model information needed for conversion to TensorFlow.js format and for subsequent execution with TensorFlow.js, using the SavedModel CLI (Command-Line Interface).
  3. Convert the SavedModel into TensorFlow.js format using tfjs-converter.
  4. Replace the model files under public/model with the updated ones.
  5. Update the source code accordingly, if required.

The SavedModel CLI and tfjs-converter need to be installed before proceeding.

# This is an example; update the command lines as needed.
# Step 1
tar -xzf model.tar.gz
# This will yield a directory ('export') that contains the model files.

# Step 2
saved_model_cli show --dir export/Servo/* --all
# This will emit lines like the following:
#     MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
#
#     signature_def['serving_default']:
#       The given SavedModel SignatureDef contains the following input(s):
#         inputs['image'] tensor_info:
#             dtype: DT_FLOAT
#             shape: (-1, 28, 28, 1)
#             name: image_1:0
#       The given SavedModel SignatureDef contains the following output(s):
#         outputs['classes'] tensor_info:
#             dtype: DT_INT64
#             shape: (-1)
#             name: classes:0
#         outputs['probabilities'] tensor_info:
#             dtype: DT_FLOAT
#             shape: (-1, 10)
#             name: probabilities:0
#       Method name is: tensorflow/serving/predict

# Step 3, 4
# Read the model tag and the output node names from the above output and run the converter:
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --saved_model_tags='serve' \
    --output_node_names='classes,probabilities' \
    export/Servo/* \
    public/model

# Step 5
# Update INPUT_NODE_NAME and OUTPUT_NODE_NAME in src/index.js as needed.

Note that, for simplicity's sake, the example code deals with a single output node. This limitation has no drawback in this example, as the probabilities node holds all the information needed for the demo. When you need to read output from more than one node, however, please refer to the other example.
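For reference, the difference boils down to the second argument of the model's execute call. The sketch below assumes the current GraphModel API and the node names shown in the conversion step above.

// Single output node (what this example does): execute returns one tensor.
const probs = model.execute(input, 'probabilities');

// Multiple output nodes: pass an array of names and get back an array of
// tensors in the same order.
const [classes, probabilities] = model.execute(input, ['classes', 'probabilities']);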

Another note: as mentioned in "Notes on Debugging" above, make sure that the new model files are actually the ones being used when you run the app after an update.