Deploying Watson Deep Learning Models to Browsers
This project includes sample code that shows how to train a model with TensorFlow and the Deep Learning service in Watson Studio, and how to deploy the model and access it from a web browser.

This project extends the open source project Emoji Scavenger Hunt, a web-based game that uses TensorFlow.js to identify objects seen by your webcam or mobile camera, directly in the browser. Emojis are shown, and you have to find the corresponding objects in the real world before the timer runs out.

This is a screenshot of the app running on an iPhone, where a hat is currently recognized:


I've deployed a live demo, but it will only work for you if you have items that look similar to the ones the model was trained on.

Check out the video for a quick demo.

In order to train the model, I've taken pictures of seven items: plug, soccer ball, mouse, hat, truck, banana, and headphones. Here is how the emojis map to the real objects. You can find the images in the data directory.



Get a free IBM Cloud lite account (no time restriction, no credit card required).

Create an instance of the Machine Learning service. From the credentials get the user name, password and the instance id.

Install the IBM Cloud CLI with the machine learning plugin and set environment variables by following these instructions.
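The environment variables to set are not listed here; a hypothetical sketch of what they might look like (the variable names are an assumption based on the Watson Machine Learning CLI documentation of that era, and all values are placeholders from your service credentials):

```shell
# Hypothetical: environment variables the bx ml plugin reads.
# Variable names are assumptions; replace all placeholder values with
# the credentials of your Machine Learning service instance.
export ML_ENV='<url from the credentials>'
export ML_USERNAME='<user name from the credentials>'
export ML_PASSWORD='<password from the credentials>'
export ML_INSTANCE='<instance id from the credentials>'
```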

Create an instance of the Cloud Object Storage service and create HMAC credentials by following these instructions. Make sure to use 'Writer' or 'Manager' access and note the aws_access_key_id and aws_secret_access_key for a later step.

Install and configure the AWS CLI by following these instructions.
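The `--profile ibm_cos` flag used in the commands below assumes a named profile in `~/.aws/credentials`; a sketch of what that profile might look like, using the HMAC keys noted above (the endpoint URL is passed on the command line instead):

```ini
# ~/.aws/credentials -- hypothetical profile matching --profile ibm_cos
[ibm_cos]
aws_access_key_id = <aws_access_key_id from the HMAC credentials>
aws_secret_access_key = <aws_secret_access_key from the HMAC credentials>
```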

Training of the Model

Clone this repo:

$ git clone <repository-url>

Create two buckets (use unique names):

$ aws --endpoint-url=<cos-endpoint> --profile ibm_cos s3 mb s3://nh-hunt-input
$ aws --endpoint-url=<cos-endpoint> --profile ibm_cos s3 mb s3://nh-hunt-output

Download and extract Mobilenet:

$ cd watson-deep-learning-javascript/data
$ wget <mobilenet-download-url>
$ tar xvzf mobilenet_v1_0.25_224.tgz 

Upload MobileNet and the training data to the input bucket (use your unique bucket name):

$ cd xxx/watson-deep-learning-javascript/data
$ aws --endpoint-url=<cos-endpoint> --profile ibm_cos s3 cp . s3://nh-hunt-input/ --recursive

Prepare the training:
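The tf-train.yaml manifest ties the training code to the two buckets. A hypothetical sketch of what it might contain (the field names follow the WML training-run manifest format of that era; all values are placeholders, and the compute and framework settings are assumptions):

```yaml
# Hypothetical sketch of tf-train.yaml -- all values are placeholders.
model_definition:
  name: emoji-scavenger-training
  framework:
    name: tensorflow
    version: '1.5'
  execution:
    command: python3 retrain.py
    compute_configuration:
      name: k80
training_data_reference:
  name: training_data
  connection:
    endpoint_url: <cos-endpoint>
    aws_access_key_id: <aws_access_key_id>
    aws_secret_access_key: <aws_secret_access_key>
  source:
    bucket: nh-hunt-input
  type: s3
training_results_reference:
  name: training_results
  connection:
    endpoint_url: <cos-endpoint>
    aws_access_key_id: <aws_access_key_id>
    aws_secret_access_key: <aws_secret_access_key>
  target:
    bucket: nh-hunt-output
  type: s3
```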

Invoke the training and check its status (replace the generated training run name with yours):

$ cd xxx/watson-deep-learning-javascript/model
$ bx ml train tf-train.yaml
$ bx ml list training-runs
$ bx ml monitor training-runs training-5PQK89IiR
$ bx ml show training-runs training-5PQK89IiR

Download the saved model:

$ cd xxx/watson-deep-learning-javascript/saved-model
$ aws --endpoint-url=<cos-endpoint> --profile ibm_cos s3 sync s3://nh-hunt-output .

Optionally, evaluate the model with TensorBoard (either from a Docker container or a Virtualenv):

$ cd xxx/watson-deep-learning-javascript/saved-model/training-0xebs3Iig
$ tensorboard --logdir=xxx/watson-deep-learning-javascript/saved-model/training-0xebs3Iig/retrain_logs

Deployment of the Web Application

Convert the model:

$ cd xxx/watson-deep-learning-javascript/convert
$ docker build -t model-converter .
$ cp -a xxx/watson-deep-learning-javascript/saved-model/training-qBnjUqImR/model/. xxx/watson-deep-learning-javascript/convert/data/saved_model/ 
$ docker run -v xxx/watson-deep-learning-javascript/convert/data:/data -it model-converter
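The model-converter image itself is not shown here; presumably it wraps the TensorFlow.js converter. A hypothetical sketch of what convert/Dockerfile might look like (the package name and flags come from the tfjs converter of that era; the output node name is an assumption for a retrained MobileNet):

```dockerfile
# Hypothetical convert/Dockerfile -- wraps tensorflowjs_converter.
FROM python:3.6
RUN pip install tensorflowjs
# Convert the TensorFlow SavedModel mounted at /data/saved_model into
# the web-friendly format the game later loads from dist/model/.
CMD tensorflowjs_converter \
      --input_format=tf_saved_model \
      --output_node_names='final_result' \
      --saved_model_tags=serve \
      /data/saved_model /data/saved_model_web
```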

Build the web application (more details):

Change your emojis in scavenger_classes.ts and game_levels.ts.

$ cp -a xxx/watson-deep-learning-javascript/convert/data/saved_model_web/. xxx/watson-deep-learning-javascript/emoji-scavenger-hunt/dist/model/
$ cd xxx/watson-deep-learning-javascript/emoji-scavenger-hunt
$ yarn prep
$ yarn build

Push the application to IBM Cloud (change host and name in manifest.yaml to something unique):

$ cd xxx/watson-deep-learning-javascript/emoji-scavenger-hunt
$ cf login
$ cf push

After this, you can open the application via the URL (host) you defined in manifest.yaml.

Deployment of the Model to Watson Studio

Deploy the model (change training id and model id):

$ bx ml store training-runs training-qBnjUqImR
$ bx ml deploy 0c78b7d6-9d22-4719-90da-ab649c0edc90 "my-deployment"

Generate payloads for predictions:

$ cd xxx/watson-deep-learning-javascript/predict
$ docker build -t generate-payload .
$ docker run -v xxx/watson-deep-learning-javascript/predict:/data -it -e file_name=ball.JPG generate-payload

Copy the model id, the deployment id, and the content of raw-payload.json into payload.json.
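A hypothetical sketch of how payload.json might combine these pieces (the field names are an assumption based on the WML scoring API of that era; all values are placeholders):

```json
{
  "modelId": "<model id from bx ml store>",
  "deploymentId": "<deployment id from bx ml deploy>",
  "payload": "<content of raw-payload.json>"
}
```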

Predict something for a test image:

$ cd xxx/watson-deep-learning-javascript/predict
$ bx ml score payload.json

To interpret the result, check output_labels.txt for the labels and their order.
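Since the scoring output is just a vector of class probabilities, picking the winner is a matter of pairing each score with the label at the same position in output_labels.txt. A small sketch with made-up scores (the label order here is taken from the seven training items above and is an assumption about your output_labels.txt):

```shell
# Pair each class probability with its label and print the best match.
# The label order must match output_labels.txt; the scores are made up.
printf '%s\n' plug 'soccer ball' mouse hat truck banana headphones > labels.txt
printf '%s\n' 0.01 0.02 0.05 0.85 0.03 0.02 0.02 > scores.txt
paste scores.txt labels.txt | sort -rn | head -1 | cut -f2-   # prints: hat
```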

As an alternative to the IBM Cloud CLI, you can also use curl. See the API documentation for details.