This repository has been archived by the owner on May 27, 2019. It is now read-only.
Hovig Ohannessian edited this page Nov 21, 2017 · 11 revisions

Short Name

Upload an image to IBM Cloud, analyze it, and receive status alerts

Short Description

Build an IoT project with IBM Cloud Functions (serverless), Watson Visual Recognition, Node-RED, and Node.js, along with the Watson IoT Platform

Offering Type

  • Emerging Technologies

Introduction

Build a bundle of apps that insert images into a Cloudant database, analyze them, and, based on the analysis, trigger an alert indicating whether there is a danger.

Authors

Code

  • Please let us know what you think about this project. Comment on it, open issues, give us suggestions, or reach out to us.

Demo

Video

Overview

Industrial or high-tech maintenance companies usually start with a concept for analyzing images and informing the responsible person that action should be taken. The application set up in this tutorial does exactly that.

This tutorial avoids the need for a real camera by using an application that uploads and inserts an image into a Cloudant database in the form of binary events. The app acts like a real device.

You can run this app, or similar apps, on devices such as a Raspberry Pi to upload the captured images.
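As a concrete illustration of the binary-upload step, the sketch below builds a Cloudant/CouchDB document that carries an image as an inline (base64-encoded) attachment, which is the standard CouchDB attachment format. The helper `buildImageDoc` is hypothetical, not code from the repo:

```javascript
// Sketch: wrap raw image bytes in a Cloudant/CouchDB document using the
// standard inline-attachment format. buildImageDoc is a hypothetical
// helper, not a function from this repo.
function buildImageDoc(fileName, imageBuffer, contentType) {
  return {
    _id: 'image:' + Date.now(),          // simple unique-ish document id
    type: 'image-upload',
    createdAt: new Date().toISOString(),
    _attachments: {
      [fileName]: {
        content_type: contentType,       // e.g. 'image/jpeg'
        data: imageBuffer.toString('base64')  // CouchDB expects base64 here
      }
    }
  };
}

// Example: wrap some bytes as if they were a captured photo.
const imageDoc = buildImageDoc('capture.jpg', Buffer.from('fake-image-bytes'), 'image/jpeg');
console.log(Object.keys(imageDoc._attachments)); // attachment keyed by file name
```

The uploader app would then POST a document like this to the database; Cloudant stores the decoded bytes and the insert event is what triggers the downstream processing.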

A real-world architecture design is implied in this tutorial. IBM Cloud Functions makes a REST call to the Watson Visual Recognition service and converts the binaries into JSON events. In turn, these events are sent to the IoT Platform, under a registered gateway, as processed image data.

This processed data has already been evaluated by the Visual Recognition service, so the Node-RED flow captures the exceptions and triggers alerts based on that data.
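The conversion step can be sketched as a small function that flattens a Visual Recognition classify response into the kind of event the IoT Platform receives. The response shape below follows Visual Recognition v3; `toIotEvent` and the 0.5 alert threshold are illustrative assumptions, not code from the repo:

```javascript
// Sketch: turn a Watson Visual Recognition v3 classify response into a
// flat JSON event for the IoT Platform. toIotEvent and the threshold
// are assumptions for illustration.
function toIotEvent(vrResponse) {
  // Pick the highest-scoring class across all classifiers of the first image.
  const classes = (vrResponse.images[0].classifiers || [])
    .flatMap(c => c.classes)
    .sort((a, b) => b.score - a.score);
  const top = classes[0] || { class: 'unknown', score: 0 };

  return {
    image: top.class,
    score: top.score,
    alert: top.score >= 0.5 ? 'EMERGENCY ALERT!' : 'OK',
    time: new Date().toUTCString()
  };
}

const sample = {
  images: [{ classifiers: [{ classes: [{ class: 'fire', score: 0.679 }] }] }]
};
console.log(toIotEvent(sample).image); // "fire"
```

The Node-RED flow downstream only needs to look at the `image`, `score`, and `alert` fields of events shaped like this.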

Flow

architecture-diagram

Architecture

IMPORTANT: For more detailed step-by-step instructions, please make sure you visit the README.md page on the GitHub repo.

The diagram above presents six steps. It is best to proceed as follows:

  1. The viz-send-image-app folder can be executed locally or pushed to the cloud if you prefer. It contains the app UI to upload and insert an image into the Cloudant database. Copy and paste your Cloudant and IoT Platform credentials into credentials.json (in viz-send-image-app)

  2. A Cloudant database comes included in the Node-RED package, so create a Node-RED package from the IBM Cloud Catalog

  3. Create IBM Cloud Functions (previously OpenWhisk) from the Catalog. Copy and paste your Cloudant, IoT Platform, and Visual Recognition credentials into credentials.cfg (in viz-openwhisk-functions)

  4. Create the Watson Visual Recognition service from the Catalog

  5. Create the Watson IoT Platform as well and bind it to the Node-RED package

  6. Copy and paste the JSON flow into your Node-RED editor. Make sure the ibmiot nodes in Node-RED have the correct information from the IoT Platform
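Steps 1 and 3 both ask for service credentials, which you copy from each service's Credentials page in the IBM Cloud dashboard. The exact key names are defined by the files in the repo; the fragment below is only a hypothetical sketch of the kind of values credentials.cfg holds:

```
# Hypothetical sketch of credentials.cfg -- the real key names are
# defined in viz-openwhisk-functions in the repo.
CLOUDANT_USERNAME=<your-cloudant-username>
CLOUDANT_PASSWORD=<your-cloudant-password>
CLOUDANT_DB=<your-images-database>
VISUAL_RECOGNITION_API_KEY=<your-visual-recognition-api-key>
IOT_ORG_ID=<your-iot-platform-org-id>
IOT_AUTH_TOKEN=<your-device-auth-token>
```

credentials.json (in viz-send-image-app) carries the same kind of information for the uploader app, in JSON form.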

Make sure the Internet of Things Platform, Visual Recognition, and Cloudant services appear in your Bluemix Dashboard -> Connections.

It's worth mentioning that IBM Cloud Functions requires a setup of its own.

Included components

Featured technologies

  • IBM IoT Platform
  • Node-RED
  • JavaScript
  • IBM Cloud Functions (serverless)
  • Watson Visual Recognition

Blog post

Imagine you get an alert about a picture you took of an emergency scene or an alarming situation, whether from a drone, a digital camera, or a phone camera, and you need to report it as quickly as possible to your company or the authorities.

Developers can build applications that do just that automatically. They can accomplish this with IBM Cloud Functions, which analyzes an image and sends the result to the Watson IoT Platform; the resulting score is then evaluated to trigger an alert that reaches the authorities through the best communication channel (email, text, push notifications, etc.).

Developers have the option to build this as a standalone application that can easily be replaced or adapted to work from within a smart device, or to run it in a browser on a laptop or phone.

This project sends an image for processing; in this case, we are detecting a fire (developers can use the same app for maintenance alerts or other emergency detections). Here, a fire is identified by Watson Visual Recognition, and the Node-RED app subsequently notifies the appropriate resources.

There are multiple ways to design this process. Developers can take this project and make minor tweaks to extend it to other real-world use cases. Leverage the IBM Cloud and make sure alerts are sent to the designated recipients through the designated notification channels.

The following points give an overall explanation of how this project works and what it does:

  • The application takes images from devices, or uploads them from local image folders, to the Cloudant NoSQL database

  • The Cloudant database, in turn, receives the binary data and triggers an action in IBM Cloud Functions

  • The IBM Cloud Functions Composer performs the Visual Recognition analysis and receives a response in JSON format

  • The response is sent to the IoT Platform under a registered device, which receives the analyzed image data.

  • The Node-RED flow then reads these events from the device on the IoT Platform and triggers alerts based on the image's features. For example:

iot-2/type/Device/id/motor1/evt/eventData/fmt/json

  • image: fire
  • score: 0.679
  • alert: EMERGENCY ALERT!
  • time: Tue Oct 24 2017 01:20:49 GMT+0000 (UTC)
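The alerting check that a Node-RED "function" node could run on each such event can be sketched as below. The payload fields match the example event above; `evaluateEvent`, the 0.5 threshold, and the two-output routing are illustrative assumptions, not the flow from the repo:

```javascript
// Sketch of a Node-RED function-node style check: route an IoT Platform
// event either to an alert branch or a normal logging branch.
// evaluateEvent and the threshold are assumptions for illustration.
function evaluateEvent(msg) {
  const { image, score } = msg.payload;
  if (image === 'fire' && score > 0.5) {
    msg.payload.alert = 'EMERGENCY ALERT!';
    return [msg, null];   // first output: alert branch (e.g. email/push nodes)
  }
  return [null, msg];     // second output: normal logging branch
}

const event = { payload: { image: 'fire', score: 0.679 } };
console.log(evaluateEvent(event)[0].payload.alert); // "EMERGENCY ALERT!"
```

In an actual flow, the two outputs of such a node would be wired to notification nodes (email, push) and to a debug or storage node, respectively.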

It's probably the simplest serverless project example for IoT and Visual Recognition. For more detailed instructions, visit my project's README.md.

I look forward to hearing what you've come up with using this project.

Related links