
Android Things TensorFlow image classifier sample

This sample demonstrates how to run TensorFlow inference on Android Things.

When a GPIO button is pushed, the current image is captured from an attached camera. The captured image is then converted and piped into a TensorFlow model that identifies what is in the image. Up to three labels returned by the TensorFlow network are shown in logcat and, if a display is attached, on screen. If a speaker is attached, the result is also spoken aloud using text-to-speech.
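The "up to three labels" step amounts to a top-k selection over the score array the network returns. A minimal plain-Java sketch of that selection (class and method names here are ours, not the sample's):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class TopLabels {
    // Return the indices of the k highest-scoring entries, best first.
    static List<Integer> topK(float[] scores, int k) {
        // Min-heap ordered by score: the head is always the weakest
        // of the candidates kept so far, so it can be evicted cheaply.
        PriorityQueue<Integer> pq = new PriorityQueue<>(
                k, Comparator.comparingDouble((Integer i) -> scores[i]));
        for (int i = 0; i < scores.length; i++) {
            pq.add(i);
            if (pq.size() > k) {
                pq.poll(); // drop the current lowest score
            }
        }
        List<Integer> best = new ArrayList<>();
        while (!pq.isEmpty()) {
            best.add(0, pq.poll()); // heap pops ascending; prepend to reverse
        }
        return best;
    }

    public static void main(String[] args) {
        float[] scores = {0.05f, 0.60f, 0.10f, 0.25f};
        System.out.println(topK(scores, 3)); // → [1, 3, 2]
    }
}
```

In the sample app the indices would then be mapped through the label file that ships with the model to produce human-readable names.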

This project is based on the TF_Classify app from the TensorFlow Android Camera Demo. The model was trained using Google's Inception architecture, and the trained graph is used to run inference and generate classification labels via the TensorFlow Android Inference APIs.
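The "converted and piped" step above means turning camera pixels into the normalized float tensor the Inception model expects. A self-contained sketch, assuming the TF_Classify conventions (224x224 input, image mean 117, std 1 — verify these against the model you actually ship; the class and constant names are hypothetical):

```java
public class InceptionPreprocess {
    // Assumed values from the TF_Classify demo; adjust for your model.
    static final int INPUT_SIZE = 224;
    static final float IMAGE_MEAN = 117f;
    static final float IMAGE_STD = 1f;

    // Convert packed ARGB pixels (as returned by Bitmap.getPixels)
    // into the flat, normalized float RGB array fed to the network.
    static float[] toFloatTensor(int[] pixels) {
        float[] values = new float[pixels.length * 3];
        for (int i = 0; i < pixels.length; i++) {
            int p = pixels[i];
            values[i * 3]     = (((p >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD; // R
            values[i * 3 + 1] = (((p >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD;  // G
            values[i * 3 + 2] = ((p & 0xFF) - IMAGE_MEAN) / IMAGE_STD;         // B
        }
        return values;
    }

    public static void main(String[] args) {
        // One opaque pixel with R=128, G=112, B=96:
        float[] t = toFloatTensor(new int[]{0xFF807060});
        System.out.printf("%.0f %.0f %.0f%n", t[0], t[1], t[2]); // → 11 -5 -21
    }
}
```

The resulting array is what gets handed to the inference API before fetching the output scores.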

The AAR in app/libs is built by combining the native libraries for x86 and ARM platforms from the Android TensorFlow inference library. By using this AAR, the app does not need to be built with the NDK toolset.
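Consuming a local AAR like this is plain Gradle configuration; a sketch of the relevant app/build.gradle pieces (the AAR file name below is an assumption — use whatever is actually checked into app/libs):

```groovy
// app/build.gradle — pull a prebuilt AAR from app/libs (Gradle 2.x era syntax)
repositories {
    flatDir {
        dirs 'libs'  // resolve file-based artifacts from app/libs
    }
}

dependencies {
    // hypothetical name; match the AAR file shipped in app/libs
    compile(name: 'tensorflow-android-inference', ext: 'aar')
}
```

Because the AAR already bundles the x86 and ARM native libraries, no NDK build step or jniLibs wiring is needed in the app module.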

Note: this sample requires a camera. Find an appropriate board in the documentation.

Pre-requisites


  • Android Things compatible board, e.g. Raspberry Pi 3
  • Android Things compatible camera, e.g. the Raspberry Pi camera module
  • Android Studio 2.2+
  • The following individual components:
    • 1 push button
    • 2 resistors
    • 1 LED light
    • 1 breadboard
    • jumper wires
    • Optional: speaker or earphone set
    • Optional: HDMI display or Raspberry Pi display



Build and Install

In Android Studio, click the "Run" button. If you prefer to run on the command line, type:

./gradlew installDebug
adb shell am start com.example.androidthings.imageclassifier/.ImageClassifierActivity

If you have everything set up correctly:

  1. Reboot the device so that all permissions are granted; see Known issues in the release notes
  2. Wait until the LED turns on
  3. Point the camera at something like a dog, a cat, or a piece of furniture
  4. Push the button to take a picture
  5. The LED turns off while inference is running. On a Raspberry Pi 3, capturing the picture and running it through TensorFlow takes less than one second, plus some extra time to speak the results through text-to-speech
  6. Inference results appear in logcat and, if a display is connected, both the image and the results are shown
  7. If a speaker or earphones are connected, the results are spoken via text-to-speech


Copyright 2016 The Android Open Source Project, Inc.

Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.