adding a getting started section to the readme

commit 418146e2b6f32aa8c5ef37dadf7260c035a9a88f 1 parent 01c6414
Jeremy Lightsmith authored

Showing 1 changed file with 29 additions and 0 deletions.

@@ -22,6 +22,35 @@ We want to recognize:
 - Individual fingers
 - Finger-based hand gestures (e.g. peace sign, etc)
 
+Getting Started
+---------------
+
+Right now you'll want to be on Chrome on OS X. Then:
+
+- Plug your Kinect into your computer via USB
+- Download this code
+- Install the Chrome extension:
+  - Bring up the extensions management page by clicking the wrench icon and choosing *Tools* > *Extensions*.
+  - If *Developer mode* has a + next to it, click the + to show the developer options. The + changes to a -, and more buttons and information appear.
+  - Click the *Load unpacked extension* button. A file dialog appears.
+  - In the file dialog, navigate to the *depthjs/chrome-extension-mac* directory and click *OK*.
+- Open a new web page (the extension only affects pages opened after it is installed)
+- Have fun!
+
+What the camera sees...
+
+- When the extension starts up, it opens three windows: *blob*, *first*, and *second*.
+- *first* and *second* show the near and far limits of the depth range the camera pays attention to; try to position your hand so that it, and not much else, appears in both.
+- *blob* shows everything the camera can see, plus a blue circle around where it thinks your hand is.
+
+After opening a new page in Chrome:
+
+- Pull your hand back so that both the *first* and *second* windows are blank and there is no circle in the *blob* window.
+- Then bring your hand forward until it gets an outline in the *blob* window, and pause for a second until the blue circle appears.
+- You should then see a blue circle on the web page that tracks your movements.
+- To click, just close your hand into a fist!
+
+
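The interaction model in the steps above (an open hand moves the pointer, closing into a fist clicks) can be sketched as a tiny state machine. This is only an illustration: the event names (`move`, `fist`) and the state fields are assumptions for the sketch, not DepthJS's actual protocol.

```python
# Illustrative sketch of the pointer model described above: an open hand
# moves a blue-circle pointer, and closing the hand into a fist clicks.
# The event names ("move", "fist") are assumptions, not DepthJS's real API.

def step(state, event):
    """Advance the pointer state by one computer-vision event."""
    kind = event[0]
    if kind == "move":            # open hand tracked: update position
        _, x, y = event
        return {"x": x, "y": y, "clicked": False}
    if kind == "fist":            # hand closed: click at current position
        return {**state, "clicked": True}
    return state                  # ignore anything unrecognized

pointer = {"x": 0, "y": 0, "clicked": False}
for ev in [("move", 120, 80), ("move", 130, 85), ("fist",)]:
    pointer = step(pointer, ev)
# pointer now holds the click position: {"x": 130, "y": 85, "clicked": True}
```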
 Components
 ----------
 DepthJS is very modular. The Kinect driver and computer vision are written on top of openFrameworks and OpenCV in C++. This component can output the raw RGB image, the raw depth map (filtered for the hand), and the high-level events that the computer vision recognizes. The three outputs are pumped out on three separate 0MQ TCP sockets. Next, a Tornado web server (written in Python) takes the 0MQ data and wraps it in a WebSocket, which is what enables the web browser extension to receive the data. Finally, a pure JavaScript extension connects to the WebSocket server to receive the events. Event handlers may be registered globally, in content scripts injected into each web page, or pushed via the content script to local DOM elements written by third parties.
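The pipeline described above (C++ vision component → 0MQ sockets → Tornado server → WebSocket → browser extension) can be sketched in miniature. The snippet below shows only the message-wrapping step, in plain Python with no real 0MQ or Tornado dependency; the `"type"`/`"data"` envelope is an illustrative assumption, since the README does not specify DepthJS's wire format.

```python
import json

# Minimal sketch of the bridge step described above: take one raw event
# message as it might arrive on the 0MQ event socket and wrap it as the
# JSON text frame a WebSocket client (the browser extension) would receive.
# The "type"/"data" envelope is an assumption, not DepthJS's documented
# wire format.

def wrap_event(raw_msg: bytes) -> str:
    event = json.loads(raw_msg.decode("utf-8"))
    return json.dumps({"type": event.get("type", "unknown"), "data": event})

# A hand-position event as the vision component might emit it:
raw = b'{"type": "move", "x": 120, "y": 80}'
frame = wrap_event(raw)
# frame is a JSON string carrying the original event under "data"
```

In the real system, a Tornado `WebSocketHandler` would call something like this wrapper for each 0MQ message and push the resulting text frame to every connected browser.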
