
adding a getting started section to the readme

1 parent 01c6414 commit 418146e2b6f32aa8c5ef37dadf7260c035a9a88f @jeremylightsmith committed Mar 1, 2011
Showing with 29 additions and 0 deletions.
  1. +29 −0 README.md
README.md
@@ -22,6 +22,35 @@ We want to recognize:
- Individual fingers
- Finger-based hand gestures (e.g. peace sign, etc)
+Getting Started
+---------------
+
+Right now you'll need to be on Chrome on OS X, then:
+
+- Plug your Kinect into your computer via USB
+- Download this code from http://github.com/doug/depthjs
+- Install the Chrome extension
+ - Bring up the extensions management page by clicking the wrench icon and choosing *Tools* > *Extensions*.
+ - If *Developer mode* has a + by it, click the + to add developer information to the page. The + changes to a -, and more buttons and information appear.
+ - Click the *Load unpacked extension* button. A file dialog appears.
+ - In the file dialog, navigate to your *depthjs/chrome-extension-mac* directory and click *OK*.
+- Open a new web page (the extension only affects pages opened after it's installed)
+- Have fun!
+
+What the camera sees...
+
+- When the extension starts up, it opens three windows: *blob*, *first*, and *second*
+- *first* & *second* show the near and far limits of the depth range the camera is paying attention to; try to position yourself so that your hand is in both *first* and *second*, and not much else (see the sketch after this list)
+- *blob* shows you everything the camera can see, as well as a blue circle around where it thinks your hand is.
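+
+Conceptually, *first* and *second* correspond to a near and a far depth limit, and only the pixels between them are treated as hand candidates. Here is a minimal sketch of that kind of thresholding in Python/numpy (an illustration only, not DepthJS's actual C++/OpenCV code; the threshold values are made up):
+
+```python
+import numpy as np
+
+def segment_hand(depth_mm, near=500, far=800):
+    """Keep only the pixels whose depth (in mm) falls inside the band of interest."""
+    mask = (depth_mm > near) & (depth_mm < far)
+    return mask.astype(np.uint8) * 255  # binary image of hand candidates
+
+# a fake 480x640 depth frame, in millimeters
+frame = np.random.randint(400, 4000, size=(480, 640), dtype=np.uint16)
+hand_mask = segment_hand(frame)
+```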
+
+After opening a new page in Chrome,
+
+- Pull your hand back so that both the *first* and *second* windows are blank and there is no circle in the *blob* window
+- Then bring your hand forward until it gets an outline in the *blob* window, and pause for a second until the blue circle appears
+- You should then see a blue circle on the website that will track your movements
+- To click, just close your hand into a fist!
+
+
Components
----------
DepthJS is very modular. The Kinect driver and computer vision are written in C++ on top of openFrameworks and OpenCV. This component can output the raw RGB image, the raw depth map (filtered for the hand), and the high-level events that the computer vision recognizes. The three outputs are pumped out on three separate 0MQ TCP sockets. Next, a Tornado web server (written in Python) takes the 0MQ data and wraps it in a WebSocket, which is what enables the web browser extension to receive the data. Finally, a pure JavaScript-based extension connects to the WebSocket server to receive the events. Event handlers may be placed globally, in content scripts injected into each web page, or pushed via the content script to local DOM elements written by third parties.
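For concreteness, here is a minimal sketch of the middle hop, the 0MQ-to-WebSocket bridge, in Python using pyzmq and Tornado. The socket address, port, and URL path below are placeholders for illustration and are not taken from the actual DepthJS server.

```python
import zmq
from zmq.eventloop.zmqstream import ZMQStream
import tornado.ioloop
import tornado.web
import tornado.websocket

clients = set()  # currently connected WebSocket clients (browser extensions)

class EventSocket(tornado.websocket.WebSocketHandler):
    def check_origin(self, origin):
        return True  # demo only: accept connections from any origin

    def open(self):
        clients.add(self)

    def on_close(self):
        clients.discard(self)

def forward(frames):
    # Relay each 0MQ message verbatim to every connected WebSocket client.
    for msg in frames:
        for client in list(clients):
            client.write_message(msg)

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5555")  # hypothetical CV event socket
sub.setsockopt(zmq.SUBSCRIBE, b"")   # subscribe to all messages

ZMQStream(sub).on_recv(forward)      # pump 0MQ messages into the relay

app = tornado.web.Application([(r"/events", EventSocket)])
app.listen(8000)
tornado.ioloop.IOLoop.current().start()
```

With a relay like this running, the browser side only needs a plain WebSocket, e.g. `new WebSocket("ws://localhost:8000/events")`, to receive the event stream.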
