Depth Explanation
Our depth program is built around the Xbox 360 Kinect, but it can be adjusted to work with other depth cameras as well.
We first process the depth image by applying the Laplacian operator, which acts as an edge detector. Using the OpenCV library, we identify contours of interest and eliminate small anomalies, also known as noise, in the image. We then find the center of each remaining contour and use the depth image's intrinsic property to calculate the distance to the center of our object. Finally, we do a color check at the object's center point with the Kinect's RGB camera: if the pixel is grayish, we declare it a gray tote, and so on.
We then run the object through a series of checks to compute various statistics about the game piece, such as the rotation of the robot toward the object. We also calculate how tall the object is to determine how many totes are in the stack we are looking at, and whether there is a green bin on top of the stack. Lastly, if the object is a tote, we determine its offset from the camera's centerline in degrees.
With all of this information, we fill a vector of `Game_Piece`, our custom class with the following variables:

```cpp
float x_rot;
float distance;
float rotation;
int totes_high;
// ints: -1 = default, 1 = gray tote, 2 = yellow tote, 3 = bin
int piece_type;
bool green_bin_top;
```
With this information, a robot is able to autonomously pick up any game piece and stack it on any other game piece, given that the robot has the required capabilities.