
In recent years, object recognition has attracted increasing attention from researchers due to its numerous applications. For instance, object recognition enables collaborative robots to carry out tasks like searching for an object in an unstructured environment or retrieving a tool for a human coworker. In this study, we present a new technique f…


AndreBrasUC/Object_Recognition_From_RGBD_Data


Hello everybody! My name is André Brás and I'm a researcher at the University of Coimbra, Portugal, in the field of collaborative robotics, computer vision, and pattern recognition. Here, I'll give you a few details about this project so that you can easily use it. This project introduces an unsupervised approach for feature extraction from RGB-D data. The features can then be used to train several classifiers that perform object recognition. Experiments are conducted on a subset of 20 objects selected from the YCB Object and Model Set.

You can start by opening the script named "MAIN_YCB". This script uses the RGB-D images available in the dataset to build the corresponding point clouds, which are then used for feature extraction. The features are later used to train an Artificial Neural Network (ANN) that recognizes the different object classes. You can only run this script if you have downloaded the data and stored it in the "YCB_Object_Model_Set" folder. You will also need to install Python 2 to run the code that generates the point clouds, which is provided with the dataset. If you are unable to run this script, you can find its output in "YCB_Features". In this data file, the last column of the variable 'Features' is the ground truth of the corresponding sample. The easiest way to use this output is to open the wizard provided by Matlab (type "nnstart" in the command window) and follow the steps. I already provide 3 ANNs trained with visual features, shape features, and both visual and shape features; find them in the files named "YCB_Network_Visual_Features", "YCB_Network_Shape_Features" and "YCB_Network_All_Features", respectively.
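If you prefer to skip the wizard, the sketch below shows how the same training could be done programmatically with the Neural Network Toolbox. It assumes that "YCB_Features" loads a variable 'Features' with one sample per row and the ground truth (a positive integer label) in the last column, as described above; the hidden layer size and data split are illustrative choices, not the settings used in this project.

```matlab
% Minimal training sketch, assuming 'Features' holds one sample per row
% with the class label (a positive integer) in the last column.
load('YCB_Features');                     % provides the variable 'Features'

X = Features(:, 1:end-1)';                % one sample per column, as the toolbox expects
labels = Features(:, end)';               % ground truth from the last column
T = full(ind2vec(labels));                % one-hot targets for patternnet

net = patternnet(20);                     % hidden layer size is an arbitrary example
net.divideParam.trainRatio = 0.70;        % illustrative train/validation/test split
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

net = train(net, X, T);                   % same training the nnstart wizard performs
predicted = vec2ind(net(X));              % predicted class indices
accuracy = mean(predicted == labels)      % resubstitution accuracy (optimistic)
```

Note that the accuracy computed on the last line is measured on the full dataset, including the training samples, so the per-split performance reported by the training window is a more honest estimate.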

Afterwards, you should set up a Microsoft Kinect to detect and recognize real objects. However, you have to acquire new images, as the color images provided with the YCB Object and Model Set are very different from those delivered by the Kinect. Use the script named "MAIN_Kinect" for this purpose. Once you have collected images for all the objects in the subset you want to use, run the script named "From_Images_To_Features". This script uses the images to build the corresponding point clouds and performs the feature extraction. I already collected some examples, and you can find 300 samples of each object in the file named "Kinect_Features". You can then use these features and the script "Kinect_Network_Train" to train ANNs. Find some ANNs trained with visual features and with both visual and shape features in the folder "Kinect_Networks".
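As a quick check, a pre-trained network can be evaluated against the samples in "Kinect_Features". The sketch below assumes the .mat files store a variable named 'Features' (labels in the last column, as before) and a network variable named 'net'; the network file name is a hypothetical placeholder, so inspect the folder and use whos('-file', ...) to confirm the actual names.

```matlab
% Hedged evaluation sketch; the variable and file names below are assumptions.
S = load('Kinect_Features');                           % 300 samples per object
X = S.Features(:, 1:end-1)';                           % features, one sample per column
labels = S.Features(:, end)';                          % assumed ground truth column

% Hypothetical file name inside "Kinect_Networks"; check the folder contents.
N = load(fullfile('Kinect_Networks', 'Kinect_Network_All_Features'));
net = N.net;                                           % assumed name of the stored network

predicted = vec2ind(net(X));                           % classify all samples
fprintf('Accuracy: %.2f%%\n', 100 * mean(predicted == labels));
```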

Finally, the script "ONLINE_PICKING" allows you to use a KUKA LBR iiwa to pick objects placed on a table. This script needs the KUKA Sunrise Toolbox and the file "ONLINE_PICKING_DATA". This file contains the data needed to build the transformation matrix between the X and Y axes of the KUKA robot and of the Kinect, as well as some important positions. You should adapt this data to your own apparatus.
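For intuition, the sketch below shows one generic way such a planar Kinect-to-robot transformation can be computed from corresponding point pairs. The coordinates are made-up placeholders; in practice the calibration data comes from "ONLINE_PICKING_DATA", whose exact layout may differ from what is assumed here.

```matlab
% Generic planar calibration sketch; the point pairs are made-up placeholders.
kinectPts = [0.10 0.50; 0.40 0.55; 0.25 0.80];    % object (x, y) in the Kinect frame [m]
robotPts  = [0.62 -0.10; 0.65 0.20; 0.90 0.05];   % same points in the robot base frame [m]

% Solve [x_r, y_r] = [x_k, y_k, 1] * H in least-squares form,
% where H is a 3x2 matrix holding the rotation/scale and translation.
H = [kinectPts, ones(size(kinectPts, 1), 1)] \ robotPts;

% Map a newly detected object position into robot coordinates.
detected = [0.30, 0.60];                          % hypothetical Kinect measurement
target = [detected, 1] * H;                       % (x, y) in the robot frame
fprintf('Pick target: x = %.3f m, y = %.3f m\n', target(1), target(2));
```

With more than the minimum number of point pairs, the same backslash solve gives a least-squares fit, which makes the mapping more robust to measurement noise.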
