This project integrates a system for intelligent robotic handling of food products based on their images and point clouds: the products are detected in images by a neural network, their spatial position is determined from the point clouds, and grasping is planned and executed based on their geometry. The images and point clouds are acquired with an Intel RealSense D435 3D vision system mounted on a Universal Robots UR5 collaborative robot, with the vision system calibrated eye-in-hand with respect to the robot's gripper. A YOLO v5 neural network, trained on a purpose-built set of annotated images, detects the product in the images, while the Open3D point cloud processing library is used to determine the product's spatial position. The principal component analysis (PCA) algorithm serves as the basis for grasp planning. These components are tied together by a graphical user interface, and communication between the robot and the computer is realized over the TCP/IP protocol.
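The repository's actual communication layer is not shown in this README; as a rough illustration of the TCP/IP link described above, the minimal Python sketch below sends a URScript motion command to a UR controller. The IP address, pose, and motion parameters are placeholders, and port 30002 is assumed to be the controller's standard secondary interface for URScript strings; the project itself may use its own message format on top of TCP/IP.

# Minimal sketch (not the repository's communication layer): sending a URScript
# motion command to a UR controller over a plain TCP/IP socket.
import socket

ROBOT_IP = "192.168.0.100"   # hypothetical robot address
URSCRIPT_PORT = 30002        # assumed default UR secondary interface port

# Target pose as p[x, y, z, rx, ry, rz] in metres / axis-angle radians (placeholder values)
command = "movej(p[0.40, -0.20, 0.30, 0.0, 3.14, 0.0], a=0.5, v=0.25)\n"

with socket.create_connection((ROBOT_IP, URSCRIPT_PORT), timeout=5.0) as sock:
    sock.sendall(command.encode("utf-8"))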
The system works through the following steps:

Step 1.  Creating the dataset
Step 2.  Labeling the dataset
Step 3.  Training the YOLO v5 neural network on the labeled dataset
Step 4.  Retrieving the point cloud of the scene from the 3D camera and the robot
Step 5.  Filtering the point cloud of the scene (Steps 5–7 are sketched in the example after this list)
Step 6.  Isolating the point cloud of the object
Step 7.  Defining the positions for object handling
Step 8.  Visualizing the object handling
Step 9.  Handling the object with the robot
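Steps 5–7 can be illustrated with a short Open3D and NumPy sketch. The function name, the parameter values (voxel size, outlier-removal settings, RANSAC thresholds), and the assumption that the scene cloud has already been reduced to the neighborhood of the detected object are illustrative only and are not taken from the repository's code.

# Rough sketch of Steps 5-7 under assumed parameters: filter the cloud,
# isolate the object, and derive a grasp frame with PCA.
import numpy as np
import open3d as o3d

def estimate_grasp_frame(scene_cloud: o3d.geometry.PointCloud):
    # Step 5: filtering - voxel downsampling and statistical outlier removal
    cloud = scene_cloud.voxel_down_sample(voxel_size=0.003)
    cloud, _ = cloud.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Step 6: isolating - drop the dominant plane (e.g. the table) found by RANSAC;
    # in the project the YOLO v5 detection additionally limits the region of interest
    _, plane_indices = cloud.segment_plane(distance_threshold=0.005,
                                           ransac_n=3, num_iterations=500)
    object_cloud = cloud.select_by_index(plane_indices, invert=True)

    # Step 7: grasp planning - PCA of the object points; the centroid gives the
    # grasp position and the eigenvectors give the principal axes of the object
    points = np.asarray(object_cloud.points)
    centroid = points.mean(axis=0)
    _, eigenvectors = np.linalg.eigh(np.cov((points - centroid).T))
    # columns of eigenvectors are ordered from smallest to largest variance
    return centroid, eigenvectors

# Usage (assumed file name):
# scene = o3d.io.read_point_cloud("scene.ply")
# position, axes = estimate_grasp_frame(scene)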
Installation and running on Windows

Step 1. Clone the repository:
cd %HOMEPATH%
git clone https://github.com/Doc1996/3D-object-handling
Step 2. Create the virtual environment and install dependencies:
cd %HOMEPATH%\3D-object-handling
python -m pip install --upgrade pip
python -m pip install --user virtualenv
python -m venv python-virtual-environment
.\python-virtual-environment\Scripts\activate
python -m pip install ipykernel
python -m ipykernel install --user --name=3D-object-handling
.\WINDOWS_INSTALLING_PACKAGES.bat
Step 3.  Modify the changeable variables in RS_and_3D_OD_constants.py
Step 4. Run the program:
cd %HOMEPATH%\3D-object-handling
.\python-virtual-environment\Scripts\activate
.\WINDOWS_3D_OBJECT_HANDLING_APPLICATION.bat
Optional step. Run the prototyping program with Jupyter Notebook:
cd %HOMEPATH%\3D-object-handling
.\python-virtual-environment\Scripts\activate
jupyter notebook RS_and_3D_OD_prototypes.ipynb
:: set the virtual environment kernel: "Kernel" -> "Change kernel" -> "3D-object-handling"
:: run cells one after another
Installation and running on Linux

Step 1. Clone the repository:
cd $HOME
git clone https://github.com/Doc1996/3D-object-handling
Step 2. Create the virtual environment and install dependencies:
cd $HOME/3D-object-handling
python3 -m pip install --upgrade pip
python3 -m pip install --user virtualenv
python3 -m venv python-virtual-environment
source python-virtual-environment/bin/activate
python3 -m pip install ipykernel
python3 -m ipykernel install --user --name=3D-object-handling
source LINUX_INSTALLING_PACKAGES.sh
Step 3.  Modify the changeable variables in RS_and_3D_OD_constants.py
Step 4. Run the program:
cd $HOME/3D-object-handling
source python-virtual-environment/bin/activate
source LINUX_3D_OBJECT_HANDLING_APPLICATION.sh
Optional step. Run the prototyping program with Jupyter Notebook:
cd $HOME/3D-object-handling
source python-virtual-environment/bin/activate
jupyter notebook RS_and_3D_OD_prototypes.ipynb
# set the virtual environment kernel: "Kernel" -> "Change kernel" -> "3D-object-handling"
# run cells one after another