ROS-Perception-Pipeline

This repo is focused on developing a ROS 2 based perception pipeline. If you're interested in helping to improve the project, see the Contributing section below to find out how to contribute.


Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Contributing
  5. License
  6. Contact
  7. Acknowledgments

About The Project

Our aim is to build a one-stop solution for problems related to robotics perception. We are creating a plug-and-play ROS 2 based perception pipeline that can be customized for user-specific tasks with minimal effort. We are in the process of creating components for tasks like Object Detection, Image Pre-Processing, and Image Segmentation. These components can be stitched together to build a custom pipeline for any use case, much like LEGO bricks.

(back to top)

Built With

  • ROS 2
  • OpenCV
  • Ubuntu
  • Python

(back to top)

Getting Started

Follow these steps to set up this project on your system.

Prerequisites

Install ROS 2 Humble and OpenCV by following their official installation guides.
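
For reference, one common route on Ubuntu 22.04 is sketched below. It assumes the ROS 2 apt repository has already been added as described in the official ROS 2 documentation; if your setup differs, follow the official guides instead.

    # assuming Ubuntu 22.04 with the ROS 2 apt repository already configured
    sudo apt install ros-humble-desktop

    # OpenCV Python bindings used by the Python nodes
    pip install opencv-python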

Installation

  1. Make a new workspace

    mkdir -p percep_ws/src
  2. Clone the ROS-Perception-Pipeline repository

    Now go ahead and clone this repository inside the "src" folder of the workspace you just created.

    cd percep_ws/src    
    git clone git@github.com:atom-robotics-lab/ros-perception-pipeline.git
  3. Compile the package

    Run the following command from the root of the workspace (percep_ws, not the src folder) to build the ROS 2 package. A combined sequence starting from a fresh terminal is sketched after this list.

    colcon build --symlink-install
  4. Source your workspace

    source install/local_setup.bash
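
Putting the build and sourcing steps together, a setup from a fresh terminal looks roughly like this (assuming ROS 2 Humble is installed in the default apt location, /opt/ros/humble):

    # source the ROS 2 Humble underlay (default apt install location assumed)
    source /opt/ros/humble/setup.bash
    cd percep_ws
    colcon build --symlink-install
    source install/local_setup.bash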

Usage


1. Launch the Playground simulation

We have made a demo playground world to test our pipeline. To launch this world, run the command given below.

ros2 launch perception_bringup playground.launch.py 

The above command will launch the playground world in Ignition Gazebo.


Don't forget to click the play button in the bottom-left corner of the Ignition Gazebo window to start the simulation.
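
To confirm that the simulation is up and publishing sensor data, you can list the active topics from another terminal. The exact camera topic name depends on the playground world, so take the comment below as illustrative only.

ros2 topic list
# an image topic from the simulated camera should appear in the output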


2. Launch the Object Detection node


Use the pip install command as shown below to install the required packages.

pip install -r src/ros-perception-pipeline/object_detection/requirements.txt

Use the command given below to run the ObjectDetection node. The path to the object_detection.yaml file is relative to the workspace root (percep_ws), so change it according to your present working directory.

ros2 run object_detection ObjectDetection --ros-args --params-file src/ros-perception-pipeline/object_detection/config/object_detection.yaml

3. Changing the Detector

To change the object detector being used, edit the parameters inside the object_detection.yaml file located inside the config folder, then restart the ObjectDetection node so the new parameters are loaded.
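
To check which parameters the running node actually loaded, you can use the ROS 2 parameter CLI from another terminal. The node name /object_detection below is an assumption based on the package name; use ros2 node list to find the real name on your system.

# list running nodes to find the object detection node's name
ros2 node list

# inspect the parameters it loaded (node name is an assumption)
ros2 param list /object_detection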


Testing

Now, to see the inference results, open a new terminal and enter the command given below. In rqt_image_view, select the output image topic published by the ObjectDetection node from the topic drop-down to view the annotated detections.

ros2 run rqt_image_view rqt_image_view


(back to top)

Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE for more information.


Contact

Our socials: Linktree · atom@inventati.org

(back to top)

Acknowledgments

(back to top)
