
JETBOT_PERCEPTION_2D

Brief Review

This project adds perception to your ROS project. We used a deep learning approach based on the TensorFlow Object Detection (TFOD) API with an SSD MobileNet model.

Below is an example image of the outcome:

Launching the application
  • Create a ROS workspace and compile it (still empty at this point):
    cd ~
    mkdir -p catkin_ws/src
    cd catkin_ws
    catkin_make
  • Open the .bashrc with nano:
    nano ~/.bashrc
  • Insert this line at the end of the ~/.bashrc file to source your workspace:
    source ~/catkin_ws/devel/setup.bash
  • Clone this repo in the ~/catkin_ws/src folder by typing:
    cd ~/catkin_ws/src
    git clone https://github.com/issaiass/jetbot_perception_2d.git --recursive
    cd ..
  • Go to the workspace root ~/catkin_ws and run catkin_make again to ensure the application compiles.
  • Download the SSD MobileNet object detection model and place the frozen_inference_graph.pb file into the folder models/ssd_mobilenet_v2_coco_2018_03_29.
  • Change the path parameters configfile, modelfile and classfile in the config/ssd_mobilenet.yaml configuration file.
  • Finally, launch the following commands in three separate terminals:
    roslaunch jetbot_perception perception_detection.launch
    rosrun jetbot_perception perception_subscriber
    rostopic echo /detected_objects_info    
  • Optionally, you can view the image in RViz: add an Image display and select the /detected_objects topic. A standalone viewer alternative is sketched below.
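
If you prefer a standalone window instead of RViz, here is a minimal sketch of a viewer node, assuming a Python ROS 1 setup with cv_bridge available; the node name and file are only illustrative and are not part of this package.

    #!/usr/bin/env python
    # Illustrative viewer for the labeled image topic (not shipped with this package).
    import rospy
    import cv2
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()

    def on_image(msg):
        # Convert the ROS Image message into an OpenCV BGR frame and display it.
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imshow('detected_objects', frame)
        cv2.waitKey(1)

    if __name__ == '__main__':
        rospy.init_node('detections_viewer')
        rospy.Subscriber('/detected_objects', Image, on_image)
        rospy.spin()

Run it with python after sourcing the workspace; it shows each labeled frame published on /detected_objects.
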
Nodes and Parameters

Node: usb_cam

Node function

This node opens the video camera input and publishes it to a topic.

Node parameters

For the usb_cam parameters, please visit the ROS wiki at http://wiki.ros.org/usb_cam.

Node: perception_detection

Node function

This node uses OpenCV to publish the image and the detected object box properties, such as position, class id and probability.

Node parameters

  • /opencv/enable_floating_window

    Set it to true to show the floating window created by OpenCV, or to false to hide it.

  • /opencv/image_width

    The width of the OpenCV window.

  • /opencv/image_height

    The height of the OpenCV window.

  • /opencv/window_name

    The name of the floating window.

  • /opencv/wait_key

    The time to wait between frames.

  • /neuralnet/image_width

    The width of the image that is fed to the neural network. If you are using MobileNet, set it to 300 (the default).

  • /neuralnet/image_height

    The height of the image that is fed to the neural network. If you are using MobileNet, set it to 300 (the default).

  • /neuralnet/scale_factor

    The scaling factor applied to the image before it is passed to the neural network. Set it between 0.2 and 0.98 to get good results.

  • /neuralnet/confidenceThreshold

    The minimum probability for a detection to be considered valid. A good default is 0.7; valid values range from 0.0 to 1.0.

  • /neuralnet/meanValR

  • /neuralnet/meanValG

  • /neuralnet/meanValB

    These are the per-channel mean values of the dataset. They are set by whoever trained the model, since they depend on the training data; for MobileNet the defaults are [127.5, 127.5, 127.5].

  • /neuralnet/configfile

    The path to the TFOD API network configuration file. It contains information about the layers and how to process each one.

  • /neuralnet/modelfile

    The path to the TFOD API *.pb file that contains the serialized computational graph, in other words, the model weights and biases.

  • /neuralnet/classfile

    The path to the class file that maps detections to object names; each line corresponds to an object id.
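
To make the parameter list above concrete, here is a hedged sketch of how a node could consume these values with OpenCV's DNN module. It is not the actual source of perception_detection; only the parameter names listed above and the usual output layout of TFOD SSD models in OpenCV are taken as given, and the id-to-line mapping of the class file is an assumption.

    #!/usr/bin/env python
    # Illustrative only: how the /opencv and /neuralnet parameters could drive cv2.dnn.
    import rospy
    import cv2

    rospy.init_node('perception_detection_sketch')

    # Paths configured in config/ssd_mobilenet.yaml
    modelfile  = rospy.get_param('/neuralnet/modelfile')    # frozen_inference_graph.pb
    configfile = rospy.get_param('/neuralnet/configfile')   # TFOD graph description
    classfile  = rospy.get_param('/neuralnet/classfile')    # one class label per line

    net = cv2.dnn.readNetFromTensorflow(modelfile, configfile)
    classes = [line.strip() for line in open(classfile)]

    width  = rospy.get_param('/neuralnet/image_width', 300)
    height = rospy.get_param('/neuralnet/image_height', 300)
    scale  = rospy.get_param('/neuralnet/scale_factor', 1.0)
    mean   = (rospy.get_param('/neuralnet/meanValR', 127.5),
              rospy.get_param('/neuralnet/meanValG', 127.5),
              rospy.get_param('/neuralnet/meanValB', 127.5))
    conf_thresh = rospy.get_param('/neuralnet/confidenceThreshold', 0.7)

    def detect(frame):
        # Resize, scale and mean-subtract the frame, run the SSD, keep confident boxes.
        blob = cv2.dnn.blobFromImage(frame, scale, (width, height), mean, swapRB=True)
        net.setInput(blob)
        detections = net.forward()          # shape (1, 1, N, 7) for TFOD SSD models
        h, w = frame.shape[:2]
        boxes = []
        for det in detections[0, 0]:
            class_id, confidence = int(det[1]), float(det[2])
            if confidence >= conf_thresh:
                # Detections come as normalized corners; convert to (x, y, w, h) pixels.
                x1, y1 = int(det[3] * w), int(det[4] * h)
                x2, y2 = int(det[5] * w), int(det[6] * h)
                # Assumed mapping: object id N corresponds to line N of the class file.
                label = classes[class_id - 1] if 0 < class_id <= len(classes) else str(class_id)
                boxes.append((class_id, label, confidence, x1, y1, x2 - x1, y2 - y1))
        return boxes

The actual node also handles the image transport and publishes the topics listed in the next section; the sketch only illustrates the parameter handling and the forward pass.
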

Published Topics

sub_topic: /usb_cam/image_raw

  • /detected_objects ([sensor_msgs/Image])

    The perception output: the image with the detection labels drawn on it.

  • /detected_objects_info ([jetbot_msgs/BoundingBoxes])

    The perception output consisting of the list of detected objects, each with its class id, label, probability and bounding box in (x, y, w, h) format.

Subscribed Topics

  • /usb_cam/image_raw ([sensor_msgs/Image])

    Subscribes to the usb_cam topic to get the image from the webcam or camera.

Node: perception_subscriber

Node function

This node is an example of how to extract the information produced by the detection node (a minimal sketch follows the topic lists below).

Node parameters

Published Topics

Subscribed Topics

  • /usb_cam/image_raw ([sensor_msgs/Image])

    Subscribes to the usb_cam topic to get the image from the webcam or camera.

  • /detected_objects_info ([jetbot_msgs/BoundingBoxes])

    Subscribes to the /detected_objects_info topic to get the information about the detected objects published by the perception_detection node.
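
As a rough illustration of this node's role, the sketch below subscribes to /detected_objects_info and simply logs each incoming message; it assumes the jetbot_msgs package is built in the same workspace and avoids guessing the message's exact field names.

    #!/usr/bin/env python
    # Illustrative listener for the detection info topic (not the package's actual node).
    import rospy
    from jetbot_msgs.msg import BoundingBoxes

    def on_detections(msg):
        # Each message carries the class ids, labels, probabilities and (x, y, w, h) boxes.
        rospy.loginfo('Detections received:\n%s', msg)

    if __name__ == '__main__':
        rospy.init_node('detections_info_listener')
        rospy.Subscriber('/detected_objects_info', BoundingBoxes, on_detections)
        rospy.spin()
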

Results

You can see the results in this YouTube video.

Jetbot Perception using SSD Mobilenet

Video Explanation

Explaining Jetbot Perception using SSD Mobilenet

Issues
  • No issues present in this release
Future Work
  • ❌ Add services
  • ❌ Add actions
  • ❌ Add dynamic_reconfigure
  • ✔️ Publish topic of objects information
  • ✔️ Publish topic of image
Contributing

Your contributions are always welcome! Please feel free to fork and modify the content, but remember to open a pull request when you are done.

📱 Having Problems?

License
