
CSCE482-483-931_22F-2A1

Team Logo

2A1. Traffic Object/Lane Detection

About the Project

With autonomous vehicles on the rise, it is more important than ever for their internal systems to be optimized. This project integrates an object/lane detection model into the Robot Operating System (ROS), which is commonly used to program autonomous vehicles. The HybridNets machine learning model was chosen for its joint detection capabilities and strong performance.

Objectives:

  • Evaluate existing detection framework on our dataset
  • Integrate existing detection framework with ROS/ROS2
  • Input: image; Output: bounding boxes/classification results (see the sketch below)
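
The input/output interface can be pictured as a minimal rospy node. The sketch below assumes ROS Noetic; the topic name /camera/image_raw is a hypothetical placeholder, not necessarily what the project uses:

import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    # A detection model would process the image here and publish
    # bounding boxes/classification results.
    rospy.loginfo("Received %dx%d image", msg.width, msg.height)

rospy.init_node("detector_sketch")
rospy.Subscriber("/camera/image_raw", Image, on_image)  # hypothetical topic name
rospy.spin()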

Prerequisites

To run the demo, the following tools are assumed to be installed:

  • Git
  • Docker
  • make

Getting Started Demo

Clone the repository:

git clone --recursive https://github.com/tamu-edu-students/CSCE482-483-931_22F-2A1
cd CSCE482-483-931_22F-2A1

Build the Docker image. Building the image initially may take 15-30 minutes, depending on your internet speed and the performance of your machine.

make build

Lastly, download the bag file 16-mcity1.bag and move it to the shared folder rootfs/.

Disclaimer:

The GitHub Docker package could not be associated with this repository due to restrictions from the tamu-edu-students organization.
The Docker package can be found here: https://github.com/users/ojasonbernal/packages/container/package/2a1-package

Usage

Run the latest Docker container from the image:

make init

Executing make init opens the container's command-line interface (CLI). The next steps require two terminals, so be sure to run this command in both.

Optional: If you already have a container named 2a1-package, execute the following command instead to run the Docker container:

make run

YOU MUST COMPLETE THESE STEPS EVERY TIME YOU RUN THE PROJECT
Source the environment first so the project runs as intended. Run the following commands:

source /opt/ros/noetic/setup.bash
source rootfs/catkin_ws/devel/setup.bash

Change into the catkin_ws directory and compile using catkin_make:

cd rootfs/catkin_ws/
catkin_make

Once you have completed the above steps, you will be able to open VNC. You will need to be in VNC to continue.
You can view VNC at http://localhost:6081/vnc.html
Enter "password" for the password.

It is best to have two terminals open to run the project.
First, execute this command in terminal 1:

source rootfs/catkin_ws/devel/setup.bash; (roscore &); rosbag play -r 10 --loop rootfs/16-mcity1.bag

The above command sources the ROS environment (required to use ROS), starts roscore in the background, and plays a short looping video from the bag file to test the model.

Then execute this command in terminal 2 to run the developed HybridNets node:

source rootfs/catkin_ws/devel/setup.bash; (rqt_image_view &); python3 /root/rootfs/catkin_ws/src/ros_basics_tutorials/scripts/image_subscriber.py

The above command executes the Python ROS node created for the joint detection of objects and lanes. The node also publishes the bounding box information so that it can be received by other ROS nodes; a sketch of this flow is shown below.
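
The listing below is a simplified sketch of that flow, not the repository's actual image_subscriber.py. The import path ros_basics_tutorials.msg mirrors the script path above but is an assumption, the message fields (xmin, ymin, xmax, ymax, bounding_boxes) are assumed to follow a Darknet-ROS-style layout, run_hybridnets is a hypothetical stub standing in for the real model inference, and the topic names are placeholders:

import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from ros_basics_tutorials.msg import BoundingBox, BoundingBoxes  # assumed import path

bridge = CvBridge()

def run_hybridnets(frame):
    # Hypothetical stand-in for the HybridNets inference call; returns
    # (xmin, ymin, xmax, ymax) tuples, one per detection.
    return []

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    boxes_msg = BoundingBoxes()
    for xmin, ymin, xmax, ymax in run_hybridnets(frame):
        box = BoundingBox()
        box.xmin, box.ymin, box.xmax, box.ymax = xmin, ymin, xmax, ymax  # assumed fields
        boxes_msg.bounding_boxes.append(box)  # assumed field name
    pub.publish(boxes_msg)

rospy.init_node("hybridnets_sketch")
pub = rospy.Publisher("/bounding_boxes", BoundingBoxes, queue_size=10)  # hypothetical topic
rospy.Subscriber("/camera/image_raw", Image, on_image)  # hypothetical topic
rospy.spin()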

The processed video may not show up in VNC instantly; it is recommended to refresh the topics to view the processed video. The refresh button is located next to the topic selection, as seen below.
Refresh Button

Open the rootfs/ folder to find the output images showcasing the joint detection model.

You must be in the rootfs/catkin_ws/ folder for this step. You can view the ROS messages for both BoundingBox and BoundingBoxes by running the following commands (a sketch of a node that consumes these messages follows):

rosmsg show BoundingBox
rosmsg show BoundingBoxes
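
As an illustration of another ROS node receiving these messages, the sketch below subscribes to a hypothetical /bounding_boxes topic; the import path and field names mirror the assumptions in the earlier sketch and may differ from the repository's actual message definitions:

import rospy
from ros_basics_tutorials.msg import BoundingBoxes  # assumed import path

def on_boxes(msg):
    # Log how many detections arrived and the extent of each box.
    rospy.loginfo("Received %d bounding boxes", len(msg.bounding_boxes))
    for box in msg.bounding_boxes:
        rospy.loginfo("box: (%d, %d) -> (%d, %d)", box.xmin, box.ymin, box.xmax, box.ymax)

rospy.init_node("box_consumer_sketch")
rospy.Subscriber("/bounding_boxes", BoundingBoxes, on_boxes)  # hypothetical topic
rospy.spin()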

Performance Testing

For testing, the team began by executing the 16-mcity1.bag file on both the YOLO and HybridNets machine learning models.
To obtain performance data for YOLO, the team used Darknet-ROS, which runs the YOLO machine learning model. The team collected various data points on the performance differences between HybridNets and YOLO across the metrics listed below (a sketch of how such timing data could be gathered follows the list):

Metrics Observed:

  • Accuracy on Object Detection
  • Frame Rate
  • Completion Time of Model
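
The team's actual measurement harness is not shown in this README; the sketch below is one hypothetical way to estimate frame rate by timing the messages arriving on a node's output topic (the topic name /processed_image is a placeholder):

import time
import rospy
from sensor_msgs.msg import Image

count = 0
start = None

def on_image(msg):
    global count, start
    if start is None:
        start = time.time()  # start timing at the first frame
        return
    count += 1
    # Report the average frame rate every 100 frames.
    if count % 100 == 0:
        rospy.loginfo("average FPS: %.2f", count / (time.time() - start))

rospy.init_node("fps_probe")
rospy.Subscriber("/processed_image", Image, on_image)  # hypothetical output topic
rospy.spin()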

Benefits

The joint detection model is a good starting point for optimizing the lane and object detection models that exist today. As of late 2022, most ROS nodes execute only one detection model, either object or lane detection; only a few models can do both, and even fewer have been integrated with ROS. The project aimed to use open-source software with rich documentation so that this system can be further developed and improved upon. The project accomplished the task of implementing a joint detection model (HybridNets) in ROS.

Metrics improved compared to other models:

  • Accuracy
  • Usability
  • Robustness

The pipeline created can be visualized as:

Images -> ROS -> HybridNets -> ROS

Unprocessed images are sent to ROS, whether as a .bag file or an appropriate image file; they are then processed using the HybridNets model, and the gathered bounding box information is sent back to ROS along with the unaltered image. The use of ROS is critical because it allows the system to be used on multiple vehicles, and HybridNets is important for the joint detection capabilities it possesses. A sketch of the input side of this pipeline is shown below.
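
As a sketch of the Images -> ROS step, the snippet below publishes image files to ROS as sensor_msgs/Image messages; the topic name, file location, and publish rate are all hypothetical:

import glob
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node("image_feeder_sketch")
pub = rospy.Publisher("/camera/image_raw", Image, queue_size=10)  # hypothetical topic
bridge = CvBridge()
rate = rospy.Rate(10)  # publish at 10 Hz

# Publish every PNG found in the shared folder (hypothetical location).
for path in sorted(glob.glob("rootfs/*.png")):
    if rospy.is_shutdown():
        break
    frame = cv2.imread(path)
    if frame is None:
        continue
    pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))
    rate.sleep()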

Contributors

Team Name: OLD482

Acknowledgements

Resources Used:

Useful Commands (For Development Purposes Only):

If a container named 2a1-package already exists, you can remove it using the following command:

make rm

If you update the Dockerfile, use the following command to build your own image:

make build

To run the Docker container, use:

make run

You can also pull the latest Docker package using the following command:

docker pull ghcr.io/ojasonbernal/2a1-package:latest