Perception with Multi-task Neural Networks

About The Project

Demo Video

This repository contains the software developed in the context of my dissertation. The software interfaces with ROS topics for data streaming, routes the data through a neural network model for inference, and publishes the results on other topics. It accommodates diverse image transformations and model formats, and standardizes the output for ease of handling on receiver devices. The software adopts a modular architecture for robustness and simplicity, selecting different components based on the model's features. Two versions were created: one for illustration and software testing, and another for model evaluation.
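The data flow described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the component names (preprocess, infer, postprocess) are hypothetical stand-ins for the ROS-topic callbacks, the PyTorch/TensorRT back-ends, and the output standardizer that the modular architecture swaps in per model.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical component types; the real project wires these to ROS topics
# and to PyTorch/TensorRT inference back-ends.
Preprocess = Callable[[Any], Any]
Inference = Callable[[Any], Any]
Postprocess = Callable[[Any], dict]


@dataclass
class InferencePipeline:
    """Minimal sketch of the flow: transform -> model -> standardized output."""
    preprocess: Preprocess
    infer: Inference
    postprocess: Postprocess

    def __call__(self, frame):
        return self.postprocess(self.infer(self.preprocess(frame)))


# Toy stand-ins for an image transform, a model, and an output standardizer.
pipeline = InferencePipeline(
    preprocess=lambda img: [px / 255.0 for px in img],  # normalize pixels
    infer=lambda x: [round(v) for v in x],              # fake "model"
    postprocess=lambda y: {"detections": y},            # standard schema
)

print(pipeline([0, 128, 255]))  # {'detections': [0, 1, 1]}
```

Swapping any one of the three callables changes the behavior without touching the rest of the pipeline, which is the point of the modular design.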

Built With

This project mainly uses the following frameworks: ROS, PyTorch, TorchVision, Torch-TensorRT, and TensorRT.

Getting Started

Some prerequisites must be fulfilled before installing this software. To get a local copy up and running, follow the steps below.

Prerequisites

If x64:

  • Update the Nvidia drivers
  • Install CUDA (v12.1), cuDNN (v8.8.1), and TensorRT (v8.6) (detailed instructions here)

If Jetson:

  • Install JetPack(v5.2.2)

For both:

  • PyTorch v2.0
  • TorchVision v0.15 (Jetson - from source)
  • Torch-TensorRT v1.4.0 (Jetson - See this)
  • Polygraphy
  • NumPy
  • SciPy
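The pure-Python dependencies can typically be installed with pip (a sketch of the usual route; on Jetson, TorchVision and Torch-TensorRT are instead built from source as noted above):

```shell
# Install the Python-side dependencies into the active environment
pip install polygraphy numpy scipy
```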

Installation

  1. Go to the src folder of your ROS workspace
cd ~/catkin_ws/src
  2. Clone the repo
git clone https://github.com/GoncaloR00/perception_with_multi-task_neural_networks
  3. Run catkin_make at the root of your ROS workspace
cd ~/catkin_ws && catkin_make
  4. Create a new folder named "models" inside the main repository folder
mkdir ~/catkin_ws/src/perception_with_multi-task_neural_networks/models
  5. Copy the model files into the models folder (download here)
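After building, the workspace overlay has to be sourced in every new terminal before ROS can find the packages (standard catkin practice, not specific to this repository):

```shell
# Make the freshly built packages visible to roslaunch/rosrun
source ~/catkin_ws/devel/setup.bash
```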

Setup for multiple devices

The first step is to connect all the devices to the same network. After that, take note of each device's IP address.

To use more than one device, the easiest approach is to edit the hosts file on each device, adding the IP address and hostname of every other device:

sudo nano /etc/hosts

Then select a device to run roscore and, on all the remaining devices, point ROS at that master by executing the following command in the same terminal that will be used to run the launch files:

export ROS_MASTER_URI=http://hostname_roscoredevice:11311
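As a concrete illustration (the IP addresses and hostnames below are hypothetical), the entries added to each device's /etc/hosts would look like:

```text
192.168.1.10    jetson-infer     # device running roscore
192.168.1.20    laptop-sender    # sender/receiver device
```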

Usage

This project provides launch files for both evaluation and normal usage.

  • For normal usage on a single computer:

    roslaunch inference_manager sync_inference.launch
  • For evaluation on a single computer:

    roslaunch inference_eval evaluation.launch
  • For multiple devices:

    • For normal use in the inference device:
    roslaunch inference_manager inference-jetson.launch
    • For normal use in the sender/receiver device:
    roslaunch inference_manager sender_receiver.launch
    • For evaluation in the inference device:
    roslaunch inference_manager inference-jetson.launch
    • For evaluation in the sender/receiver device:
    roslaunch inference_manager evaluation-jetson.launch

Authors

Acknowledgements
