Introduction | Running on Docker Container | Running on Local Machine | Dependencies | Installation | Nodes | Launch Files | GPU Support
This repository contains the code for the ROS package prediction, which detects a path from a camera image.
Clone the repository into your catkin workspace:
cd ~/catkin_ws/src
git clone git@github.com:allan-almeida1/prediction.git
This repository uses git LFS to store large files. If you don't have git LFS installed, you can install it by following the instructions here.
To pull the large files, run the following command:
cd ~/catkin_ws/src/prediction
git lfs pull
You can build and run the package on your local machine or on a Docker container. The instructions for both are provided below.
To ensure that the package runs correctly, it is recommended to run it on a Docker container. If you don't have Docker installed, you can install it by following the instructions here.
To build the Docker image, run the following command:
cd ~/catkin_ws/src/prediction
./build-devel.sh
Go grab a coffee ☕ or a tea 🍵 while the image is being built. Once the image is built, you can run the container using the following command:
./run-devel.sh
Or if you want to run the container with GPU support, use the following command:
./run-devel-gpu.sh
Note: The GPU version of the container requires an NVIDIA GPU and the NVIDIA Container Toolkit to be installed on the host machine. You can install the toolkit by following the instructions here.
To open a new terminal in the same container, run the following command:
./exec-terminal.sh
You should see the name of the environment in the terminal prompt, something like (tf_env) ros@container_id:~/catkin_ws$.
Now, you can build the package by running:
cd ~/catkin_ws
catkin_make --only-pkg-with-deps prediction
After building, source the workspace:
source devel/setup.bash
There is already a Python virtual environment set up in the container. To activate it, run the container and then run the following command:
source ~/tf_env/bin/activate
You can now run the package using the launch files as described in the Launch Files section.
To run the prediction node on a video file, use the following command:
roslaunch prediction video.launch
To run the package on your local machine, you need to install the dependencies manually. The instructions for installing the dependencies are provided in the Dependencies section.
You also need to create a Python virtual environment to install the required packages. To create and activate the environment, run the following commands:
cd ~/
python3 -m venv tf_env
source ~/tf_env/bin/activate
Install the pip packages:
pip install --upgrade pip
pip install -r ~/catkin_ws/src/prediction/requirements.txt
The package depends on the following libraries: TensorFlow, OpenCV (Python) and NumPy. These are installed automatically when you run the script that creates the conda environment (see the GPU Support section). The other dependencies need to be installed manually, using rosdep.
To install all the dependencies, run the following commands:
cd ~/catkin_ws
rosdep update
rosdep install --from-paths src --ignore-src -r -y
To compile the package, run catkin_make in the catkin workspace:
cd ~/catkin_ws
catkin_make # or catkin build
The package contains the following nodes:
- prediction.py: This node detects a path from a camera image. It subscribes to the topic defined by the param /prediction_node/topic_name and runs semantic segmentation on the image using an adapted version of the ERFNet model. The node then publishes the binary image to the topic /image_raw_bin.
- processing: This node processes the binary image published by the prediction node. It subscribes to the topic /image_raw_bin and runs processing steps such as normalization and polynomial fitting using the RANSAC algorithm. The node then publishes the polynomial coefficients, image resolution and y limits to the topic /prediction/path.
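The RANSAC polynomial fitting performed by the processing node can be sketched as follows. This is a minimal illustration using NumPy, not the package's actual implementation; the function name and signature are hypothetical, though the parameter names mirror the processing parameters documented in this README:

```python
import numpy as np

def ransac_polyfit(x, y, order=3, min_samples=4, threshold=10.0,
                   max_iterations=200, rng=None):
    """Sketch of RANSAC polynomial fitting: repeatedly fit a polynomial
    to a random minimal subset of points, keep the candidate with the
    most inliers, then refit on all inliers of the best candidate."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_inliers = None
    for _ in range(max_iterations):
        # Fit a candidate polynomial on a random minimal sample
        idx = rng.choice(len(x), size=min_samples, replace=False)
        coeffs = np.polyfit(x[idx], y[idx], order)
        # Points closer than `threshold` to the curve count as inliers
        inliers = np.abs(np.polyval(coeffs, x) - y) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares fit on every inlier of the best candidate
    return np.polyfit(x[best_inliers], y[best_inliers], order)
```

The defaults above mirror the values listed in the parameters table below; lowering the threshold makes the fit stricter about which pixels count as part of the lane.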
To run the package on a video file, use the following command:
roslaunch prediction video.launch
To run the package on a ROS topic in the simulation environment, use the following command:
roslaunch prediction unity.launch
To run the unit tests, use the following command:
rostest prediction unit_tests.test
or
rosrun prediction unit_tests
To adjust the parameters related to the processing node, edit the file config/processing_params.yml. Parameters are explained in the following table:
| Parameter | Description | Default Value |
|---|---|---|
| processing/window_size | Number of frames in the moving-average window for RANSAC | 6 |
| processing/order | Order of the polynomial to fit | 3 |
| processing/min_samples | Minimum number of samples required to fit a polynomial (must be at least polynomial order + 1) | 4 |
| processing/threshold | Maximum distance from the fitted polynomial for a sample to be considered an inlier | 10 |
| processing/max_iterations | Maximum number of iterations for RANSAC | 200 |
| processing/n_points | Number of points used to draw the curve for the lane | 8 |
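Based on the table above, config/processing_params.yml might look like the following. The nested key layout is an assumption (the usual rosparam YAML convention); the values are the documented defaults:

```yaml
processing:
  window_size: 6        # frames in the moving-average window for RANSAC
  order: 3              # order of the polynomial to fit
  min_samples: 4        # must be at least order + 1
  threshold: 10         # max distance from the curve for a sample to be an inlier
  max_iterations: 200   # RANSAC iteration cap
  n_points: 8           # points used to draw the lane curve
```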
To enable GPU support, you need to install the CUDA Toolkit and cuDNN. Then, you need to install the GPU version of TensorFlow. A script was created to automate this process and setup a virtual conda environment.
First, make sure you have conda installed. If not, you can install it using the following commands:
cd /tmp
curl -O https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh
bash Anaconda3-2020.11-Linux-x86_64.sh
source ~/.bashrc
Then, run the following command to create the conda environment:
conda create --name <env_name> python=3.8 # or any other version
To install the required packages, go back to the root of the repository and run the following script:
./scripts/create_env.sh <env_name>
where <env_name> is the name of the environment you want to create, e.g. tf_env. The script will create a new conda environment with that name and install all the required packages.
To run with GPU support, activate the conda environment first:
conda activate <env_name>
Then, you can run the node as usual:
roslaunch prediction video.launch
Note: The script is set up to install CUDA Toolkit 11.8, cuDNN 8.6.0.163 and TensorFlow 2.13. If you want to use different versions, edit the script accordingly.