
Enhance camera features, stability and performance #16

Merged
merged 28 commits into from Mar 26, 2020
Conversation


@lukicdarkoo lukicdarkoo commented Mar 23, 2020

Description
Use GPU to perform YUV422 to RGB conversion and publish camera_info topic.
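For reference, the conversion the GPU takes over can be sketched on the CPU. A minimal Python sketch using the standard BT.601 full-range coefficients; the actual MMAL pipeline runs on the VideoCore and its exact coefficient/range handling may differ:

```python
# CPU reference for YUV422 (YUYV) -> RGB24, BT.601 full-range coefficients.
# Two pixels share one U and one V sample: [Y0, U, Y1, V].
def yuyv_to_rgb(data, width, height):
    rgb = bytearray(width * height * 3)

    def clamp(x):
        return min(255, max(0, int(round(x))))

    for i in range(0, width * height, 2):
        y0, u, y1, v = data[2 * i:2 * i + 4]
        for j, y in ((i, y0), (i + 1, y1)):
            rgb[3 * j] = clamp(y + 1.402 * (v - 128))                          # R
            rgb[3 * j + 1] = clamp(y - 0.344 * (u - 128) - 0.714 * (v - 128))  # G
            rgb[3 * j + 2] = clamp(y + 1.772 * (u - 128))                      # B
    return bytes(rgb)
```

Doing this per-pixel on the Raspberry Pi Zero's CPU is exactly the cost the PR moves to the GPU.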

Related Issues
Fixes #15

Tasks

  • Utilise GPU to perform YUV422 to RGB conversion
  • Create a stable way to handle switching between compressed, raw and none
  • Delete OpenCV dependency
  • Perform a stress test
  • Parametrise output image resolution
  • Add camera_info topic
  • Estimate camera intrinsic parameters
  • Investigate performance of the raw RGB topic

Additional context
camera_info requires a camera calibration with a checkerboard to obtain the calibration matrix (with Webots it was easy: the focal length is known and there are no distortions). Since the coronavirus situation forbids me from going to EPFL to print the checkerboard, it may take a while. Therefore, I will estimate the parameters now and calibrate them as soon as I print the checkerboard (a separate issue will be opened).
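Estimating the parameters without a checkerboard amounts to applying the pinhole model with a known field of view, as in simulation. A hedged sketch; the resolution and FOV in the usage line are placeholders, not the e-puck camera's actual values:

```python
import math

# Rough pinhole intrinsics when the horizontal FOV is known and distortion is
# assumed to be zero (as in simulation). An estimate only: real values still
# need a checkerboard calibration.
def estimate_intrinsics(width, height, horizontal_fov_rad):
    fx = (width / 2.0) / math.tan(horizontal_fov_rad / 2.0)
    fy = fx  # square pixels assumed
    cx, cy = width / 2.0, height / 2.0
    # 3x3 camera matrix K, row-major, as published in camera_info
    return [fx, 0.0, cx,
            0.0, fy, cy,
            0.0, 0.0, 1.0]

# placeholder numbers for illustration
K = estimate_intrinsics(640, 480, math.pi / 2)
```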

@lukicdarkoo lukicdarkoo self-assigned this Mar 23, 2020
@lukicdarkoo lukicdarkoo added the bug (Something isn't working) and enhancement (New feature or request) labels Mar 23, 2020
@lukicdarkoo

Changing the output encoding in the following line from MMAL_ENCODING_JPEG to RGB24 or any other RGB variant (RGBA, BGR, ...) fails:

encoder->output[0]->format->encoding = MMAL_ENCODING_JPEG;

Looking for a workaround...

@lukicdarkoo

This is great: I have just managed to do it with the vc.ril.isp component. It seems we can use it in parallel with compression. Even better, vc.ril.isp has two outputs, so we could potentially serve two resolutions simultaneously.

@lukicdarkoo


Successfully publishes camera images to two topics simultaneously

@lukicdarkoo

Camera info data is read from a YAML file (with a default one provided with the package), so it can be easily modified in the future.
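For illustration, such a file might look like the sketch below. The field names follow the YAML layout commonly used by ROS camera tooling (camera_calibration_parsers); the numbers are placeholders, not this package's actual defaults:

```yaml
image_width: 640
image_height: 480
camera_name: camera
camera_matrix:
  rows: 3
  cols: 3
  data: [320.0, 0.0, 320.0, 0.0, 320.0, 240.0, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.0, 0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [320.0, 0.0, 320.0, 0.0, 0.0, 320.0, 240.0, 0.0, 0.0, 0.0, 1.0, 0.0]
```

Once the checkerboard calibration is done, only this file needs updating; the node itself stays unchanged.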

@DavidMansolino DavidMansolino left a comment

Many very minor polishing comments, but the general design is very good.
Have you been able to measure quantitatively a performance improvement?

Review comments (all resolved) on:
epuck_ros2_camera/README.md
epuck_ros2_camera/camera_info/camera.yaml
epuck_ros2_camera/src/pipuck_mmal.c
epuck_ros2_camera/src/camera.cpp
lukicdarkoo and others added 10 commits March 25, 2020 13:45
Co-Authored-By: David Mansolino <David.Mansolino@cyberbotics.com>
lukicdarkoo and others added 4 commits March 25, 2020 14:08
Co-Authored-By: David Mansolino <David.Mansolino@cyberbotics.com>
@lukicdarkoo

lukicdarkoo commented Mar 25, 2020

> Many very minor polishing comments, but the general design is very good.
> Have you been able to measure quantitatively a performance improvement?

JPEG performance should not be impacted. I will check the raw RGB topic; since we were getting only about 4 fps onboard, it should be much better now.


Update 1

Unfortunately, there is no improvement; it's about the same. This is strange. Investigating...

Update 2

I suspected that copying an array in the following line takes a lot of time:

message.data.assign(mPipuckMmalRgb.output.data, mPipuckMmalRgb.output.data + mPipuckMmalRgb.output.size);

so I changed it, but the results are about the same (3.5 to 4 fps).

Update 3

Looking into ROS 2 middlewares to see whether changing them makes sense. The default one, eProsima's FastRTPS, doesn't support shared memory:
https://discourse.ros.org/t/is-there-any-dds-implementation-using-shared-memory-inside-system/7609/2
This means it sends the whole image through the "network". On the other hand, RTI Connext DDS supports shared memory, but it doesn't work on the ARM32 platform:
https://github.com/ros-infrastructure/rep/blob/master/rep-2000.rst#eloquent-elusor-november-2019---november-2020
Note that eProsima's FastRTPS and RTI Connext DDS are both Tier 1 ROS 2 middlewares.
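Rough arithmetic shows why fully serializing raw images is costly without shared memory. A sketch; the resolution and rate below are assumed for illustration, not measured in this PR:

```python
# Bytes per second a raw RGB24 image topic pushes through the middleware when
# every message is fully serialized and sent over the (loopback) network.
def raw_rgb_bandwidth(width, height, fps):
    return width * height * 3 * fps

# illustrative numbers only
mb_per_s = raw_rgb_bandwidth(640, 480, 10) / 1e6  # -> 9.216 MB/s
```

Every byte of that traffic is copied and deserialized again on the subscriber side, which is work a shared-memory transport would avoid.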

Update 4

This is the CPU usage:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                                                  
 2165 pi        20   0  157404  30640  22620 S  44.7   8.1  14:47.04 camera                                                                                                                                   
 2249 pi        20   0  165436  60328  26908 R  41.4  16.0   0:52.28 ros2 

where camera is our camera node and ros2 is the process spawned by ros2 topic hz image_raw to measure the publishing rate.

Update 5

It seems a lot of messages are dropped, even though it takes less than 100 ms to convert an image and execute publisher->publish(msg).
More details on the following link: https://github.com/lukicdarkoo/epuck-ros2-various-analyses/blob/master/UnderhoodCameraPerfomances.ipynb
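The arithmetic behind that conclusion: if one convert-and-publish cycle stays under 100 ms, the node could sustain at least 10 fps, so the observed ~4 fps implies that over half the frames never reach the subscriber. A sketch, treating the numbers from the updates above as approximations:

```python
# If one convert-and-publish cycle takes per_frame_s seconds, the node could
# sustain 1/per_frame_s fps; a lower observed rate implies dropped frames.
def implied_drop_ratio(per_frame_s, observed_fps):
    achievable_fps = 1.0 / per_frame_s
    return 1.0 - observed_fps / achievable_fps

# approximate numbers: <100 ms per frame, ~4 fps observed
ratio = implied_drop_ratio(0.100, 4.0)  # about 0.6, i.e. ~60% of frames dropped
```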

Comment on lines +20 to +21
from rclpy.node import Node
from sensor_msgs.msg import Image
This script is very good, but I am just wondering if using ROS here is not a bit overkill.
Wouldn't it be simpler to create a simple Python script without any ROS dependency that reads the image directly from the camera by itself? There is already a dependency on OpenCV, and OpenCV has a module for doing this; it is maybe not as efficient as what you implemented in the camera node, but since it is only used for calibration it doesn't really matter.

@lukicdarkoo lukicdarkoo Mar 26, 2020

Hmm, I didn't consider running the calibration onboard. This way I have a nice preview of the images I am taking. Also, I am afraid of running findChessboardCorners() and especially calibrateCamera() onboard, since those two can be really demanding even for my PC (they take a few seconds depending on the number of images).


You're right, being able to run the calibration offboard is a good reason 😃

Labels
bug (Something isn't working) · enhancement (New feature or request)
Development

Successfully merging this pull request may close these issues.

Fix camera /image_raw topic
2 participants