
[video_thread] Improvements to video_thread performance #1735

Merged: 4 commits merged into paparazzi:master on Jun 23, 2016

Conversation

wilcoschoneveld (Contributor):

The default 'nice level' for the video threads is 10, but it needs to be lower for some critical vision algorithms to work properly and reliably (such as the optical flow hover control in #1698).
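
For reference, a minimal sketch of how a per-thread nice level can be applied on Linux (the helper name is hypothetical and the actual video_thread code may do this differently):

    #include <sys/resource.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdio.h>

    /* Hypothetical helper: lower the calling thread's nice level.
     * Smaller values mean higher CPU priority: -20 is highest,
     * 19 is lowest, 0 is the default. */
    static void set_thread_nice_level(int nice_level)
    {
      pid_t tid = (pid_t)syscall(SYS_gettid);   /* Linux thread id */
      if (setpriority(PRIO_PROCESS, tid, nice_level) != 0) {
        perror("setpriority");
      }
    }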

Other possible improvements to video_thread:

  • Dynamically change VISION_THREAD_NICE_LEVEL (via a GCS setting)
  • Allow some vision functions to run at a lower FPS than others (e.g. using RunOnceEvery)
  • Improved FPS regulation, e.g. a smoothed average (see the sketch after this list)
  • Provide the vision functions with access to the current frame time and FPS
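
Regarding the smoothed-average FPS regulation mentioned above, a minimal sketch of one way it could be done (an exponential moving average; the function name and alpha value are assumptions):

    /* Hypothetical helper: exponentially smoothed FPS estimate, fed with
     * the time between two consecutive frames. */
    static float update_fps_estimate(float fps_estimate, float dt_seconds)
    {
      const float alpha = 0.1f;               /* smoothing factor, assumed */
      if (dt_seconds <= 0.f) { return fps_estimate; }
      float instantaneous_fps = 1.f / dt_seconds;
      return (1.f - alpha) * fps_estimate + alpha * instantaneous_fps;
    }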

@wilcoschoneveld (Contributor, Author):

@guidoAI could you please repeat here what functionality is required?

@@ -80,6 +80,9 @@ static void *video_thread_function(void *data)
 {
   struct video_config_t *vid = (struct video_config_t *)data;

+  char print_tag[80];
+  sprintf(print_tag, "video_thread-%s", vid->dev_name);
Member:

If you already have a fixed-size char array: use snprintf.
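
As a sketch, the same line with a bounded write (same buffer and format string as in the diff):

    char print_tag[80];
    snprintf(print_tag, sizeof(print_tag), "video_thread-%s", vid->dev_name);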

Contributor:

Besides the niceness level, it would be great if priorities could be set between different vision functions. For example, textons should run at 5 Hz and optical flow at 30 Hz. With a periodic function this was (is?) possible, but the vision process is arranged a bit differently now, with all vision functions added via cv_add.
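
One way to approximate per-function rates on top of cv_add is a frame-skip counter inside the slower callback; a minimal sketch (the callback name and signature are assumptions, not the actual paparazzi API):

    #include <stdint.h>
    #include <stddef.h>

    struct image_t;   /* paparazzi image type, defined elsewhere */

    /* Hypothetical vision callback: run the expensive texton step only on
     * every 6th frame (roughly 5 Hz at a 30 Hz camera), while faster
     * functions such as optical flow keep processing every frame. */
    static struct image_t *textons_cb(struct image_t *img)
    {
      static uint32_t frame_counter = 0;
      if ((frame_counter++ % 6) != 0) {
        return NULL;   /* skip this frame: pass the image on unchanged */
      }
      /* ... run texton extraction on img here ... */
      (void)img;
      return NULL;
    }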

Member:

You still have the option to only copy/save the image in the callback from the video_thread and then run the processing in another thread, scheduled however you want.

There is no requirement to run the processing in the callback (and hence in the video_thread thread) unless you want to pass a modified version on to the next stage in the pipeline.
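
A minimal sketch of that pattern with plain pthreads (the image struct and buffer handling are simplified assumptions, not the actual paparazzi types):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-in for the real image type. */
    struct frame_copy {
      unsigned char *buf;
      size_t size;
    };

    static pthread_mutex_t frame_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  frame_ready = PTHREAD_COND_INITIALIZER;
    static struct frame_copy latest;        /* most recently copied frame */
    static bool has_new_frame = false;

    /* Called from the video_thread callback: only copy, then return fast. */
    static void copy_frame_for_worker(const unsigned char *buf, size_t size)
    {
      pthread_mutex_lock(&frame_mutex);
      if (latest.size != size) {
        unsigned char *p = realloc(latest.buf, size);
        if (p == NULL) { pthread_mutex_unlock(&frame_mutex); return; }
        latest.buf = p;
        latest.size = size;
      }
      memcpy(latest.buf, buf, size);
      has_new_frame = true;
      pthread_cond_signal(&frame_ready);
      pthread_mutex_unlock(&frame_mutex);
    }

    /* Separate worker thread: does the heavy processing at its own pace
     * (and, if desired, with its own nice level). Processing under the
     * lock is a simplification; a real implementation would swap buffers
     * so the copier is never blocked. */
    static void *vision_worker(void *arg)
    {
      (void)arg;
      while (true) {
        pthread_mutex_lock(&frame_mutex);
        while (!has_new_frame) {
          pthread_cond_wait(&frame_ready, &frame_mutex);
        }
        has_new_frame = false;
        /* ... process latest.buf / latest.size here ... */
        pthread_mutex_unlock(&frame_mutex);
      }
      return NULL;
    }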

Contributor:

Copying the reference would be fine, but actually copying the image costs time. Can the image reference be used safely even when the processing finishes after a new image has come in? Or is the image memory block freed before that?

Contributor (Author):

@guidoAI the image memory is freed after the computer vision pipeline function has finished, i.e. when all cv_functions have executed.

sw/airborne/modules/computer_vision/video_thread.c:149

Contributor:

Difficult problem. In this case optical flow fits perfectly in the normal image pipeline, but I can understand that algorithms that take longer might want to hold only a reference to the image.

One solution is to add some kind of variable to the image struct (or a list) that indicates, for each function that uses the image, whether that function still needs it. At the moment the image is read from the camera, we could then take a new piece of memory if the previous image is not fully processed yet.

Note that this:

  • might take a lot of memory (a potential problem on the ARDrone);
  • is easy to get bugs in (especially with new students who forget to indicate that they are done).

Maybe a circular image buffer could also work (it solves the memory problem), but then your functions have no control over when the image is reused.

Anyway, I think this does not matter for this pull request.
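
For illustration only, a minimal reference-counting sketch of the "still needs the image" idea (types and names are hypothetical, not paparazzi API):

    #include <stdatomic.h>
    #include <stdlib.h>

    /* Hypothetical image slot with a reference count: the grabber only
     * frees or recycles the buffer once every consumer has released it. */
    struct image_slot {
      unsigned char *buf;
      atomic_int refcount;
    };

    static void image_slot_retain(struct image_slot *s)
    {
      atomic_fetch_add(&s->refcount, 1);
    }

    static void image_slot_release(struct image_slot *s)
    {
      if (atomic_fetch_sub(&s->refcount, 1) == 1) {
        free(s->buf);      /* last user is done: safe to free or recycle */
        s->buf = NULL;
      }
    }

A forgotten release is exactly the kind of bug mentioned above, which is why a fixed-size circular buffer may be the more robust trade-off.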

@flixr (Member) commented Jun 17, 2016:

Can you please also rebase it on master to not have other already merged PRs in here?

@@ -9,6 +9,8 @@

- Possibility to save an image(shot) on the internal memory (JPEG, full size, best quality)
</description>

<define name="VIDEO_THREAD_NICE_LEVEL" value="5" description="Nice level for each separate video thread"/>
Contributor:

Maybe explain what it is, what it does, and what values are acceptable?
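
For example, the description could spell out the valid range; a possible wording (not the exact text that was merged):

    <define name="VIDEO_THREAD_NICE_LEVEL" value="5"
            description="Nice level of each video thread, from -20 (highest CPU priority) to 19 (lowest); lower values make the vision pipeline run more reliably"/>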

@fvantienen (Member):

This looks good 👍

@flixr (Member) commented Jun 19, 2016:

After a rebase and change to snprintf it also looks good to me...

But for future improvements where individual computer vision functions should run with a higher priority, it IMHO makes much more sense not to do those computations in the callback from the video thread, but instead to copy the image and run the computation in a separate thread (as I already pointed out above).

@wilcoschoneveld (Contributor, Author):

For now I have rebased onto master and implemented the feedback.

@@ -3,11 +3,10 @@
<module name="video_thread" dir="computer_vision">
<doc>
<description>
Read video in a thread.
Member:

The first line of the description (up to the first dot) is used as the brief description and is displayed at http://docs.paparazziuav.org/latest/onboard_modules.html

@flixr merged commit 2d05644 into paparazzi:master on Jun 23, 2016.
@wilcoschoneveld deleted the video_thread_priority branch on June 29, 2016.