An attempt to make GStreamer elements that stabilize shaky video streams
This stuff is still in a rather early stage of development (or should I say "research"), but might already be useful in some use cases.

You should be able to get the latest version like this:

git clone git://


This software is distributed under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

A copy of the GNU Lesser General Public License and of the GNU General Public License (to which it refers) can be found in the files COPYING.LESSER and COPYING, respectively.



For the elements to be recognized, you need to add to GST_PLUGIN_PATH the directory where you checked out GstStabilizer (that is, the directory that contains the python directory, not the python directory itself).
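For example, assuming you cloned into ~/src/GstStabilizer (adjust the path to your own checkout):

```shell
# Prepend the checkout directory to GST_PLUGIN_PATH (the path is an example).
export GST_PLUGIN_PATH="$HOME/src/GstStabilizer${GST_PLUGIN_PATH:+:$GST_PLUGIN_PATH}"
```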

Example pipeline:

gst-launch filesrc location=<my_shaky_video> ! decodebin ! tee name=tee \
  tee. ! ffmpegcolorspace ! opticalflowfinder ! opticalflowrevert name=mux \
  tee. ! ffmpegcolorspace ! mux. \
  mux. ! ffmpegcolorspace ! xvimagesink

Note that depending on the video and the options you give to opticalflowfinder, live stabilisation might not always be doable. If it's too laggy, you probably want to encode and save the stream instead of sending it to a visualisation sink.
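For instance, a variant of the pipeline above that encodes the stabilized stream to a file instead of displaying it might look like this (theoraenc/oggmux are just one possible encoder choice, and the output filename is an example):

```shell
gst-launch filesrc location=<my_shaky_video> ! decodebin ! tee name=tee \
  tee. ! ffmpegcolorspace ! opticalflowfinder ! opticalflowrevert name=mux \
  tee. ! ffmpegcolorspace ! mux. \
  mux. ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=stabilized.ogg
```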

You will want to have a look at the many options that can be set on opticalflowfinder:

gst-inspect opticalflowfinder

The most important of them is the algorithm; the two currently implemented are:

  • The faster one, good for typical video streams where there is little change from one frame to the next.
  • Slower, but able to handle big changes from one frame to the next. Very useful for time lapses taken from a moving camera (developed specifically for a time lapse from a tethered helium balloon, see
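The plugin's own sources aren't reproduced here, but the basic idea these elements build on — estimate the inter-frame motion, then revert it — can be sketched in plain NumPy using phase correlation. This is only an illustration of the principle, not the algorithm either element actually uses:

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the global (dy, dx) translation between two frames
    via phase correlation; a stabilizer would then apply the inverse."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    # Normalized cross-power spectrum: its inverse FFT peaks at the shift.
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap to a signed range so shifts "left/up" come out negative.
    h, w = prev.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shaken = np.roll(frame, (3, 5), axis=(0, 1))  # simulate a (3, 5) camera shake
print(estimate_shift(frame, shaken))  # → (3, 5)
```

This only recovers a single global translation; handling rotation, zoom, or local motion is what makes the real optical-flow-based elements more involved.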


  • Only works if the original stream always points towards the same area of interest; that is, it does not support voluntary camera movements, which are all treated as "shake" to be compensated.
  • Slower than it could be.

Bugs and patches

Over there: