LibVA implementation for the Linux Video4Linux2 Request API

v4l2-request libVA Backend


This libVA backend is designed to work with the Linux Video4Linux2 Request API that is used by a number of video codec drivers, including the Video Engine found in most Allwinner SoCs.


The v4l2-request libVA backend currently supports the following formats:

  • MPEG2 (Simple and Main profiles)
  • H264 (Baseline, Main and High profiles)
  • H265 (Main profile)


In order to use this libVA backend, the v4l2_request driver has to be specified through the LIBVA_DRIVER_NAME environment variable, as follows:

export LIBVA_DRIVER_NAME=v4l2_request

A media player that supports VAAPI (such as VLC) can then be used to decode a video in a supported format:

vlc path/to/video.mpg

Sample media files can be obtained from:

Technical Notes


A Surface is an internal data structure, never handled directly by the VA user, that contains the output of a rendering operation. Usually, a set of surfaces is created at the beginning of decoding and the surfaces are then used in rotation. When created, each surface is assigned a corresponding V4L2 capture buffer, which it keeps until the end of decoding. Syncing a surface waits for the V4L2 buffer to be available and then dequeues it.

Note: since a Surface is kept private from the VA user, the user can ask the backend to render a Surface directly on screen in an X Drawable. An implementation of sorts is available in PutSurface, but it is only intended for development purposes.


A Context is a global data structure used for rendering a video of a given format. When a context is created, the input buffers are created and the format is set on the V4L2 output queue (which carries the compressed input data, since the capture queue is the actual decoded output).


A Picture is an encoded input frame made up of several buffers. A single input can contain slice data, headers and an IQ matrix. Each Picture is assigned a request ID when created, and each corresponding buffer is turned into either a V4L2 buffer or an extended control when rendered. Finally, they are all submitted to kernel space when EndPicture is reached.

The actual rendering is done in EndPicture rather than in RenderPicture, because the V4L2 driver expects the complete set of extended controls to be present when a buffer is queued, and the order in which the various RenderPicture calls will arrive is not known in advance.


An Image is a standard data structure containing rendered frames in a usable pixel format. Only NV12 buffers are used here; they are converted from sunxi's proprietary tiled pixel format with tiled_yuv when deriving an Image from a Surface.
