This is a simple application that uses GStreamer to record video from a webcam and save it to disk. The recording consists of an HLS playlist file and a series of video chunks; the playlist file is updated with each new chunk. Additionally, it captures tooltip images for each video chunk and creates a sprite image from each chunk.
It is possible to take a still image from the webcam and save it to a file.
To link the source (webcam) pipeline with the recording and still pipelines, the unixfd elements are used.
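A minimal sketch of this sharing with a test source (the socket path and the sinks are illustrative, not the application's actual pipelines):

```sh
# Producer: expose the frames on a Unix socket instead of rendering them
gst-launch-1.0 videotestsrc ! unixfdsink socket-path=/tmp/source.sock

# Consumer (separate process): map the shared frames without copying them
gst-launch-1.0 unixfdsrc socket-path=/tmp/source.sock ! queue ! videoconvert ! autovideosink
```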
Also, a preview pipeline is available that displays the video as a WebRTC stream. The preview can be configured to display the video with an overlay.
The application is controlled via a REST interface: the input pipeline can be started and stopped, the recording can be started and stopped, and a still image can be taken. Taskfile is used to build and run the application. Create an .env file with .env-template as a template and fill in the values.
The application assumes that the video source is available on /dev/video10. To simulate a webcam, the v4l2loopback kernel module is used; the necessary files are located in tools/endless-recording. The following tasks set this up (a minimal modprobe equivalent is sketched after the table):
| Command | Description |
|---|---|
| `task endless-stream:getfull-fhd` | Downloads the full Blender movie Big Buck Bunny and stores it in /temp/video.mp4 |
| `task endless-stream:install-v4l2loopback` | Installs a virtual v4l2 device on /dev/video10 (temporary) |
| `task endless-stream:run` | Starts playing the video to /dev/video10 in an endless loop |
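Under the hood the install task boils down to loading the kernel module; a minimal equivalent (the card label is an illustrative choice) is:

```sh
# Create a temporary v4l2loopback device on /dev/video10 (gone after reboot/rmmod)
sudo modprobe v4l2loopback video_nr=10 card_label="endless-recording" exclusive_caps=1
```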
To build and run the application use the following commands:
| Command | Description |
|---|---|
| `task build-app` | Builds the application |
| `task run-app` | Starts the application |
| `task run-app-overlay` | Starts the application with an overlay on WebRTC |
The REST interface is available at http://localhost:4000 with the following endpoints:
- `POST /recording/start` - starts the recording
- `POST /recording/stop` - stops the recording
- `POST /start` - starts the input pipeline
- `POST /stop` - stops the input pipeline
- `POST /still` - takes a snapshot from the webcam and saves it to a file
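The endpoints can be exercised directly, for example with curl:

```sh
# Typical session: start the input, record, grab a still, then stop everything
curl -X POST http://localhost:4000/start
curl -X POST http://localhost:4000/recording/start
curl -X POST http://localhost:4000/still
curl -X POST http://localhost:4000/recording/stop
curl -X POST http://localhost:4000/stop
```

The same operations are also wrapped by the following task commands: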
| Command | Description |
|---|---|
| `task command:start` | Starts the video source and WebRTC streaming |
| `task command:stop` | Stops the video source and WebRTC streaming |
| `task command:start-recording` | Starts the video recording |
| `task command:stop-recording` | Stops the video recording |
| `task command:still` | Takes a still image |
The overlay can be controlled via:
| Command | Description |
|---|---|
| `task overlay:build` | Builds the overlay container (source) |
| `task overlay:run` | Starts the overlay container in the background |
The overlay admin interface is available at http://localhost:3000/admin and can be used to create overlays. The actual overlay page is served at http://localhost:3000/ and is what gets rendered on top of the video.
The configuration is done in the config.toml file. The following parameters are available:
- `source_pipeline` - the pipeline that reads from the webcam
- `recording_pipeline` - the pipeline that writes to the file
- `still_pipeline` - the pipeline that reads from the webcam and writes to a file
- `preview_pipeline` - the pipeline that reads from the webcam and displays the video
- `preview_pipeline_overlay` - the pipeline that reads from the webcam and displays the video with an overlay
- `chunk_size` - the size of the video chunks in seconds
- `output_dir` - the directory where the video files are saved
- `chunkprefix` - the prefix of the video files
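As an illustration, a config.toml could look roughly like this; the pipeline strings and values are placeholders, not the project's actual defaults:

```toml
# Illustrative values only - adapt the pipelines to your setup
source_pipeline = "v4l2src device=/dev/video10 ! videoconvert ! unixfdsink socket-path=/tmp/source.sock"
recording_pipeline = "unixfdsrc socket-path=/tmp/source.sock ! x264enc ! h264parse ! hlssink3"
still_pipeline = "unixfdsrc socket-path=/tmp/source.sock ! jpegenc ! filesink location=still.jpg"
preview_pipeline = "unixfdsrc socket-path=/tmp/source.sock ! videoconvert ! webrtcsink"
preview_pipeline_overlay = "unixfdsrc socket-path=/tmp/source.sock ! glupload ! glvideomixer ! webrtcsink"
chunk_size = 6
output_dir = "output"
chunkprefix = "chunk"
```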
Build on Manjaro Linux with the following dependencies:
- GStreamer
- GStreamer Rust plugins (gst-plugins-rs)
- OpenCV
- NVIDIA drivers
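On Manjaro the dependencies can typically be installed with pacman; the package names below are an assumption, verify them against the repositories:

```sh
# Package names may differ between Manjaro/Arch releases
sudo pacman -S gstreamer gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-rs opencv
```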
The output of the application is a series of video chunks and a playlist file, together with the tooltip, sprite, and vtt files listed below. The playlist file is updated with each new chunk of video.
| Description | Output |
|---|---|
| video chunks | ts files (not shown here) |
| playlist file | m3u8 file |
| tooltip file | tooltip image (not shown here) |
| sprite file | sprite image (not shown here) |
| vtt file | vtt file |
On the WebRTC preview the video is displayed with an overlay. The sprite file contains 4 pixels from the middle of the frame for each second of video and is used to give a rough overview of the video. The generated vtt file can be used by the plyr player (https://plyr.io) to display the thumbnails during playback.
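For illustration, a pair of thumbnail cues in the vtt file could look like this; the sprite file name and the #xywh geometry (one 4-pixel-wide slice per second) are assumptions about the generated output:

```
WEBVTT

00:00:00.000 --> 00:00:01.000
sprite.jpg#xywh=0,0,4,180

00:00:01.000 --> 00:00:02.000
sprite.jpg#xywh=4,0,4,180
```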
The following is the graphical representation of the GStreamer pipelines used in the application.
They are created with the dot tool from the graphviz package. In order to make GStreamer emit the dot files, the GST_DEBUG_DUMP_DOT_DIR environment variable has to be set (see the example after the table).
| Description | Output |
|---|---|
| videosource | pipeline-source |
| videorecording | pipeline-recording |
| videopreview | pipeline-preview |
| videostill | pipeline-still |
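For example (the dump directory and the rendered file names are illustrative; GStreamer prefixes the dumped dot files with timestamps):

```sh
# Let GStreamer dump the pipeline graphs, then render one to PNG with graphviz
export GST_DEBUG_DUMP_DOT_DIR=/tmp/gst-dots
task run-app
dot -Tpng /tmp/gst-dots/pipeline-source.dot -o pipeline-source.png
```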
- To share the video frames, the unixfd elements are used. They do not copy the video frames but share the memory between the pipelines by passing a file descriptor from one pipeline to the other. For this to happen, the signal is propagated to the videosource element.
- For the WebRTC preview the webrtcsink element is used. It streams the video to a WebRTC endpoint; in this case it can be reached via http://localhost:8080.
- For the recording the hlssink3 element is used. It creates an HLS playlist file and a series of video chunks (see the first sketch after this list).
- For the overlay the wpesrc element is used. It renders a headless browser, so the video can be displayed with an overlay. The actual overlay server was implemented by https://github.com/moschopsuk/Roses-2015-CasparCG-Graphics with some adaptations from here. It provides a nice HTTP frontend to dynamically create overlays. In this sample it is reachable at http://localhost:3000/admin.
- The hlssink3 element only creates playlist entries for fully written chunks. So if the chunk size is 6 seconds, the playlist file is updated every 6 seconds. If a recording is stopped before a chunk is complete, the playlist file is not updated, so you might end up with a chunk that is not listed in the playlist file.
- The wpesrc element had trouble creating the overlay directly on the GPU (NVIDIA), so the environment variable LIBGL_ALWAYS_SOFTWARE=true had to be set to render the overlay on the CPU. The root cause seems to be the WPE backend library; the WPEBackend-offscreen-nvidia work from Igalia (https://github.com/Igalia/WPEBackend-offscreen-nvidia) would solve this issue. The actual mixing of the video and the overlay is done with the glvideomixer element. Only this element was able to make the overlay transparent (the compositor element didn't manage to make just the HTML background transparent, only the whole page). A sketch of such a mixing pipeline follows after this list.
- The unixfdsrc element had issues getting a file descriptor once the stream had been uploaded to the GPU via glupload.
- The webrtcsink element was used in this example with the web server enabled and the embedded signalling server. Inside the Docker container the signalling server constantly lost the connection, so in a real application a separate signalling server should be used.
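A minimal hlssink3 recording pipeline, sketched with a test source and illustrative locations (verify the exact properties with gst-inspect-1.0 hlssink3):

```sh
# Record a live test source into 6-second HLS chunks plus a playlist
gst-launch-1.0 videotestsrc is-live=true ! x264enc ! h264parse \
  ! hlssink3 playlist-location=output/playlist.m3u8 location=output/chunk%05d.ts target-duration=6
```

And a sketch of the overlay mixing with wpesrc and glvideomixer under software GL; the pad wiring and caps handling are simplified assumptions, not the application's actual preview pipeline:

```sh
# Mix the camera feed with a transparent browser overlay on the CPU GL path
LIBGL_ALWAYS_SOFTWARE=true gst-launch-1.0 \
  glvideomixer name=mix sink_1::zorder=1 ! glimagesink \
  v4l2src device=/dev/video10 ! videoconvert ! glupload ! glcolorconvert ! mix.sink_0 \
  wpesrc location=http://localhost:3000/ draw-background=false ! mix.sink_1
```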

