
Releases: osai-ai/tensor-stream

FFmpeg 6.0, NATIVE_LOW_DELAY mode

27 Apr 14:01
ba0df3c

Changes

  • FFmpeg 6.0 is now used.
  • New frame rate mode: NATIVE_LOW_DELAY, which provides minimal latency.
  • Small bug fixes and improvements.

Changed default stream frame rate adaptation mode, added support for PyTorch 1.5.0

28 Apr 13:52

Changes

  • The default NATIVE frame rate mode was changed.
    The old NATIVE mode introduced a delay between the RTMP stream and the TensorStream output because of its fixed-duration thread sleeps, so it was replaced with a new mode that adapts to the stream's frame rate based on the stream metadata. Instead of synchronizing to a fixed interval, as the previous NATIVE implementation did, synchronization is driven by external data. The old implementation remains available as the NATIVE_SIMPLE mode.
  • Added support for the new PyTorch 1.5.0 release.
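The difference between the two pacing strategies can be sketched in a few lines of plain Python (a conceptual illustration only; none of these names are TensorStream API):

```python
# Old NATIVE-style pacing: a constant per-frame interval chosen once,
# which cannot react if the real stream frame rate differs.
def fixed_interval(assumed_fps):
    return lambda: 1.0 / assumed_fps

# New adaptive pacing: the interval is recomputed from the frame rate
# reported in the stream metadata, so it follows the stream.
def adaptive_interval(get_stream_fps):
    return lambda: 1.0 / get_stream_fps()

fixed = fixed_interval(25.0)
fps = {"value": 25.0}                      # stand-in for stream metadata
adaptive = adaptive_interval(lambda: fps["value"])

fps["value"] = 50.0                        # the stream speeds up
# fixed() is still 0.04 s, while adaptive() has dropped to 0.02 s
```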

Crop support, handling FFmpeg hangs

24 Jan 14:02

New functionality

For a detailed description of how to use the new features, please check the README.md and the documentation.

Crop as postprocessing feature:

Crop is defined by two points: the top-left and bottom-right corners of the ROI. Cropping is performed on the GPU before resizing and any color conversion.
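As a sketch of the two-point ROI convention (pure Python on nested lists; the real cropping runs on the GPU and this helper is illustrative):

```python
def crop(image, top_left, bottom_right):
    # ROI given as (x, y) top-left and (x, y) bottom-right corners,
    # matching the two-point convention described above.
    (x1, y1), (x2, y2) = top_left, bottom_right
    return [row[x1:x2] for row in image[y1:y2]]

# 4x4 "image" whose pixels encode their own (y, x) coordinates.
img = [[(y, x) for x in range(4)] for y in range(4)]
roi = crop(img, (1, 1), (3, 3))  # 2x2 region starting at (1, 1)
```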

FFmpeg hangs handling

If FFmpeg hangs because of a lost internet connection or for some other reason, TensorStream can handle it when the new timeout feature is set. Once the timeout is reached, TensorStream stops FFmpeg and returns a non-zero error code.
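The pattern can be sketched with a thread-based watchdog (a simplification: TensorStream actually stops FFmpeg itself, while this sketch merely abandons the hung call):

```python
import threading

def run_with_timeout(blocking_call, timeout_s):
    # Run blocking_call in a worker thread; if it does not finish within
    # timeout_s, give up and report a non-zero error code instead of
    # hanging forever.
    result = {}
    worker = threading.Thread(
        target=lambda: result.update(value=blocking_call()), daemon=True)
    worker.start()
    worker.join(timeout_s)
    if worker.is_alive():
        return 1, None               # non-zero error code: the call hung
    return 0, result["value"]

ok_code, value = run_with_timeout(lambda: 42, timeout_s=1.0)
hung_code, _ = run_with_timeout(lambda: threading.Event().wait(), timeout_s=0.1)
```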

Changes

  • Dropped support for CUDA 9: new pre-built binaries are no longer provided for CUDA 9, but TensorStream can still be built with CUDA 9 manually.
  • Changed the command line for building the Docker container: the PyTorch version now has to be specified via TORCH_VERSION.

Bug fix release.

01 Nov 08:15

Minor release with bug fixes.

Fixed

Serious

Correctness

  • Corrected the Python installation file for Windows so that TensorStream builds correctly against new PyTorch releases (> v1.1.0): 87fcb8
  • Changed the TensorStream initialization arguments in the C++ sample to avoid decoded picture buffer (DPB) issues with some custom streams: db7b55

Postprocessing improvements, support of reading multiple videos at the same time, multi GPU support, different video reading modes, NVTX logs.

26 Sep 10:39
0794c44

New functionality

For a detailed description of how to use the new features, please check the README.md and the documentation.

Postprocessing improvements:

  • New color spaces:
    • YUV (4:2:0, 4:2:2, 4:4:4 subsamplings): NV12, UYVY, YUV444.
    • HSV
  • Normalization to [0, 1] range for all available color spaces except HSV.
  • Dimension orders: CHW and HWC for RGB24 and BGR24. In other words, both planar and interleaved (merged) RGB formats are now supported.
  • New image interpolation algorithms:
    • BILINEAR
    • BICUBIC
    • AREA
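A pure-Python stand-in for two of these options, converting an interleaved (HWC) RGB24 image to planar (CHW) order with [0, 1] normalization (the real postprocessing runs on the GPU; the function name is illustrative):

```python
def hwc_to_chw_normalized(image, max_value=255):
    # image is H x W x C nested lists of 8-bit values; the result is
    # C x H x W with every value divided by max_value into [0, 1].
    h, w, c = len(image), len(image[0]), len(image[0][0])
    return [[[image[y][x][ch] / max_value for x in range(w)]
             for y in range(h)]
            for ch in range(c)]

rgb = [[[255, 0, 0], [0, 255, 0]]]   # a 1x2 interleaved RGB24 image
planar = hwc_to_chw_normalized(rgb)  # 3 planes, each of shape 1x2
```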

Read multiple videos at the same time:

Several instances of the TensorStream library, each with its own settings, can be created. They work in parallel, so different streams or local files can be read at the same time.
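The pattern looks roughly like this (a stand-in reader class replaces the real TensorStream instance here, since the library needs a GPU and a live source):

```python
import threading

class FakeReader:
    # Stand-in for one TensorStream instance with its own settings.
    def __init__(self, source, n_frames):
        self.source, self.n_frames, self.frames = source, n_frames, []

    def run(self):
        for i in range(self.n_frames):
            self.frames.append((self.source, i))   # "decode" one frame

# One instance per stream or local file, each running in its own thread.
readers = [FakeReader("rtmp://stream_a", 3), FakeReader("video_b.mp4", 2)]
threads = [threading.Thread(target=r.run) for r in readers]
for t in threads:
    t.start()
for t in threads:
    t.join()
```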

Multi GPU support:

A target GPU ('cuda:0', 'cuda:1', etc.) can be specified for each TensorStream instance, so the GPU-side processing of different instances can run in parallel on different GPUs.
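One common way to use this is to spread streams over the available GPUs, e.g. round-robin (the 'cuda:N' device strings are the ones mentioned above; the helper itself is illustrative):

```python
def assign_devices(sources, n_gpus):
    # Map each stream onto a 'cuda:N' device string, round-robin,
    # so instances end up balanced across GPUs.
    return {src: f"cuda:{i % n_gpus}" for i, src in enumerate(sources)}

mapping = assign_devices(["cam0", "cam1", "cam2"], n_gpus=2)
# cam0 -> cuda:0, cam1 -> cuda:1, cam2 -> cuda:0
```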

Different video reading modes:

Besides the default mode, which reads streams at their native frame rate, two new modes were added:

  • Read as fast as possible. For real-time streams this mode can dramatically increase latency if frames are read faster than the stream's native frame rate.
  • Read frame by frame without skipping. While the current frame is occupied by external processing, no new frames are parsed.
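The frame-by-frame mode can be modeled with a capacity-one queue: while the consumer holds the current frame, the producer blocks, so no frame is skipped (a sketch of the concept, not TensorStream internals):

```python
import queue
import threading

def decode(frames, slot):
    for f in frames:
        slot.put(f)      # blocks until the consumer takes the previous frame
    slot.put(None)       # end-of-stream marker

slot = queue.Queue(maxsize=1)
threading.Thread(target=decode, args=(range(5), slot)).start()

received = []
while (frame := slot.get()) is not None:
    received.append(frame)   # external processing would happen here
```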

Integrated support of NVTX logs:

NVTX logs were integrated into the most important GPU functions. These logs can be gathered and analyzed with the NVIDIA Visual Profiler.
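The marker shape looks like this (a pure-Python stand-in; in real CUDA code the same push/pop pair maps onto torch.cuda.nvtx.range_push(name) / torch.cuda.nvtx.range_pop(), which profilers pick up):

```python
from contextlib import contextmanager

@contextmanager
def nvtx_range(name, log):
    # Mark the begin and end of a named region, as an NVTX range does.
    log.append(("push", name))
    try:
        yield
    finally:
        log.append(("pop", name))

log = []
with nvtx_range("ColorConversion", log):
    pass  # the GPU function's work would run here
```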

Changed

  • The number of consumers able to read one stream in parallel was changed from a hardcoded value to a user-defined one. See the maxConsumers option of the initPipeline function in C++ and the max_consumers option of the TensorStreamConverter constructor in Python for more information.
  • The capacity of the internal buffer of decoded frames was changed from a hardcoded value to a user-defined one. See the decoderBuffer option of the initPipeline function in C++ and the buffer_size option of the TensorStreamConverter constructor in Python for more information.
  • The bitstream analysis stage can now be skipped to decrease latency.

Fixed

  • Improved the log system and added more information to simplify debugging: f7228, d435d

0.1.8

26 Sep 06:39

Initial TensorStream library release.