
Video Acceleration (VA) API #14

Open
uwerat opened this issue May 6, 2022 · 3 comments
uwerat commented May 6, 2022

Let's see how to make better use of the GPU: https://intel.github.io/libva/index.html

  • JPEG encoding
  • H.264 encoding
  • ???
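A quick way to see which encoders the VA-API driver on a given machine actually exposes is `vainfo` from the libva-utils package. This is only a diagnostic sketch; the driver name below is an assumption for an Intel GPU and has to be adjusted to whatever the system provides:

```shell
# List the profiles / entrypoints the VA-API driver advertises.
# LIBVA_DRIVER_NAME overrides the auto-detected driver ( assumption:
# an Intel GPU with the iHD or i965 driver installed ).
export LIBVA_DRIVER_NAME=iHD
vainfo | grep -E 'EncPicture|EncSlice'
```

JPEG encoding shows up as `VAProfileJPEGBaseline : VAEntrypointEncPicture`, H.264 encoding as `VAEntrypointEncSlice` under one of the H264 profiles.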
uwerat commented May 20, 2022

Made some first attempts at JPEG and got "something" using the old driver ( export LIBVA_DRIVER_NAME=i965 ). But my test scenario differs from the intended solution, as transferring the image to a VASurface includes a download from / upload to the GPU.

Encoding seems to be more than twice as fast ( including the upload to / download from the GPU ) as libjpeg-turbo for an image of 600x600 pixels. However, the colors are wrong and some lines are shifted when setting certain quality values.

@unintialized

Would it be easier to use the neatvnc library for hardware acceleration?


uwerat commented Oct 15, 2023

It is not about hardware acceleration in general - most of that is already achieved by the current implementation. The missing part is using the encoders of the GPU without having to download the rendered frame to main memory first.

As far as I can see ( from a very brief check ), the neatvnc project has an optional dependency on the ffmpeg library, and libavutil is used in a file called h264-encoder.c. There is a function "h264_encoder_feed" with a parameter "struct nvnc_fb".
I might be wrong, but looking at this struct it seems to contain a pointer to the frame that is intended to be encoded.

For JPEG the neatvnc project seems to use libjpeg-turbo. This is usually ( depending on how Qt was built ) also behind the implementation of QImageWriter, so it is used by vnc-eglfs as well.

So if my quick analysis is correct, the implementation of the neatvnc project expects the frames to be in main memory ( not on the GPU ) before encoding. But please correct me if I missed something here ...
