
Draft: oneVPL for intel gpu #242359

Closed
wants to merge 4 commits

Conversation

evanrichter
Contributor

evanrichter commented Jul 8, 2023

Description of changes

I got an Intel Arc dedicated GPU and want to test hardware decoding with ffmpeg. This requires using libvpl instead of libmfx to interface with the hardware, which is a configure flag in ffmpeg (in sources newer than those currently in nixpkgs).

This PR:

  • adds oneVPL
  • adds oneVPL-intel-gpu
  • modifies ffmpeg-full configure flags to build with oneVPL
  • now that intel-media-sdk is deprecated, Intel recommends that developers switch to depending on oneVPL; at runtime, end users can still dispatch to newer hardware by setting an environment variable. I added oneVPL to the runtimeInputs of intel-media-sdk so that this can actually happen
Things done
  • Built on platform(s)
    • x86_64-linux
    • aarch64-linux
    • x86_64-darwin
    • aarch64-darwin
  • For non-Linux: Is sandbox = true set in nix.conf? (See Nix manual)
  • Tested, as applicable:
  • Tested compilation of all packages that depend on this change using nix-shell -p nixpkgs-review --run "nixpkgs-review rev HEAD". Note: all changes have to be committed, also see nixpkgs-review usage
  • Tested basic functionality of all binary files (usually in ./result/bin/)
  • 23.11 Release Notes (or backporting 23.05 Release notes)
    • (Package updates) Added a release notes entry if the change is major or breaking
    • (Module updates) Added a release notes entry if the change is significant
    • (Module addition) Added a release notes entry if adding a new NixOS module
  • Fits CONTRIBUTING.md.

@PJungkamp
Contributor

PJungkamp commented Aug 22, 2023

I haven't checked the changes here against my own derivations, but I can already leave some notes:

  • oneVPL/oneVPL-intel-gpu would probably be better names for those packages as they more closely resemble the upstream name.
  • You need to tell oneVPL where to find the shared libraries for intel-media-sdk or oneVPL-intel-gpu. This can be done by specifying the ONEVPL_SEARCH_PATH environment variable at runtime.

If you want to try whether oneVPL actually works you should add something like this to your NixOS configuration:

environment.variables.ONEVPL_SEARCH_PATH = lib.strings.makeLibraryPath (with pkgs; [intel-media-sdk oneVPL-intel-gpu]);

@evanrichter
Contributor Author

amazing work, thanks for taking a look! I'll be back home next week but until then I don't have time to test.

evanrichter changed the title from "Draft: Libvpl for intel gpu" to "Draft: oneVPL for intel gpu" on Aug 22, 2023
@evanrichter
Contributor Author

Changed the PR title from libvpl to oneVPL, and I intend to change the package name as well.

@PJungkamp
Contributor

FYI. Here's the relevant code in oneVPL that finds oneVPL-intel-gpu or intel-media-sdk shared libraries based on ONEVPL_SEARCH_PATH.
See: https://github.com/oneapi-src/oneVPL/blob/ca5bbbb057a6e84b103aca807612afb693ad046c/dispatcher/vpl/mfx_dispatcher_vpl_loader.cpp#L560-L568

@PJungkamp
Contributor

PJungkamp commented Aug 23, 2023

These are the derivations and overlays I currently use as part of my NixOS configuration. The oneVPL repos are non-flake inputs to my configuration flake.

  • The oneVPL-intel-gpu derivation you wrote seems pretty complete.
  • I'd like to see the oneVPL tests working, but I haven't managed to get them passing yet. Your derivation doesn't currently include the oneVPL test suite. Could you have a look at my derivation below and see if you can get the tests to work?
  • The change to ffmpeg is pretty simple (sketched below):
    1. Add a new withVPL switch and the onevpl package to the derivation inputs.
    2. Assert that not both withMfx and withVPL are specified.
    3. Add onevpl to buildInputs if withVPL is true.
    4. Add enableFeature "libvpl" to configureFlags if withVPL is true.

The withVPL flag should probably become the new default as oneVPL can still dispatch to intel-media-sdk, so hopefully everything keeps working.
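For illustration, the relevant pieces of those four steps might look roughly like this inside the ffmpeg derivation (a sketch only, using the withVPL/withMfx/onevpl names from this discussion; the real nixpkgs expression wires them into its existing option plumbing):

{ lib, onevpl, withMfx ? false, withVPL ? true }:

# step 2: the two Intel interfaces are mutually exclusive
assert !(withMfx && withVPL);

{
  # steps 1 and 3: the oneVPL dispatcher is a new input; add it to buildInputs when enabled
  buildInputs = lib.optional withVPL onevpl;

  # step 4: toggle ffmpeg's ./configure switch
  configureFlags = [ (lib.enableFeature withVPL "libvpl") ];
}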

My oneVPL-intel-gpu derivation:
{
  lib,
  stdenv,
  cmake,
  pkg-config,
  gtest,
  libdrm,
  libva,
  self,
}:
stdenv.mkDerivation rec {
  pname = "onevpl-intel-gpu";
  version = "custom";
  src = self.inputs.onevpl-intel-gpu;

  nativeBuildInputs = [cmake pkg-config];
  buildInputs = [
    libdrm
    libva
  ];
  nativeCheckInputs = [gtest];

  cmakeFlags = [
    "-DBUILD_TESTS=${
      if doCheck
      then "ON"
      else "OFF"
    }"
    "-DUSE_SYSTEM_GTEST=ON"
  ];

  doCheck = true;

  meta = with lib; {
    description = "Intel oneVPL GPU Runtime";
    license = licenses.mit;
    platforms = ["x86_64-linux"];
  };
}
My oneVPL derivation:
{
  self,
  lib,
  stdenv,
  cmake,
  pkg-config,
  gtest,
  libdrm,
  libffi,
  libpciaccess,
  libva,
  libX11,
  libXau,
  libXdmcp,
  wayland,
  wayland-protocols,
  ...
}:
stdenv.mkDerivation rec {
  pname = "onevpl";
  version = "custom";
  src = self.inputs.onevpl;

  nativeBuildInputs = [cmake pkg-config];
  buildInputs = [
    libdrm
    libffi
    libpciaccess
    libva
    libX11
    libXau
    libXdmcp
    wayland
    wayland-protocols
  ];
  nativeCheckInputs = [gtest];

  cmakeFlags = [
    "-DBUILD_TESTS=${
      if doCheck
      then "ON"
      else "OFF"
    }"
    "-DUSE_SYSTEM_GTEST=ON"
    "-DINSTALL_EXAMPLE_CODE=OFF"
  ];

  doCheck = false;

  meta = with lib; {
    description = "Intel oneVPL GPU Runtime";
    license = licenses.mit;
    platforms = ["x86_64-linux"];
  };
}
My jellyfin-ffmpeg overlay:
final: prev: {
  jellyfin-ffmpeg = final.ffmpeg_6-full.overrideAttrs (finalAttrs: prevAttrs: {
    src = final.fetchFromGitHub {
      owner = "jellyfin";
      repo = "jellyfin-ffmpeg";
      rev = "v6.0-4";
      sha256 = "sha256-o0D/GWbSoy5onbYG29wTbpZ8z4sZ2s1WclGCXRMSekA=";
    };

    version = "custom";
    buildInputs = prevAttrs.buildInputs ++ [final.onevpl];
    configureFlags = (final.lib.lists.remove "--enable-libmfx" prevAttrs.configureFlags) ++ ["--enable-libvpl"];
  });
  jellyfin = prev.jellyfin.override {
    ffmpeg = final.jellyfin-ffmpeg;
  };
}

@PJungkamp
Contributor

PJungkamp commented Sep 1, 2023

@evanrichter I think I've got it working in Nixpkgs. Could you check the changes in my fork? I based those on this PR. See https://github.com/PJungkamp/nixpkgs/tree/onevpl

Feel free to pull those into your branch. I'm not quite settled on NixOS yet, so I won't list myself as a maintainer and thus won't create a PR myself.

Running ffmpeg -init_hw_device qsv=qs -filter_hw_device qs with ffmpeg_6-full from nixpkgs-unstable gives me this error:

[AVHWDeviceContext @ 0xa42cc0] Error initializing an MFX session: -3.
Device creation failed: -1313558101.
Failed to set value 'qsv=qs' for option 'init_hw_device': Unknown error occurred

But using oneVPL-intel-gpu with the changed ffmpeg_6-full from my branch, the command above finishes successfully.

Remember that oneVPL needs the ONEVPL_SEARCH_PATH environment variable pointing at oneVPL-intel-gpu's lib folder. If no implementation for oneVPL is found, this error is raised:

[AVHWDeviceContext @ 0x762cc0] Error creating a MFX session: -9.
Device creation failed: -1313558101.
Failed to set value 'qsv=qs' for option 'init_hw_device': Unknown error occurred

Building ffmpeg_6-full is a pain. ffmpeg_6-full depends on opencv, which in turn depends on ffmpeg-full, which means you have to rebuild several large projects to test my ffmpeg update commit.

@J-Swift

J-Swift commented Oct 29, 2023

@PJungkamp hey, thanks for documenting everything here and in that linked issue. I tried to use your fork, and it compiles successfully, but I'm still getting the Device creation failed error. Anything else you might have set in your NixOS config to get this all working (specifically with Jellyfin)? EDIT: Sigh.... for anyone else... make sure you aren't running via sudo.... it succeeds when running as my regular user. Very odd.

EDIT2: It looks like it's because I was setting the various env vars using environment.variables instead of environment.sessionVariables. That meant they were not visible to root or to the jellyfin user.

EDIT3: Last edit, I promise... you also need to make sure to add these variables to systemd.services.jellyfin.environment so that the systemd service sees them. I'm now successfully hardware transcoding using QSV, with VPP tone mapping and low power enabled. I can even add burned-in subs and a 7.1 audio downgrade at the same time. Thanks again for the efforts!

@evanrichter
Contributor Author

@J-Swift thanks for sharing your findings! Because of the nuance of getting the environment variables right so these libraries actually work at runtime, I think I should also define a NixOS module like hardware.intelGPU.enable = true; that sets them by default (a rough sketch is below). That can be a follow-up PR.
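A minimal sketch of what such a module might look like, assuming the option name from the sentence above and the package attribute names used in this PR (none of this exists in nixpkgs yet):

{ config, lib, pkgs, ... }:

{
  options.hardware.intelGPU.enable =
    lib.mkEnableOption "runtime defaults for Intel GPU video acceleration";

  config = lib.mkIf config.hardware.intelGPU.enable {
    # Let the oneVPL dispatcher find a runtime implementation system-wide.
    environment.sessionVariables.ONEVPL_SEARCH_PATH =
      lib.strings.makeLibraryPath (with pkgs; [ oneVPL-intel-gpu intel-media-sdk ]);
  };
}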

I'm hoping to test it myself this week and finally mark this PR ready. I'm not sure if the ffmpeg maintainers will like having a change like this so close to the 23.11 feature freeze, so I'll probably leave the mfx -> vpl change out of the ffmpeg package. @Atemu any thoughts?

@Atemu
Member

Atemu commented Oct 30, 2023

The ffmpeg part of this is probably rather trivial. As long as it builds and still works for everything else, I don't see why we shouldn't add this to at least ffmpeg-full. Once it's been enabled there for a release, we can move it to regular ffmpeg should some other package (jellyfin?) require it. Closure size might be a concern at that point; have you tested that?

@J-Swift

J-Swift commented Oct 30, 2023

@evanrichter yeah, there is some stuff in the nixos-hardware configs (https://github.com/NixOS/nixos-hardware/blob/master/common/cpu/intel/default.nix and https://github.com/NixOS/nixos-hardware/blob/master/common/gpu/intel/default.nix), and I added my own jellyfin service wrapper to do as you describe. Here is what I've got for reference (intelDriverVersion is just a variable I'm using to indicate which VA-API driver to use, but I've only verified it works with iHD locally):

{
  # https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/#linux-setups
  users.users."${config.services.jellyfin.user}".extraGroups = [ "render" ];

  hardware.opengl = {
    enable = true;
    extraPackages = with pkgs; [
      (if intelDriverVersion == "iHD" then intel-media-driver else intel-vaapi-driver)
      intel-ocl
      vaapiVdpau
      libvdpau-va-gl
    ];
  };

  boot.initrd.kernelModules = [ "i915" ];
  environment.sessionVariables = {
    LIBVA_DRIVER_NAME = if intelDriverVersion == "iHD" then "iHD" else "i965";
    VDPAU_DRIVER = "va_gl";
    ONEVPL_SEARCH_PATH = lib.strings.makeLibraryPath (with pkgs; [oneVPL-intel-gpu intel-media-sdk]);
  };

  systemd.services.jellyfin.environment = {
    LIBVA_DRIVER_NAME = if intelDriverVersion == "iHD" then "iHD" else "i965";
    VDPAU_DRIVER = "va_gl";
    ONEVPL_SEARCH_PATH = lib.strings.makeLibraryPath (with pkgs; [oneVPL-intel-gpu intel-media-sdk]);
  };

  # NOTE(jpr): useful debugging tools you can enable
  # environment.systemPackages = [
  #   pkgs.pciutils
  #   pkgs.libva-utils
  #   pkgs.intel-gpu-tools
  #   pkgs.jellyfin-ffmpeg
  # ];
}

evanrichter force-pushed the libvpl-for-intel-gpu branch 4 times, most recently from 558cfd2 to 445dde9 on October 30, 2023 22:53
@evanrichter
Contributor Author

evanrichter commented Oct 30, 2023

@Atemu closure size goes down by a few MB, seemingly because intel-media-sdk is no longer a direct dependency; instead it would be found at runtime by the new onevpl dependency. Now that I think about it, that will probably cause problems for users depending on intel-media-sdk playback or rendering, due to library search paths? See @J-Swift's work adding the necessary environment variable:

ONEVPL_SEARCH_PATH = lib.strings.makeLibraryPath (with pkgs; [oneVPL-intel-gpu intel-media-sdk]);

here's the output of nix path-info to get closure sizes of the two builds

@PJungkamp I pulled in your commit for ffmpeg, hope that's ok!

also looks like I rebased too far once... need to remove that first commit

@PJungkamp
Contributor

Awesome!

My Nix Build Machine that was supposed to run jellyfin is currently down, so I lost sight of this issue...

I also took a look into intel-media-sdk, which can also discover the onevpl backend at runtime. This seems to be done by searching a path specified at build time (MODULES_PATH, I think): the directory the media-sdk .so is installed into is also searched for the onevpl .so. Maybe we should also pull onevpl in as an optional, enabled-by-default dependency of the intel-media-sdk derivation. That would give all remaining media-sdk consumers proper GPU acceleration on Alder Lake and newer.

@PJungkamp
Contributor

https://github.com/Intel-Media-SDK/MediaSDK/blob/7a72de33a15d6e7cdb842b12b901a003f7154f0a/api/mfx_dispatch/linux/mfxloader.cpp#L194

Maybe putting it into the buildInputs is enough; it seems to search for onevpl in the normal library path.
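Untested, but that suggestion would look something like this as an overlay (onevpl being the attribute name from this PR); whether buildInputs alone is enough for the runtime library lookup to succeed is exactly the open question:

final: prev: {
  intel-media-sdk = prev.intel-media-sdk.overrideAttrs (prevAttrs: {
    # make libvpl visible to the dispatcher's library search
    buildInputs = (prevAttrs.buildInputs or [ ]) ++ [ final.onevpl ];
  });
}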

@Atemu
Member

Atemu commented Oct 31, 2023

Please follow the contributing guide on how to change the merge base.

Atemu closed this Oct 31, 2023
NixOS locked and limited conversation to collaborators Oct 31, 2023