Vulkan based linux port #604
Conversation
This version replaces the "screen grab" version. The mechanism is based on frame capture in the vrcompositor process; the captured frames are then shared with the vrserver process, encoded there and transmitted to the headset.

A vulkan layer is added to vrcompositor. This layer exposes VK_EXT_direct_mode_display, VK_EXT_acquire_xlib_display and some other related vulkan functions. The functions implemented in the layer add one display to the list naturally returned by the driver; this display matches the properties in the ALVR settings (resolution and refresh rate), and all functions required by SteamVR are implemented (xlib acquire, properties enumeration, vsync event, swapchain creation).

The layer then connects to a socket owned by the server, submits the VkImage creation parameters, and transfers the file descriptors corresponding to the images in the swapchain and the associated semaphores. Then, on each frame, a packet is sent on the socket describing the image that has been submitted.

On the server side, vulkan frames are mapped to vaapi surfaces using ffmpeg, converted to a format suitable for encoding, encoded through vaapi and sent with the usual mechanism.

In order to associate each frame with tracking information, timing details are requested from the compositor, and the tracking history is queried to find the most suitable tracking index. This mechanism is still inaccurate and will need significant upgrades.

Co-authored-by: Ron B <me@ronthecookie.me>
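As a rough illustration of the layer-to-server handoff described above, the sketch below is hedged: the struct layout, function names and per-frame packet are illustrative assumptions, not ALVR's actual wire format. It shows how file descriptors exported from Vulkan objects (e.g. via vkGetMemoryFdKHR / vkGetSemaphoreFdKHR) can be passed over a Unix socket with SCM_RIGHTS, and how a small per-frame packet can then reference an already-shared swapchain image by index.

```c
/* Hypothetical sketch of the layer -> server handoff over a Unix socket.
 * All names and the packet layout are illustrative; ALVR's real protocol
 * may differ. */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Per-frame packet: identifies which swapchain image was just submitted. */
struct present_packet {
    uint32_t image_index;   /* index into the swapchain images shared at setup */
    uint64_t frame_counter; /* monotonically increasing frame number */
};

/* Pass one exported fd (e.g. from vkGetMemoryFdKHR or vkGetSemaphoreFdKHR)
 * as SCM_RIGHTS ancillary data alongside a small payload. */
static int send_fd(int sock, int fd, const void *payload, size_t len)
{
    struct iovec iov = { .iov_base = (void *)payload, .iov_len = len };
    char cmsg_buf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = { 0 };

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cmsg_buf;
    msg.msg_controllen = sizeof(cmsg_buf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}

/* After setup, each presented frame only needs this small packet. */
static int send_present(int sock, uint32_t image_index, uint64_t frame)
{
    struct present_packet pkt = { image_index, frame };
    return send(sock, &pkt, sizeof(pkt), 0) < 0 ? -1 : 0;
}
```

On the server side, the received fds can then be imported back into Vulkan and handed towards the encoder, so only the tiny per-frame packet crosses the socket once the swapchain has been set up.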
The following error occurs while building:
|
This means your ffmpeg does not include vulkan support. An update to the build instructions is in progress... |
thanks a lot! |
e9b5af8 to a11c744
Do not build ffmpeg module inside alvr_common
it builds the server, and the client builds using android studio.
i tried different protocols (tcp / throttled udp / udp) but without success. EDIT: the reason was that i had disabled audio streaming |
after a dozen audio errors, i can get it to this point using jack and pulseaudio.
|
In order to get the correct pose associated with an image, search the stack for the variable, then copy it and send it to the server.
This will allow the user to avoid having to modify `vrenv.sh`, which is quite error-prone.
the launcher is looking for the openvr paths config at ~/.cache |
this has been changed long ago, it is in alvr/common/src/commands.rs:

#[cfg(windows)]
let path = trace_none!(dirs::cache_dir())?.join("openvr/openvrpaths.vrpath");
#[cfg(target_os = "linux")]
let path = trace_none!(dirs::config_dir())?.join("openvr/openvrpaths.vrpath");

Where do you get the issue? |
it's in /alvr/launcher/src/commands.rs. Another problem i found in the launcher is that it removes openvrpaths when clicking "reset drivers", then it throws an error about not being able to load it ^^ Also, the XDG path environment variable of wrapper.sh points to some weird directory on my system, whereas the .txt file it searches for is in /tmp/ |
Sorry for bothering you again, it occurs on the self-compiled apk of this release, as well as on the main release of alvr. Trying to change from tcp to udp results in the apk crashing on the quest. |
@dennisheine You have to turn off audio streaming as that part isn't implemented. |
For some reason, neither the debug nor the release version of the driver wants to load. SteamVR web console shows this error: Meanwhile, the launcher floods the console with this: |
@FeckingPotato |
Thank you, switching from steam-runtime to steam-native worked. But now SteamVR complains about a key component not working properly (Error code 307). Edit 3: for some reason, I had the AMD driver installed alongside the NVIDIA driver. Removing the AMD driver fixed the problem. |
This works with an Nvidia GPU?? |
it is based on vulkan, so it should work with nvidia. this project is the last thing i am missing on linux. no need for windows anymore. |
It goes one step further for nvidia: the video stream is now captured by the server, but hardware encoding only supports vaapi (AMD/Intel) for the moment. |
oh, maybe that's why i couldn't get it running |
Maybe we can try https://github.com/freedesktop/vdpau-driver (a VDPAU-based backend for VA-API) to support vdpau? |
vdpau is a decode-only api; the encoding one is nvenc, but as I don't have the hardware to test it, someone else has to develop the feature. There are also the new provisional vulkan extensions which provide hardware h264 encoding; I don't know the timeline for their finalization, but that would be a much better solution as it will certainly be implemented by all vendors in the end. |
It seems that nvidia's beta driver already supports this new feature. So maybe we can use vulkan encoding on nvidia. When mesa supports this new extension, we can give up vaapi and use vulkan entirely. |
Sure, but vulkan encoding is a provisional extension, so it is only enabled on beta drivers, and this will be the case until the extension is finalized. I do not have the hardware to test the feature; you can submit patches, but I don't think this is an ideal solution for the short term, as khronos discourages shipping products that rely on provisional extensions.
|
Thank you for your work ❤️
The code is too hard for me, I just can't wait to use this on linux. 🤣 |
Is there a blocker for implementing NVENC support through FFmpeg? If I'm reading the ALVR code and FFmpeg docs correctly, it should just be a matter of changing |
Not really, there are a few differences: you can't map frames between vulkan and cuda, you have to transfer them instead, and apparently color conversion doesn't work the same. |
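To make the map-vs-transfer distinction concrete, here is a rough sketch against ffmpeg's C hwcontext API (device and frames-context setup omitted; this is an illustrative assumption about how such paths could look, not ALVR's actual encoder code):

```c
#include <libavutil/frame.h>
#include <libavutil/hwcontext.h>

/* VAAPI path: the Vulkan frame can be mapped without a copy, assuming the
 * VAAPI frames context has been derived from the Vulkan one
 * (av_hwframe_ctx_create_derived). */
static int to_vaapi(AVFrame *vaapi_frame, const AVFrame *vulkan_frame)
{
    return av_hwframe_map(vaapi_frame, vulkan_frame, AV_HWFRAME_MAP_READ);
}

/* CUDA/NVENC path: mapping from Vulkan is not available, so the frame has
 * to be copied into a frame allocated from a CUDA frames context, with the
 * color conversion handled separately. */
static int to_cuda(AVFrame *cuda_frame, const AVFrame *vulkan_frame)
{
    return av_hwframe_transfer_data(cuda_frame, vulkan_frame, 0);
}
```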
Hello and many thanks for this linux feature. Now, if this works, I will leave Windows for a long time. I have Arch, compiled successfully. Using steam-native. When I open alvr, it detects the headset (with the latest APK 15.2.1) and I can see on the headset "Streaming will start shortly, please wait". I can see packets being sent to the Quest, but nothing more... I can't see the room... Is there any APK or special configuration on the server? I have 60Hz h264... Thanks!

--- Update: I have installed the latest NIGHTLY APK and now I have video!! 👍 I'm very happy with this incredible step for linux VR gaming... My card is an Nvidia RTX2070 Super with 465.27 and I have good quality (h264, 60hz), but it is not so smooth... I think because VAAPI is not supported. I will check the new changes as soon as you publish them. Thanks again. |
Hi @amvidalrc!
Audio streaming hasn't been implemented yet and is turned off by default to avoid the missing codepath.
ALVR does not currently use hardware encoding on nVidia as CUDA is annoying to work with and no one has put the effort into it yet. (it uses software encoding on the CPU instead) Also, for the future, we develop ALVR on the discord server and that is where we usually discuss this kind of stuff. (you will find more activity there, but personally I am not against using GitHub too) |
Co-authored-by: ckie git-525ff67@ckie.dev
Closes #269.