
Add libEGL.so symlink #146

Closed
Conversation

@andrewseidl

libcuda appears to dlopen libEGL.so instead of libEGL.so.1. Create this symlink to prevent segfaults in CUDA+OpenGL/EGL programs.

Without this patch, one workaround is to manually bring in libEGL.so from the host via

-v /usr/lib/nvidia-367/libEGL.so.1:/usr/lib/libEGL.so

OpenGL in a container with no dependency on a running X server is nice :)

Related: #11
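For reference, a quick way to observe the loader behavior described above is to probe both names with dlopen from inside the container. This is only a rough sketch (the file name probe_egl.c and the probe helper are made up here, not part of this PR or the linked gist); without the symlink, the unversioned name should fail to resolve while libEGL.so.1 loads fine.

```c
/* probe_egl.c - check which libEGL names the dynamic loader can resolve.
 * Build: gcc probe_egl.c -o probe_egl -ldl
 * (Hypothetical file name; illustrative only.)
 */
#include <dlfcn.h>
#include <stdio.h>

static void probe(const char *name)
{
    void *handle = dlopen(name, RTLD_LAZY);
    if (handle) {
        printf("%-12s -> loaded OK\n", name);
        dlclose(handle);
    } else {
        printf("%-12s -> FAILED: %s\n", name, dlerror());
    }
}

int main(void)
{
    /* libcuda reportedly dlopen()s the unversioned name, so without the
     * libEGL.so symlink the first probe is expected to fail inside the
     * container even though libEGL.so.1 is present. */
    probe("libEGL.so");
    probe("libEGL.so.1");
    return 0;
}
```

Run inside a container started without the extra -v mount and the first probe should print a dlopen error; with the symlink (or the mount above) both names should load.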

@andrewseidl
Author

Signed CLA emailed to digits@nvidia.com

@flx42
Member

flx42 commented Jul 20, 2016

Thanks for the PR!
Do you have a small example that fails without this symlink?

Thanks

@andrewseidl
Author

It looks like this was reported and acknowledged a few months ago on the CUDA side of things: https://devtalk.nvidia.com/default/topic/917987/cuda-opengl-interoperability-segfault-using-egl-opengl-context-egl_platform_device_ext-/

Gist with code and compilation instructions for the above post: https://gist.github.com/andrewseidl/7ff90a7f6675d1419560ef7850176979

@flx42
Member

flx42 commented Jul 21, 2016

Thanks for the link and test case, I'm investigating internally right now!

@flx42
Member

flx42 commented Aug 4, 2016

Good news, it will be fixed in future driver versions. I'm leaving this PR open in the meantime.

@andyneff

I can verify this happens in a non-CUDA situation too, but with libGL.so. The behavior is the same: if libGL.so does not exist, creating a GLX context fails; if it does exist, it succeeds. (I'm using GLEW and freeglut on debian:jessie, and I'm not sure where the stray libGL.so dependency comes from, so my example is not a simple reproducer for this.)

@flx42 When you said "in future drivers", were you talking about CUDA versions, the NVIDIA driver, or the nvidia-docker version? I'm using NVIDIA driver version 361.42.

@flx42
Member

flx42 commented Sep 13, 2016

@andyneff Should be fixed after 361.53, from what I can see from an internal source.
The driver bundled with CUDA 8.0 GA (coming soon) should have the fix; I will try to verify that this problem is fixed once CUDA exits RC.

@flx42
Member

flx42 commented Dec 5, 2016

Seems to be fixed with 367.57, closing.
