X server in docker #98

Closed
Immortalin opened this issue Mar 16, 2019 · 26 comments

@Immortalin commented Mar 16, 2019

As far as I can tell, the only way to run VirtualGL in an unprivileged Docker container on a headless host is to use Xdummy. Are there any other ways?

This fails with

 error: (EE) parse_vt_settings: Cannot open /dev/tty0

since /dev/tty0 does not exist in the container.
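
(For context: a dummy-driver xorg.conf of the kind Xdummy setups typically use looks roughly like this; the values below are placeholders for illustration, not the exact config from this report.)

Section "Device"
    Identifier "dummy_device"
    Driver "dummy"
    VideoRam 256000
EndSection

Section "Monitor"
    Identifier "dummy_monitor"
    HorizSync 5.0-1000.0
    VertRefresh 5.0-200.0
EndSection

Section "Screen"
    Identifier "dummy_screen"
    Device "dummy_device"
    Monitor "dummy_monitor"
    SubSection "Display"
        Depth 24
        Modes "1920x1080"
    EndSubSection
EndSection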

@dcommander (Member)

I have never tried it, but I know that others in the community have. You may have better luck posting to the VirtualGL-users Google Group. I seem to recall that it requires https://github.com/NVIDIA/nvidia-docker if you’re using an nVidia GPU.

@Immortalin (Author) commented Mar 16, 2019

Well, using the default configs seems to give a segfault:

FROM nvidia/opengl:1.0-glvnd-runtime-ubuntu18.04
RUN apt-get update && apt-get install -y locales && rm -rf /var/lib/apt/lists/* \
    && localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8
WORKDIR /app
COPY ./build/virtualgl_2.6.1_amd64.deb /app/
RUN DEBIAN_FRONTEND=noninteractive dpkg -i  /app/virtualgl_2.6.1_amd64.deb 
RUN apt-get install -f
RUN apt-get update
COPY ./build/nvidia-xconfig /usr/bin/nvidia-xconfig
COPY ./build/xorg.conf /app/xorg.conf
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y xserver-xorg-video-dummy pkg-config mesa-utils libxv1 libglu1-mesa --no-install-recommends
RUN nvidia-xconfig -a --allow-empty-initial-configuration --use-display-device=None --virtual=1920x1080 --busid 'PCI:1:0:0'
COPY ./test.sh /app/
CMD /app/test.sh 

and test.sh:

#!/bin/sh
Xorg -noreset +extension GLX +extension RANDR +extension RENDER -logfile ./1.log -config ./xorg.conf :1 &    
export VGL_DISPLAY=:01
export DISPLAY=:01
/opt/VirtualGL/bin/vglrun /opt/VirtualGL/bin/glxinfo  -display :01

The xorg.conf is from here, and nvidia-xconfig is a binary copied from the host machine; the BusID was also extracted manually from the host machine.

nvidia-docker build . -t gpu/test
docker run --runtime=nvidia -it --init gpu/test:latest

gives

X.Org X Server 1.19.6
Release Date: 2017-12-20
X Protocol Version 11, Revision 0
Build Operating System: Linux 4.4.0-138-generic x86_64 Ubuntu
Current Operating System: Linux b3a9a98f5222 4.15.0-46-generic #49-Ubuntu SMP Wed Feb 6 09:33:07 UTC 2019 x86_64
Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-46-generic root=UUID=0dbd2787-4c6a-49a7-8104-c191fa00b421 ro quiet splash vt.handoff=1
Build Date: 25 October 2018  04:11:27PM
xorg-server 2:1.19.6-1ubuntu4.2 (For technical support please see http://www.ubuntu.com/support) 
Current version of pixman: 0.34.0
	Before reporting problems, check http://wiki.x.org
	to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting,
	(++) from command line, (!!) notice, (II) informational,
	(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(++) Log file: "./1.log", Time: Sat Mar 16 23:50:14 2019
(++) Using config file: "./xorg.conf"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
Error: unable to open display 
name of display: :01
display: :01  screen: 0
direct rendering: Yes
server glx vendor string: VirtualGL
server glx version string: 1.4
server glx extensions:
    GLX_ARB_create_context, GLX_ARB_create_context_profile, 
    GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, 
    GLX_EXT_swap_control, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, 
    GLX_EXT_visual_rating, GLX_NV_swap_group, GLX_SGIX_fbconfig, 
    GLX_SGIX_pbuffer, GLX_SGI_make_current_read, GLX_SGI_swap_control, 
    GLX_SUN_get_transparent_index
client glx vendor string: VirtualGL
client glx version string: 1.4
client glx extensions:
    GLX_ARB_create_context, GLX_ARB_create_context_profile, 
    GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, 
    GLX_EXT_swap_control, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, 
    GLX_EXT_visual_rating, GLX_NV_swap_group, GLX_SGIX_fbconfig, 
    GLX_SGIX_pbuffer, GLX_SGI_make_current_read, GLX_SGI_swap_control, 
    GLX_SUN_get_transparent_index
GLX version: 1.4
GLX extensions:
    GLX_ARB_create_context, GLX_ARB_create_context_profile, 
    GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, 
    GLX_EXT_swap_control, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, 
    GLX_EXT_visual_rating, GLX_NV_swap_group, GLX_SGIX_fbconfig, 
    GLX_SGIX_pbuffer, GLX_SGI_make_current_read, GLX_SGI_swap_control, 
    GLX_SUN_get_transparent_index
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 7.0, 256 bits)
OpenGL core profile version string: 3.3 (Core Profile) Mesa 18.2.2
OpenGL core profile shading language version string: 3.30
OpenGL core profile extensions:
    GL_AMD_conservative_depth, GL_AMD_draw_buffers_blend, 
    GL_AMD_seamless_cubemap_per_texture, GL_AMD_shader_stencil_export, 
    GL_AMD_shader_trinary_minmax, GL_AMD_vertex_shader_layer, 
    GL_AMD_vertex_shader_viewport_index, GL_ANGLE_texture_compression_dxt3, 
    GL_ANGLE_texture_compression_dxt5, GL_ARB_ES2_compatibility, 
    GL_ARB_ES3_compatibility, GL_ARB_arrays_of_arrays, GL_ARB_base_instance, 
    GL_ARB_blend_func_extended, GL_ARB_buffer_storage, 
    GL_ARB_clear_buffer_object, GL_ARB_clear_texture, GL_ARB_clip_control, 
    GL_ARB_compressed_texture_pixel_storage, 
    GL_ARB_conditional_render_inverted, GL_ARB_conservative_depth, 
    GL_ARB_copy_buffer, GL_ARB_copy_image, GL_ARB_cull_distance, 
    GL_ARB_debug_output, GL_ARB_depth_buffer_float, GL_ARB_depth_clamp, 
    GL_ARB_direct_state_access, GL_ARB_draw_buffers, 
    GL_ARB_draw_buffers_blend, GL_ARB_draw_elements_base_vertex, 
    GL_ARB_draw_indirect, GL_ARB_draw_instanced, GL_ARB_enhanced_layouts, 
    GL_ARB_explicit_attrib_location, GL_ARB_explicit_uniform_location, 
    GL_ARB_fragment_coord_conventions, GL_ARB_fragment_layer_viewport, 
    GL_ARB_fragment_shader, GL_ARB_framebuffer_object, 
    GL_ARB_framebuffer_sRGB, GL_ARB_get_program_binary, 
    GL_ARB_get_texture_sub_image, GL_ARB_gpu_shader_fp64, 
    GL_ARB_gpu_shader_int64, GL_ARB_half_float_pixel, 
    GL_ARB_half_float_vertex, GL_ARB_instanced_arrays, 
    GL_ARB_internalformat_query, GL_ARB_internalformat_query2, 
    GL_ARB_invalidate_subdata, GL_ARB_map_buffer_alignment, 
    GL_ARB_map_buffer_range, GL_ARB_multi_bind, GL_ARB_multi_draw_indirect, 
    GL_ARB_occlusion_query2, GL_ARB_pipeline_statistics_query, 
    GL_ARB_pixel_buffer_object, GL_ARB_point_sprite, 
    GL_ARB_polygon_offset_clamp, GL_ARB_program_interface_query, 
    GL_ARB_provoking_vertex, GL_ARB_robustness, GL_ARB_sampler_objects, 
    GL_ARB_seamless_cube_map, GL_ARB_seamless_cubemap_per_texture, 
    GL_ARB_separate_shader_objects, GL_ARB_shader_bit_encoding, 
    GL_ARB_shader_objects, GL_ARB_shader_stencil_export, 
    GL_ARB_shader_subroutine, GL_ARB_shader_texture_lod, 
    GL_ARB_shading_language_420pack, GL_ARB_shading_language_packing, 
    GL_ARB_stencil_texturing, GL_ARB_sync, GL_ARB_texture_buffer_object, 
    GL_ARB_texture_buffer_object_rgb32, GL_ARB_texture_buffer_range, 
    GL_ARB_texture_compression_bptc, GL_ARB_texture_compression_rgtc, 
    GL_ARB_texture_cube_map_array, GL_ARB_texture_float, 
    GL_ARB_texture_gather, GL_ARB_texture_mirror_clamp_to_edge, 
    GL_ARB_texture_multisample, GL_ARB_texture_non_power_of_two, 
    GL_ARB_texture_query_levels, GL_ARB_texture_query_lod, 
    GL_ARB_texture_rectangle, GL_ARB_texture_rg, GL_ARB_texture_rgb10_a2ui, 
    GL_ARB_texture_stencil8, GL_ARB_texture_storage, 
    GL_ARB_texture_storage_multisample, GL_ARB_texture_swizzle, 
    GL_ARB_texture_view, GL_ARB_timer_query, GL_ARB_transform_feedback2, 
    GL_ARB_transform_feedback3, GL_ARB_transform_feedback_instanced, 
    GL_ARB_transform_feedback_overflow_query, GL_ARB_uniform_buffer_object, 
    GL_ARB_vertex_array_bgra, GL_ARB_vertex_array_object, 
    GL_ARB_vertex_attrib_64bit, GL_ARB_vertex_attrib_binding, 
    GL_ARB_vertex_shader, GL_ARB_vertex_type_10f_11f_11f_rev, 
    GL_ARB_vertex_type_2_10_10_10_rev, GL_ARB_viewport_array, 
    GL_ATI_blend_equation_separate, GL_ATI_texture_float, 
    GL_ATI_texture_mirror_once, GL_EXT_abgr, GL_EXT_blend_equation_separate, 
    GL_EXT_draw_buffers2, GL_EXT_draw_instanced, GL_EXT_framebuffer_blit, 
    GL_EXT_framebuffer_multisample, GL_EXT_framebuffer_multisample_blit_scaled, 
    GL_EXT_framebuffer_sRGB, GL_EXT_packed_depth_stencil, GL_EXT_packed_float, 
    GL_EXT_pixel_buffer_object, GL_EXT_polygon_offset_clamp, 
    GL_EXT_provoking_vertex, GL_EXT_shader_integer_mix, GL_EXT_texture_array, 
    GL_EXT_texture_compression_dxt1, GL_EXT_texture_compression_rgtc, 
    GL_EXT_texture_compression_s3tc, GL_EXT_texture_integer, 
    GL_EXT_texture_mirror_clamp, GL_EXT_texture_sRGB, 
    GL_EXT_texture_sRGB_decode, GL_EXT_texture_shared_exponent, 
    GL_EXT_texture_snorm, GL_EXT_texture_swizzle, GL_EXT_timer_query, 
    GL_EXT_transform_feedback, GL_EXT_vertex_array_bgra, 
    GL_IBM_multimode_draw_arrays, GL_KHR_context_flush_control, GL_KHR_debug, 
    GL_KHR_no_error, GL_KHR_texture_compression_astc_ldr, GL_MESA_pack_invert, 
    GL_MESA_shader_integer_functions, GL_MESA_texture_signed_rgba, 
    GL_MESA_ycbcr_texture, GL_NV_conditional_render, GL_NV_depth_clamp, 
    GL_NV_packed_depth_stencil, GL_OES_EGL_image, GL_S3_s3tc
Segmentation fault (core dumped)

On the host machine, TurboVNC + VirtualGL works fine.

@dcommander (Member)

GitHub issues rarely get read by anyone but me. Post to VirtualGL-users if you want a chance of community engagement. I have no idea how to solve this and no time to look into it right now. I am working on an EGL back end that may at least make this easier.

@Immortalin (Author) commented Mar 17, 2019

The "headless" rendering documentation was a wild goose chase 😑 since my GPU doesn't support that. I wouldn't mind sending in a pull request or something to update the instructions for headless rendering. I think you need to use Xdummy for that.

The comments here regarding performance are particularly interesting.

@dcommander (Member)

Your terminology is again confusing. A headless 3D X server does not require Xdummy. It just requires a headless GPU.

@Immortalin (Author) commented Mar 17, 2019

Uh, but don't you need Xdummy to get around the lack of hardware support for headless rendering?

Apologies for the poor terminology; I am new to this.

Since TurboVNC 2.2.1 has libglvnd Mesa direct rendering support and nvidia-docker provides libglvnd, is there any need for VirtualGL at all when running TurboVNC in an NVIDIA-enabled container? Docker already handles the GPU time-sharing problem.

@dcommander (Member) commented Mar 17, 2019

Let me play around and get back to you with a more thorough answer, but a quick & dirty answer:

  • I wasn’t aware that Xdummy could be used as a substitute for a headless GPU. What I’m reading online suggests you might be right, but I want to see it for myself.

  • GLVND just directs OpenGL requests to a particular driver stack, depending on the X screen. It doesn’t magically connect the X server to that driver or the GPU it uses. TurboVNC can’t (currently) use any Mesa driver other than llvmpipe/softpipe because its framebuffer is in main memory, not GPU memory. If nvidia-docker implements GPU pass-through, that’s great. That means that, once the EGL back end for VirtualGL is implemented, VGL should be able to access the nVidia EGL device within the Docker container. In the meantime, couldn’t you run Xdummy within the Docker container?

@Immortalin (Author) commented Mar 17, 2019

In my Dockerfile here I tried to do that, but I think I didn't configure things properly due to inexperience. (For one, the display variables were mixed up, and the X server was also using the Xdummy xorg files instead of the nvidia-generated ones; I have no idea how to merge those two together.)

@dcommander reopened this Apr 5, 2019
@Immortalin (Author)

@dcommander any luck?

@dcommander (Member)

Have had zero time to look into it. Be patient.

@Immortalin (Author)

No worries, I just noticed that the issue was re-opened, that's all.

@jeremyfix

Don't you have an issue with the display names in your test.sh script? It reads ":01", and the beginning of the glxinfo output mentions that it cannot open the display:

Error: unable to open display 
name of display: :01

Doesn't it work better if you change the display to ":1" instead of ":01", so that test.sh reads:

#!/bin/sh
Xorg -noreset +extension GLX +extension RANDR +extension RENDER -logfile ./1.log -config ./xorg.conf :1 &    
export VGL_DISPLAY=:1
export DISPLAY=:1
/opt/VirtualGL/bin/vglrun /opt/VirtualGL/bin/glxinfo  -display :1

@dcommander (Member) commented May 11, 2019

Note that setting VGL_DISPLAY and DISPLAY to the same value is pretty much automatically incorrect, since VGL_DISPLAY is supposed to point to the 3D X server and DISPLAY is supposed to point to the 2D X server. Since supporting VGL in a Docker container represents a feature enhancement rather than a bug fix, I don't have time to test this prior to 2.6.2 (unfortunately, as an independent OSS developer, rarely do I get paid for doing speculative research like that), but I re-opened the issue to remind me to look into it when I have time. In the meantime, hopefully others can give you advice. And if you have more direct questions to ask me regarding the configuration, I'm happy to answer those.
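
To illustrate the split described above (a hedged sketch; the display numbers are arbitrary examples, not taken from this thread):

# :0 is assumed to be the GPU-attached 3D X server; :1 is assumed to be the
# 2D X server the application actually draws to (e.g. a TurboVNC session).
export VGL_DISPLAY=:0    # 3D X server: where OpenGL rendering happens
export DISPLAY=:1        # 2D X server: where the application windows appear
/opt/VirtualGL/bin/vglrun /opt/VirtualGL/bin/glxinfo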

@mviereck

@Immortalin If you just need an X server inside a container without having anything visible and without GPU support, just use Xvfb instead of Xdummy. It is easier to handle.
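
A minimal Xvfb invocation looks like this (a sketch; the display number and geometry are arbitrary, and glxinfo comes from mesa-utils):

Xvfb :1 -screen 0 1920x1080x24 &
export DISPLAY=:1
glxinfo | grep "OpenGL renderer"    # should report llvmpipe / software rendering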

You might also be interested in x11docker, which provides an X server from the host to a container.
Maybe it even works to provide headless GPU access for the container; that could depend on the GPU itself.

I once had a VirtualGL setup in x11docker and am currently thinking about including it again. (I am the developer of x11docker.)

@c4pQ commented Aug 7, 2020

> error: (EE) parse_vt_settings: Cannot open /dev/tty0

@Immortalin Regarding this issue: I've tried sort of a hack: pass --device=/dev/tty10 (or whatever tty you don't use on the host) into the container and then make /dev/tty0 a symlink to it. It didn't help, though - I ran into different issues.

UPDATE.
Another approach is to again pass /dev/tty10 to the container, but tell LightDM to start on this tty by adding the following option to lightdm.conf:

[LightDM]
minimum-vt=10

I also ran the container with --cap-add SYS_TTY_CONFIG to allow it to interact with the tty.
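
Putting those pieces together, the container invocation would look roughly like this (a sketch based on the description above; the image name and tty number are just examples):

# pass an unused host tty into the container and allow it to reconfigure TTYs
docker run --device=/dev/tty10 --cap-add SYS_TTY_CONFIG -it my_image bash
# approach 1: inside the container, point /dev/tty0 at the passed-through tty
ln -s /dev/tty10 /dev/tty0
# approach 2: leave /dev/tty0 alone and set minimum-vt=10 in lightdm.conf instead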

@kimown commented Aug 10, 2020

> error: (EE) parse_vt_settings: Cannot open /dev/tty0
>
> @Immortalin Regarding this issue: I've tried sort of a hack: pass --device=/dev/tty10 (or whatever tty you don't use on the host) into the container and then make /dev/tty0 a symlink to it. It didn't help, though - I ran into different issues.
>
> UPDATE.
> Another approach is to again pass /dev/tty10 to the container, but tell LightDM to start on this tty by adding the following option to lightdm.conf:
>
> [LightDM]
> minimum-vt=10
>
> I also ran the container with --cap-add SYS_TTY_CONFIG to allow it to interact with the tty.

Have you had any success starting Xorg in Docker? I have the same problem; I want to use OpenGL in an NVIDIA Docker environment.

@c4pQ commented Aug 10, 2020

@kimown I have made some progress, but not much.

First of all, I'm trying to make it work with an Intel GPU in headless mode, in a container derived from nvidia/opengl:1.0-glvnd-runtime, which I believe is built on top of an Ubuntu 16.04 image.

docker run --rm -ti -h my_image_hostname -p 6001:6001 --device /dev/tty10 --device /dev/dri/card0 --device /dev/dri/renderD128 --cap-add SYS_TTY_CONFIG my_image bash

Running Docker as above without a predefined xorg.conf, but with the lightdm.conf, lets me run LightDM. I can even see the standard Ubuntu login screen, but then it just hangs and the only thing I can do is reboot. The problem at the moment seems to be with acquiring the keyboard and mouse. If I run the container with --privileged, I can see the mouse cursor for less than a second before it disappears; the cursor in the password box blinks about 10 times and then everything becomes unresponsive. Worth mentioning: I cannot even toggle Caps Lock (the light on the key doesn't react).

@Immortalin (Author)

I am no longer working on the project that needed streaming in Docker.

@kimown commented Aug 12, 2020

@c4pQ I found a way: OpenGL needs a display, so we can use X.Org to generate a virtual display on the host machine and then share that display with the Docker container. Does this solve the problem?

@c4pQ commented Aug 13, 2020

@kimown could you please provide your solution for everyone looking for the answer?

@kimown commented Aug 14, 2020

> @kimown could you please provide your solution for everyone looking for the answer?

Just an idea: first we start a DISPLAY on the host machine, then we share that DISPLAY with the Docker container.
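
The usual pattern for that idea looks roughly like this (a hedged sketch; it assumes the host X server runs on :0 and accepts local connections, e.g. granted via xhost, and the image name is a placeholder):

# on the host: allow local connections to the X server (coarse-grained)
xhost +local:
# run the container against the host's X socket
docker run --runtime=nvidia \
    -e DISPLAY=:0 \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -it my_image glxinfo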

@ffeldhaus

As VirtualGL now has experimental support for running without an X server (see #10), it is much easier to run OpenGL inside a Docker container using VirtualGL. Please see my early work here and tell me if it works for you and what needs to be improved:
https://github.com/ffeldhaus/docker-xpra-html5-opengl
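
For reference, with the experimental EGL back end the invocation looks roughly like this (a hedged sketch; it assumes a VirtualGL version that includes the EGL back end, and the device path may differ on other systems):

# point VirtualGL at a DRM device instead of a 3D X server
export VGL_DISPLAY=/dev/dri/card0
/opt/VirtualGL/bin/vglrun /opt/VirtualGL/bin/glxinfo
# equivalently, the device can be selected on the command line:
# /opt/VirtualGL/bin/vglrun -d /dev/dri/card0 /opt/VirtualGL/bin/glxinfo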

@dcommander (Member)

I was able to make it work with a host-side 3D X server. See #113 (comment)

@dcommander (Member)

https://github.com/dcommander/virtualgl_docker_examples now contains my latest Docker/VirtualGL/TurboVNC examples.

@ehfd commented Nov 13, 2020

https://github.com/ehfd/docker-nvidia-egl-desktop

A MATE Desktop container for NVIDIA GPUs that does not use an X server, directly accessing the GPU with EGL and emulating GLX using VirtualGL and TurboVNC. It does not require /tmp/.X11-unix host sockets.

@dcommander (Member)

@ehfd Please stop posting duplicate comments. You are creating noise.
