This repository has been archived by the owner on Jan 22, 2024. It is now read-only.

nvidia-docker 2.6.0-1 - not working on Ubuntu WSL2 #1496

Closed · 3 of 9 tasks
levipereira opened this issue Apr 30, 2021 · 60 comments

Comments

@levipereira commented Apr 30, 2021

1. Issue or feature description

After running apt-get upgrade, the packages below were updated to the latest versions:

 nvidia-docker2:amd64 (2.5.0-1, 2.6.0-1)
 libnvidia-container-tools:amd64 (1.3.3-1, 1.4.0-1)
 nvidia-container-runtime:amd64 (3.4.2-1, 3.5.0-1)
 libnvidia-container1:amd64 (1.3.3-1, 1.4.0-1)
 nvidia-container-toolkit:amd64 (1.4.2-1, 1.5.0-1)

After upgrading the packages above, nvidia-docker stopped working.

2. Steps to reproduce the issue

$ docker run   --gpus all  --rm -it  nvcr.io/nvidia/tensorrt:21.04-py3 /bin/bash


docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: 
starting container process caused: process_linux.go:495: container init caused: Running hook #0:: 
error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: 
driver error: failed to process request: unknown.

3. Information to attach (optional if deemed irrelevant)

  • Some nvidia-container information: nvidia-container-cli -k -d /dev/tty info
  • Kernel version from uname -a
  • Any relevant kernel output lines from dmesg
  • Driver information from nvidia-smi -a
  • Docker version from docker version
  • NVIDIA packages version from dpkg -l '*nvidia*' or rpm -qa '*nvidia*'
  • NVIDIA container library version from nvidia-container-cli -V
  • NVIDIA container library logs (see troubleshooting)
  • Docker command, image and tag used
Windows 10 Pro
Build 21364.co_release.210416-1504

$ uname -a
Linux  5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

localuser@LEVI-PC:~$ docker version
Client: Docker Engine - Community
 Version:           20.10.5
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        55c4c88
 Built:             Tue Mar  2 20:18:20 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8728dd2
  Built:            Fri Apr  9 22:45:28 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0



C:\WINDOWS\system32>nvidia-smi -a

==============NVSMI LOG==============

Timestamp                                 : Fri Apr 30 13:34:08 2021
Driver Version                            : 470.25
CUDA Version                              : 11.4

Attached GPUs                             : 1
GPU 00000000:01:00.0
    Product Name                          : NVIDIA GeForce RTX 2060 SUPER
    Product Brand                         : GeForce
    Display Mode                          : Enabled
    Display Active                        : Enabled
    Persistence Mode                      : N/A
    MIG Mode
        Current                           : N/A
        Pending                           : N/A
    Accounting Mode                       : Disabled
    Accounting Mode Buffer Size           : 4000
    Driver Model
        Current                           : WDDM
        Pending                           : WDDM
    Serial Number                         : N/A
    GPU UUID                              : GPU-c9d4e34c-cbf6-4dea-9f87-959d9eb94069
    Minor Number                          : N/A
    VBIOS Version                         : 90.06.44.40.20
    MultiGPU Board                        : No
    Board ID                              : 0x100
    GPU Part Number                       : N/A
    Module ID                             : 0
    Inforom Version
        Image Version                     : G001.0000.02.04
        OEM Object                        : 1.1
        ECC Object                        : N/A
        Power Management Object           : N/A
....


C:\WINDOWS\system32>

Workaround

Downgrade the packages below:

apt-get install nvidia-docker2:amd64=2.5.0-1 \
           libnvidia-container-tools:amd64=1.3.3-1 \
           nvidia-container-runtime:amd64=3.4.2-1 \
           libnvidia-container1:amd64=1.3.3-1 \
           nvidia-container-toolkit:amd64=1.4.2-1
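
To keep a later apt-get upgrade from pulling these packages forward again, the downgraded versions can also be pinned. A minimal sketch using the standard apt-mark tool (adjust the list to match whatever you actually downgraded):

$ sudo apt-mark hold nvidia-docker2 libnvidia-container-tools \
      nvidia-container-runtime libnvidia-container1 nvidia-container-toolkit

The hold can be released later with sudo apt-mark unhold once a fixed release is available.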

@elezar (Member) commented Apr 30, 2021

Hi @levipereira libnvidia-container1:amd64=1.4.0-1 offers better support for WSL, but this may require a driver update. I am checking to see if there are minimum requirements that we should document.

@levipereira (Author) commented:

The error was reproduced on Windows 10

Build 21343.rs_prerelease.210320-1757

Nvidia Driver:

C:\Users\levip>nvidia-smi
Fri Apr 30 10:19:10 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.25       Driver Version: 470.25       CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ... WDDM  | 00000000:01:00.0  On |                  N/A |
|  0%   36C    P8    27W / 175W |    507MiB /  8192MiB |      2%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+

@levipereira (Author) commented:

> Hi @levipereira libnvidia-container1:amd64=1.4.0-1 offers better support for WSL, but this may require a driver update. I am checking to see if there are minimum requirements that we should document.

I tried upgrading only libnvidia-container1, and the same error is raised, as follows:

localuser@LEVI-PC:~$ sudo su -
root@LEVI-PC:~# apt-get install libnvidia-container1
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  libnvidia-container1
1 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
Need to get 0 B/68.0 kB of archives.
After this operation, 1024 B of additional disk space will be used.
(Reading database ... 72998 files and directories currently installed.)
Preparing to unpack .../libnvidia-container1_1.4.0-1_amd64.deb ...
Unpacking libnvidia-container1:amd64 (1.4.0-1) over (1.3.3-1) ...
Setting up libnvidia-container1:amd64 (1.4.0-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
/sbin/ldconfig.real: /usr/lib/wsl/lib/libcuda.so.1 is not a symbolic link

root@LEVI-PC:~# exit

localuser@LEVI-PC:~$ docker run   --gpus all  --rm -it  nvcr.io/nvidia/tensorrt:21.04-py3 /bin/bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: 
starting container process caused: process_linux.go:495: container init caused: 
Running hook #0:: error running hook: exit status 1, stdout: , stderr: 
nvidia-container-cli: initialization error: driver error: failed to process request: unknown.

localuser@LEVI-PC:~$ sudo su -

root@LEVI-PC:~# apt-get  install  libnvidia-container1:amd64=1.3.3-1
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be DOWNGRADED:
  libnvidia-container1
0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 5 not upgraded.
Need to get 0 B/68.0 kB of archives.
After this operation, 1024 B disk space will be freed.
Do you want to continue? [Y/n] y
dpkg: warning: downgrading libnvidia-container1:amd64 from 1.4.0-1 to 1.3.3-1
(Reading database ... 72998 files and directories currently installed.)
Preparing to unpack .../libnvidia-container1_1.3.3-1_amd64.deb ...
Unpacking libnvidia-container1:amd64 (1.3.3-1) over (1.4.0-1) ...
Setting up libnvidia-container1:amd64 (1.3.3-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
/sbin/ldconfig.real: /usr/lib/wsl/lib/libcuda.so.1 is not a symbolic link

root@LEVI-PC:~# exit
logout
localuser@LEVI-PC:~$ docker run   --gpus all  --rm -it  nvcr.io/nvidia/tensorrt:21.04-py3 /bin/bash

=====================
== NVIDIA TensorRT ==
=====================

NVIDIA Release 21.04 (build 22393618)

NVIDIA TensorRT 7.2.3 (c) 2016-2021, NVIDIA CORPORATION.  All rights reserved.
Container image (c) 2021, NVIDIA CORPORATION.  All rights reserved.

https://developer.nvidia.com/tensorrt

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

To install the open-source samples corresponding to this TensorRT release version run /opt/tensorrt/install_opensource.sh.
To build the open source parsers, plugins, and samples for current top-of-tree on master or a different branch, run /opt/tensorrt/install_opensource.sh -b <branch>
See https://github.com/NVIDIA/TensorRT for more information.

WARNING: The NVIDIA Driver was not detected.  GPU functionality will not be available.
   Use 'nvidia-docker run' to start this container; see
   https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker .

root@3b945e54b500:/workspace#

@feynmanliang commented:

@levipereira seems we have a similar use case :)

I am also seeing this on driver 470.25 with libnvidia-container1:amd64=1.4.0-1.

$ apt list --installed libnvidia-container1
Listing... Done
libnvidia-container1/bionic,now 1.4.0-1 amd64 [installed,automatic]
$ nvidia-docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.
ERRO[0000] error waiting for container: context canceled

Downgrading to libnvidia-container1:amd64=1.3.3-1 seems to fix the driver issue, but runs into #1458 (comment) again:

$ sudo apt-get install nvidia-docker2:amd64=2.5.0-1 nvidia-container-runtime:amd64=3.4.0-1 nvidia-container-toolkit:amd64=1.4.2-1 libnvidia-container-tools:amd64=1.3.3-1 libnvidia-container1:amd64=1.3.3-1
...
$ nvidia-docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: requirement error: unsatisfied condition: cuda>=11.2, please update your driver to a newer version, or use an earlier cuda container: unknown.
ERRO[0000] error waiting for container: context canceled

@levipereira reported this issue earlier, and the current workaround is to run docker with --env NVIDIA_DISABLE_REQUIRE=1:

$ nvidia-docker run --env NVIDIA_DISABLE_REQUIRE=1 --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Pascal" with compute capability 6.1

> Compute 6.1 CUDA device: [NVIDIA GeForce GTX 1080]
20480 bodies, total time for 10 iterations: 14.627 ms
= 286.754 billion interactions per second
= 5735.088 single-precision GFLOP/s at 20 flops per interaction

Note that setting NVIDIA_DISABLE_REQUIRE=1 on libnvidia-container1:amd64=1.4.0-1 does not fix the issue; it looks like this is a new regression introduced in the 1.4.0 release which @klueska shipped yesterday alongside the nvidia-container-toolkit=1.5.0 release.
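
For anyone unsure which versions actually ended up installed after an upgrade or downgrade, a quick check using standard dpkg/NVIDIA tooling (nothing specific to this issue) is:

$ dpkg -l | grep -E 'nvidia-(docker|container)|libnvidia-container'
$ nvidia-container-cli -V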

@klueska (Contributor) commented Apr 30, 2021

Unfortunately, the primary maintainers of the nvidia container stack (myself included) do not have much visibility into WSL specific issues like these. The WSL code is written / tested by a different group within NVIDIA. We just make sure that any support that does get added for WSL doesn’t interfere with the functionality we have on Linux.

I’ve asked the WSL developers to comment here, so hopefully you will get a response soon.

@dualvtable (Contributor) commented:

hi everyone,

Unfortunately this is a known issue with NVML in our driver and the fix for this issue will be released (very soon) in an upcoming R470 driver release. The new drivers can be downloaded from the WSL page when available: https://developer.nvidia.com/cuda/wsl/download

Also - there is another known issue with multi-GPU systems that will still be present when the new driver is released. This new issue is still being debugged, so CUDA on WSL2 may not work on systems with multiple GPUs.

@ongzexuan commented:

Is there a place to download an older version of the drivers? The page only allows you to download the latest version, in which this feature is broken.

@antonioFlavio commented:

I'm also having this issue. Following for updates. Thank you!

@ongzexuan commented:

I ended up finding a potentially dubious third-party mirror for an older version of the driver, which solves my issue for now.

@kellerbaum commented:

Same issue. Following the CUDA on WSL guide here. Both docker examples fail with the same issue.

@StephaneKazmierczak commented:

Same issue; however, I could only get 470.14 from the official download page. The workaround didn't work, but reverting to 465.42 seems to have fixed it. Please add a page for downloading previous versions; having to rely on some Mega link is pretty lame.

@machineko commented:

> hi everyone,
>
> Unfortunately this is a known issue with NVML in our driver and the fix for this issue will be released (very soon) in an upcoming R470 driver release. The new drivers can be downloaded from the WSL page when available: https://developer.nvidia.com/cuda/wsl/download
>
> Also - there is another known issue with multi-GPU systems that will still be present when the new driver is released. This new issue is still being debugged, so CUDA on WSL2 may not work on systems with multiple GPUs.

@dualvtable So any updates 📦 ?

@feynmanliang commented May 13, 2021

It seems like the WSL driver page is serving 470.14, at least at the time of this post:
[screenshot]
But from Windows Update I got 470.25:
[screenshot]

Although NVML is broken (verified by a failing nvidia-smi), I am able to get CUDA on WSL to work fine under 470.25. My previous problems seem to have their root cause in the Ubuntu packages, and the issues resolve after downgrading:

> sudo apt-get install nvidia-docker2:amd64=2.5.0-1 nvidia-container-runtime:amd64=3.4.0-1 nvidia-container-toolkit:amd64=1.4.2-1 libnvidia-container-tools:amd64=1.3.3-1 libnvidia-container1:amd64=1.3.3-1

@rkcroc commented May 13, 2021

@feynmanliang Unfortunately, this does not work for me.
When I downgrade as you specify, I still get the same error as I did with the latest WSL NVIDIA packages:

libnvidia-container-tools/bionic,now 1.4.0-1 amd64 [installed,automatic]
libnvidia-container1/bionic,now 1.4.0-1 amd64 [installed,automatic]
nvidia-container-runtime/bionic,now 3.5.0-1 amd64 [installed,automatic]
nvidia-container-toolkit/bionic,now 1.5.0-1 amd64 [installed,automatic]
nvidia-docker2/bionic,now 2.6.0-1 all [installed]

I am running Windows Insider Windows 10 Pro 21376.co_release.210503-1432

Error message:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.

@levipereira (Author) commented:

> @feynmanliang Unfortunately, this does not work for me.
> When I downgrade as you specify, I still get the same error as I did with the latest WSL NVIDIA packages:
>
> libnvidia-container-tools/bionic,now 1.4.0-1 amd64 [installed,automatic]
> libnvidia-container1/bionic,now 1.4.0-1 amd64 [installed,automatic]
> nvidia-container-runtime/bionic,now 3.5.0-1 amd64 [installed,automatic]
> nvidia-container-toolkit/bionic,now 1.5.0-1 amd64 [installed,automatic]
> nvidia-docker2/bionic,now 2.6.0-1 all [installed]
>
> I am running Windows Insider Windows 10 Pro 21376.co_release.210503-1432
>
> Error message:
>
> docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.

Please downgrade nvidia-docker2 to 2.5

nvidia-docker2/bionic,now 2.6.0-1 all [installed]

@rkcroc commented May 14, 2021

@levipereira That's what I did. Sorry if it wasn't clear that, in the above, I was showing my upgraded state. When I downgraded, the NVIDIA packages were as below (i.e. what results from cut-pasting the apt-get from @feynmanliang):

nvidia-docker2:amd64=2.5.0-1
nvidia-container-runtime:amd64=3.4.0-1
nvidia-container-toolkit:amd64=1.4.2-1
libnvidia-container-tools:amd64=1.3.3-1
libnvidia-container1:amd64=1.3.3-1

Also, in case you meant to just downgrade nvidia-docker2 to 2.5, I did that. So, the final state would be what I listed in my previous post, but

nvidia-docker2/bionic,now 2.5.0-1 all [installed]

That gave again the same result as I pasted in my previous post.

@onomatopellan commented:

@rkcroc what's the output of ls -la $(which docker) ?

@rkcroc commented May 14, 2021

@onomatopellan

-rwxr-xr-x 1 root root 71717000 Mar 29 15:10 /usr/bin/docker
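
If the point of that check is to tell a natively installed Docker Engine apart from Docker Desktop's WSL integration (which typically exposes the CLI through symlinks or a proxy rather than a plain binary in /usr/bin), another way to see which backend the client is talking to is:

$ docker context ls
$ docker info --format '{{.OperatingSystem}}'

With Docker Desktop the operating system is usually reported as "Docker Desktop"; with a native dockerd inside WSL it shows the Linux distribution.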

@feynmanliang commented:

@rkcroc strange, I am on the same Windows build as you. I am unaffiliated with NVIDIA so we're both shooting in the dark here ;)

Maybe it's driver versions? I am on 470.25, which I got via Windows update (the NVIDIA WSL page was serving me 470.14)

@onomatopellan commented:

@rkcroc I can't reproduce it with a brand new Ubuntu install. I have driver 470.25 too.

@rkcroc commented May 15, 2021

@feynmanliang @onomatopellan Yes, you nailed it. For some reason Windows is not offering me the NVIDIA 470.25 driver update.
I will try to get that installed and re-try.

@qiangxinglin commented:

Same issue, any update?

@onomatopellan commented:

@qiangxinglin can you post your output for nvidia-container-cli -k -d /dev/tty info ?

@qiangxinglin commented May 17, 2021

@onomatopellan


I0517 13:36:49.986951 19831 nvc.c:372] initializing library context (version=1.4.0, build=704a698b7a0ceec07a48e56c37365c741718c2df)
I0517 13:36:49.987002 19831 nvc.c:346] using root /
I0517 13:36:49.987044 19831 nvc.c:347] using ldcache /etc/ld.so.cache
I0517 13:36:49.987047 19831 nvc.c:348] using unprivileged user 65534:65534
I0517 13:36:49.987056 19831 nvc.c:389] attempting to load dxcore to see if we are running under Windows Subsystem for Linux (WSL)
I0517 13:36:49.987114 19831 nvc.c:391] dxcore initialization failed, continuing assuming a non-WSL environment
W0517 13:36:49.998762 19831 nvc.c:254] failed to detect NVIDIA devices
I0517 13:36:49.998967 19832 nvc.c:274] loading kernel module nvidia
E0517 13:36:50.010418 19832 nvc.c:276] could not load kernel module nvidia
I0517 13:36:50.010443 19832 nvc.c:292] loading kernel module nvidia_uvm
E0517 13:36:50.021585 19832 nvc.c:294] could not load kernel module nvidia_uvm
I0517 13:36:50.021610 19832 nvc.c:301] loading kernel module nvidia_modeset
E0517 13:36:50.033145 19832 nvc.c:303] could not load kernel module nvidia_modeset
I0517 13:36:50.033457 19833 driver.c:101] starting driver service
E0517 13:36:50.033608 19833 driver.c:168] could not start driver service: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory
I0517 13:36:50.033745 19831 driver.c:203] driver service terminated successfully
nvidia-container-cli: initialization error: driver error: failed to process request

By the way, I don't know whether this is a relevant issue:

$ sudo ldconfig
/sbin/ldconfig.real: /usr/lib/wsl/lib/libcuda.so.1 is not a symbolic link

@qiangxinglin commented May 17, 2021

@onomatopellan

I also found something interesting:

With Docker Desktop ver 3.3.1(63152)

$ docker run --rm -it --env NVIDIA_DISABLE_REQUIRE=1 --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Error: only 0 Devices available, 1 requested.  Exiting.


$ docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: requirement error: unsatisfied condition: cuda>=11.2, please update your driver to a newer version, or use an earlier cuda container: unknown.

But with Docker Desktop ver 3.3.3(64133)

$ docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.


$ docker run --rm -it --env NVIDIA_DISABLE_REQUIRE=1 --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.

The stderr output differs between the two versions.

@onomatopellan commented:

@qiangxinglin What's your Windows build (winver.exe)? And the output of uname -a ?

@qiangxinglin commented:

@onomatopellan Oh my *** god! I rolled back to Docker Desktop 3.3.1 (63152), performed a reboot, and it works!

$  docker run --rm --gpus=all --env NVIDIA_DISABLE_REQUIRE=1 -it nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Ampere" with compute capability 8.6

> Compute 8.6 CUDA device: [NVIDIA GeForce RTX 3090]
83968 bodies, total time for 10 iterations: 254.847 ms
= 276.661 billion interactions per second
= 5533.223 single-precision GFLOP/s at 20 flops per interaction

@onomatopellan commented May 17, 2021

@qiangxinglin Awesome. It was weird that it only worked for me.

Well, that confirms it's an nvidia-docker bug that should be fixed in an upcoming update/driver.

@qiangxinglin commented May 17, 2021

Guys, Docker Desktop 3.3.1 (63152) is the cure! Do not forget to reboot after the downgrade :)

Do not update to the latest version until this is fixed!

@cpbotha commented May 18, 2021

> Guys, Docker Desktop 3.3.1 (63152) is the cure! Do not forget to reboot after the downgrade :)
>
> Do not update to the latest version until this is fixed!

More generally speaking: On the docs at https://docs.nvidia.com/cuda/wsl-user-guide/index.html it says "Note that NVIDIA Container Toolkit does not yet support Docker Desktop WSL 2 backend." -- is that advice now simply outdated, so we can expect nvidia docker to work with Docker Desktop v3.3.1?

nvidia folks, maybe time to update that part of the docs?

More specifically @qiangxinglin -- did you have to downgrade anything else except for Docker Desktop, for example nvidia-docker2 down to 2.5?

@feynmanliang commented May 18, 2021

Docker Desktop >=3.1.0 should support WSL 2 GPUs (https://docs.docker.com/docker-for-windows/wsl/#gpu-support) so the nvidia documentation is out of date.

That said, there are quite a few users who prefer running docker natively on top of WSL, so I wouldn't close this issue with the resolution being "downgrade Docker Desktop." My guess is that the downgrade resulted in a new docker host WSL VM with the older nvidia Ubuntu packages (#1496 (comment))

@qiangxinglin commented:

@cpbotha No, only Docker Desktop. But I have found new issues: although torch.cuda.is_available() and tf.config.list_physical_devices('GPU') return true, normal Python code freezes forever. I still need to find a workaround.

@renziver commented:

I'm having a similar issue as well. Downgrading to Docker 3.3.1 or 3.3.2 worked for me; there's something wrong with the 3.3.3 release.

Another thing: docker run works with the CUDA/NVIDIA driver, but I'm having trouble making CUDA work with docker build. Does anyone here encounter the same issue when building containers with NVIDIA driver support?
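
On the docker build point: docker build does not accept --gpus, so a commonly suggested workaround with a natively installed dockerd (not Docker Desktop) is to make nvidia the default runtime in /etc/docker/daemon.json so that build containers also go through the NVIDIA hook; note this may not help with BuildKit-based builds. A sketch, assuming nvidia-container-runtime is installed at its default path (nvidia-docker2 typically writes the runtimes block already, so only the default-runtime line needs adding):

{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}

Restart the daemon afterwards (e.g. sudo service docker restart inside WSL).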

@Minxiangliu commented May 25, 2021

@qiangxinglin
I tried all of the above, but none of it worked. Please help me.

$ docker run --rm --gpus=all --env NVIDIA_DISABLE_REQUIRE=1 -it nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
> docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused:
 process_linux.go:495: container init caused: Running hook #0:: error running hook: exit status 1, 
 stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.
winver: version 2004(OS Build 19041.985)
uname: Linux HLAI005021 5.4.72-microsoft-standard-WSL2 #1 SMP Wed Oct 28 23:40:43 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Windows nvidia driver: 470.14

[screenshots]

@qiangxinglin commented May 25, 2021 via email

@Minxiangliu commented:

> The reason is that you should subscribe to the Windows Insider Program on the "dev" channel and then retry. Keep in mind that the Windows build (winver) should be at least 20000+.

Do you mean that I need to update Windows?

@qiangxinglin commented May 25, 2021 via email

@Minxiangliu commented May 25, 2021

> Yes, follow the CUDA on WSL instructions to upgrade your OS version (Google keyword: windows insider program). Note that it's quite a big update which may take you more than 10 minutes.

I have updated to the latest: Version 20H2 (OS build 19042.985), but it still doesn't work.
Is this because my build has not been updated to 20000+?

@qiangxinglin commented May 25, 2021 via email

@Minxiangliu commented May 26, 2021

@qiangxinglin
My Windows update keeps crashing on version 21387. Do you have any comments? Many thanks.

Can the Beta channel work? I keep failing on the Dev channel (version 21387) update.

@qiangxinglin commented:

@Minxiangliu No, the Dev channel is mandatory. If you keep failing on Dev, you should seek help on the Microsoft forums.

@Minxiangliu commented:

@qiangxinglin The following is the result of my execution. Am I doing something wrong?

hp@H......:~$ docker run --rm -it --env NVIDIA_DISABLE_REQUIRE=1 --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Error: only 0 Devices available, 1 requested.  Exiting.

[screenshots]

winver: 21354.1

@qiangxinglin commented May 27, 2021 via email

@Minxiangliu commented:

> Update your driver to 470.14.

Do I need to reinstall Ubuntu after I update the Nvidia driver?

@qiangxinglin commented May 27, 2021 via email

@Minxiangliu commented:

@qiangxinglin Unfortunately, an error occurred after the driver was updated to 470.14.
[screenshot]

@qiangxinglin commented:

@Minxiangliu If the screenshot is from inside WSL, then that's expected. There is no driver inside WSL; the driver lives in Windows. If your test program works, then everything is done.

@Minxiangliu commented:

@qiangxinglin I tried to install according to the NVIDIA instructions and got the following error.
[screenshot]

Could the cause of the error be my winver problem?
My winver: 21354.1

@qiangxinglin commented May 28, 2021 via email

@Minxiangliu commented:

> The output doesn't look the same as anything discussed above. Maybe you could open a new issue for this. I don't know either.

OK... I updated Windows to the latest version and the problem was solved. Thank you very much for your assistance.

USER@H...:~$ docker run --rm -it --env NVIDIA_DISABLE_REQUIRE=1 --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Turing" with compute capability 7.5

> Compute 7.5 CUDA device: [NVIDIA GeForce RTX 2060 SUPER]
34816 bodies, total time for 10 iterations: 51.726 ms
= 234.339 billion interactions per second
= 4686.781 single-precision GFLOP/s at 20 flops per interaction

@kon72 commented Jun 4, 2021

Driver 470.76 was released yesterday, and it works well with nvidia-docker2=2.6.0-1!

@onomatopellan commented:

Indeed Driver 470.76 fixed the problems I had with nvidia-docker:

  • The nvidia-container-cli: initialization error is fixed
  • No more need for NVIDIA_DISABLE_REQUIRE=1
  • nvidia-smi finally works (except GPU fan detection)!

https://developer.nvidia.com/cuda/wsl/download
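
If you applied the earlier package downgrade (and possibly an apt-mark hold), the packages can be brought back to current now that the driver fix is out. A sketch, mirroring the package list used for the downgrade:

$ sudo apt-mark unhold nvidia-docker2 libnvidia-container-tools \
      nvidia-container-runtime libnvidia-container1 nvidia-container-toolkit
$ sudo apt-get update
$ sudo apt-get install --only-upgrade nvidia-docker2 libnvidia-container-tools \
      nvidia-container-runtime libnvidia-container1 nvidia-container-toolkit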

@demul commented Jan 13, 2022

@onomatopellan Oh my *** god! I rolled back to Docker Desktop 3.3.1(63152), and perform a reboot, it works!

$  docker run --rm --gpus=all --env NVIDIA_DISABLE_REQUIRE=1 -it nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Ampere" with compute capability 8.6

> Compute 8.6 CUDA device: [NVIDIA GeForce RTX 3090]
83968 bodies, total time for 10 iterations: 254.847 ms
= 276.661 billion interactions per second
= 5533.223 single-precision GFLOP/s at 20 flops per interaction

This is the only solution that works for me!

Thank you @qiangxinglin

@rahm-hopkins commented:

Note for anyone still having this problem: Windows 10 version 21H2 works and has a build version in the 19000s. I had to download the update here, as the standard Windows Update process would not update this far for some reason.
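
A quick way to confirm the Windows build without leaving the WSL shell (assuming Windows interop is enabled, which it is by default) is to call the Windows ver command through interop:

$ cmd.exe /c ver

This prints something like "Microsoft Windows [Version 10.0.19044.xxxx]", where the third number is the build.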
