
Starting or stopping the admin container breaks nvidia-smi in one of my running containers #3860

Open
modelbitjason opened this issue Mar 28, 2024 · 7 comments
Labels
area/accelerated-computing Issues related to GPUs/ASICs status/needs-triage Pending triage or re-evaluation type/bug Something isn't working

Comments

@modelbitjason

I'm using bottlerocket-nvidia on ECS.

Very recently, logging into my EC2 hosts via SSM and then entering the admin container causes nvidia-smi to stop functioning.

Other containers that also access the GPU on that host are fine.

I'm not telling ECS about my GPU usage; instead, I changed the default runtime to nvidia and pass in the relevant environment variables.
If I tell ECS about the GPU, it won't let me gracefully roll a new instance on that host.
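
For context, here's a rough sketch of the generic Docker-level configuration I mean; the daemon.json path and runtime binary name are illustrative assumptions, and how Bottlerocket actually exposes the default-runtime setting may differ:

# Illustrative only: make the nvidia runtime Docker's default so containers
# get GPU access without reserving GPUs through ECS task definitions.
cat <<'EOF' > /etc/docker/daemon.json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": { "path": "nvidia-container-runtime" }
  }
}
EOF

The containers themselves are then started with the NVIDIA_VISIBLE_DEVICES / NVIDIA_DRIVER_CAPABILITIES variables, as in the repro below.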

Any ideas on what could have changed or how I can debug? This also happens on 1.19.0 -- I had only tried 1.19.2 before filing this report, but I can reproduce it there as well.

Running docker inspect produces the exact same output before and after (except for the failing health checks once nvidia-smi stops working).

Image I'm using:

amazon/bottlerocket-aws-ecs-2-nvidia-x86_64-v1.19.2-29cc92cc

What I expected to happen:
I expect nvidia-smi to return information about the instance's graphics cards.

bash-5.1# docker exec 694 nvidia-smi
Thu Mar 28 18:08:54 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.07             Driver Version: 535.161.07   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla T4                       Off | 00000000:00:1E.0 Off |                    0 |
| N/A   29C    P0              26W /  70W |    109MiB / 15360MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
+---------------------------------------------------------------------------------------+

What actually happened:

bash-5.1# docker exec 694 nvidia-smi
Failed to initialize NVML: Unknown Error

How to reproduce the problem:

Log into an instance, then enter-admin-container, sheltie

 docker run -d --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=all \
 nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04 bash -c "while [ true ]; do nvidia-smi -L; sleep 5; done"  

Then run exit, disable-admin-container, enter-admin-container

Run docker logs <container> -- you should see:

GPU 0: Tesla T4 (UUID: GPU-0dff5832-17bd-b243-7f0f-95988d33ba5a)
Failed to initialize NVML: Unknown Error
Failed to initialize NVML: Unknown Error
@modelbitjason modelbitjason added status/needs-triage Pending triage or re-evaluation type/bug Something isn't working labels Mar 28, 2024
@sam-berning
Contributor

I can reproduce this. I launched a g3.8x instance with bottlerocket-aws-ecs-2-nvidia-x86_64-v1.19.2-29cc92cc and followed the repro steps; docker logs <container> gives me:

GPU 0: Tesla M60 (UUID: GPU-963a4569-ebb4-d876-14a7-c3c8491f8682)
GPU 1: Tesla M60 (UUID: GPU-cf45c479-5e3d-5316-42b5-1301bc4f4f6a)
Failed to initialize NVML: Unknown Error
Failed to initialize NVML: Unknown Error

Because the instance I was using had 2 GPUs, I also tried exposing each of the GPUs to two different containers:

docker run -d --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 -e NVIDIA_DRIVER_CAPABILITIES=all \
 nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04 bash -c "while [ true ]; do nvidia-smi -L; sleep 5; done"
docker run -d --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=1 -e NVIDIA_DRIVER_CAPABILITIES=all \
 nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04 bash -c "while [ true ]; do nvidia-smi -L; sleep 5; done"

The result was the same for both containers, but it showed a new error message in the logs:

GPU 0: Tesla M60 (UUID: GPU-963a4569-ebb4-d876-14a7-c3c8491f8682)
Unable to determine the device handle for gpu 0000:00:1D.0: Unknown Error
Failed to initialize NVML: Unknown Error
Failed to initialize NVML: Unknown Error

@arnaldo2792
Contributor

Hi @modelbitjason , thanks for reporting this.

I just want to add some context on why the steps you followed trigger this failure.

When you run enable-admin-container/disable-admin-container, a systemctl command is issued to reload containerd's configuration so that the admin container is started/stopped. Whenever this happens, systemd undoes the cgroups modifications that libnvidia-container made while creating the containers and granting them access to the GPUs. This is a known issue, and the suggested solution was to run nvidia-ctk whenever the GPUs are loaded; we already do that today in all Bottlerocket variants. There appears to be another fix in newer versions of libnvidia-container, but when we tried to update to v1.14.X, it broke the ECS aarch64 variant. I'll ask my coworkers to give that new fix a spin and check whether the problem persists (after we figure out why the new libnvidia-container version broke the ECS aarch64 variant).
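
If you want to confirm the trigger in isolation, here's a minimal sketch, assuming a host shell reached via enter-admin-container and sheltie, and a GPU container already running as in the repro above (whether a bare reload reproduces it on every host/driver combination is an assumption on my part):

docker exec <container> nvidia-smi -L   # works before the reload
systemctl daemon-reload                 # the reload that enable/disable-admin-container ends up issuing
docker exec <container> nvidia-smi -L   # now fails with "Failed to initialize NVML: Unknown Error"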

That said, could you please expand on what you mean by this?

If I tell ECS about the GPU, it won't let me gracefully roll a new instance on that host.

Are you having problems with task deployments? Or are you trying to "over-subscribe" the node so that you can run multiple tasks in the same host and share the one GPU?

@modelbitjason
Author

modelbitjason commented Apr 3, 2024

Thanks for the explanation @arnaldo2792 -- I saw some other tickets about cgroups but those fixes didn't help. This explains why.

re: ECS + GPUs, yes, but more specifically, I need the old instance to hand off data to the new one and do a graceful shutdown. I only have one ECS-managed task per EC2 instance (except during rollout).

When I roll a new task version, the newly started version finds the existing container, asks for some state, then tells it to drain and eventually shut down. This way I can be sure the new version comes up healthy before the old version on that host is removed.

This service manages some long-lived containers on the host and proxies requests to them. The old version passes control of the containers to the newly started version, but needs to stick around until all the outstanding requests it is currently proxying are complete.

@vigh-m vigh-m added the area/accelerated-computing Issues related to GPUs/ASICs label May 16, 2024
@arnaldo2792
Contributor

@modelbitjason, I'm sorry for the very late response.

When I roll a new task version, the newly started version finds the existing container, asks for some state, then tells it to drain + eventually shutdown.

Just so that I have the full picture of your architecture, when/why do you use the admin container? Do you do that to check if the old container is done draining?

@modelbitjason
Author

Thanks @arnaldo2792 -- these days I only use the admin container to debug. Previously it was also used to manually run docker prune to free up space.

We've had other cases where the GPU goes away, so I log in to try and see what happened. One or two times it seemed permanent and I just cycled to a new instance.

The current workaround is to have the admin container start at boot via settings. I'm just worried about other things triggering this behavior somehow.
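
For reference, this is the setting I mean, applied at runtime with apiclient (it can equivalently go in instance user data):

# Make the admin host container start on boot instead of only on demand.
apiclient set host-containers.admin.enabled=true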

@arnaldo2792
Contributor

Previously it was also used to manually run docker prune to free up space.

There are options in ECS to free up space 👍 -- I can give you pointers to the ones we support if you need them.

The current workaround is to have the admin container start at boot via settings.

Do you enable the admin container on boot only on instances you plan to debug? Or on all your instances, regardless of whether you will debug them?

I'm just worried about other things triggering this behavior somehow.

The only path that I've seen that triggers this behavior is the systemctl daemon-reload command, and the NVIDIA folks mentioned that moving towards CDI could help with this problem. I'll check with the ECS folks whether they plan to support CDI soon.
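
For context, CDI describes the GPU devices in a static spec file that CDI-aware runtimes consume directly, rather than having a runtime hook edit cgroups after the fact. A rough sketch of generating such a spec with a recent nvidia-ctk (whether this subcommand is available in the toolkit version shipped on Bottlerocket is an assumption):

# Generate a CDI spec enumerating the GPUs; CDI-aware runtimes inject
# devices from this file instead of relying on the legacy runtime hook.
nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml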

@modelbitjason
Author

Previously it was also used to manually run docker prune to free up space.

There are options in ECS to free up space 👍 -- I can give you pointers to the ones we support if you need them.

We already have the docker socket mapped in, so we do it from our control plane based on disk usage. We only use ECS to run the main control plane; that container then starts other containers directly via the docker socket.

ECS is helpful for the networking and stuff, but we don't have enough control over placement and it's sometimes too slow to start tasks. So this partial usage has been pretty good, especially for tasks that don't need to have their own ENIs.
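
As a rough illustration of that cleanup, not our exact implementation (the mount path, threshold, and prune flags are placeholders):

# Hypothetical sketch: prune through the mapped-in Docker socket when disk usage gets high.
usage=$(df --output=pcent /var/lib/docker | tail -1 | tr -dc '0-9')
if [ "$usage" -gt 85 ]; then
  docker system prune -af --filter "until=24h"
fi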

The current workaround is to have the admin container start at boot via settings.

Do you enable the admin container on boot only on instances you plan to debug? Or on all your instances, regardless of whether you will debug them?

Well, we do it for all of them on boot since we don't know when we'll need to debug. Previously, we'd only start the admin container when needed. Our system is still pretty new and we run into weird bugs like the GPU going away, so we'll need the ability to debug for the foreseeable future.

I'm just worried about other things triggering this behavior somehow.

The only path that I've seen that triggers this behavior is the systemctl daemon-reload command, and the NVIDIA folks mentioned that moving towards CDI could help with this problem. I'll check with the ECS folks whether they plan to support CDI soon.

That's reassuring to hear! We haven't noticed any problems since having the container start at boot. We use it rarely; it's a 'break glass' measure.
