Starting or stopping the admin container breaks nvidia-smi in one of my running containers #3860
Comments
I can reproduce this, launched a [...]
Because the instance I was using had 2 GPUs, I also tried exposing each of the GPUs to two different containers. The result was the same for both containers, but it showed a new error message in the logs:
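(For illustration only, not from the report above: with the `nvidia` runtime, pinning each of the two GPUs to its own container usually looks something like this; the image name and device indexes are placeholders.)

```sh
# Assumes the "nvidia" container runtime is configured; GPU indexes 0 and 1.
docker run -d --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 my-gpu-workload:latest
docker run -d --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=1 my-gpu-workload:latest
```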
Hi @modelbitjason, thanks for reporting this. I just want to add some context on why the steps you follow trigger this failure. When you [...] That said, could you please expand on what you mean by this?
Are you having problems with task deployments? Or are you trying to "over-subscribe" the node so that you can run multiple tasks on the same host and share the one GPU?
Thanks for the explanation @arnaldo2792 -- I saw some other tickets about cgroups, but those fixes didn't help; this explains why.

Re: ECS + GPUs, yes, but more specifically, I need the old instance to hand off data to the new one and do a graceful shutdown. I only have one ECS-managed task per EC2 instance (except during rollout). When I roll out a new task version, the newly started version finds the existing container, asks for some state, then tells it to drain and eventually shut down. This way I can be sure the new version comes up healthy before the old version on that host is removed.

This service manages some long-lived containers on the host and proxies requests to them. The old version passes control of the containers to the newly started version, but needs to stick around until all the outstanding requests it is currently proxying are complete.
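(A purely hypothetical sketch of that handoff; the endpoints, port, and flow are illustrative, not taken from this thread.)

```sh
# New task, at startup: pull state from the old task, then ask it to drain.
curl -s http://localhost:8080/state -o state.json   # hypothetical state endpoint
curl -s -X POST http://localhost:8080/drain         # old task finishes in-flight work, then exits
```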
@modelbitjason, I'm sorry for the very late response.
Just so that I have the full picture of your architecture, when/why do you use the admin container? Do you use it to check whether the old container is done draining?
Thanks @arnaldo2792 -- These days I only use the admin container to debug. Previously it was also used to manually run docker prune to free up space. We've had other cases where the GPU goes away, so I log in to try to see what happened. One or two times it seemed permanent and I just cycled to a new instance. The current workaround is to have the admin container start at boot via settings. I'm just worried about other things triggering this behavior somehow.
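(For reference, that boot-time workaround is the standard Bottlerocket host-container toggle, e.g. via apiclient from the control container:)

```sh
# Enable the admin host container so it starts at boot
# (the enable-admin-container helper sets this same flag).
apiclient set host-containers.admin.enabled=true
```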
There are options in ECS to free up space 👍 -- I can give you pointers to the ones we support if you need them.
Do you enable the admin container on boot only on the instances you debug? Or on all your instances, regardless of whether you will debug them?
The only path that I've seen that triggers this behavior is the [...]
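(The ECS agent exposes image-cleanup tuning through its environment; the variables below are the standard agent settings with example values, not something spelled out in this thread.)

```sh
# ECS agent environment variables that control automated image cleanup
# (example values; tune to your own disk pressure):
ECS_IMAGE_CLEANUP_INTERVAL=30m       # how often a cleanup cycle runs
ECS_IMAGE_MINIMUM_CLEANUP_AGE=1h     # minimum image age before it can be removed
ECS_NUM_IMAGES_DELETE_PER_CYCLE=5    # images deleted per cycle
```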
We already have the Docker socket mapped in, so we do it from our control plane based on disk usage. We only use ECS to run the main control plane; that container then starts other containers directly via the Docker socket. ECS is helpful for the networking and such, but we don't have enough control over placement and it's sometimes too slow to start tasks. So this partial usage has been pretty good, especially for tasks that don't need their own ENIs.
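(A minimal sketch of the socket pass-through described here; the image name is a placeholder.)

```sh
# Mount the host's Docker socket so the control-plane container can
# start sibling containers directly, bypassing ECS task placement.
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-control-plane:latest
```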
Well, we do it for all of them on boot, since we don't know when we'll need to debug. Previously, we'd only create the admin container when needed. Our system is still pretty new and we run into weird bugs like the GPU going away, so we'll need the ability to debug for the foreseeable future.
That's reassuring to hear! We haven't noticed any problems since having the container start at boot. We use it rarely; it's a 'break glass' measure.
I'm using bottlerocket-nvidia on ECS.
Very recently, logging into my EC2 hosts via SSM and then entering the admin container causes `nvidia-smi` to stop functioning. Other containers that also access the GPU on that host are fine.
I'm not telling ECS about my GPU usage; instead, I changed the default runtime to `nvidia` and pass in the relevant env vars. If I tell ECS about the GPU, it won't let me gracefully roll a new instance on that host.
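(Sketch of that setup on a stock Docker host; Bottlerocket wires the runtime in differently, so treat this as illustrative.)

```sh
# With "default-runtime": "nvidia" in /etc/docker/daemon.json, GPU access
# is requested purely through environment variables:
docker run --rm \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
  nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```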
Any ideas on what could have changed or how I can debug? This also happens on 1.19.0 -- I only tried 1.19.2 before filing this report, but I can reproduce it there as well.
Running `docker inspect` produces the exact same output (except failing health checks once `nvidia-smi` stops working).

Image I'm using:
`amazon/bottlerocket-aws-ecs-2-nvidia-x86_64-v1.19.2-29cc92cc`
What I expected to happen:
I expect that `nvidia-smi` returns some information about the instance's graphics cards.

What actually happened:
[...]
How to reproduce the problem:
Log into an instance, `enter-admin-container`, `sheltie`.
Then exit, `disable-admin-container`, `enter-admin-container`.
`docker logs <container>` -- should see [...]
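(The same reproduction written out as a single session; `<container>` is the affected GPU workload, and the exact expected log output was not captured above.)

```sh
# From an SSM session on the instance:
enter-admin-container      # enter the admin container
sheltie                    # drop to the host's root shell
exit                       # leave sheltie
exit                       # leave the admin container
disable-admin-container    # stop the admin container
enter-admin-container      # re-enables and enters it again
sheltie
docker logs <container>    # nvidia-smi failures should now appear here
```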