Container not available anymore #104
Same here. Most containers are updated automatically by watchtower, and the log just blows up afterwards with thousands of these errors.
@attila123456 Do the containers have a new name or the same name?
@ualex73 Same issue here, and yes, they have the same name.
@ualex73 I think it has something to do with the container ID. When you stop and restart a container, my understanding is that it keeps the same ID. However, when you re-create a container, such as after pulling a new image, it gets a new container ID. I think this is what it is complaining about: it says the container (based on its ID) no longer exists. That is technically true, but the container has been replaced by one with a new ID and the same name.
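The ID-versus-name behaviour described above can be sketched with a small in-memory model. This is a hypothetical illustration, not the actual monitor_docker code: `containers`, `create`, and `lookup_by_id` are made up here to show why a monitor that caches container IDs starts logging errors once a container is re-created under the same name.

```python
import uuid

# Hypothetical in-memory model of the daemon's container table: name -> ID.
containers = {}

def create(name):
    # Creating (or re-creating) a container assigns a brand-new ID,
    # while stop/start would leave the existing entry untouched.
    containers[name] = uuid.uuid4().hex[:12]
    return containers[name]

def lookup_by_id(cid):
    # What an ID-caching monitor effectively does on every poll.
    return cid in containers.values()

old_id = create("homeassistant")
assert lookup_by_id(old_id)           # fine while the container exists

new_id = create("homeassistant")      # re-create, e.g. after pulling a new image
assert new_id != old_id               # same name, different ID
assert not lookup_by_id(old_id)       # the cached ID now dangles -> repeated errors
assert "homeassistant" in containers  # a name-based lookup would still succeed
```

Resolving the container by name on each poll (or refreshing the name-to-ID mapping when a lookup fails) avoids the dangling-ID errors.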
Any update or progress on this at all?
Would it perhaps be possible to reload the integration in order to refresh the container list with the current IDs?
Can you test the latest version? I believe it is fixed there. If confirmed, I will make a new release.
I will close this old issue. If it still persists, please open a new issue.
I've restarted/recreated some containers due to issues I had with them (which I have to do fairly often for a small set of containers).
After doing this I get lots of instances of the following in the log file. I'm guessing monitor_docker is trying to track containers by their container ID, but re-creating them means that they are now different containers.
Restarting Home Assistant fixes the problem, but only until I have to recreate a container again.