The core problem is that multiple copies of the same image are created, which prevents `docker image prune` from being used to clean up unused images, because all of them end up being deleted.
Description
I have a few problems with the images that are created after deploying services from a Docker stack.yml.
Some images end up with no tags at all. Images deployed from the same stack appear to be created several times (why 2 or 3 identical images?), and some of them never get a tag, so the tag column in the image list is empty.
Why do I see 2 or 3 identical images in the list when I do not scale the service and my container runs on only one node out of 3?
After some analysis, I can say that at the moment of the deploy a new container starts to be created, then shuts down without any messages in the logs; the previously running container is re-bound to the new image and continues to work correctly.
Presumably no changes were made to the service, so the new container was not started and the image was simply updated on the already running one. BUT an additional image appears in the image list (a duplicate of the one used by the running container).
I tried to find an explanation in the knowledge bases, but never found anything about this behavior.
Manually re-tagging the images after every deploy is not an option, because the stack can be deployed 10-20 times a day.
Removing the duplicated images by manually comparing which ones are actually in use and which are not is not feasible either.
Please tell me how to solve this problem.
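For context, here is roughly how I compare the untagged images against what the containers actually use (`<service-name>` is a placeholder for one of the stack's services):

```
# Images with an empty tag show up as <none> here
docker image ls --filter dangling=true

# Image each running container is bound to (full IDs, no truncation)
docker ps --no-trunc --format 'table {{.Names}}\t{{.Image}}'

# Image each service task resolved to, including the digest
docker service ps <service-name> --no-trunc --format 'table {{.Name}}\t{{.Image}}\t{{.CurrentState}}'
```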
Steps to reproduce the issue (a minimal command sketch follows the list):
1. Deploy a stack.
2. Check the tags of the resulting images.
3. Observe that the image tag is empty.
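A minimal version of those steps as commands, assuming a stack file `stack.yml` and the stack name `mystack` (both placeholders):

```
# 1. Deploy the stack
docker stack deploy -c stack.yml mystack

# 2. Check the tags of the images on the node
docker image ls

# 3. The duplicates show up with <none> in the TAG column
docker image ls --filter dangling=true
```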
`docker info` output:

```
Client:
 Context:    dockswarmssl
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc., v0.8.2)
  compose: Docker Compose (Docker Inc., v2.6.1)
  extension: Manages Docker extensions (Docker Inc., v0.2.7)
  sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc., 0.6.0)
  scan: Docker Scan (Docker Inc., v0.17.0)

Server:
 Containers: 10
  Running: 6
  Paused: 0
  Stopped: 4
 Images: 18
 Server Version: 20.10.17
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: kcop4hec8novqspkjyvc10t6n
  Is Manager: true
  ClusterID: 3p9ws0irom4c9mes439frcbjj
  Managers: 1
  Nodes: 3
  Default Address Pool: 10.0.0.0/8
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 172.21.171.15
  Manager Addresses:
   172.21.171.15:2377
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
 runc version: v1.1.2-0-ga916309
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-1160.71.1.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.701GiB
 Name: mss-docker00.sodrugestvo.local
 ID: S77A:WBER:ODTY:74IZ:DX7L:C2CU:QG6F:YR5P:GJN7:ZOJL:CRBC:65TF
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http://wsa.sodrugestvo.local:3128/
 HTTPS Proxy: http://wsa.sodrugestvo.local:3128/
 No Proxy: localhost,127.0.0.1,.sodrugestvo.local,.sodru.com
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Default Address Pools:
  Base: 192.168.0.0/16, Size: 24
```
Portainer version: Portainer Business Edition 2.16.1
Platform: linux
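Additional note, an assumption on my part rather than a confirmed cause: `docker stack deploy` resolves image tags to registry digests, and images pulled by digest can show up with `<none>` in the TAG column. The CLI has a flag to control this, so one thing that may be worth trying (stack name and file are placeholders again):

```
# --resolve-image accepts "always" (the default), "changed" or "never";
# "never" deploys with the tag as written in stack.yml instead of
# re-resolving it to a digest on every deploy
docker stack deploy --resolve-image never -c stack.yml mystack
```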