
[Bug]: Auto-update with pods may fail if there's only one container in the pod #17181

Closed
saiarcot895 opened this issue Jan 21, 2023 · 10 comments · Fixed by #17508
Labels
kind/bug — Categorizes issue or PR as related to a bug.
locked - please file new issue/PR — Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@saiarcot895 commented Jan 21, 2023

Issue Description

My setup consists of multiple pods, each running one or more containers. All of the pods have user namespacing (or at least a UID/GID map) and a custom network specified, and all containers have auto-update enabled. If a container needs to be updated and it is the only container running in its pod, the auto-update may fail: when systemd restarts the container, it appears to bring down the pod at the same time, so the container gets rolled back.

Steps to reproduce the issue

  1. Set up a pod with just one container with image auto-update enabled.
  2. Run the auto-updater when there's some new update for the image.

Describe the results you received

The container was rolled back instead of getting updated.

Describe the results you expected

The container should have been updated successfully.

podman info output

host:
  arch: amd64
  buildahVersion: 1.28.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2:2.1.5-0ubuntu22.04+obs14.22_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.5, commit: '
  cpuUtilization:
    idlePercent: 95.71
    systemPercent: 0.86
    userPercent: 3.43
  cpus: 12
  distribution:
    codename: jammy
    distribution: ubuntu
    version: "22.04"
  eventLogger: journald
  hostname: nuc
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.19.4arcot-00008-g04e5cacdda90
  linkmode: dynamic
  logDriver: journald
  memFree: 15378026496
  memTotal: 66578608128
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun_1.7.2-0ubuntu22.04+obs48.12_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.7.2
      commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-0ubuntu22.04+obs10.33_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 3326406656
  swapTotal: 5556404224
  uptime: 691h 13m 17.00s (Approximately 28.79 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 14
    paused: 0
    running: 14
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 493921239040
  graphRootUsed: 258151227392
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 13
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 0
  BuiltTime: Wed Dec 31 16:00:00 1969
  GitCommit: ""
  GoVersion: go1.18.1
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1

Podman in a container

No

Privileged Or Rootless

Privileged

Upstream Latest Release

Yes

Additional environment details

Ubuntu 22.04 with systemd 249

Additional information

Logs around the time of the update (logs from other containers have been excluded):

Jan 21 00:09:47 nuc podman[320376]: Copying config sha256:a619aa158489c1d0cffc721ee489fc74686cba844289364e048eeaa6c9d3eb59
Jan 21 00:09:47 nuc podman[320376]: Writing manifest to image destination
Jan 21 00:09:47 nuc podman[320376]: Storing signatures
Jan 21 00:09:48 nuc podman[320376]: 2023-01-21 00:09:15.330057693 -0800 PST m=+4.643772470 image pull  lscr.io/linuxserver/homeassistant:latest
Jan 21 00:09:48 nuc systemd[1]: Stopping Podman container-homeassistant-app.service...
Jan 21 00:09:58 nuc podman[321867]: time="2023-01-21T00:09:58-08:00" level=warning msg="StopSignal SIGTERM failed to stop container homeassistant-app in 10 seconds, resorting to SIGKILL"
Jan 21 00:09:58 nuc systemd[1]: libpod-6dd437a347fcbba513645b7817181184b2b7229a9d0f434331bd53d8d81984a8.scope: Deactivated successfully.
Jan 21 00:09:58 nuc systemd[1]: libpod-6dd437a347fcbba513645b7817181184b2b7229a9d0f434331bd53d8d81984a8.scope: Consumed 6min 30.387s CPU time, received 152.2M IP traffic, sent 126.5M IP traffic.
Jan 21 00:09:58 nuc podman[321867]: 2023-01-21 00:09:58.302019805 -0800 PST m=+10.251700173 container died 6dd437a347fcbba513645b7817181184b2b7229a9d0f434331bd53d8d81984a8 (image=lscr.io/linuxserver/homeassistant:latest, name=homeassistant-app, io.contain>
Jan 21 00:09:58 nuc systemd[1]: var-lib-containers-storage-overlay-4bcdec2c1142de5693a9273e5fce0817f8f317705c919d06665a3ff003461211-merged.mount: Deactivated successfully.
Jan 21 00:09:58 nuc podman[321867]: 2023-01-21 00:09:58.473593492 -0800 PST m=+10.423273870 container cleanup 6dd437a347fcbba513645b7817181184b2b7229a9d0f434331bd53d8d81984a8 (image=lscr.io/linuxserver/homeassistant:latest, name=homeassistant-app, pod_id=>
Jan 21 00:09:58 nuc podman[321867]: 6dd437a347fcbba513645b7817181184b2b7229a9d0f434331bd53d8d81984a8
Jan 21 00:09:58 nuc podman[321867]: 2023-01-21 00:09:58.475715649 -0800 PST m=+10.425396027 pod stop dc0477cff5f6eb73b4021b54f1f70c1c8b1dc5443783410760b00b1f2728ca3d (image=, name=homeassistant.local)
Jan 21 00:09:58 nuc systemd[1]: libpod-cd4893ce83331699446c95bf007b7456f03d689fd8c6e1b034f9d106ba73cf78.scope: Deactivated successfully.
Jan 21 00:09:58 nuc podman[321867]: 2023-01-21 00:09:58.629100912 -0800 PST m=+10.578781290 container died cd4893ce83331699446c95bf007b7456f03d689fd8c6e1b034f9d106ba73cf78 (image=localhost/podman-pause:4.3.1-0, name=dc0477cff5f6-infra, PODMAN_SYSTEMD_UNIT>
Jan 21 00:09:59 nuc systemd[1]: run-netns-netns\x2ddb4a67d6\x2da570\x2d700a\x2d24ff\x2dff18c8d43f05.mount: Deactivated successfully.
Jan 21 00:09:59 nuc systemd[1]: var-lib-containers-storage-overlay-e19f7deeb1bc0552451e68a127cc5e8233ba190839765d36f13fd53a730c2f39-merged.mount: Deactivated successfully.
Jan 21 00:09:59 nuc systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd4893ce83331699446c95bf007b7456f03d689fd8c6e1b034f9d106ba73cf78-userdata-shm.mount: Deactivated successfully.
Jan 21 00:09:59 nuc podman[321867]: 2023-01-21 00:09:59.206859608 -0800 PST m=+11.156539986 container cleanup cd4893ce83331699446c95bf007b7456f03d689fd8c6e1b034f9d106ba73cf78 (image=localhost/podman-pause:4.3.1-0, name=dc0477cff5f6-infra, pod_id=dc0477cff>
Jan 21 00:09:59 nuc podman[321867]: 2023-01-21 00:09:59.210662134 -0800 PST m=+11.160342512 pod stop dc0477cff5f6eb73b4021b54f1f70c1c8b1dc5443783410760b00b1f2728ca3d (image=, name=homeassistant.local)
Jan 21 00:09:59 nuc podman[322358]: 2023-01-21 00:09:59.258913942 -0800 PST m=+0.747038867 container cleanup cd4893ce83331699446c95bf007b7456f03d689fd8c6e1b034f9d106ba73cf78 (image=localhost/podman-pause:4.3.1-0, name=dc0477cff5f6-infra, pod_id=dc0477cff5>
Jan 21 00:09:59 nuc systemd[1]: container-homeassistant-app.service: Main process exited, code=exited, status=137/n/a
Jan 21 00:09:59 nuc podman[322358]: 2023-01-21 00:09:59.317546949 -0800 PST m=+0.805671884 pod stop dc0477cff5f6eb73b4021b54f1f70c1c8b1dc5443783410760b00b1f2728ca3d (image=, name=homeassistant.local)
Jan 21 00:09:59 nuc podman[322502]: 2023-01-21 00:09:59.466594215 -0800 PST m=+0.147022491 container remove 6dd437a347fcbba513645b7817181184b2b7229a9d0f434331bd53d8d81984a8 (image=lscr.io/linuxserver/homeassistant:latest, name=homeassistant-app, pod_id=dc>
Jan 21 00:09:59 nuc podman[322502]: 6dd437a347fcbba513645b7817181184b2b7229a9d0f434331bd53d8d81984a8
Jan 21 00:09:59 nuc podman[322502]: 2023-01-21 00:09:59.467940368 -0800 PST m=+0.148368644 pod stop dc0477cff5f6eb73b4021b54f1f70c1c8b1dc5443783410760b00b1f2728ca3d (image=, name=homeassistant.local)
Jan 21 00:09:59 nuc systemd[1]: container-homeassistant-app.service: Failed with result 'exit-code'.
Jan 21 00:09:59 nuc systemd[1]: Stopped Podman container-homeassistant-app.service.
Jan 21 00:09:59 nuc systemd[1]: container-homeassistant-app.service: Bound to unit pod-homeassistant.local.service, but unit isn't active.
Jan 21 00:09:59 nuc systemd[1]: Dependency failed for Podman container-homeassistant-app.service.
Jan 21 00:09:59 nuc systemd[1]: container-homeassistant-app.service: Job container-homeassistant-app.service/start failed with result 'dependency'.
Jan 21 00:09:59 nuc podman[322508]: 2023-01-21 00:09:59.522831861 -0800 PST m=+0.179610068 pod stop dc0477cff5f6eb73b4021b54f1f70c1c8b1dc5443783410760b00b1f2728ca3d (image=, name=homeassistant.local)
Jan 21 00:09:59 nuc podman[322508]: dc0477cff5f6eb73b4021b54f1f70c1c8b1dc5443783410760b00b1f2728ca3d
Jan 21 00:09:59 nuc podman[320376]: 2023-01-21 00:09:59.475397107 -0800 PST m=+48.789111874 image tag f18561245aa670a9f79ff9c5401db9a996342ea523360eb6b895530919142b8f lscr.io/linuxserver/homeassistant:latest
Jan 21 00:09:59 nuc podman[322535]: 2023-01-21 00:09:59.725660011 -0800 PST m=+0.162941749 container remove cd4893ce83331699446c95bf007b7456f03d689fd8c6e1b034f9d106ba73cf78 (image=localhost/podman-pause:4.3.1-0, name=dc0477cff5f6-infra, pod_id=dc0477cff5f>
Jan 21 00:09:59 nuc systemd[1]: Removed slice cgroup machine-libpod_pod_dc0477cff5f6eb73b4021b54f1f70c1c8b1dc5443783410760b00b1f2728ca3d.slice.
Jan 21 00:09:59 nuc systemd[1]: machine-libpod_pod_dc0477cff5f6eb73b4021b54f1f70c1c8b1dc5443783410760b00b1f2728ca3d.slice: Consumed 6min 30.391s CPU time, received 152.2M IP traffic, sent 126.5M IP traffic.
Jan 21 00:09:59 nuc podman[322535]: 2023-01-21 00:09:59.745249881 -0800 PST m=+0.182531619 pod remove dc0477cff5f6eb73b4021b54f1f70c1c8b1dc5443783410760b00b1f2728ca3d (image=, name=homeassistant.local)
Jan 21 00:09:59 nuc podman[322535]: dc0477cff5f6eb73b4021b54f1f70c1c8b1dc5443783410760b00b1f2728ca3d
Jan 21 00:09:59 nuc systemd[1]: pod-homeassistant.local.service: Deactivated successfully.
Jan 21 00:09:59 nuc systemd[1]: pod-homeassistant.local.service: Consumed 588ms CPU time, received 0B IP traffic, sent 3.3K IP traffic.
Jan 21 00:09:59 nuc systemd[1]: Starting Podman pod-homeassistant.local.service...
Jan 21 00:09:59 nuc systemd[1]: Created slice cgroup machine-libpod_pod_81e65f782b04643e3406eef7d5c46935eac0b4c27d7d5d73c2f5d752c932cfb7.slice.
Jan 21 00:10:00 nuc podman[322553]:
Jan 21 00:10:00 nuc podman[322553]: 2023-01-21 00:10:00.032068549 -0800 PST m=+0.203621934 container create bde1deed9bc382a4f17e0b9ba8b7d4e8e87554d38aaca339388dfcc083f714de (image=localhost/podman-pause:4.3.1-0, name=81e65f782b04-infra, pod_id=81e65f782b0>
Jan 21 00:10:00 nuc podman[322553]: 2023-01-21 00:10:00.03904065 -0800 PST m=+0.210594035 pod create 81e65f782b04643e3406eef7d5c46935eac0b4c27d7d5d73c2f5d752c932cfb7 (image=, name=homeassistant.local)
Jan 21 00:10:00 nuc podman[322553]: 81e65f782b04643e3406eef7d5c46935eac0b4c27d7d5d73c2f5d752c932cfb7
Jan 21 00:10:00 nuc kernel: overlayfs: POSIX ACLs are not yet supported with idmapped layers, mounting without ACL support.
Jan 21 00:10:00 nuc systemd[1]: var-lib-containers-storage-overlay-ac350b282de9536850ae47f1a7f8c73f1372620b7e9cd5c1597550a0079f47d8-mapped-0.mount: Deactivated successfully.
Jan 21 00:10:00 nuc systemd[1]: Started libcrun container.
Jan 21 00:10:00 nuc podman[322640]: 2023-01-21 00:10:00.950461897 -0800 PST m=+0.803554956 container init bde1deed9bc382a4f17e0b9ba8b7d4e8e87554d38aaca339388dfcc083f714de (image=localhost/podman-pause:4.3.1-0, name=81e65f782b04-infra, pod_id=81e65f782b046>
Jan 21 00:10:00 nuc podman[322640]: 2023-01-21 00:10:00.966812291 -0800 PST m=+0.819905360 container start bde1deed9bc382a4f17e0b9ba8b7d4e8e87554d38aaca339388dfcc083f714de (image=localhost/podman-pause:4.3.1-0, name=81e65f782b04-infra, pod_id=81e65f782b04>
Jan 21 00:10:00 nuc podman[322640]: 2023-01-21 00:10:00.967010535 -0800 PST m=+0.820103584 pod start 81e65f782b04643e3406eef7d5c46935eac0b4c27d7d5d73c2f5d752c932cfb7 (image=, name=homeassistant.local)
Jan 21 00:10:00 nuc podman[322640]: 81e65f782b04643e3406eef7d5c46935eac0b4c27d7d5d73c2f5d752c932cfb7
Jan 21 00:10:01 nuc systemd[1]: Started Podman pod-homeassistant.local.service.
Jan 21 00:10:01 nuc systemd[1]: Starting Podman container-homeassistant-app.service...
Jan 21 00:10:01 nuc podman[323276]: 2023-01-21 00:10:01.312131184 -0800 PST m=+0.035683941 image pull  lscr.io/linuxserver/homeassistant:latest
Jan 21 00:10:01 nuc podman[323276]:
Jan 21 00:10:01 nuc podman[323276]: 2023-01-21 00:10:01.435997963 -0800 PST m=+0.159550770 container create 1dc2a00d25d059c5efdf80e37b6fe941e534b8fea96fcd8378712873716721bd (image=lscr.io/linuxserver/homeassistant:latest, name=homeassistant-app, pod_id=81>
Jan 21 00:10:01 nuc kernel: overlayfs: POSIX ACLs are not yet supported with idmapped layers, mounting without ACL support.
Jan 21 00:10:01 nuc systemd[1]: var-lib-containers-storage-overlay-470dbbd4da3c52973796cc52132bcb5a8b6aa106bc544e4d43c86b2f8ff5ba34-mapped-0.mount: Deactivated successfully.
Jan 21 00:10:01 nuc systemd[1]: Started libcrun container.
Jan 21 00:10:01 nuc podman[323276]: 2023-01-21 00:10:01.537637827 -0800 PST m=+0.261190584 container init 1dc2a00d25d059c5efdf80e37b6fe941e534b8fea96fcd8378712873716721bd (image=lscr.io/linuxserver/homeassistant:latest, name=homeassistant-app, pod_id=81e6>
Jan 21 00:10:01 nuc podman[323276]: 2023-01-21 00:10:01.561972409 -0800 PST m=+0.285525166 container start 1dc2a00d25d059c5efdf80e37b6fe941e534b8fea96fcd8378712873716721bd (image=lscr.io/linuxserver/homeassistant:latest, name=homeassistant-app, pod_id=81e>
Jan 21 00:10:01 nuc podman[323276]: 1dc2a00d25d059c5efdf80e37b6fe941e534b8fea96fcd8378712873716721bd
Jan 21 00:10:01 nuc homeassistant-app[323295]: [mod-init] Attempting to run Docker Modification Logic
Jan 21 00:10:01 nuc systemd[1]: Started Podman container-homeassistant-app.service.
@saiarcot895 saiarcot895 added the kind/bug Categorizes issue or PR as related to a bug. label Jan 21, 2023
@Luap99 (Member) commented Jan 24, 2023

@vrothberg PTAL

@vrothberg (Member)

Thanks for reaching out, @saiarcot895. Could you share a reproducer?

@saiarcot895 (Author)

Sure, here are the systemd unit files for the pod and container:

/etc/systemd/system/pod-homeassistant.local.service:

# pod-homeassistant.local.service
# autogenerated by Podman 4.3.1
# Sun Dec 25 23:10:38 PST 2022

[Unit]
Description=Podman pod-homeassistant.local.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/run/containers/storage
Wants=container-homeassistant-app.service
Before=container-homeassistant-app.service

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm \
        -f %t/pod-homeassistant.local.pid %t/pod-homeassistant.local.pod-id
ExecStartPre=/usr/bin/podman pod create \
        --infra-conmon-pidfile %t/pod-homeassistant.local.pid \
        --pod-id-file %t/pod-homeassistant.local.pod-id \
        --exit-policy=stop \
        -p 8123:8123 \
        --subuidname homeasst \
        --subgidname homeasst \
        --net homeasst homeassistant.local
ExecStart=/usr/bin/podman pod start \
        --pod-id-file %t/pod-homeassistant.local.pod-id
ExecStop=/usr/bin/podman pod stop \
        --ignore \
        --pod-id-file %t/pod-homeassistant.local.pod-id  \
        -t 10
ExecStopPost=/usr/bin/podman pod rm \
        --ignore \
        -f \
        --pod-id-file %t/pod-homeassistant.local.pod-id
PIDFile=%t/pod-homeassistant.local.pid
Type=forking

[Install]
WantedBy=default.target

/etc/systemd/system/container-homeassistant-app.service:

# container-homeassistant-app.service
# autogenerated by Podman 4.3.1
# Sun Dec 25 23:10:38 PST 2022

[Unit]
Description=Podman container-homeassistant-app.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
BindsTo=pod-homeassistant.local.service
After=pod-homeassistant.local.service

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm \
        -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
        --cidfile=%t/%n.ctr-id \
        --cgroups=no-conmon \
        --rm \
        --pod-id-file %t/pod-homeassistant.local.pod-id \
        --sdnotify=conmon \
        -d \
        --replace \
        -e TZ=America/Los_Angeles \
        -e DOCKER_MODS=linuxserver/mods:homeassistant-hacs \
        --label io.containers.autoupdate=image \
        -v homeassistant_config:/config \
        --name homeassistant-app lscr.io/linuxserver/homeassistant:latest
ExecStop=/usr/bin/podman stop \
        --ignore -t 10 \
        --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm \
        -f \
        --ignore -t 10 \
        --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=default.target

This assumes that subuid and subgid entries for homeasst exist, and that a network named homeasst is present.
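For completeness, those prerequisites could look like the following; the ID range below is an illustrative assumption, not taken from the issue:

```
# /etc/subuid and /etc/subgid (example range, not from the issue)
homeasst:200000:65536
```

plus a network created beforehand with `podman network create homeasst`.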

Once this is set up and an image update is available, podman-auto-update will restart the homeassistant container, but the container starts while the pod is still shutting down.

There are two additional lines in each of the above files that I didn't include:

ExecStartPre=/bin/systemctl reload nftables
ExecStartPost=/bin/systemctl reload nftables

These reload my nftables rules; I'm assuming they have no impact on the bug.

@saiarcot895 (Author)

@vrothberg Were you able to repro this issue?

@bbalp (Contributor) commented Feb 5, 2023

Using podman v4.4.0 (rootless), I am experiencing the same problem. After I build a new image locally, running podman auto-update rolls back my image tag to the previous image, and my pod is not updated.

@vrothberg (Member)

@saiarcot895, I did not find time to look into the issue yet. I'll update this issue once I do.

@konradmb

I have the same issue with a linuxserver/bookstack pod. It never auto-updates itself; I have to manually pull the latest image and restart the pod in systemd.

@thmo commented Feb 13, 2023

I think it also fails to update and rolls back from time to time even when the pod has more than one container.

vrothberg added a commit to vrothberg/libpod that referenced this issue Feb 15, 2023
Relax the reverse dependency to the "parent" pod.service.
This allows a container.service to be restarted without
restarting the entire pod.service.

Note that there is no system test for auto-updates with pods.

Fixes: containers#17181
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
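As a sketch of the idea in this commit (the exact directives changed by the PR may differ), relaxing the reverse dependency means replacing the hard `BindsTo=` coupling in the generated container unit with a softer one, e.g.:

```
[Unit]
# Before (as generated by podman-generate-systemd):
#   BindsTo=pod-homeassistant.local.service
# After (illustrative relaxation: a Wants= dependency lets the
# container unit restart without requiring the pod unit be active):
Wants=pod-homeassistant.local.service
After=pod-homeassistant.local.service
```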
@vrothberg (Member)

Thanks everybody for the input and the patience. I opened #17508 to fix the issue.

vrothberg added a commit to vrothberg/libpod that referenced this issue Feb 16, 2023
Support auto updating containers running inside pods.  Similar to
containers, the systemd units need to be generated via
`podman-generate-systemd --new $POD` to generate the pod's units.

Note that auto updating a container inside a pod will restart the entire
pod.  Updates of multiple containers inside a pod are batched, such that
a pod is restarted at most once.  That is effectively the same mechanism
for auto updating containers in a K8s YAML via the `podman-kube@`
template or via Quadlet.

Updating a single container unit without restarting the entire pod is
not possible.  The reasoning behind this is that pods are created with
--exit-policy=stop, which causes the pod to be stopped when auto
updating the only container inside the pod.  The (reverse) dependencies
between the pod and its container units have been carefully selected for
robustness.  Changes may entail undesired side effects or backward
incompatibilities that I am not comfortable with.

Fixes: containers#17181
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
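The batching described above, restarting each affected pod at most once, can be sketched as follows. This is an illustrative sketch, not Podman's actual code, and `plan_restarts` is a hypothetical helper:

```python
# Illustrative sketch (not Podman's implementation) of batching the
# restarts performed by auto-update: a pod unit is restarted at most
# once, no matter how many of its containers were updated.

def plan_restarts(updated_containers):
    """updated_containers: iterable of (container_unit, pod_unit_or_None)
    tuples. Returns the ordered list of systemd units to restart: one
    entry per pod, plus the container unit itself for containers that
    run outside a pod."""
    units = {}
    for ctr_unit, pod_unit in updated_containers:
        units.setdefault(pod_unit or ctr_unit, None)  # dict keeps insertion order
    return list(units)

# Two containers share pod-web.service; its unit appears only once.
updates = [
    ("container-a.service", "pod-web.service"),
    ("container-b.service", "pod-web.service"),
    ("container-c.service", None),
]
print(plan_restarts(updates))  # ['pod-web.service', 'container-c.service']
```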
@vrothberg (Member)

Note that I had to revisit the initial version of the PR. Auto updating a container inside a pod will always cause the entire pod to be restarted, similar to what happens in the podman-kube@ systemd template. More on that in the PR.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Aug 31, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 31, 2023