
--imagestore is not enough for building/publishing images #20203

Open
SeaLife opened this issue Sep 29, 2023 · 6 comments
SeaLife commented Sep 29, 2023

Issue Description

Running podman through the GitLab CI Docker executor with the latest image: quay.io/podman/stable.

My Dockerfile to be built looks like this:

FROM python:3

## REQUIREMENTS
COPY src/python/requirements.txt .
RUN python3 -m pip install -r requirements.txt

## APPLICATION
COPY src/python/* /app/
RUN chmod +x /app/*.py
COPY src/lua/* /app/.dependencies/
WORKDIR /tmp

My .gitlab-ci.yaml looks like this:

## Gitlab CI Workflow for building docker containers using podman

stages:
  - test
  - build
  - docker
  - publish
  - deploy

variables:
  XDG_DATA_HOME: /cache/podman-cache/storage
  XDG_CONFIG_HOME: /cache/podman-cache
  PODMAN_BUILD_ARGS: --platform linux/amd64

.podman: &podman
  image: quay.io/podman/stable
  rules:
    - exists:
        - Dockerfile
  before_script:
    - podman login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"


.podman publish: &publish_podman
  <<: *podman
  variables:
    GIT_STRATEGY: none
  stage: publish
  timeout: 5 minutes
  retry:
    max: 2
    when: stuck_or_timeout_failure
  before_script:
    - podman login -u=gitlab-ci-token -p=$CI_JOB_TOKEN $CI_REGISTRY
  dependencies: []

podman build:
  <<: *podman
  stage: build
  script:
    - >
      podman build $PODMAN_BUILD_ARGS \
        --label "org.opencontainers.image.title=$CI_PROJECT_TITLE" \
        --label "org.opencontainers.image.url=$CI_PROJECT_URL" \
        --label "org.opencontainers.image.created=$CI_JOB_STARTED_AT" \
        --label "org.opencontainers.image.revision=$CI_COMMIT_SHORT_SHA" \
        --label "org.opencontainers.image.version=$CI_COMMIT_REF_NAME" \
        --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA \
        --imagestore $XDG_DATA_HOME \
        --file Dockerfile

podman publish latest:
  <<: *publish_podman
  image: quay.io/podman/stable
  rules:
    - exists:
        - Dockerfile

  script:
    - podman tag --imagestore $XDG_DATA_HOME $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_REGISTRY_IMAGE:latest
    - podman push --imagestore $XDG_DATA_HOME $CI_REGISTRY_IMAGE:latest

With the Docker image, XDG_DATA_HOME is not respected, so I had to use --imagestore, but maybe I'm just missing something.
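A likely reason XDG_DATA_HOME is ignored here: the job runs podman as root (privileged), and rootful podman does not consult XDG paths at all; only rootless storage is derived from them. A minimal sketch of that default resolution, assuming stock containers/storage defaults with no storage.conf override (the `resolve_graphroot` helper is illustrative, not a podman command):

```shell
# Sketch of how podman picks its default graphroot. Rootful podman always
# uses /var/lib/containers/storage and ignores XDG_DATA_HOME, which matches
# the /var/lib/containers/storage/... paths in the errors below.
resolve_graphroot() {
  uid="$1"  # numeric uid the podman process runs as
  if [ "$uid" -eq 0 ]; then
    # rootful: fixed system location
    echo "/var/lib/containers/storage"
  elif [ -n "$XDG_DATA_HOME" ]; then
    # rootless with XDG_DATA_HOME set
    echo "$XDG_DATA_HOME/containers/storage"
  else
    # rootless fallback
    echo "$HOME/.local/share/containers/storage"
  fi
}

XDG_DATA_HOME=/cache/podman-cache/storage
resolve_graphroot 0     # rootful: /var/lib/containers/storage
resolve_graphroot 1000  # rootless: /cache/podman-cache/storage/containers/storage
```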

I want to push the image separately from the build command, and I tried using --imagestore to do that. Building the image works just fine, but publishing it does not:
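Since --imagestore only relocates the image store, another option may be to relocate the whole graphroot through storage.conf so that the build and publish jobs share one consistent store. This is an untested sketch, not a confirmed workaround; the paths mirror the pipeline's XDG_DATA_HOME, and CONTAINERS_STORAGE_CONF is an environment variable honored by containers/storage (in the CI container the file could instead be written to /etc/containers/storage.conf):

```shell
# Write a storage.conf that points the rootful graphroot at the CI cache,
# replacing --imagestore entirely. A temp path is used here for illustration.
conf_dir="$(mktemp -d)"
cat > "$conf_dir/storage.conf" <<'EOF'
[storage]
driver = "overlay"
runroot = "/run/containers/storage"
# Reuse the pipeline's cache location instead of passing --imagestore:
graphroot = "/cache/podman-cache/storage/containers/storage"
EOF

# Point containers/storage at the custom config for subsequent podman calls.
export CONTAINERS_STORAGE_CONF="$conf_dir/storage.conf"
grep '^graphroot' "$CONTAINERS_STORAGE_CONF"
```

With this in place, `podman build`, `podman tag`, and `podman push` could be run without any --imagestore flag.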

Running with gitlab-runner 15.11.0~beta.128.g8716592b (8716592b)
  on xxxxx.de#1 fe6wacXw, system ID: r_DT6dkC16MQyJ
Resolving secrets
00:00
Preparing the "docker" executor
00:01
Using Docker executor with image quay.io/podman/stable ...
Pulling docker image quay.io/podman/stable ...
Using docker image sha256:7b0095dbba9ae8c8c498fd9666646d718dfcc200f2b20af89fff762c084d984c for quay.io/podman/stable with digest quay.io/podman/stable@sha256:f3bc561b9c4fe3942394df3b0b2f161fc08d32291bd8d1ee006ae4d9b17e5583 ...
Preparing environment
00:01
Running on runner-fe6wacxw-project-291-concurrent-0 via 8c1357cb755b...
Getting source from Git repository
00:00
Skipping Git repository setup
Skipping Git checkout
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:02
Using docker image sha256:7b0095dbba9ae8c8c498fd9666646d718dfcc200f2b20af89fff762c084d984c for quay.io/podman/stable with digest quay.io/podman/stable@sha256:f3bc561b9c4fe3942394df3b0b2f161fc08d32291bd8d1ee006ae4d9b17e5583 ...
$ podman login -u=gitlab-ci-token -p=$CI_JOB_TOKEN $CI_REGISTRY
Login Succeeded!
$ podman tag --imagestore $XDG_DATA_HOME $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_REGISTRY_IMAGE:latest
$ podman push --imagestore $XDG_DATA_HOME $CI_REGISTRY_IMAGE:latest
Getting image source signatures
Copying blob sha256:7c85cfa30cb11b7606c0ee84c713a8f6c9faad7cb7ba92f1f33ba36d4731cc82
Copying blob sha256:a981dddd4c650efa96d82013fba1d1189cf4648cd1e766b34120b32af9ce8a06
Copying blob sha256:01d6cdeac53917b874d5adac862ab90dcf22489b7914611a1bde7c84db0a99ae
Copying blob sha256:f6589095d5b5a4933db1a75edc905d20570c9f6e5dbebd9a7c39a8eef81bb3fd
Copying blob sha256:c26432533a6af2f6f59d50aba5f76f2b61c2e0088c7e01b5d2a8708b6cb9ef08
Copying blob sha256:78dd9ecf8a6d0416276882b6972a5dde6ec7484f97d722b00b446b72d057ecc7
Copying blob sha256:0d3f1aea6da4452100b091391d66b782b9e39967a2be2e55fda229fc8de1118f
Copying blob sha256:db22e0d1d36b7cf128ec60f76cc788155f8f60e7d4cb8b9ad1dc97ec20cf7b7e
Copying blob sha256:f655a41ab5983b90bdc7df7a00d59a03c774f3bfd01ef1d88f8b7d2d3d4c4090
Copying blob sha256:3964156d636cc7f0b8417be1dc5758c46e8a98ace88b25a2182fcf7e7ca9aece
Copying blob sha256:7474e262487e67135b71faf49a0a432cbf218212b8b956a95b99ad49cf65f101
Copying blob sha256:adfa51405eeda836482d18f054dd77b489741faa80cc7886433f002f278e619e
Copying blob sha256:6df7f2d0835a847ec109e5fef0bbcd91a525fcd0169b332b7dd0e5621c0a57ce
Error: reading blob sha256:7c85cfa30cb11b7606c0ee84c713a8f6c9faad7cb7ba92f1f33ba36d4731cc82: 1 error occurred:
	* creating file-getter: readlink /var/lib/containers/storage/overlay/7c85cfa30cb11b7606c0ee84c713a8f6c9faad7cb7ba92f1f33ba36d4731cc82/diff: no such file or directory
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1

Steps to reproduce the issue

(Mostly described above.)

  1. Build an image with a custom --imagestore
  2. Try pushing/tagging the image from the same location
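The two steps above can be sketched as a standalone script (a hypothetical repro, not from the original report: the store path, image names, and Dockerfile are placeholders; the script skips cleanly when podman is unavailable):

```shell
# Minimal standalone repro sketch for build-then-push via --imagestore.
store="$(mktemp -d)"
workdir="$(mktemp -d)"
printf 'FROM alpine\nRUN echo hi > /hi\n' > "$workdir/Dockerfile"

if command -v podman >/dev/null 2>&1; then
  # Step 1: build into a custom --imagestore
  podman build --imagestore "$store" -t localhost/repro:ci "$workdir" &&
  # Step 2: tag and push from the same store in separate invocations,
  # pushing to a local docker-archive to avoid needing a registry
  podman tag  --imagestore "$store" localhost/repro:ci localhost/repro:latest &&
  podman push --imagestore "$store" localhost/repro:latest docker-archive:"$workdir/out.tar" ||
  echo "repro failed in this environment"
else
  echo "podman not available; skipping live repro"
fi
```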

Describe the results you received

After building the image, a file is somehow missing when telling podman to tag/push it (see log above).

Describe the results you expected

I expect the image to get tagged and pushed.

podman info output

Docker version 24.0.6, build ed223bc
Ubuntu 20.04.6 LTS with Kernel 5.4.0-163-generic
Podman inside Docker using Image: `quay.io/podman/stable`

Podman in a container

Yes

Privileged Or Rootless

Privileged

Upstream Latest Release

Yes

Additional environment details

Gitlab-Runner / Gitlab CI/CD (Pipeline-Instruction in Bug description)

Additional information

No response

@SeaLife SeaLife added the kind/bug Categorizes issue or PR as related to a bug. label Sep 29, 2023

SeaLife commented Sep 29, 2023

I just tried re-running the pipeline without any changes and the build step also fails:

$ podman login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
Login Succeeded!
$ podman build $PODMAN_BUILD_ARGS \ # collapsed multi-line command
STEP 1/8: FROM python:3
Error: creating build container: creating container: creating read-write layer with ID "cde5ef89d770d8056f78bf098916a6667a26d58eb15b744dde19b3e0066d6ea6": Stat /var/lib/containers/storage/overlay/305449d3ecaefe499fac6f728185c599c7f00b7f00f0f97df45cfb0acd55d825/diff: no such file or directory
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1


baude commented Sep 29, 2023

@flouthoc mind peeking at this one?

flouthoc commented Sep 29, 2023

I'll check this, thanks for reporting @SeaLife

@flouthoc flouthoc self-assigned this Sep 29, 2023

SeaLife commented Oct 11, 2023

> I'll check this, thanks for reporting @SeaLife

Any updates?

//EDIT

Somehow my Docker (dind) pipeline now fails as well because something changed, so I modified my podman pipeline to not use any XDG_* variable and omitted the --imagestore argument. It works well but won't cache anything (because no persistence is mounted).

stages:
  - publish

variables:
  PODMAN_BUILD_ARGS: --platform linux/amd64

podman:
  image: quay.io/podman/stable
  rules:
    - exists:
        - Dockerfile
  stage: publish
  script:
    - >
      podman build $PODMAN_BUILD_ARGS \
        --label "org.opencontainers.image.title=$CI_PROJECT_TITLE" \
        --label "org.opencontainers.image.url=$CI_PROJECT_URL" \
        --label "org.opencontainers.image.created=$CI_JOB_STARTED_AT" \
        --label "org.opencontainers.image.revision=$CI_COMMIT_SHORT_SHA" \
        --label "org.opencontainers.image.version=$CI_COMMIT_REF_NAME" \
        --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA \
        --file Dockerfile

    - podman login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - podman tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_REGISTRY_IMAGE:latest
    - podman push $CI_REGISTRY_IMAGE:latest


A friendly reminder that this issue had no activity for 30 days.

@luckylinux

luckylinux commented Apr 12, 2024

@SeaLife, @flouthoc
I faced a similar issue that occurs somewhat inconsistently: #21810
Nobody so far has been able to explain why/when/how it occurs.

It seems to be related (in some cases at least) to the --rbind I am using on ZFS, where I mount --rbind /zdata/PODMAN/{STORAGE,IMAGES,VOLUMES} to /home/podman/containers/{storage,images,volumes} respectively. Maybe your pipeline (server) is also using an --rbind mountpoint?
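To test that hypothesis, one can check whether a storage path is itself a mount target (as an --rbind target would be) rather than a plain directory. A portable diagnostic sketch that scans /proc/self/mounts (the `is_mountpoint` helper is illustrative, not from the report):

```shell
# Return success if the given path appears as a mount target in
# /proc/self/mounts (fields: device, mountpoint, fstype, options, ...).
is_mountpoint() {
  target="$1"
  while read -r _dev mnt _rest; do
    [ "$mnt" = "$target" ] && return 0
  done < /proc/self/mounts
  return 1
}

is_mountpoint / && echo "/ is a mountpoint"
is_mountpoint /zdata/PODMAN/STORAGE || echo "not a mountpoint here"
```

This is roughly what util-linux's `mountpoint -q PATH` or `findmnt --mountpoint PATH` report, where available.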

Removing the --rbind and setting /zdata/PODMAN/{STORAGE,IMAGES,VOLUMES} in storage.conf and containers.conf (for volume_path) seems to work on ZFS systems (knock on wood; it's impossible to know when it will start failing again - I lost count of how many podman system resets and manual purges of the storage/images directories I did).

Strangely enough, on ext4 systems with just a directory structure (without --rbind), it fails as well, both during pulls and now during builds, since I am testing building a custom container from my own Dockerfile.

Possibly related to an incomplete first pull from my (local) repository mirror, followed by an attempted full pull from the docker.io registry? Not sure. I also had these issues "first-time" when simply issuing podman pull redis:alpine, for instance (without any custom/local registry set up at the time).

I have in .bash_profile on all systems:

# Podman Configuration
export XDG_RUNTIME_DIR=/run/user/1001
export XDG_CONFIG_HOME="/home/podman/.config"

storage.conf on ZFS, which seems to give fewer problems now:

# This file is the configuration file for all tools
# that use the containers/storage library. The storage.conf file
# overrides all other storage.conf files. Container engines using the
# container/storage library do not inherit fields from other storage.conf
# files.
#
#  Note: The storage.conf file overrides other storage.conf files based on this precedence:
#      /usr/containers/storage.conf
#      /etc/containers/storage.conf
#      $HOME/.config/containers/storage.conf
#      $XDG_CONFIG_HOME/containers/storage.conf (If XDG_CONFIG_HOME is set)
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]

# Default Storage Driver, Must be set for proper operation.
driver = "overlay"

# Temporary storage location
runroot = "/run/user/1001"

# Primary Read/Write location of container storage
# When changing the graphroot location on an SELINUX system, you must
# ensure  the labeling matches the default locations labels with the
# following commands:
# semanage fcontext -a -e /var/lib/containers/storage /NEWSTORAGEPATH
# restorecon -R -v /NEWSTORAGEPATH
graphroot = "/zdata/PODMAN/STORAGE"

# Optional alternate location of image store if a location separate from the
# container store is required. If set, it must be different than graphroot.
imagestore = "/zdata/PODMAN/IMAGES"

# Storage path for rootless users
#
rootless_storage_path = "/zdata/PODMAN/STORAGE"

# Transient store mode makes all container metadata be saved in temporary storage
# (i.e. runroot above). This is faster, but doesn't persist across reboots.
# Additional garbage collection must also be performed at boot-time, so this
# option should remain disabled in most configurations.
# transient_store = true

[storage.options]
# Storage options to be passed to underlying storage drivers

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
#additionalimagestores = [
#"/usr/lib/containers/storage",
#]

# Allows specification of how storage is populated when pulling images. This
# option can speed the pulling process of images compressed with format
# zstd:chunked. Containers/storage looks for files within images that are being
# pulled from a container registry that were previously pulled to the host.  It
# can copy or create a hard link to the existing file when it finds them,
# eliminating the need to pull them from the container registry. These options
# can deduplicate pulling of content, disk storage of content and can allow the
# kernel to use less memory when running containers.

# containers/storage supports three keys
#   * enable_partial_images="true" | "false"
#     Tells containers/storage to look for files previously pulled in storage
#     rather then always pulling them from the container registry.
#   * use_hard_links = "false" | "true"
#     Tells containers/storage to use hard links rather then create new files in
#     the image, if an identical file already existed in storage.
#   * ostree_repos = ""
#     Tells containers/storage where an ostree repository exists that might have
#     previously pulled content which can be used when attempting to avoid
#     pulling content from the container registry
pull_options = {enable_partial_images = "false", use_hard_links = "false", ostree_repos=""}

# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
# a container, to the UIDs/GIDs as they should appear outside of the container,
# and the length of the range of UIDs/GIDs.  Additional mapped sets can be
# listed and will be heeded by libraries, but there are limits to the number of
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = "0:1668442479:65536"
# remap-gids = "0:1668442479:65536"

# Remap-User/Group is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file.  Mappings are set up starting
# with an in-container ID of 0 and then a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped in-container ID,
# until all of the entries have been used for maps. This setting overrides the
# Remap-UIDs/GIDs setting.
#
# remap-user = "containers"
# remap-group = "containers"

# Root-auto-userns-user is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid and /etc/subgid file.  These ranges will be partitioned
# to containers configured to create automatically a user namespace.  Containers
# configured to automatically create a user namespace can still overlap with containers
# having an explicit mapping set.
# This setting is ignored when running as rootless.
# root-auto-userns-user = "storage"
#
# Auto-userns-min-size is the minimum size for a user namespace created automatically.
# auto-userns-min-size=1024
#
# Auto-userns-max-size is the maximum size for a user namespace created automatically.
# auto-userns-max-size=65536

[storage.options.overlay]
# ignore_chown_errors can be set to allow a non privileged user running with
# a single UID within a user namespace to run containers. The user can pull
# and use any image even those with multiple uids.  Note multiple UIDs will be
# squashed down to the default uid in the container.  These images will have no
# separation between the users in the container. Only supported for the overlay
# and vfs drivers.
#ignore_chown_errors = "false"

# Inodes is used to set a maximum inodes of the container image.
# inodes = ""

# Path to an helper program to use for mounting the file system instead of mounting it
# directly.
mount_program = "/usr/bin/fuse-overlayfs"

# mountopt specifies comma separated list of extra mount options
mountopt = "nodev,metacopy=on"

# Set to skip a PRIVATE bind mount on the storage home directory.
# skip_mount_home = "false"

# Size is used to set a maximum size of the container image.
# size = ""

# ForceMask specifies the permissions mask that is used for new files and
# directories.
#
# The values "shared" and "private" are accepted.
# Octal permission masks are also accepted.
#
#  "": No value specified.
#     All files/directories, get set with the permissions identified within the
#     image.
#  "private": it is equivalent to 0700.
#     All files/directories get set with 0700 permissions.  The owner has rwx
#     access to the files. No other users on the system can access the files.
#     This setting could be used with networked based homedirs.
#  "shared": it is equivalent to 0755.
#     The owner has rwx access to the files and everyone else can read, access
#     and execute them. This setting is useful for sharing containers storage
#     with other users.  For instance have a storage owned by root but shared
#     to rootless users as an additional store.
#     NOTE:  All files within the image are made readable and executable by any
#     user on the system. Even /etc/shadow within your image is now readable by
#     any user.
#
#   OCTAL: Users can experiment with other OCTAL Permissions.
#
#  Note: The force_mask Flag is an experimental feature, it could change in the
#  future.  When "force_mask" is set the original permission mask is stored in
#  the "user.containers.override_stat" xattr and the "mount_program" option must
#  be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the
#  extended attribute permissions to processes within containers rather than the
#  "force_mask"  permissions.
#
# force_mask = ""

[storage.options.thinpool]
# Storage Options for thinpool

# autoextend_percent determines the amount by which pool needs to be
# grown. This is specified in terms of % of pool size. So a value of 20 means
# that when threshold is hit, pool will be grown by 20% of existing
# pool size.
# autoextend_percent = "20"

# autoextend_threshold determines the pool extension threshold in terms
# of percentage of pool size. For example, if threshold is 60, that means when
# pool is 60% full, threshold has been hit.
# autoextend_threshold = "80"

# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
# basesize = "10G"

# blocksize specifies a custom blocksize to use for the thin pool.
# blocksize="64k"

# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you setup devicemapper.
# directlvm_device = ""

# directlvm_device_force wipes device even if device already has a filesystem.
# directlvm_device_force = "True"

# fs specifies the filesystem type to use for the base device.
# fs="xfs"

# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
# log_level = "7"

# min_free_space specifies the min free space percent in a thin pool require for
# new device creation to succeed. Valid values are from 0% - 99%.
# Value 0% disables
# min_free_space = "10%"

# mkfsarg specifies extra mkfs arguments to be used when creating the base
# device.
# mkfsarg = ""

# metadata_size is used to set the `pvcreate --metadatasize` options when
# creating thin devices. Default is 128k
# metadata_size = ""

# Size is used to set a maximum size of the container image.
# size = ""

# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver
# tells the kernel to remove it as soon as possible. Note this does not free
# up the disk space, use deferred deletion to fully remove the thinpool.
# use_deferred_removal = "True"

# use_deferred_deletion marks thinpool device for deferred deletion.
# If the device is busy when the driver attempts to delete it, the driver
# will attempt to delete device every 30 seconds until successful.
# If the program using the driver exits, the driver will continue attempting
# to cleanup the next time the driver is used. Deferred deletion permanently
# deletes the device and all data stored in device will be lost.
# use_deferred_deletion = "True"

# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when ENOSPC (no space) error is returned by
# underlying storage device.
# xfs_nospace_max_retries = "0"

storage.conf on ext4, which still seems to have some problems:

# This file is the configuration file for all tools
# that use the containers/storage library. The storage.conf file
# overrides all other storage.conf files. Container engines using the
# container/storage library do not inherit fields from other storage.conf
# files.
#
#  Note: The storage.conf file overrides other storage.conf files based on this precedence:
#      /usr/containers/storage.conf
#      /etc/containers/storage.conf
#      $HOME/.config/containers/storage.conf
#      $XDG_CONFIG_HOME/containers/storage.conf (If XDG_CONFIG_HOME is set)
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]

# Default Storage Driver, Must be set for proper operation.
driver = "overlay"

# Temporary storage location
runroot = "/run/user/1001"

# Primary Read/Write location of container storage
# When changing the graphroot location on an SELINUX system, you must
# ensure  the labeling matches the default locations labels with the
# following commands:
# semanage fcontext -a -e /var/lib/containers/storage /NEWSTORAGEPATH
# restorecon -R -v /NEWSTORAGEPATH
graphroot = "/home/podman/containers/storage"

# Optional alternate location of image store if a location separate from the
# container store is required. If set, it must be different than graphroot.
imagestore = "/home/podman/containers/images"


# Storage path for rootless users
#
rootless_storage_path = "/home/podman/containers/storage"

# Transient store mode makes all container metadata be saved in temporary storage
# (i.e. runroot above). This is faster, but doesn't persist across reboots.
# Additional garbage collection must also be performed at boot-time, so this
# option should remain disabled in most configurations.
# transient_store = true

[storage.options]
# Storage options to be passed to underlying storage drivers

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
additionalimagestores = [
]

# Allows specification of how storage is populated when pulling images. This
# option can speed the pulling process of images compressed with format
# zstd:chunked. Containers/storage looks for files within images that are being
# pulled from a container registry that were previously pulled to the host.  It
# can copy or create a hard link to the existing file when it finds them,
# eliminating the need to pull them from the container registry. These options
# can deduplicate pulling of content, disk storage of content and can allow the
# kernel to use less memory when running containers.

# containers/storage supports four keys
#   * enable_partial_images="true" | "false"
#     Tells containers/storage to look for files previously pulled in storage
#     rather then always pulling them from the container registry.
#   * use_hard_links = "false" | "true"
#     Tells containers/storage to use hard links rather then create new files in
#     the image, if an identical file already existed in storage.
#   * ostree_repos = ""
#     Tells containers/storage where an ostree repository exists that might have
#     previously pulled content which can be used when attempting to avoid
#     pulling content from the container registry
#   * convert_images = "false" | "true"
#     If set to true, containers/storage will convert images to a
#     format compatible with partial pulls in order to take advantage
#     of local deduplication and hard linking.  It is an expensive
#     operation so it is not enabled by default.
pull_options = {enable_partial_images = "true", use_hard_links = "false", ostree_repos=""}

# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
# a container, to the UIDs/GIDs as they should appear outside of the container,
# and the length of the range of UIDs/GIDs.  Additional mapped sets can be
# listed and will be heeded by libraries, but there are limits to the number of
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = "0:1668442479:65536"
# remap-gids = "0:1668442479:65536"

# Remap-User/Group is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file.  Mappings are set up starting
# with an in-container ID of 0 and then a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped in-container ID,
# until all of the entries have been used for maps. This setting overrides the
# Remap-UIDs/GIDs setting.
#
# remap-user = "containers"
# remap-group = "containers"

# Root-auto-userns-user is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid and /etc/subgid file.  These ranges will be partitioned
# to containers configured to create automatically a user namespace.  Containers
# configured to automatically create a user namespace can still overlap with containers
# having an explicit mapping set.
# This setting is ignored when running as rootless.
# root-auto-userns-user = "storage"
#
# Auto-userns-min-size is the minimum size for a user namespace created automatically.
# auto-userns-min-size=1024
#
# Auto-userns-max-size is the maximum size for a user namespace created automatically.
# auto-userns-max-size=65536

[storage.options.overlay]
# ignore_chown_errors can be set to allow a non privileged user running with
# a single UID within a user namespace to run containers. The user can pull
# and use any image even those with multiple uids.  Note multiple UIDs will be
# squashed down to the default uid in the container.  These images will have no
# separation between the users in the container. Only supported for the overlay
# and vfs drivers.
#ignore_chown_errors = "false"

# Inodes is used to set a maximum inodes of the container image.
# inodes = ""

# Path to an helper program to use for mounting the file system instead of mounting it
# directly.
mount_program = "/usr/bin/fuse-overlayfs"

# mountopt specifies comma separated list of extra mount options
mountopt = "nodev,metacopy=on"

# Set to skip a PRIVATE bind mount on the storage home directory.
# skip_mount_home = "false"

# Set to use composefs to mount data layers with overlay.
# use_composefs = "false"

# Size is used to set a maximum size of the container image.
# size = ""

# ForceMask specifies the permissions mask that is used for new files and
# directories.
#
# The values "shared" and "private" are accepted.
# Octal permission masks are also accepted.
#
#  "": No value specified.
#     All files/directories, get set with the permissions identified within the
#     image.
#  "private": it is equivalent to 0700.
#     All files/directories get set with 0700 permissions.  The owner has rwx
#     access to the files. No other users on the system can access the files.
#     This setting could be used with networked based homedirs.
#  "shared": it is equivalent to 0755.
#     The owner has rwx access to the files and everyone else can read, access
#     and execute them. This setting is useful for sharing containers storage
#     with other users.  For instance have a storage owned by root but shared
#     to rootless users as an additional store.
#     NOTE:  All files within the image are made readable and executable by any
#     user on the system. Even /etc/shadow within your image is now readable by
#     any user.
#
#   OCTAL: Users can experiment with other OCTAL Permissions.
#
#  Note: The force_mask Flag is an experimental feature, it could change in the
#  future.  When "force_mask" is set the original permission mask is stored in
#  the "user.containers.override_stat" xattr and the "mount_program" option must
#  be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the
#  extended attribute permissions to processes within containers rather than the
#  "force_mask"  permissions.
#
# force_mask = ""

[storage.options.thinpool]
# Storage Options for thinpool

# autoextend_percent determines the amount by which pool needs to be
# grown. This is specified in terms of % of pool size. So a value of 20 means
# that when threshold is hit, pool will be grown by 20% of existing
# pool size.
# autoextend_percent = "20"

# autoextend_threshold determines the pool extension threshold in terms
# of percentage of pool size. For example, if threshold is 60, that means when
# pool is 60% full, threshold has been hit.
# autoextend_threshold = "80"

# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
# basesize = "10G"

# blocksize specifies a custom blocksize to use for the thin pool.
# blocksize="64k"

# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you setup devicemapper.
# directlvm_device = ""

# directlvm_device_force wipes device even if device already has a filesystem.
# directlvm_device_force = "True"

# fs specifies the filesystem type to use for the base device.
# fs="xfs"

# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
# log_level = "7"

# min_free_space specifies the min free space percent in a thin pool require for
# new device creation to succeed. Valid values are from 0% - 99%.
# Value 0% disables
# min_free_space = "10%"

# mkfsarg specifies extra mkfs arguments to be used when creating the base
# device.
# mkfsarg = ""

# metadata_size is used to set the `pvcreate --metadatasize` options when
# creating thin devices. Default is 128k
# metadata_size = ""

# Size is used to set a maximum size of the container image.
# size = ""

# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver
# tells the kernel to remove it as soon as possible. Note this does not free
# up the disk space, use deferred deletion to fully remove the thinpool.
# use_deferred_removal = "True"

# use_deferred_deletion marks thinpool device for deferred deletion.
# If the device is busy when the driver attempts to delete it, the driver
# will attempt to delete device every 30 seconds until successful.
# If the program using the driver exits, the driver will continue attempting
# to cleanup the next time the driver is used. Deferred deletion permanently
# deletes the device and all data stored in device will be lost.
# use_deferred_deletion = "True"

# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when ENOSPC (no space) error is returned by
# underlying storage device.
# xfs_nospace_max_retries = "0"

I am NOT using volumes (yet), so take these with a grain of salt.

containers.conf on ZFS:

[engine]

# Volume Path
volume_path = "/zdata/PODMAN/VOLUMES"

containers.conf on EXT4:

[engine]

# Volume Path
volume_path = "/home/podman/containers/volumes"

To be honest, whenever this occurs, the only solution is to issue a TOTAL reset:
https://github.com/luckylinux/podman-tools/blob/main/reset_podman.sh.
