Stuck on Storing Signatures #3323

Closed
vladkosarev opened this issue Jun 13, 2019 · 26 comments
Labels
kind/bug · locked - please file new issue/PR · rootless

Comments

@vladkosarev

/kind bug

Description
On Void Linux, the latest podman gets stuck at the "Storing signatures" step when running the sitespeed.io container.

Steps to reproduce the issue:
podman run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:9.3.2 https://sitespeed.io

Describe the results you received:
[vlad@void-vm ~]$ podman run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:9.3.2 https://sitespeed.io
Trying to pull docker.io/sitespeedio/sitespeed.io:9.3.2...Getting image source signatures
Copying blob edb2b29fa1e2 skipped: already exists
Copying blob c46b5fa4d940 skipped: already exists
Copying blob 6b1eed27cade skipped: already exists
Copying blob 93ae3df89c92 skipped: already exists
Copying blob 76d5ea985833 skipped: already exists
Copying blob 473ede7ed136 skipped: already exists
Copying blob cf82bd0b1aa3 skipped: already exists
Copying blob 7dc6cf341fb3 skipped: already exists
Copying blob 3c9757b8e6c7 skipped: already exists
Copying blob 275861923052 skipped: already exists
Copying blob 4c29465436d4 skipped: already exists
Copying blob 66700b5e3941 skipped: already exists
Copying blob 160f3f39f1b5 skipped: already exists
Copying blob a0507231acd7 skipped: already exists
Copying blob b965ed368ed7 skipped: already exists
Copying blob ad9103b58e2d skipped: already exists
Copying blob 946c4c8160b3 skipped: already exists
Copying blob df426434925b skipped: already exists
Copying blob 8791c156ea54 skipped: already exists
Copying blob b1ac729adf6d skipped: already exists
Copying blob 4d916c8de88f skipped: already exists
Copying blob 67578fe28a3d skipped: already exists
Copying blob 47f6b4d4a060 done
Copying blob 203d22208385 done
Copying blob 671ce6b824ea done
Copying config 681cf2afef done
Writing manifest to image destination
Storing signatures

Describe the results you expected:
I would expect it not to freeze at the "Storing signatures" step and to actually run the container

Output of podman version:

Version:            1.4.0
RemoteAPI Version:  1
Go Version:         go1.12.5
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.12.5
  podman version: 1.4.0
host:
  BuildahVersion: 1.8.3
  Conmon:
    package: Unknown
    path: /usr/libexec/podman/conmon
    version: 'conmon version 0.2.0, commit: '
  Distribution:
    distribution: '"void"'
    version: unknown
  MemFree: 135749632
  MemTotal: 2013794304
  OCIRuntime:
    package: Unknown
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  hostname: void-vm
  kernel: 4.19.50_1
  os: linux
  rootless: true
  uptime: 2h 52m 7.41s (Approximately 0.08 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
store:
  ConfigFile: /home/vlad/.config/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/vlad/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 1
  RunRoot: /tmp/1000
  VolumePath: /home/vlad/.local/share/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.):
Hyper-V VM on Windows 10 Ent

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 13, 2019
@mheon mheon added the rootless label Jun 13, 2019
@mheon
Member

mheon commented Jun 13, 2019

@mtrmac PTAL.

My first instinct is that this is related to the VFS driver, but I'm not certain yet.

@mtrmac
Collaborator

mtrmac commented Jun 13, 2019

@vladkosarev Can you provide the output of podman --log-level=debug run …?

“Storing signatures” is a memory-only operation that can’t really hang; afterwards storeImage.Commit runs, which actually creates all the layers and c/storage objects, so this could just be a c/storage deadlock, or maybe something in the graph drivers.
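(If it reproduces, a generic sketch for seeing where the process is blocked while it hangs; this is an assumption, not an established procedure from this thread, and pgrep -n simply picks the newest podman process, which may need adjusting:)

podman --log-level=debug run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:9.3.2 https://sitespeed.io
# in a second terminal, once it is stuck at "Storing signatures":
kill -QUIT "$(pgrep -n podman)"    # the Go runtime prints all goroutine stacks to stderr, exposing any blocked lock or Commit call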

@vladkosarev
Author

After I logged this ticket I set up a bunch of other containers and worked with those.
I just tried the container in question again and it actually worked this time around.
I should've mentioned that this was a fresh install of podman: I first ran a hello-world container (which worked), and the next container I tried was the one mentioned here, which failed.

Since it works now we can close the ticket, but there might be something funky going on with a fresh install and a particular sequence of containers.

Still attaching the debug log for a successful run:

time="2019-06-13T16:17:58-04:00" level=info msg="running as rootless" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Initializing boltdb state at /home/vlad/.local/share/containers/storage/libpod/bolt_state.db" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using graph driver vfs" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using graph root /home/vlad/.local/share/containers/storage" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using run root /tmp/1000" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using static dir /home/vlad/.local/share/containers/storage/libpod" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using tmp dir /tmp/run-1000/libpod/tmp" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using volume path /home/vlad/.local/share/containers/storage/volumes" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Set libpod namespace to """ 
time="2019-06-13T16:17:58-04:00" level=debug msg="[graphdriver] trying provided driver "vfs"" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Initializing event backend file" 
time="2019-06-13T16:17:58-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]docker.io/sitespeedio/sitespeed.io:9.3.2"" 
time="2019-06-13T16:17:58-04:00" level=debug msg="reference "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]docker.io/sitespeedio/sitespeed.io:9.3.2" does not resolve to an image ID" 
time="2019-06-13T16:17:58-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]localhost/sitespeedio/sitespeed.io:9.3.2"" 
time="2019-06-13T16:17:58-04:00" level=debug msg="reference "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]localhost/sitespeedio/sitespeed.io:9.3.2" does not resolve to an image ID" 
time="2019-06-13T16:17:58-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]docker.io/sitespeedio/sitespeed.io:9.3.2"" 
time="2019-06-13T16:17:58-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]registry.fedoraproject.org/sitespeedio/sitespeed.io:9.3.2"" 
time="2019-06-13T16:17:58-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]registry.access.redhat.com/sitespeedio/sitespeed.io:9.3.2"" 
Trying to pull docker.io/sitespeedio/sitespeed.io:9.3.2...time="2019-06-13T16:17:58-04:00" level=debug msg="Trying to pull "docker.io/sitespeedio/sitespeed.io:9.3.2" from endpoint "docker.io"" 
time="2019-06-13T16:17:58-04:00" level=debug msg="reference rewritten from 'docker.io/sitespeedio/sitespeed.io:9.3.2' to 'docker.io/sitespeedio/sitespeed.io:9.3.2'" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using registries.d directory /etc/containers/registries.d for sigstore configuration" 
time="2019-06-13T16:17:58-04:00" level=debug msg=" Using "default-docker" configuration" 
time="2019-06-13T16:17:58-04:00" level=debug msg=" No signature storage configuration found for docker.io/sitespeedio/sitespeed.io:9.3.2" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" 
time="2019-06-13T16:17:58-04:00" level=debug msg="GET https://registry-1.docker.io/v2/" 
time="2019-06-13T16:17:59-04:00" level=debug msg="Ping https://registry-1.docker.io/v2/ status 401" 
time="2019-06-13T16:17:59-04:00" level=debug msg="GET https://auth.docker.io/token?scope=repository%3Asitespeedio%2Fsitespeed.io%3Apull&service=registry.docker.io" 
time="2019-06-13T16:17:59-04:00" level=debug msg="GET https://registry-1.docker.io/v2/sitespeedio/sitespeed.io/manifests/9.3.2" 
time="2019-06-13T16:17:59-04:00" level=debug msg="Using blob info cache at /home/vlad/.local/share/containers/cache/blob-info-cache-v1.boltdb" 
time="2019-06-13T16:17:59-04:00" level=debug msg="IsRunningImageAllowed for image docker:docker.io/sitespeedio/sitespeed.io:9.3.2" 
time="2019-06-13T16:17:59-04:00" level=debug msg=" Using default policy section" 
time="2019-06-13T16:17:59-04:00" level=debug msg=" Requirement 0: allowed" 
time="2019-06-13T16:17:59-04:00" level=debug msg="Overall: allowed" 
time="2019-06-13T16:17:59-04:00" level=debug msg="Downloading /v2/sitespeedio/sitespeed.io/blobs/sha256:681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7" 
time="2019-06-13T16:17:59-04:00" level=debug msg="GET https://registry-1.docker.io/v2/sitespeedio/sitespeed.io/blobs/sha256:681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7" 
Getting image source signatures
time="2019-06-13T16:18:00-04:00" level=debug msg="Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json]" 
time="2019-06-13T16:18:00-04:00" level=debug msg="... will first try using the original manifest unmodified" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:edb2b29fa1e2c4cc66a334b4ba3a332d57264934be5f89af8ef3be96eb888be9 (already present):" 
Copying blob sha256:edb2b29fa1e2c4cc66a334b4ba3a332d57264934be5f89af8ef3be96eb888be9
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:473ede7ed136b710ab2dd51579af038b7d00fbbf6a1790c6294c93666203c0a6 (already present):" 
Copying blob sha256:473ede7ed136b710ab2dd51579af038b7d00fbbf6a1790c6294c93666203c0a6
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:6b1eed27cadec5de8051d56697b0b67527e4076deedceefb41b7b2ea9b900459 (already present):" 
Copying blob sha256:6b1eed27cadec5de8051d56697b0b67527e4076deedceefb41b7b2ea9b900459
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:93ae3df89c92cb1d20e9c09f499e693d3a8a8cef161f7158f7a9a3b5d06e4ef2 (already present):" 
Copying blob sha256:93ae3df89c92cb1d20e9c09f499e693d3a8a8cef161f7158f7a9a3b5d06e4ef2
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:76d5ea985833d96e837de773e874939bb93d7100db1cfc95919a75cee4e75f4f (already present):" 
Copying blob sha256:76d5ea985833d96e837de773e874939bb93d7100db1cfc95919a75cee4e75f4f
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:275861923052b47d0ee66f64972e8a5b8bc16efb629234b611b7c1d8a07798f5 (already present):" 
Copying blob sha256:275861923052b47d0ee66f64972e8a5b8bc16efb629234b611b7c1d8a07798f5
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:c46b5fa4d940569e49988515c1ea0295f56d0a16228d8f854e27613f467ec892 (already present):" 
Copying blob sha256:c46b5fa4d940569e49988515c1ea0295f56d0a16228d8f854e27613f467ec892
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:cf82bd0b1aa395c69cddc81e6d3c32a17e63ae953a1d2b5018506acf788a9ec4 (already present):" 
Copying blob sha256:cf82bd0b1aa395c69cddc81e6d3c32a17e63ae953a1d2b5018506acf788a9ec4
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:3c9757b8e6c7a1e12a6dfe4a303dd3495b1225ad8a3058393dc86d74d641a73d (already present):" 
Copying blob sha256:3c9757b8e6c7a1e12a6dfe4a303dd3495b1225ad8a3058393dc86d74d641a73d
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:66700b5e394146d8b5f8b78b298b27ef8e34904cf730da408dc1a516299128d4 (already present):" 
Copying blob sha256:66700b5e394146d8b5f8b78b298b27ef8e34904cf730da408dc1a516299128d4
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:946c4c8160b3c6046ebecc50981bbda4e659dcde72c5ff3e397d762e218b93eb (already present):" 
Copying blob sha256:946c4c8160b3c6046ebecc50981bbda4e659dcde72c5ff3e397d762e218b93eb
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:4c29465436d4cc42a30ef90c0e365c781c1a44a6c8ffd2ee319782f5d406abe7 (already present):" 
Copying blob sha256:4c29465436d4cc42a30ef90c0e365c781c1a44a6c8ffd2ee319782f5d406abe7
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:7dc6cf341fb33d2c5ffc8a1e94b6c4fe58f29ba494e71dc3cf4cad63ed718f4e (already present):" 
Copying blob sha256:7dc6cf341fb33d2c5ffc8a1e94b6c4fe58f29ba494e71dc3cf4cad63ed718f4e
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:b1ac729adf6d626ba6af0e27d3b5b1c5558e6760d2760c87ef85a23e94d9cf1d (already present):" 
Copying blob sha256:b1ac729adf6d626ba6af0e27d3b5b1c5558e6760d2760c87ef85a23e94d9cf1d
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:a0507231acd737214180fef1cc28b96e7529dc870eb6c62af591be8fe60e27ec (already present):" 
Copying blob sha256:a0507231acd737214180fef1cc28b96e7529dc870eb6c62af591be8fe60e27ec
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:b965ed368ed7febaede36922a2e0944eda9e5c1b3375fe73fdcb83733f14eaac (already present):" 
Copying blob sha256:b965ed368ed7febaede36922a2e0944eda9e5c1b3375fe73fdcb83733f14eaac
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:160f3f39f1b553d652de2b6eb7061e378834db0bf6ceca96ffe8a512fd71cefc (already present):" 
Copying blob sha256:160f3f39f1b553d652de2b6eb7061e378834db0bf6ceca96ffe8a512fd71cefc
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:4d916c8de88f66ec0c301780b8311394711c1c0447223caf3484e29fe7b7a6ca (already present):" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:67578fe28a3d1ad8f4d81700e61b6fa8eabe8054b30bf7bc6f487ee30c1a8c22 (already present):" 
Copying blob sha256:67578fe28a3d1ad8f4d81700e61b6fa8eabe8054b30bf7bc6f487ee30c1a8c22
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:8791c156ea54ffe9242508667be84754b317611b2241d44d3b00cdc74b3c88c2 (already present):" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:ad9103b58e2da6cd2340bcaed5fc504a69f187b804169f7529de148017888479 (already present):" 
Copying blob sha256:4d916c8de88f66ec0c301780b8311394711c1c0447223caf3484e29fe7b7a6ca
Copying blob sha256:ad9103b58e2da6cd2340bcaed5fc504a69f187b804169f7529de148017888479
Copying blob sha256:8791c156ea54ffe9242508667be84754b317611b2241d44d3b00cdc74b3c88c2
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:203d222083851df55e2854d8e4517328785a58d716182e7275330d189296c9e1 (already present):" 
Copying blob sha256:203d222083851df55e2854d8e4517328785a58d716182e7275330d189296c9e1
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:df426434925b00d62438ab04437e4ef4960c388b97f6788de3dc7ee45e9996e7 (already present):" 
Copying blob sha256:df426434925b00d62438ab04437e4ef4960c388b97f6788de3dc7ee45e9996e7
time="2019-06-13T16:18:00-04:00" level=debug msg="Downloading /v2/sitespeedio/sitespeed.io/blobs/sha256:671ce6b824eaaa746e9108951838971528b1ac66b1128b2ca29dd85109ea4cd5" 
time="2019-06-13T16:18:00-04:00" level=debug msg="GET https://registry-1.docker.io/v2/sitespeedio/sitespeed.io/blobs/sha256:671ce6b824eaaa746e9108951838971528b1ac66b1128b2ca29dd85109ea4cd5" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Downloading /v2/sitespeedio/sitespeed.io/blobs/sha256:47f6b4d4a06098db5452efcbe091d1de1a93cd08483309389ef416998b924014" 
time="2019-06-13T16:18:00-04:00" level=debug msg="GET https://registry-1.docker.io/v2/sitespeedio/sitespeed.io/blobs/sha256:47f6b4d4a06098db5452efcbe091d1de1a93cd08483309389ef416998b924014" 
Copying blob sha256:47f6b4d4a06098db5452efcbe091d1de1a93cd08483309389ef416998b924014
time="2019-06-13T16:18:00-04:00" level=debug msg="Detected compression format gzip" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Using original blob without modification" 
Copying blob sha256:671ce6b824eaaa746e9108951838971528b1ac66b1128b2ca29dd85109ea4cd5
time="2019-06-13T16:18:00-04:00" level=debug msg="Detected compression format gzip" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Using original blob without modification" 
Copying config sha256:681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7
time="2019-06-13T16:18:00-04:00" level=debug msg="No compression detected" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Using original blob without modification" 
Writing manifest to image destination
Storing signatures
time="2019-06-13T16:18:59-04:00" level=debug msg="Start untar layer" 
time="2019-06-13T16:18:59-04:00" level=debug msg="Untar time: 0.1234235s" 
time="2019-06-13T16:19:47-04:00" level=debug msg="Start untar layer" 
time="2019-06-13T16:19:47-04:00" level=debug msg="Untar time: 0.1500408s" 
time="2019-06-13T16:19:47-04:00" level=debug msg="setting image creation date to 2019-06-13 10:25:35.2120051 +0000 UTC" 
time="2019-06-13T16:19:47-04:00" level=debug msg="created new image ID "681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="set names of image "681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7" to [docker.io/sitespeedio/sitespeed.io:9.3.2]" 
time="2019-06-13T16:19:48-04:00" level=debug msg="saved image metadata "{}"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]docker.io/sitespeedio/sitespeed.io:9.3.2"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]@681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="exporting opaque data as blob "sha256:681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]@681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="exporting opaque data as blob "sha256:681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]@681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="User mount /home/vlad:/sitespeed.io options []" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Got mounts: [{/sitespeed.io bind /home/vlad []}]" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Got volumes: []" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Using slirp4netns netmode" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Adding mount /proc" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Adding mount /dev" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Adding mount /dev/pts" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Adding mount /dev/mqueue" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Adding mount /sys" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Adding mount /sys/fs/cgroup" 
time="2019-06-13T16:19:48-04:00" level=debug msg="created OCI spec and options for new container" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Allocated lock 1 for container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" 
time="2019-06-13T16:19:48-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]@681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="exporting opaque data as blob "sha256:681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:20:46-04:00" level=debug msg="created container "748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a"" 
time="2019-06-13T16:20:47-04:00" level=debug msg="container "748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" has work directory "/home/vlad/.local/share/containers/storage/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata"" 
time="2019-06-13T16:20:47-04:00" level=debug msg="container "748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" has run directory "/tmp/1000/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata"" 
time="2019-06-13T16:20:47-04:00" level=debug msg="New container created "748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a"" 
time="2019-06-13T16:20:47-04:00" level=debug msg="container "748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" has CgroupParent "/libpod_parent/libpod-748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a"" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Not attaching to stdin" 
time="2019-06-13T16:20:47-04:00" level=debug msg="mounted container "748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" at "/home/vlad/.local/share/containers/storage/vfs/dir/6d36569a39082a6a1f51980a939fb2b538343deeb9719edaf9b9720d0675d618"" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Created root filesystem for container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a at /home/vlad/.local/share/containers/storage/vfs/dir/6d36569a39082a6a1f51980a939fb2b538343deeb9719edaf9b9720d0675d618" 
time="2019-06-13T16:20:47-04:00" level=debug msg="skipping loading default AppArmor profile (rootless mode)" 
time="2019-06-13T16:20:47-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode secret" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Created OCI spec for container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a at /home/vlad/.local/share/containers/storage/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata/config.json" 
time="2019-06-13T16:20:47-04:00" level=debug msg="/usr/libexec/podman/conmon messages will be logged to syslog" 
time="2019-06-13T16:20:47-04:00" level=debug msg="running conmon: /usr/libexec/podman/conmon" args=[-c 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a -u 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a -n tender_kirch -r /usr/bin/runc -b /home/vlad/.local/share/containers/storage/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata -p /tmp/1000/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata/pidfile --exit-dir /tmp/run-1000/libpod/tmp/exits --conmon-pidfile /tmp/1000/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/vlad/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/run-1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a --socket-dir-path /tmp/run-1000/libpod/tmp/socket -l k8s-file:/home/vlad/.local/share/containers/storage/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata/ctr.log --log-level debug --syslog] 
time="2019-06-13T16:20:47-04:00" level=warning msg="Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/freezer/libpod_parent: permission denied" 
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

time="2019-06-13T16:20:47-04:00" level=debug msg="Received container pid: 9085" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Created container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a in OCI runtime" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Attaching to container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" 
time="2019-06-13T16:20:47-04:00" level=debug msg="connecting to socket /tmp/run-1000/libpod/tmp/socket/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/attach" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Starting container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a with command [/start.sh https://sitespeed.io]" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Started container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Enabling signal proxying" 
Google Chrome 75.0.3770.80 
Mozilla Firefox 67.0.1
[2019-06-13 20:20:49] INFO: Versions OS: linux 4.19.50_1 nodejs: v10.13.0 sitespeed.io: 9.3.2 browsertime: 5.2.5 coach: 3.6.0
[2019-06-13 20:20:50] INFO: Running tests using Chrome - 3 iteration(s)
[2019-06-13 20:20:54] INFO: Testing url https://sitespeed.io iteration 1
[2019-06-13 20:21:07] INFO: https://sitespeed.io BackEndTime: 804 DomInteractiveTime: 850 DomContentLoadedTime: 1005 FirstPaint: 1026 PageLoadTime: 1044
[2019-06-13 20:21:07] INFO: VisualMetrics FirstVisualChange: 1034 SpeedIndex: 1040 PerceptualSpeedIndex: 1040 LastVisualChange: 1067
[2019-06-13 20:21:08] INFO: Testing url https://sitespeed.io iteration 2
[2019-06-13 20:21:19] INFO: https://sitespeed.io BackEndTime: 433 DomInteractiveTime: 477 DomContentLoadedTime: 638 FirstPaint: 648 PageLoadTime: 639
[2019-06-13 20:21:19] INFO: VisualMetrics FirstVisualChange: 666 SpeedIndex: 672 PerceptualSpeedIndex: 673 LastVisualChange: 700
[2019-06-13 20:21:20] INFO: Testing url https://sitespeed.io iteration 3
[2019-06-13 20:21:32] INFO: https://sitespeed.io BackEndTime: 537 DomInteractiveTime: 595 DomContentLoadedTime: 647 FirstPaint: 797 PageLoadTime: 766
[2019-06-13 20:21:32] INFO: VisualMetrics FirstVisualChange: 800 SpeedIndex: 800 PerceptualSpeedIndex: 800 LastVisualChange: 800
[2019-06-13 20:21:32] INFO: https://sitespeed.io 14 requests, 168.26 kb, backEndTime: 591ms (±90.22ms), firstPaint: 824ms (±89.76ms), firstVisualChange: 833ms (±87.80ms), DOMContentLoaded: 763ms (±98.68ms), Load: 816ms (±97.65ms), speedIndex: 837 (±88.07), visualComplete85: 856ms (±89.44ms), lastVisualChange: 856ms (±89.44ms), rumSpeedIndex: 824 (±89.76) (3 runs)
[2019-06-13 20:21:34] INFO: HTML stored in /sitespeed.io/sitespeed-result/sitespeed.io/2019-06-13-20-20-49
time="2019-06-13T16:21:38-04:00" level=debug msg="Checking container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a status..." 
time="2019-06-13T16:21:38-04:00" level=debug msg="Attempting to read container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a exit code from file /tmp/run-1000/libpod/tmp/exits/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a-old" 
time="2019-06-13T16:21:38-04:00" level=debug msg="[graphdriver] trying provided driver "vfs"" 

@mheon
Member

mheon commented Jun 13, 2019

Hm. I'll close this, then. If anyone else hits a similar problem, leave a comment with details, we'll reopen.

@mheon mheon closed this as completed Jun 13, 2019
@alyssais

I think I had the same issue when I ran podman pull discourse/discourse_dev:release. Extracting the images took nearly two and a half hours.

Logs
INFO[0000] running as rootless                          
DEBU[0000] using conmon: "/nix/store/ijyz1qpfqqwc4qv9vx6s1g7z8ssjv1c0-conmon-2.0.0/bin/conmon" 
DEBU[0000] Initializing boltdb state at /home/state/podman/containers/storage/libpod/bolt_state.db 
DEBU[0005] Using graph driver vfs                       
DEBU[0005] Using graph root /home/state/podman/containers/storage 
DEBU[0005] Using run root /tmp/1000                     
DEBU[0005] Using static dir /home/state/podman/containers/storage/libpod 
DEBU[0005] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0005] Using volume path /home/state/podman/containers/storage/volumes 
DEBU[0005] Set libpod namespace to ""                   
DEBU[0005] [graphdriver] trying provided driver "vfs"   
DEBU[0005] Initializing event backend journald          
WARN[0005] The configuration is using `runtime_path`, which is deprecated and will be removed in future.  Please use `runtimes` and `runtime` 
WARN[0005] If you are using both `runtime_path` and `runtime`, the configuration from `runtime_path` is used 
DEBU[0005] using runtime "/nix/store/s0jqb63yzcysxmb0v2nqvab38jc2fmvg-runc-1.0.0-rc8-bin/bin/runc" 
DEBU[0005] using runtime "/run/current-system/sw/bin/runc" 
DEBU[0006] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]docker.io/discourse/discourse_dev:release" 
Trying to pull docker.io/discourse/discourse_dev:release...
DEBU[0007] reference rewritten from 'docker.io/discourse/discourse_dev:release' to 'docker.io/discourse/discourse_dev:release' 
DEBU[0007] Trying to pull "docker.io/discourse/discourse_dev:release" 
DEBU[0007] Credentials not found                        
DEBU[0007] Using registries.d directory /etc/containers/registries.d for sigstore configuration 
DEBU[0007]  No signature storage configuration found for docker.io/discourse/discourse_dev:release 
DEBU[0007] Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io 
DEBU[0007] GET https://registry-1.docker.io/v2/         
DEBU[0007] Ping https://registry-1.docker.io/v2/ status 401 
DEBU[0007] GET https://auth.docker.io/token?scope=repository%3Adiscourse%2Fdiscourse_dev%3Apull&service=registry.docker.io 
DEBU[0008] GET https://registry-1.docker.io/v2/discourse/discourse_dev/manifests/release 
DEBU[0008] Using blob info cache at /home/state/containers/cache/blob-info-cache-v1.boltdb 
DEBU[0008] IsRunningImageAllowed for image docker:docker.io/discourse/discourse_dev:release 
DEBU[0008]  Using default policy section                
DEBU[0008]  Requirement 0: allowed                      
DEBU[0008] Overall: allowed                             
DEBU[0008] Downloading /v2/discourse/discourse_dev/blobs/sha256:aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a 
DEBU[0008] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a 
Getting image source signatures
DEBU[0009] Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json] 
DEBU[0009] ... will first try using the original manifest unmodified 
DEBU[0009] Downloading /v2/discourse/discourse_dev/blobs/sha256:f9ea07c2dd645103be1ac73a55ce4f3cffc328aa641517a5926b7d800b78128e 
DEBU[0009] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:f9ea07c2dd645103be1ac73a55ce4f3cffc328aa641517a5926b7d800b78128e 
DEBU[0009] Downloading /v2/discourse/discourse_dev/blobs/sha256:83aa6f4d64be3fc2cc0d33c22ce0b0dd9cd64c493dc4ccf52130d2a9d3bc1b37 
DEBU[0009] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:83aa6f4d64be3fc2cc0d33c22ce0b0dd9cd64c493dc4ccf52130d2a9d3bc1b37 
DEBU[0009] Downloading /v2/discourse/discourse_dev/blobs/sha256:314445a2d62ef3b3bdc2b79317c4bfdef0aeaef2ed82207366ac3f0a0993c32c 
DEBU[0009] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:314445a2d62ef3b3bdc2b79317c4bfdef0aeaef2ed82207366ac3f0a0993c32c 
DEBU[0009] Downloading /v2/discourse/discourse_dev/blobs/sha256:259c2faea530ab8aba90a8e15b11b52852a3cabf257b85b5270f82404684720f 
DEBU[0009] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:259c2faea530ab8aba90a8e15b11b52852a3cabf257b85b5270f82404684720f 
DEBU[0009] Downloading /v2/discourse/discourse_dev/blobs/sha256:1ab2bdfe97783562315f98f94c0769b1897a05f7b0395ca1520ebee08666703b 
DEBU[0009] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:1ab2bdfe97783562315f98f94c0769b1897a05f7b0395ca1520ebee08666703b 
DEBU[0009] Downloading /v2/discourse/discourse_dev/blobs/sha256:6d7febef89e821922dcf311f1b47060ded7b30070f634453e27a123d2c95c5da 
DEBU[0009] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:6d7febef89e821922dcf311f1b47060ded7b30070f634453e27a123d2c95c5da 
DEBU[0010] Detected compression format gzip             
DEBU[0010] Using original blob without modification     
DEBU[0010] Detected compression format gzip             
DEBU[0010] Using original blob without modification     
DEBU[0010] Detected compression format gzip             
DEBU[0010] Using original blob without modification     
DEBU[0011] Detected compression format gzip             
DEBU[0011] Using original blob without modification     
DEBU[0011] Detected compression format gzip             
DEBU[0011] Using original blob without modification     
DEBU[0012] Detected compression format gzip             
DEBU[0012] Using original blob without modification     
DEBU[0013] Downloading /v2/discourse/discourse_dev/blobs/sha256:5b045945df239d15149649fe4dcb1f74de6bfeaf6053644197c86c5ef2c74309 
DEBU[0013] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:5b045945df239d15149649fe4dcb1f74de6bfeaf6053644197c86c5ef2c74309 
DEBU[0013] Downloading /v2/discourse/discourse_dev/blobs/sha256:dd55daacdc5c4263802940a3bab4b7780794310252db221e996644d585dc8f7d 
DEBU[0013] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:dd55daacdc5c4263802940a3bab4b7780794310252db221e996644d585dc8f7d 
DEBU[0013] Downloading /v2/discourse/discourse_dev/blobs/sha256:5b50460c132d2c13753b5ecf03ae50861542585cbf3291afba51bc37a1fd2c81 
DEBU[0013] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:5b50460c132d2c13753b5ecf03ae50861542585cbf3291afba51bc37a1fd2c81 
DEBU[0013] Downloading /v2/discourse/discourse_dev/blobs/sha256:d290e7dab349b9355e01ba6d23c850183e963c6a7db02c58c10f66591ee8a1e6 
DEBU[0013] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:d290e7dab349b9355e01ba6d23c850183e963c6a7db02c58c10f66591ee8a1e6 
DEBU[0014] Detected compression format gzip             
DEBU[0014] Using original blob without modification     
DEBU[0015] Detected compression format gzip             
DEBU[0015] Using original blob without modification     
DEBU[0016] Detected compression format gzip             
DEBU[0016] Using original blob without modification     
DEBU[0019] Detected compression format gzip             
DEBU[0019] Using original blob without modification     
DEBU[0019] Downloading /v2/discourse/discourse_dev/blobs/sha256:9f95105f91fcf4e40fc82fa1317f10f5710e68f57456b65134ea5ee6263c684e 
DEBU[0019] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:9f95105f91fcf4e40fc82fa1317f10f5710e68f57456b65134ea5ee6263c684e 
DEBU[0019] Downloading /v2/discourse/discourse_dev/blobs/sha256:5717bfc5732cbcb73b547642c8bb77bb1d4b65ac802315d143233ca177fb7e0f 
DEBU[0019] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:5717bfc5732cbcb73b547642c8bb77bb1d4b65ac802315d143233ca177fb7e0f 
DEBU[0019] Downloading /v2/discourse/discourse_dev/blobs/sha256:6c161aaf0049976ce1bb7563f77912e5f695f19072c5fc34f1140997228c31f1 
DEBU[0019] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:6c161aaf0049976ce1bb7563f77912e5f695f19072c5fc34f1140997228c31f1 
DEBU[0020] Detected compression format gzip             
DEBU[0020] Using original blob without modification     
DEBU[0020] Detected compression format gzip             
DEBU[0020] Using original blob without modification     
DEBU[0024] Detected compression format gzip             
DEBU[0024] Using original blob without modification     
DEBU[0025] Downloading /v2/discourse/discourse_dev/blobs/sha256:44aa8de3c334def5166dd666d6eb7d25a14a6753d7b72d4ae06e4b9a96d2ec78 
DEBU[0025] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:44aa8de3c334def5166dd666d6eb7d25a14a6753d7b72d4ae06e4b9a96d2ec78 
DEBU[0028] Detected compression format gzip             
DEBU[0028] Using original blob without modification     
Copying blob 5717bfc5732c done
DEBU[0189] No compression detected                      
DEBU[0189] Using original blob without modification     
Copying config aaec2f76b8 [======================================] 18.1KiB / 18.1KiB
Copying config aaec2f76b8 done
Writing manifest to image destination
Storing signatures
DEBU[0190] Start untar layer                            
DEBU[0193] Untar time: 2.7715050100000003s              
DEBU[0202] Start untar layer                            
DEBU[0414] Untar time: 212.281946872s                   
DEBU[0759] Start untar layer                            
DEBU[0764] Untar time: 5.530771338s                     
DEBU[2725] Start untar layer                            
DEBU[2731] Untar time: 5.975134436s                     
DEBU[4712] Start untar layer                            
DEBU[4712] Untar time: 0.269512026s                     
DEBU[5254] Start untar layer                            
DEBU[5255] Untar time: 0.610600993s                     
DEBU[5369] Start untar layer                            
DEBU[5369] Untar time: 0.134516457s                     
DEBU[5576] Start untar layer                            
DEBU[5577] Untar time: 0.708177531s                     
DEBU[5714] Start untar layer                            
DEBU[5719] Untar time: 5.031429221s                     
DEBU[5849] Start untar layer                            
DEBU[5850] Untar time: 1.136541521s                     
DEBU[5987] Start untar layer                            
DEBU[6112] Untar time: 124.770738721s                   
DEBU[7293] Start untar layer                            
DEBU[7305] Untar time: 11.186880398s                    
DEBU[9747] Start untar layer                            
DEBU[9748] Untar time: 1.005790214s                     
DEBU[10190] Start untar layer                            
DEBU[10297] Untar time: 107.175232526s                   
DEBU[10299] setting image creation date to 2019-09-08 23:26:39.039338573 +0000 UTC 
DEBU[10299] created new image ID "aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a" 
DEBU[10300] set names of image "aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a" to [docker.io/discourse/discourse_dev:release] 
DEBU[10301] saved image metadata "{}"                    
DEBU[10302] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]docker.io/discourse/discourse_dev:release" 
aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a
INFO[0001] running as rootless                          
DEBU[0002] using conmon: "/nix/store/ijyz1qpfqqwc4qv9vx6s1g7z8ssjv1c0-conmon-2.0.0/bin/conmon" 
DEBU[0002] Initializing boltdb state at /home/state/podman/containers/storage/libpod/bolt_state.db 
DEBU[0002] Using graph driver vfs                       
DEBU[0002] Using graph root /home/state/podman/containers/storage 
DEBU[0002] Using run root /tmp/1000                     
DEBU[0002] Using static dir /home/state/podman/containers/storage/libpod 
DEBU[0002] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0002] Using volume path /home/state/podman/containers/storage/volumes 
DEBU[0002] Set libpod namespace to ""                   
DEBU[0002] [graphdriver] trying provided driver "vfs"   
DEBU[0002] Initializing event backend journald          
WARN[0002] The configuration is using `runtime_path`, which is deprecated and will be removed in future.  Please use `runtimes` and `runtime` 
WARN[0002] If you are using both `runtime_path` and `runtime`, the configuration from `runtime_path` is used 
DEBU[0002] using runtime "/nix/store/s0jqb63yzcysxmb0v2nqvab38jc2fmvg-runc-1.0.0-rc8-bin/bin/runc" 
DEBU[0002] using runtime "/run/current-system/sw/bin/runc" 
DEBU[0002] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]docker.io/discourse/discourse_dev:release" 
DEBU[0002] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]@aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a" 
DEBU[0002] exporting opaque data as blob "sha256:aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a" 
DEBU[0002] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]@aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a" 
DEBU[0002] exporting opaque data as blob "sha256:aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a" 
DEBU[0002] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]@aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a" 
DEBU[0002] User mount /home/src/discourse/data/postgres:/shared/postgres_data options [] 
DEBU[0002] User mount /home/src/discourse:/src options [] 
DEBU[0003] Got mounts: [{/src bind /home/src/discourse []} {/shared/postgres_data bind /home/src/discourse/data/postgres []}] 
DEBU[0003] Got volumes: []                              
DEBU[0003] Using slirp4netns netmode                    
DEBU[0003] Adding mount /proc                           
DEBU[0003] Adding mount /dev                            
DEBU[0003] Adding mount /dev/pts                        
DEBU[0003] Adding mount /dev/mqueue                     
DEBU[0003] Adding mount /sys                            
DEBU[0003] Adding mount /sys/fs/cgroup                  
DEBU[0003] setting container name discourse_dev         
DEBU[0003] created OCI spec and options for new container 
DEBU[0003] Allocated lock 8 for container 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a 
DEBU[0003] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]@aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a" 
DEBU[0003] exporting opaque data as blob "sha256:aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a" 
DEBU[0149] created container "8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a" 
DEBU[0149] container "8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a" has work directory "/home/state/podman/containers/storage/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata" 
DEBU[0149] container "8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a" has run directory "/tmp/1000/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata" 
DEBU[0150] New container created "8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a" 
DEBU[0150] container "8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a" has CgroupParent "/libpod_parent/libpod-8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a" 
DEBU[0153] mounted container "8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a" at "/home/state/podman/containers/storage/vfs/dir/6b1bc14b34797b8e973e0f36d36d8ad68195c092a7bff4a44fbcbb424b243068" 
DEBU[0153] Created root filesystem for container 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a at /home/state/podman/containers/storage/vfs/dir/6b1bc14b34797b8e973e0f36d36d8ad68195c092a7bff4a44fbcbb424b243068 
DEBU[0153] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0153] Created OCI spec for container 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a at /home/state/podman/containers/storage/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata/config.json 
DEBU[0153] /nix/store/ijyz1qpfqqwc4qv9vx6s1g7z8ssjv1c0-conmon-2.0.0/bin/conmon messages will be logged to syslog 
DEBU[0153] running conmon: /nix/store/ijyz1qpfqqwc4qv9vx6s1g7z8ssjv1c0-conmon-2.0.0/bin/conmon  args="[--api-version 1 -c 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a -u 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a -r /run/current-system/sw/bin/runc -b /home/state/podman/containers/storage/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata -p /tmp/1000/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata/pidfile -l k8s-file:/home/state/podman/containers/storage/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog --conmon-pidfile /tmp/1000/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata/conmon.pid --exit-command /nix/store/b6246mx4g59dkrc21wsh3chaivh1651s-podman-1.5.1-bin/bin/podman --exit-command-arg --root --exit-command-arg /home/state/podman/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a]"
WARN[0154] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied 
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0157] Received: 30151
INFO[0157] Got Conmon PID as 30082
DEBU[0157] Created container 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a in OCI runtime
DEBU[0158] Starting container 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a with command [/sbin/boot]
DEBU[0158] Started container 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a
8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a

@mheon
Member

mheon commented Sep 10, 2019

This image does take quite a long time to finish committing to disk on my setup (Podman master, running as root, overlay2 driver) - but only a few minutes.

I again begin to suspect the VFS graph driver; can you possibly try again with fuse-overlayfs?

@alyssais

alyssais commented Sep 10, 2019 via email

@Dulani

Dulani commented Nov 13, 2019

I had about a 10-minute hang at "storing signatures" on a fresh install (1.4.4) when pulling a small rhel7 image (~300 MB). Then I pulled a small (15 MB) nfs image (erichough/nfs-server) and also had a ~5-minute hang at the "storing signatures" stage. I initially thought it was just the first one, but the second one took WAY longer to "store signatures" than it did to download.

@rhatdan rhatdan reopened this Nov 16, 2019
@mtrmac
Collaborator

mtrmac commented Nov 18, 2019

I initially thought it was just the first one, but the second one took WAY longer to "store signatures" than it did to download.

That, in itself, is not too surprising: the download step just decompresses and writes a linear stream of data to disk, while the commit step (which is what actually takes the time after “storing signatures”) reads that linear stream and writes it out as many small individual files, so filesystem metadata and disk seek performance matter a lot for the commit (but are nearly negligible for the download).
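(A rough way to separate the network download from the local commit; this is an assumption about methodology, not something suggested in the thread: copy the image to a plain directory first, then commit it into the same containers-storage that rootless podman uses:)

time skopeo copy docker://docker.io/sitespeedio/sitespeed.io:9.3.2 dir:/tmp/sitespeed                    # network download only
time skopeo copy dir:/tmp/sitespeed containers-storage:docker.io/sitespeedio/sitespeed.io:9.3.2          # decompress + commit into local storage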

@Dulani

Dulani commented Nov 20, 2019

Interesting. Well, I'm VERY surprised at the poor performance. I've had 5+ minute waits on other commands too (not just a pull). If that's expected, I'll just accept it as normal. If it is not normal, I'd be happy to try to gather any relevant information you might need to troubleshoot the problem.

@mtrmac
Collaborator

mtrmac commented Nov 20, 2019

I’m not at all saying that long times are expected or acceptable; just that it’s not surprising that the commit step (after “storing signatures”) can take longer than the download.

10 minutes for a 300MB image does seem extremely excessive on modern hardware.

@zhuguoliang

I reproduced it on my machine without sudo, but everything went smoothly with sudo.

Note that with strace you can see it is stuck on the storage lock:

openat(AT_FDCWD, "/home/ubuntu/.local/share/containers/storage/storage.lock", O_RDWR|O_CREAT, 0600) = 5
fcntl(5, F_SETFD, FD_CLOEXEC) = 0
fcntl(5, F_SETLKW, {l_type=F_WRLCK, l_whence=SEEK_SET, l_start=0, l_len=0}
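(A hypothetical follow-up, using the rootless default path from the trace above, to see which process is holding the lock:)

lsof ~/.local/share/containers/storage/storage.lock      # list processes with the lock file open
fuser -v ~/.local/share/containers/storage/storage.lock  # alternative if lsof is not installed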

@aerobinsonIV

aerobinsonIV commented Jan 10, 2020

I believe I am also having this problem. I'm trying to run
podman pull nvcr.io/nvidia/tensorflow:19.12-tf1-py3.
It hangs for a long time on "Storing signatures", and then eventually runs out of disk space and fails. I have 180 GB free on the drive containing my home directory when I start this command, and df -h confirms it all gets used up. The image is only about 4 GB. The same thing happens with buildah from.

It might be able to run if I provided more disk space, but that isn't an option for my use case.

Like for zhuguoliang, it works with sudo. However, having root access is also not an option for my desired use case.
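(A quick way to see where the space goes, assuming the default rootless storage path; with the vfs driver every layer is stored as a complete directory tree:)

du -sh ~/.local/share/containers/storage/vfs/dir/* | sort -h | tail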

@mheon
Member

mheon commented Jan 10, 2020

Are you using the VFS driver? This sounds like it could be caused by using the VFS driver with a large image. VFS does not perform filesystem-level deduplication (each layer is stored as a full copy), which massively expands the storage required.

Using the overlay driver (as root) or fuse-overlayfs (rootless) should avert this.
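(For reference, a minimal rootless configuration sketch for that suggestion; the fuse-overlayfs path is an assumption, so adjust it to wherever your distribution installs the binary. Images already stored under vfs are not migrated, so clearing ~/.local/share/containers/storage first, as done later in this thread, avoids mixing drivers:)

mkdir -p ~/.config/containers
cat > ~/.config/containers/storage.conf <<'EOF'
[storage]
driver = "overlay"

[storage.options]
mount_program = "/usr/bin/fuse-overlayfs"
EOF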

@Dulani

Dulani commented Jan 10, 2020

I was reading #3799 a few days ago and also noticed the "strong" recommendation to use fuse-overlayfs. When I had the problem above, I was using the VFS driver. I can't migrate to fuse-overlayfs yet because my kernel is too old (it is RHEL 7 and I don't control that part of the server). So hopefully I'll be able to give this a try one day (until then, podman will have to stay on the shelf).

@aerobinsonIV

@mheon I'm very new to containers and I don't know what driver I'm using. How would I find out and/or specify it?

@mheon
Member

mheon commented Jan 10, 2020

@Dulani I believe fuse-overlayfs should work in 7.8.

@Jousboxx podman info should contain a GraphDriverName field that will tell you what your current driver is.
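(A minimal check; piping through grep avoids guessing the exact Go-template key, which differs between podman versions:)

podman info | grep -i graphdrivername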

@aerobinsonIV

aerobinsonIV commented Jan 10, 2020

@mheon I was indeed using vfs. I did
sudo apt install fuse-overlayfs libfuse3-dev
sudo rm -rf ~/.local/share/containers
podman pull --storage-driver fuse-overlayfs nvcr.io/nvidia/tensorflow:19.12-tf1-py3

Now I receive this error:

Error: could not get runtime: failed to GetDriver graph fuse-overlayfs /home/jous/.local/share/containers/storage: driver not supported 

Weird that it would say the driver isn't supported when it is the preferred driver to be used in this situation.

It's probably worth mentioning that I'm on Ubuntu 19.10.

@mheon
Member

mheon commented Jan 10, 2020

@giuseppe PTAL

@aerobinsonIV

Finally solved the problem by doing
sudo rm -rf ~/.config/containers/storage.conf
and then running podman pull without the --storage-driver option.

Podman sees that I now have fuse-overlayfs and libfuse3-dev installed and creates new config files with overlay as the default filesystem driver.

Specifying the --storage-driver overlay option results in the problem detailed in containers/buildah#1745, even though, if I understand correctly, it shouldn't change anything since overlay is the default.

Also, for anyone else googling and finding this, --storage-driver fuse-overlayfs is (I think) incorrect syntax. Proper usage is --storage-driver overlay.
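(Illustrative only: the flag takes the driver name, while fuse-overlayfs itself is selected via mount_program in storage.conf as sketched earlier in the thread; note the comment above about the explicit flag tripping containers/buildah#1745, so relying on the storage.conf default may be preferable:)

podman pull --storage-driver overlay nvcr.io/nvidia/tensorflow:19.12-tf1-py3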

It would be nice if the overlay packages were required dependencies of podman and overlay were used as the default driver from the beginning. That would save new users a lot of time, since they wouldn't have to deal with this difficult-to-diagnose issue.

@giuseppe
Member

It would be nice if the overlay packages were required dependencies of podman and overlay were used as the default driver from the beginning. That would save new users a lot of time, since they wouldn't have to deal with this difficult-to-diagnose issue.

Agreed. At least for the Fedora package we also install fuse-overlayfs, and it is automatically picked up and used. Have you installed Podman from the Ubuntu PPA?

@baude
Member

baude commented Jan 28, 2020

is this issue resolved?

@aerobinsonIV

Solved for me; the OP hasn't responded.

@yamrzou

yamrzou commented Feb 7, 2020

I'm encountering the same issue when running the following, even after removing ~/.config/containers/storage.conf:

$ docker container run -p 9999:8888 --name hello cloudnatived/demo:hello
Trying to pull docker.io/cloudnatived/demo:hello...
Getting image source signatures
Copying blob 192dbbd4244e done  
Copying config eeb7d1c2e2 done  
Writing manifest to image destination
Storing signatures

@mheon
Member

mheon commented Feb 7, 2020

Can you provide podman info (and podman pull --log-level=debug for the image in question)?

@yamrzou

yamrzou commented Feb 7, 2020

My bad, it turns out the image pull actually completed correctly; it's just that I ran podman container run and the container produces no output, so I thought it was stuck on "Storing signatures".

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 23, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 23, 2023