Mount docker.sock #6015

Closed
lburgazzoli opened this issue Apr 28, 2020 · 44 comments
Labels: HTTP API (Bug is in RESTful API), kind/bug (Categorizes issue or PR as related to a bug), locked - please file new issue/PR, stale-issue

Comments

@lburgazzoli

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I'm switching a Jenkins installation from running with docker to podman. One of the requirements is being able to access the docker daemon on the host in order to run containerized builds, but mounting the docker socket does not seem to work as expected with podman:

Steps to reproduce the issue:

Run the onesysadmin/jenkins-docker-executors image with docker and check docker access:

$ docker run --rm -ti  -v /var/run/docker.sock:/var/run/docker.sock:Z --entrypoint bash onesysadmin/jenkins-docker-executors 
root@95aaa2e5497d:/# docker ps
CONTAINER ID        IMAGE                                  COMMAND             CREATED             STATUS              PORTS               NAMES
95aaa2e5497d        onesysadmin/jenkins-docker-executors   "bash"              4 seconds ago       Up 3 seconds        8080/tcp            youthful_jepsen

Run the onesysadmin/jenkins-docker-executors image with podman and check docker access:

$ podman run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock:Z --entrypoint bash onesysadmin/jenkins-docker-executors
root@ccab5b96b658:/# docker ps
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/json: dial unix /var/run/docker.sock: connect: permission denied
root@ccab5b96b658:/# 

Output of podman version:

Version:            1.9.0
RemoteAPI Version:  1
Go Version:         go1.13.9
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  gitCommit: ""
  goVersion: go1.13.9
  podmanVersion: 1.9.0
host:
  arch: amd64
  buildahVersion: 1.14.8
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.15-1.fc31.x86_64
    path: /usr/libexec/crio/conmon
    version: 'conmon version 2.0.15, commit: 4152e6044da92e0c5f246e5adf14c85f41443759'
  cpus: 8
  distribution:
    distribution: fedora
    version: "31"
  eventLogger: file
  hostname: mars
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.6.6-200.fc31.x86_64
  memFree: 30561992704
  memTotal: 66859102208
  ociRuntime:
    name: runc
    package: runc-1.0.0-102.dev.gitdc9208a.fc31.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc10
      commit: 96f6022b37cbe12b26c9ad33a24677bec72a9cc3
      spec: 1.0.1-dev
  os: linux
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.0.0-1.fc31.x86_64
    version: |-
      slirp4netns version 1.0.0
      commit: a3be729152a33e692cd28b52f664defbf2e7810a
      libslirp: 4.1.0
  swapFree: 68719472640
  swapTotal: 68719472640
  uptime: 27h 31m 14.03s (Approximately 1.12 days)
registries:
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888:
    Blocked: false
    Insecure: true
    Location: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
  docker-registry.engineering.redhat.com:
    Blocked: false
    Insecure: true
    Location: docker-registry.engineering.redhat.com
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: docker-registry.engineering.redhat.com
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
store:
  configFile: /home/lburgazz/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.8-1.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 0.7.8
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  graphRoot: /home/lburgazz/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 6
  runRoot: /tmp/1000
  volumePath: /home/lburgazz/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.9.0-1.fc31.x86_64
@openshift-ci-robot added the kind/bug label Apr 28, 2020
@akerl-unpriv

This should be expected, right? Podman doesn't run a daemon like docker does, so there's no docker.sock to mount (unless you're also running docker in parallel on the same host?).

@lburgazzoli (Author)

Yes, I have docker running on the host, as I cannot completely replace it with podman: some of the projects I'm building with Jenkins require access to docker through its APIs (i.e. testcontainers, which uses the docker-java library under the hood).

@akerl-unpriv

Gotcha. Can you add the output of ls -l /var/run/docker.sock on the host, and also ls -l /var/run/docker.sock from within the podman pod? I'm wondering if Docker is helpfully fixing the perms to align with the container's user ID, but podman is not doing likewise.

@lburgazzoli (Author)

pod:

# ls -l /var/run/*
ls: cannot access '/var/run/docker.sock': Permission denied
total 0
-?????????? ? ?    ?     ?            ? docker.sock
-rw-r--r--. 1 root root  0 Oct 21  2019 init.upgraded
lrwxrwxrwx. 1 root root 25 Oct 10  2019 initctl -> /run/systemd/initctl/fifo
drwxrwxrwt. 3 root root 20 Oct 10  2019 lock
drwxr-xr-x. 2 root root  6 Oct 10  2019 log
drwxr-xr-x. 2 root root 18 Oct 10  2019 mount
drwxr-xr-x. 2 root root 40 Apr 28 07:39 secrets
drwxr-xr-x. 2 root root  6 Oct 10  2019 sendsigs.omit.d
lrwxrwxrwx. 1 root root  8 Oct 10  2019 shm -> /dev/shm
drwxr-xr-x. 9 root root 23 Oct 18  2019 systemd
drwxr-xr-x. 2 root root  6 Oct 10  2019 user
-rw-rw-r--. 1 root utmp  0 Oct 10  2019 utmp

host:

$ ls -l /var/run/docker.sock 
srw-rw----. 1 root docker 0 Apr 27 06:10 /var/run/docker.sock

@lburgazzoli (Author)

Note that my user is part of the docker group on the host

@akerl-unpriv

It seems like this might be that the bind-mounted socket isn't being put into the user namespace for your container (since "root" / uid 0 in the podman pod is not the same as "root" on the outside).
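A quick way to check the mapping from inside the container (a sketch, not verified here; with your host uid 1000 it should mirror the idMappings from your podman info output above):

$ podman run --rm docker.io/library/alpine cat /proc/self/uid_map
         0       1000          1
         1     100000      65536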

@mheon (Member) commented Apr 28, 2020

We automatically drop supplemental groups when entering rootless containers for security reasons, which is why your access to the docker group is being removed.

@mheon (Member) commented Apr 28, 2020

If you are using the crun OCI runtime, adding the following annotation to the container will disable that behavior: run.oci.keep_original_groups=1

That may allow you to access the Docker socket, but I will strongly emphasize that you remove any and all security benefits of rootless Podman by mounting a root-owned Docker socket into it (this will enable trivial privesc from the container onto the host).
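For example, something along these lines (a rough sketch, not verified; reusing the image from the report):

$ podman run --rm -ti --runtime crun \
      --annotation run.oci.keep_original_groups=1 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      --entrypoint bash onesysadmin/jenkins-docker-executors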

@lburgazzoli (Author)

So I think it won't be too different from running Jenkins directly on the host

@rhatdan (Member) commented Apr 28, 2020

SELinux will block this access as well. Giving a process access to the docker.sock is the most dangerous thing you can do. You should really run in --privileged mode if you are going to do this, so people understand that the container has root access on your system and no confinement.

Have you looked into using buildah to build while running inside of a container?

@lburgazzoli (Author)

SELinux will block this access as well. Giving a process access to the docker.sock is the most dangerous thing you can do. You should really run in --privileged mode if you are going to do this, so people understand that the container has root access on your system and no confinement.

I know, but what other options do we have? I.e. is there a docker API emulation layer that uses podman under the hood?

Have you looked into using buildah to build while running inside of a container?

Here it is not about building containers, but about spawning containers as part of the test process of some libraries.

@mheon (Member) commented Apr 29, 2020

SELinux will block this access as well. Giving a process access to the docker.sock is the most dangerous thing you can do. You should really run in --privileged mode if you are going to do this, so people understand that the container has root access on your system and no confinement.

I know, but what other options do we have? I.e. is there a docker API emulation layer that uses podman under the hood?

There is (podman system service in 1.8.2 and up) but it's still in alpha.

Have you looked into using buildah to build while running inside of a container?

Here it is not about building containers, but about spawning containers as part of the test process of some libraries.

You'll likely need to disable SELinux confinement for the container (--security-opt label=disable) as @rhatdan said, but I think it should work.
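Putting the pieces together, something like this might work (untested sketch, again with the image from the report):

$ podman run --rm -ti --runtime crun \
      --security-opt label=disable \
      --annotation run.oci.keep_original_groups=1 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      --entrypoint bash onesysadmin/jenkins-docker-executors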

@lburgazzoli (Author)

@mheon is it possible to run podman system service inside a container?

What I'm thinking is to create a pod with one container that runs jenkins and another that runs podman system service listening on a tcp port, then use DOCKER_HOST to let the jenkins service communicate with the podman one.
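Roughly like this (a sketch only, untested; the jenkins image name here is just a placeholder):

$ podman pod create --name jenkins -p 8080:8080
$ podman run -d --pod jenkins --name api quay.io/podman/stable \
      podman system service --timeout 0 tcp:127.0.0.1:9999
$ podman run -d --pod jenkins --name master \
      -e DOCKER_HOST=tcp://127.0.0.1:9999 \
      docker.io/jenkins/jenkins:lts

Containers in the same pod share the network namespace, so jenkins should be able to reach the service on localhost:9999.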

@mheon (Member) commented Apr 29, 2020

@rhatdan I know we have Podman-in-Podman working for root containers, but what about rootless ones?

@rhatdan (Member) commented Apr 29, 2020

Theoretically this would work for rootless containers, but it might be difficult to set up. To get full functionality you most likely need to run the initial container as --privileged.
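e.g. (hypothetical and untested, just to illustrate the shape of it):

$ podman run --privileged quay.io/podman/stable \
      podman run --rm docker.io/library/alpine echo hello from nested podman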

@lburgazzoli (Author)

Before trying to run podman system service in a container, I gave it a try on my host against something that emulates what a pipeline would do.

So what I did is:

  1. run podman as a service: podman system service --timeout 0 tcp:localhost:9999
  2. export DOCKER_HOST=tcp://localhost:9999
  3. docker pull mongo:4.0
  4. run some java code that under the hood uses testcontainers-java, which is based on the docker-java project

Step 3 works without any issue, so I'm able to see the mongo:4.0 image using either docker or podman:

$ docker pull mongo:4.0
78b4116161e41eccdd131e7d8de17fcb221a39e36a975211784dd3a7247bd109: pulling image () from docker.io/library/mongo:4.0 
docker.io/library/mongo:4.0

$ docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
mongo                                      4.0                 78b4116161e4        2 weeks ago         425MB

$ podman images
REPOSITORY                                 TAG      IMAGE ID       CREATED       SIZE
docker.io/library/mongo                    4.0      78b4116161e4   2 weeks ago   425 MB

Then, running my demo code, I see the following error:

Caused by: org.testcontainers.containers.ContainerFetchException: Can't get Docker image: RemoteDockerImage(imageName=mongo:4.0, imagePullPolicy=DefaultPullPolicy())
	at org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1268)
	at org.testcontainers.containers.GenericContainer.logger(GenericContainer.java:603)
	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:309)
	... 2 more
Caused by: com.github.dockerjava.api.exception.InternalServerErrorException: {"cause":"no such image","message":"NewFromLocal(): unable to find a name and tag match for alpine in repotags: no such image","response":500}

@mheon (Member) commented May 14, 2020

Is Alpine ever explicitly pulled?

@lburgazzoli (Author)

No, the java code references only mongo:4.0

@mheon (Member) commented May 14, 2020

That last InternalServerErrorException seems to indicate it was looking for alpine, but the errors further up the stack do seem to indicate mongo:4.0.

Any chance you can add --log-level=debug to podman system service and provide the logs it produces (particularly the parts where your demo is running against it)?

@lburgazzoli (Author)

Here you go:

DEBU[0015] IdleTracker 0xc000496010:new 0/0 connection(s) 
DEBU[0015] IdleTracker 0xc000496010:active 1/1 connection(s) 
DEBU[0015] APIHandler -- Method: GET URL: /_ping        
DEBU[0015] IdleTracker 0xc000496010:idle 1/2 connection(s) 
DEBU[0015] IdleTracker 0xc000496010:active 0/2 connection(s) 
DEBU[0015] APIHandler -- Method: GET URL: /info         
WARN[0015] Failed to retrieve program version for /usr/bin/slirp4netns: <nil> 
DEBU[0015] Loading registries configuration "/etc/containers/registries.conf" 
DEBU[0015] IdleTracker 0xc000496010:idle 1/3 connection(s) 
DEBU[0015] IdleTracker 0xc000496010:active 0/3 connection(s) 
DEBU[0015] APIHandler -- Method: GET URL: /info         
WARN[0015] Failed to retrieve program version for /usr/bin/slirp4netns: <nil> 
DEBU[0015] IdleTracker 0xc000496010:idle 1/4 connection(s) 
DEBU[0015] IdleTracker 0xc000496010:active 0/4 connection(s) 
DEBU[0015] APIHandler -- Method: GET URL: /version      
WARN[0015] Failed to retrieve program version for /usr/bin/slirp4netns: <nil> 
DEBU[0015] IdleTracker 0xc000496010:idle 1/5 connection(s) 
DEBU[0015] IdleTracker 0xc000496010:active 0/5 connection(s) 
DEBU[0015] APIHandler -- Method: GET URL: /images/json?filter=alpine%3A3.5 
DEBU[0015] parsed reference into "[overlay@/home/lburgazz/.local/share/containers/storage+/tmp/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]@0f5485af5398bb473b49fc4d41d54edb0f27f4c5de801645bddb27ef786885ea" 
DEBU[0015] exporting opaque data as blob "sha256:0f5485af5398bb473b49fc4d41d54edb0f27f4c5de801645bddb27ef786885ea" 
DEBU[0015] parsed reference into "[overlay@/home/lburgazz/.local/share/containers/storage+/tmp/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]@78b4116161e41eccdd131e7d8de17fcb221a39e36a975211784dd3a7247bd109" 
DEBU[0015] exporting opaque data as blob "sha256:78b4116161e41eccdd131e7d8de17fcb221a39e36a975211784dd3a7247bd109" 
DEBU[0015] IdleTracker 0xc000496010:idle 1/6 connection(s) 
DEBU[0016] IdleTracker 0xc000496010:active 0/6 connection(s) 
DEBU[0016] APIHandler -- Method: POST URL: /containers/create?name=testcontainers-checks-cc4ec2eb-8675-4417-b09d-bd0aa041b079 
DEBU[0016] parsed reference into "[overlay@/home/lburgazz/.local/share/containers/storage+/tmp/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/library/alpine:3.5" 
DEBU[0016] reference "[overlay@/home/lburgazz/.local/share/containers/storage+/tmp/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/library/alpine:3.5" does not resolve to an image ID 
DEBU[0016] parsed reference into "[overlay@/home/lburgazz/.local/share/containers/storage+/tmp/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]localhost/alpine:3.5" 
DEBU[0016] reference "[overlay@/home/lburgazz/.local/share/containers/storage+/tmp/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]localhost/alpine:3.5" does not resolve to an image ID 
INFO[0016] Request Failed(Internal Server Error): NewFromLocal(): unable to find a name and tag match for alpine in repotags: no such image 
DEBU[0016] IdleTracker 0xc000496010:idle 1/7 connection(s) 
DEBU[0016] IdleTracker 0xc000496010:closed 0/7 connection(s) 

@baude (Member) commented May 14, 2020

Can you provide the JSON that is embedded in the body of the create call?

@lburgazzoli (Author)

I'm trying to get it, but in the meantime I found out that the library also tries to download alpine:3.5 as part of its bootstrap process.

@lburgazzoli (Author)

This is the entire "conversation":

127.000.000.001.56450-127.000.000.001.09999: GET /_ping HTTP/1.1
Host: localhost:9999
Connection: Keep-Alive
Accept-Encoding: gzip
User-Agent: okhttp/3.14.6


127.000.000.001.09999-127.000.000.001.56450: HTTP/1.1 200 OK
Api-Version: 1.40
Buildkit-Version: 
Cache-Control: no-cache
Docker-Experimental: true
Libpod-Api-Version: 1.40
Libpod-Buildha-Version: 1.14.8
Pragma: no-cache
Date: Thu, 14 May 2020 15:34:22 GMT
Content-Length: 3
Content-Type: text/plain; charset=utf-8

OK

127.000.000.001.56450-127.000.000.001.09999: GET /info HTTP/1.1
Host: localhost:9999
Connection: Keep-Alive
Accept-Encoding: gzip
User-Agent: okhttp/3.14.6


127.000.000.001.09999-127.000.000.001.56450: HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 14 May 2020 15:34:22 GMT
Content-Length: 1994

{"ID":"97b71fd1-efa1-4bee-931c-7691cae0750a","Containers":0,"ContainersRunning":0,"ContainersPaused":0,"ContainersStopped":0,"Images":1,"Driver":"overlay","DriverStatus":[["Backing Filesystem","xfs"],["Supports d_type","true"],["Native Overlay Diff","false"],["Using metacopy","false"]],"SystemStatus":null,"Plugins":{"Volume":null,"Network":null,"Authorization":null,"Log":null},"MemoryLimit":true,"SwapLimit":true,"KernelMemory":true,"KernelMemoryTCP":false,"CpuCfsPeriod":true,"CpuCfsQuota":true,"CPUShares":true,"CPUSet":true,"PidsLimit":true,"IPv4Forwarding":true,"BridgeNfIptables":true,"BridgeNfIp6tables":true,"Debug":true,"NFd":12,"OomKillDisable":true,"NGoroutines":6,"SystemTime":"2020-05-14T17:34:22.83920704+02:00","LoggingDriver":"","CgroupDriver":"cgroupfs","NEventsListener":0,"KernelVersion":"5.6.11-300.fc32.x86_64","OperatingSystem":"fedora","OSVersion":"32","OSType":"linux","Architecture":"amd64","IndexServerAddress":"","RegistryConfig":null,"NCPU":8,"MemTotal":66859233280,"GenericResources":null,"DockerRootDir":"/home/lburgazz/.local/share/containers/storage","HttpProxy":"","HttpsProxy":"","NoProxy":"","Name":"mars","Labels":null,"ExperimentalBuild":true,"ServerVersion":"1.9.1","ClusterStore":"","ClusterAdvertise":"","Runtimes":{"crun":{"path":"/usr/bin/crun"},"kata":{"path":"/usr/bin/kata-runtime"},"runc":{"path":"/usr/bin/runc"}},"DefaultRuntime":"runc","Swarm":{"NodeID":"","NodeAddr":"","LocalNodeState":"inactive","ControlAvailable":false,"Error":"","RemoteManagers":null},"LiveRestoreEnabled":false,"Isolation":"","InitBinary":"","ContainerdCommit":{"ID":"","Expected":""},"RuncCommit":{"ID":"","Expected":""},"InitCommit":{"ID":"","Expected":""},"SecurityOptions":["name=seccomp,profile=default"],"ProductLicense":"Apache-2.0","Warnings":[],"BuildahVersion":"1.14.8","CPURealtimePeriod":false,"CPURealtimeRuntime":false,"CgroupVersion":"v1","Rootless":true,"SwapFree":68719472640,"SwapTotal":68719472640,"Uptime":"9h 57m 26.74s (Approximately 0.38 days)"}

127.000.000.001.56450-127.000.000.001.09999: GET /info HTTP/1.1
Host: localhost:9999
Connection: Keep-Alive
Accept-Encoding: gzip
User-Agent: okhttp/3.14.6


127.000.000.001.09999-127.000.000.001.56450: HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 14 May 2020 15:34:23 GMT
Content-Length: 1995

{"ID":"2cb783dc-c8df-44e6-9483-1091cc7ff080","Containers":0,"ContainersRunning":0,"ContainersPaused":0,"ContainersStopped":0,"Images":1,"Driver":"overlay","DriverStatus":[["Native Overlay Diff","false"],["Using metacopy","false"],["Backing Filesystem","xfs"],["Supports d_type","true"]],"SystemStatus":null,"Plugins":{"Volume":null,"Network":null,"Authorization":null,"Log":null},"MemoryLimit":true,"SwapLimit":true,"KernelMemory":true,"KernelMemoryTCP":false,"CpuCfsPeriod":true,"CpuCfsQuota":true,"CPUShares":true,"CPUSet":true,"PidsLimit":true,"IPv4Forwarding":true,"BridgeNfIptables":true,"BridgeNfIp6tables":true,"Debug":true,"NFd":12,"OomKillDisable":true,"NGoroutines":6,"SystemTime":"2020-05-14T17:34:23.085823913+02:00","LoggingDriver":"","CgroupDriver":"cgroupfs","NEventsListener":0,"KernelVersion":"5.6.11-300.fc32.x86_64","OperatingSystem":"fedora","OSVersion":"32","OSType":"linux","Architecture":"amd64","IndexServerAddress":"","RegistryConfig":null,"NCPU":8,"MemTotal":66859233280,"GenericResources":null,"DockerRootDir":"/home/lburgazz/.local/share/containers/storage","HttpProxy":"","HttpsProxy":"","NoProxy":"","Name":"mars","Labels":null,"ExperimentalBuild":true,"ServerVersion":"1.9.1","ClusterStore":"","ClusterAdvertise":"","Runtimes":{"crun":{"path":"/usr/bin/crun"},"kata":{"path":"/usr/bin/kata-runtime"},"runc":{"path":"/usr/bin/runc"}},"DefaultRuntime":"runc","Swarm":{"NodeID":"","NodeAddr":"","LocalNodeState":"inactive","ControlAvailable":false,"Error":"","RemoteManagers":null},"LiveRestoreEnabled":false,"Isolation":"","InitBinary":"","ContainerdCommit":{"ID":"","Expected":""},"RuncCommit":{"ID":"","Expected":""},"InitCommit":{"ID":"","Expected":""},"SecurityOptions":["name=seccomp,profile=default"],"ProductLicense":"Apache-2.0","Warnings":[],"BuildahVersion":"1.14.8","CPURealtimePeriod":false,"CPURealtimeRuntime":false,"CgroupVersion":"v1","Rootless":true,"SwapFree":68719472640,"SwapTotal":68719472640,"Uptime":"9h 57m 26.99s (Approximately 0.38 days)"}

127.000.000.001.56450-127.000.000.001.09999: GET /version HTTP/1.1
accept: application/json
Host: localhost:9999
Connection: Keep-Alive
Accept-Encoding: gzip
User-Agent: okhttp/3.14.6


127.000.000.001.09999-127.000.000.001.56450: HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 14 May 2020 15:34:23 GMT
Content-Length: 556

{"Platform":{"Name":"linux/amd64/fedora-32"},"Components":[{"Name":"Podman Engine","Version":"1.9.1","Details":{"APIVersion":"1.40","Arch":"amd64","BuildTime":"1970-01-01T01:00:00+01:00","Experimental":"true","GitCommit":"","GoVersion":"go1.14.2","KernelVersion":"5.6.11-300.fc32.x86_64","MinAPIVersion":"1.24","Os":"linux"}}],"Version":"1.9.1","ApiVersion":"1.40","MinAPIVersion":"1.24","GitCommit":"","GoVersion":"go1.14.2","Os":"linux","Arch":"amd64","KernelVersion":"5.6.11-300.fc32.x86_64","Experimental":true,"BuildTime":"1970-01-01T01:00:00+01:00"}

127.000.000.001.56450-127.000.000.001.09999: GET /images/json?filter=alpine%3A3.5 HTTP/1.1
accept: application/json
Host: localhost:9999
Connection: Keep-Alive
Accept-Encoding: gzip
User-Agent: okhttp/3.14.6


127.000.000.001.09999-127.000.000.001.56450: HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 14 May 2020 15:34:23 GMT
Transfer-Encoding: chunked

914
[{"Id":"0f5485af5398bb473b49fc4d41d54edb0f27f4c5de801645bddb27ef786885ea","RepoTags":["registry.access.redhat.com/ubi8/ubi-init:latest"],"Created":1585672855,"Size":255783159,"Labels":{"architecture":"x86_64","authoritative-source-url":"registry.access.redhat.com","build-date":"2020-03-31T16:39:24.354003","com.redhat.build-host":"cpt-1004.osbs.prod.upshift.rdu2.redhat.com","com.redhat.component":"ubi8-init-container","com.redhat.license_terms":"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI","description":"The Universal Base Image Init is designed is designed to run an init system as PID 1 for running multi-services inside a container. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.","distribution-scope":"public","io.k8s.description":"The Universal Base Image Init is designed is designed to run an init system as PID 1 for running multi-services inside a container. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.","io.k8s.display-name":"Red Hat Universal Base Image 8 Init","io.openshift.expose-services":"","io.openshift.tags":"base rhel8","maintainer":"Red Hat, Inc.","name":"ubi8/ubi8-init","release":"45","summary":"Provides the latest release of the Red Hat Universal Base Image 8 Init for multi-service containers.","url":"https://access.redhat.com/containers/#/registry.access.redhat.com/ubi8/ubi8-init/images/8.1-45","usage":"Do not use directly. Use as a base image for daemons. Install chosen packages and 'systemctl enable' them.","vcs-ref":"52ea3738a708f584b0c148ebd44b510f79c36041","vcs-type":"git","vendor":"Red Hat, Inc.","version":"8.1"},"Containers":1,"Names":["registry.access.redhat.com/ubi8/ubi-init:latest"],"Digest":"sha256:3efb04161abb638d62263c7cfd8bc28e01eedcf4a3e2e7d0c2b8b47e57518cb9","Digests":["sha256:3efb04161abb638d62263c7cfd8bc28e01eedcf4a3e2e7d0c2b8b47e57518cb9","sha256:55b1322d85375c3fd7e59948f403eff6babfc8ad4395f5d4960e9a820195eea0"],"History":["0f5485af5398bb473b49fc4d41d54edb0f27f4c5de801645bddb27ef786885ea","\u003cmissing\u003e","\u003cmissing\u003e"]}]

0


127.000.000.001.56450-127.000.000.001.09999: POST /containers/create?name=testcontainers-checks-0a2c6165-1879-4318-bb29-30ab575a1b35 HTTP/1.1
accept: application/json
Content-Type: application/json
Content-Length: 1940
Host: localhost:9999
Connection: Keep-Alive
Accept-Encoding: gzip
User-Agent: okhttp/3.14.6

{"name":"testcontainers-checks-0a2c6165-1879-4318-bb29-30ab575a1b35","authConfig":null,"Hostname":null,"Domainname":null,"User":null,"AttachStdin":null,"AttachStdout":null,"AttachStderr":null,"PortSpecs":null,"Tty":null,"OpenStdin":null,"StdinOnce":null,"Env":null,"Cmd":["tail","-f","/dev/null"],"Healthcheck":null,"ArgsEscaped":null,"Entrypoint":null,"Image":"alpine:3.5","Volumes":{},"WorkingDir":null,"MacAddress":null,"OnBuild":null,"NetworkDisabled":null,"ExposedPorts":{},"StopSignal":null,"StopTimeout":null,"HostConfig":{"Binds":null,"BlkioWeight":null,"BlkioWeightDevice":null,"BlkioDeviceReadBps":null,"BlkioDeviceWriteBps":null,"BlkioDeviceReadIOps":null,"BlkioDeviceWriteIOps":null,"MemorySwappiness":null,"NanoCPUs":null,"CapAdd":null,"CapDrop":null,"ContainerIDFile":null,"CpuPeriod":null,"CpuRealtimePeriod":null,"CpuRealtimeRuntime":null,"CpuShares":null,"CpuQuota":null,"CpusetCpus":null,"CpusetMems":null,"Devices":null,"DeviceCgroupRules":null,"DeviceRequests":null,"DiskQuota":null,"Dns":null,"DnsOptions":null,"DnsSearch":null,"ExtraHosts":null,"GroupAdd":null,"IpcMode":null,"Cgroup":null,"Links":null,"LogConfig":null,"LxcConf":null,"Memory":null,"MemorySwap":null,"MemoryReservation":null,"KernelMemory":null,"NetworkMode":null,"OomKillDisable":null,"Init":null,"AutoRemove":true,"OomScoreAdj":null,"PortBindings":null,"Privileged":null,"PublishAllPorts":null,"ReadonlyRootfs":null,"RestartPolicy":null,"Ulimits":null,"CpuCount":null,"CpuPercent":null,"IOMaximumIOps":null,"IOMaximumBandwidth":null,"VolumesFrom":null,"Mounts":null,"PidMode":null,"Isolation":null,"SecurityOpt":null,"StorageOpt":null,"CgroupParent":null,"VolumeDriver":null,"ShmSize":null,"PidsLimit":null,"Runtime":null,"Tmpfs":null,"UTSMode":null,"UsernsMode":null,"Sysctls":null,"ConsoleSize":null},"Labels":{"org.testcontainers":"true","org.testcontainers.sessionId":"0a2c6165-1879-4318-bb29-30ab575a1b35"},"Shell":null,"NetworkingConfig":null}
127.000.000.001.09999-127.000.000.001.56450: HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Date: Thu, 14 May 2020 15:34:23 GMT
Content-Length: 143

{"cause":"no such image","message":"NewFromLocal(): unable to find a name and tag match for alpine in repotags: no such image","response":500}

@lburgazzoli (Author)

When using docker, the flow is the following:

127.000.000.001.34004-127.000.000.001.02376: GET /_ping HTTP/1.1
Host: localhost:2376
Connection: Keep-Alive
Accept-Encoding: gzip
User-Agent: okhttp/3.14.6


127.000.000.001.02376-127.000.000.001.34004: HTTP/1.1 200 OK
Api-Version: 1.40
Cache-Control: no-cache, no-store, must-revalidate
Docker-Experimental: false
Ostype: linux
Pragma: no-cache
Server: Docker/19.03.8 (linux)
Date: Thu, 14 May 2020 20:03:47 GMT
Content-Length: 2
Content-Type: text/plain; charset=utf-8

OK
127.000.000.001.34004-127.000.000.001.02376: GET /info HTTP/1.1
Host: localhost:2376
Connection: Keep-Alive
Accept-Encoding: gzip
User-Agent: okhttp/3.14.6


127.000.000.001.02376-127.000.000.001.34004: HTTP/1.1 200 OK
Api-Version: 1.40
Content-Type: application/json
Docker-Experimental: false
Ostype: linux
Server: Docker/19.03.8 (linux)
Date: Thu, 14 May 2020 20:03:47 GMT
Transfer-Encoding: chunked

9f4
{"ID":"4ATB:W3WQ:ZLXT:ZT3Y:G466:MX6I:34C5:QXP6:FQYK:EGCN:QGOO:MMWM","Containers":0,"ContainersRunning":0,"ContainersPaused":0,"ContainersStopped":0,"Images":0,"Driver":"overlay2","DriverStatus":[["Backing Filesystem","<unknown>"],["Supports d_type","true"],["Native Overlay Diff","true"]],"SystemStatus":null,"Plugins":{"Volume":["local"],"Network":["bridge","host","ipvlan","macvlan","null","overlay"],"Authorization":null,"Log":["awslogs","fluentd","gcplogs","gelf","journald","json-file","local","logentries","splunk","syslog"]},"MemoryLimit":true,"SwapLimit":true,"KernelMemory":true,"KernelMemoryTCP":true,"CpuCfsPeriod":true,"CpuCfsQuota":true,"CPUShares":true,"CPUSet":true,"PidsLimit":true,"IPv4Forwarding":true,"BridgeNfIptables":true,"BridgeNfIp6tables":true,"Debug":false,"NFd":26,"OomKillDisable":true,"NGoroutines":43,"SystemTime":"2020-05-14T22:03:47.607121725+02:00","LoggingDriver":"journald","CgroupDriver":"systemd","NEventsListener":0,"KernelVersion":"5.6.11-300.fc32.x86_64","OperatingSystem":"Fedora 32 (Thirty Two)","OSType":"linux","Architecture":"x86_64","IndexServerAddress":"https://index.docker.io/v1/","RegistryConfig":{"AllowNondistributableArtifactsCIDRs":[],"AllowNondistributableArtifactsHostnames":[],"InsecureRegistryCIDRs":["127.0.0.0/8"],"IndexConfigs":{"docker.io":{"Name":"docker.io","Mirrors":[],"Secure":true,"Official":true}},"Mirrors":[]},"NCPU":4,"MemTotal":12240330752,"GenericResources":null,"DockerRootDir":"/var/lib/docker","HttpProxy":"","HttpsProxy":"","NoProxy":"","Name":"moon","Labels":[],"ExperimentalBuild":false,"ServerVersion":"19.03.8","ClusterStore":"","ClusterAdvertise":"","Runtimes":{"runc":{"path":"runc"}},"DefaultRuntime":"runc","Swarm":{"NodeID":"","NodeAddr":"","LocalNodeState":"inactive","ControlAvailable":false,"Error":"","RemoteManagers":null},"LiveRestoreEnabled":true,"Isolation":"","InitBinary":"docker-init","ContainerdCommit":{"ID":"","Expected":""},"RuncCommit":{"ID":"fbdbaf85ecbc0e077f336c03062710435607dbf1","Expected":"fbdbaf85ecbc0e077f336c03062710435607dbf1"},"InitCommit":{"ID":"N/A","Expected":"fec3683b971d9c3ef73f284f176672c44b448662"},"SecurityOptions":["name=seccomp,profile=default","name=selinux"],"Warnings":["WARNING: API is accessible on http://localhost:2376 without encryption.\n         Access to the remote API is equivalent to root access on the host. Refer\n         to the 'Docker daemon attack surface' section in the documentation for\n         more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface"]}

0


127.000.000.001.34004-127.000.000.001.02376: GET /info HTTP/1.1
Host: localhost:2376
Connection: Keep-Alive
Accept-Encoding: gzip
User-Agent: okhttp/3.14.6


127.000.000.001.02376-127.000.000.001.34004: HTTP/1.1 200 OK
Api-Version: 1.40
Content-Type: application/json
Docker-Experimental: false
Ostype: linux
Server: Docker/19.03.8 (linux)
Date: Thu, 14 May 2020 20:03:47 GMT
Transfer-Encoding: chunked

9f4
{"ID":"4ATB:W3WQ:ZLXT:ZT3Y:G466:MX6I:34C5:QXP6:FQYK:EGCN:QGOO:MMWM","Containers":0,"ContainersRunning":0,"ContainersPaused":0,"ContainersStopped":0,"Images":0,"Driver":"overlay2","DriverStatus":[["Backing Filesystem","<unknown>"],["Supports d_type","true"],["Native Overlay Diff","true"]],"SystemStatus":null,"Plugins":{"Volume":["local"],"Network":["bridge","host","ipvlan","macvlan","null","overlay"],"Authorization":null,"Log":["awslogs","fluentd","gcplogs","gelf","journald","json-file","local","logentries","splunk","syslog"]},"MemoryLimit":true,"SwapLimit":true,"KernelMemory":true,"KernelMemoryTCP":true,"CpuCfsPeriod":true,"CpuCfsQuota":true,"CPUShares":true,"CPUSet":true,"PidsLimit":true,"IPv4Forwarding":true,"BridgeNfIptables":true,"BridgeNfIp6tables":true,"Debug":false,"NFd":26,"OomKillDisable":true,"NGoroutines":43,"SystemTime":"2020-05-14T22:03:47.769816772+02:00","LoggingDriver":"journald","CgroupDriver":"systemd","NEventsListener":0,"KernelVersion":"5.6.11-300.fc32.x86_64","OperatingSystem":"Fedora 32 (Thirty Two)","OSType":"linux","Architecture":"x86_64","IndexServerAddress":"https://index.docker.io/v1/","RegistryConfig":{"AllowNondistributableArtifactsCIDRs":[],"AllowNondistributableArtifactsHostnames":[],"InsecureRegistryCIDRs":["127.0.0.0/8"],"IndexConfigs":{"docker.io":{"Name":"docker.io","Mirrors":[],"Secure":true,"Official":true}},"Mirrors":[]},"NCPU":4,"MemTotal":12240330752,"GenericResources":null,"DockerRootDir":"/var/lib/docker","HttpProxy":"","HttpsProxy":"","NoProxy":"","Name":"moon","Labels":[],"ExperimentalBuild":false,"ServerVersion":"19.03.8","ClusterStore":"","ClusterAdvertise":"","Runtimes":{"runc":{"path":"runc"}},"DefaultRuntime":"runc","Swarm":{"NodeID":"","NodeAddr":"","LocalNodeState":"inactive","ControlAvailable":false,"Error":"","RemoteManagers":null},"LiveRestoreEnabled":true,"Isolation":"","InitBinary":"docker-init","ContainerdCommit":{"ID":"","Expected":""},"RuncCommit":{"ID":"fbdbaf85ecbc0e077f336c03062710435607dbf1","Expected":"fbdbaf85ecbc0e077f336c03062710435607dbf1"},"InitCommit":{"ID":"N/A","Expected":"fec3683b971d9c3ef73f284f176672c44b448662"},"SecurityOptions":["name=seccomp,profile=default","name=selinux"],"Warnings":["WARNING: API is accessible on http://localhost:2376 without encryption.\n         Access to the remote API is equivalent to root access on the host. Refer\n         to the 'Docker daemon attack surface' section in the documentation for\n         more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface"]}

0


127.000.000.001.34004-127.000.000.001.02376: GET /version HTTP/1.1
accept: application/json
Host: localhost:2376
Connection: Keep-Alive
Accept-Encoding: gzip
User-Agent: okhttp/3.14.6


127.000.000.001.02376-127.000.000.001.34004: HTTP/1.1 200 OK
Api-Version: 1.40
Content-Type: application/json
Docker-Experimental: false
Ostype: linux
Server: Docker/19.03.8 (linux)
Date: Thu, 14 May 2020 20:03:47 GMT
Content-Length: 726

{"Platform":{"Name":""},"Components":[{"Name":"Engine","Version":"19.03.8","Details":{"ApiVersion":"1.40","Arch":"amd64","BuildTime":"2020-03-16T00:00:00.000000000+00:00","Experimental":"false","GitCommit":"afacb8b","GoVersion":"go1.14rc1","KernelVersion":"5.6.11-300.fc32.x86_64","MinAPIVersion":"1.12","Os":"linux"}},{"Name":"containerd","Version":"1.3.3","Details":{"GitCommit":""}},{"Name":"runc","Version":"1.0.0-rc10+dev","Details":{"GitCommit":"fbdbaf85ecbc0e077f336c03062710435607dbf1"}}],"Version":"19.03.8","ApiVersion":"1.40","MinAPIVersion":"1.12","GitCommit":"afacb8b","GoVersion":"go1.14rc1","Os":"linux","Arch":"amd64","KernelVersion":"5.6.11-300.fc32.x86_64","BuildTime":"2020-03-16T00:00:00.000000000+00:00"}

127.000.000.001.34004-127.000.000.001.02376: GET /images/json?filter=alpine%3A3.5 HTTP/1.1
accept: application/json
Host: localhost:2376
Connection: Keep-Alive
Accept-Encoding: gzip
User-Agent: okhttp/3.14.6


127.000.000.001.02376-127.000.000.001.34004: HTTP/1.1 200 OK
Api-Version: 1.40
Content-Type: application/json
Docker-Experimental: false
Ostype: linux
Server: Docker/19.03.8 (linux)
Date: Thu, 14 May 2020 20:03:47 GMT
Content-Length: 3

[]

127.000.000.001.34004-127.000.000.001.02376: POST /images/create?fromImage=alpine%3A3.5 HTTP/1.1
accept: application/octet-stream
Transfer-Encoding: chunked
Host: localhost:2376
Connection: Keep-Alive
Accept-Encoding: gzip
User-Agent: okhttp/3.14.6

4
null
0


127.000.000.001.02376-127.000.000.001.34004: HTTP/1.1 200 OK
Api-Version: 1.40
Content-Type: application/json
Docker-Experimental: false
Ostype: linux
Server: Docker/19.03.8 (linux)
Date: Thu, 14 May 2020 20:03:50 GMT
Transfer-Encoding: chunked

35
{"status":"Pulling from library/alpine","id":"3.5"}

@lburgazzoli (Author)

My understanding is that the call

GET /images/json?filter=alpine%3A3.5

is not properly handled by podman, as it does not apply the given filter.
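This can be checked with curl against both endpoints from the traces above (same query string in both cases):

$ curl -s 'http://localhost:9999/images/json?filter=alpine%3A3.5'   # podman: returns the unrelated ubi-init image
$ curl -s 'http://localhost:2376/images/json?filter=alpine%3A3.5'   # docker: correctly returns []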

@mheon added the HTTP API label Jun 2, 2020
@mheon (Member) commented Jun 2, 2020

@baude @jwhonce PTAL

@github-actions bot commented Jul 4, 2020

A friendly reminder that this issue had no activity for 30 days.

@rhatdan (Member) commented Jul 6, 2020

@baude @jwhonce @mheon Any movement on this?

@rhatdan (Member) commented Jul 27, 2020

@lburgazzoli Is this still an issue? I believe this is fixed in the current release. At this point I am not sure what this issue is covering.

@psakar (Contributor) commented Jul 27, 2020

Possibly fixed by #6878 (currently merged only to the master branch)

@rhatdan (Member) commented Sep 10, 2020

Closing as I believe this is fixed.

@heyakyra commented Jan 5, 2022

Sorry to necro this ticket, but this seems like the most relevant discussion I've seen.

@rhatdan

SELinux will block this access as well. Giving a process access to the docker.sock is the most dangerous thing you can do. You should really run in --privileged mode if you are going to do this, so people understand that the container has root access on your system and no confinement.

Have you looked into using buildah to build while running inside of a container?

Is --security-opt label=disable safer than --privileged as @mheon suggests?

I'm looking to be able to run https://github.com/nginx-proxy/nginx-proxy with podman. Shouldn't it be possible to do this in a safe way?

I think it would involve a systemd service which invokes something like podman run [--privileged|--security-opt label=disable] --name=nginx-proxy -p=80:80 -p 443:443 -v /etc/nginx/certs:/etc/nginx/certs -v /etc/nginx/vhost.d:/etc/nginx/vhost.d -v /etc/nginx/html:/usr/share/nginx/html -v /var/run/podman/podman.sock:/tmp/docker.sock:ro docker.io/jwilder/nginx-proxy:latest
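For the sake of discussion, a minimal unit might look like this (sketch only, untested; it assumes the podman.socket systemd unit is enabled so that /var/run/podman/podman.sock exists):

# /etc/systemd/system/nginx-proxy.service
[Unit]
Description=nginx-proxy running under podman
Requires=podman.socket
After=network-online.target podman.socket

[Service]
ExecStart=/usr/bin/podman run --rm --security-opt label=disable \
    --name=nginx-proxy -p 80:80 -p 443:443 \
    -v /etc/nginx/certs:/etc/nginx/certs \
    -v /etc/nginx/vhost.d:/etc/nginx/vhost.d \
    -v /etc/nginx/html:/usr/share/nginx/html \
    -v /var/run/podman/podman.sock:/tmp/docker.sock:ro \
    docker.io/jwilder/nginx-proxy:latest
ExecStop=/usr/bin/podman stop nginx-proxy

[Install]
WantedBy=multi-user.target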

@rhatdan (Member) commented Jan 5, 2022

No, there is nothing safe about giving a container access to the docker.sock on the host. Period.

https://danwalsh.livejournal.com/78373.html
https://opensource.com/business/14/10/docker-user-rights-fedora

I believe access to the docker.sock with docker running as root, or the podman.sock with Podman running as root, are two of the most dangerous things you can do on a Linux system. Worse than giving a process SUDO without root.

@heyakyra commented Jan 5, 2022

Thank you for putting it clearly, and those links are very helpful. They both touch on where I was going with my second question, "I'm looking to be able to run https://github.com/nginx-proxy/nginx-proxy with podman. Shouldn't it be possible to do this in a safe way?"

Usually people are doing this because they want the container to do benign operations, like list which containers are on the system, or look at the container logs. But Docker does not have a nice RBAC system, you basically get full access or no access. I choose to default to NO ACCESS.

and in the second

Docker currently does not have any authorization controls. If you can talk to the docker socket or if docker is listening on a network port and you can talk to it, you are allowed to execute all docker commands.

I was trying to figure out how safe this is with docker, and whether the severe danger comes only when using podman. Given the above, I have an additional question: is read-only access just as dangerous? If not, it seems that should be all that is needed, no? Forgive my ignorance if that doesn't make sense; I am not very familiar with SELinux or podman (yet).

@rhatdan (Member) commented Jan 6, 2022

With Podman you could potentially --volume /var/lib/containers:/var/lib/containers:ro, and be able to list containers.
You would have to disable SELinux to make this work.

podman run --security-opt label=disable -v /var/lib/containers:/var/lib/containers:ro quay.io/podman/stable podman ps -a

Error: error opening database /var/lib/containers/storage/libpod/bolt_state.db: open /var/lib/containers/storage/libpod/bolt_state.db: read-only file system

Looks like we would need to do more work to make this possible.

@rhatdan (Member) commented Jan 6, 2022

@flouthoc WDYT

# cat /tmp/Containerfile
from fedora
run dnf -y install podman
# podman build -t dan /tmp
...
# podman run --security-opt label=disable -v /var/lib/containers:/var/lib/containers:O dan podman ps -a
time="2022-01-06T15:01:16Z" level=warning msg="\"/\" is not a shared mount, this could cause issues or missing mounts with rootless containers"
Error: 'overlay' is not supported over overlayfs, a mount_program is required: backing file system is unsupported for this graph driver

Where is the upper layer of the overlay mounted?

Should we move this to a tmpfs /dev/shm?

@rhatdan (Member) commented Jan 6, 2022

@giuseppe WDYT?

@heyakyra commented Jan 6, 2022

Why --volume /var/lib/containers:/var/lib/containers:ro instead of a read-only mount of the socket itself? It sounds like this would not be compatible with nginx-proxy.

Looks like we would need to do more work, to make this possible.

Thank you, this is hopefully what many of us need. Should I file a new ticket, or would you like to, since you have a better grasp on the needs/scope/limitations?

@rhatdan (Member) commented Jan 6, 2022

Yes, create an issue asking for read-only access to container storage. I think there is potential to give you this with overlay mounts.

@flouthoc (Collaborator) commented Jan 6, 2022

@flouthoc WDYT

# cat /tmp/Containerfile
from fedora
run dnf -y install podman
# podman build -t dan /tmp
...
# podman run --security-opt label=disable -v /var/lib/containers:/var/lib/containers:O dan podman ps -a
time="2022-01-06T15:01:16Z" level=warning msg="\"/\" is not a shared mount, this could cause issues or missing mounts with rootless containers"
Error: 'overlay' is not supported over overlayfs, a mount_program is required: backing file system is unsupported for this graph driver

Where is the upper layer of the overlay mounted?

Should we move this to a tmpfs /dev/shm?

This is a loop: we already create the overlay in storage. The reason is to make it easier to maintain and to have common cleanup for all the overlays.

I think we could add a tmpfs in between when the source is already on an overlay. Would that make it work? @giuseppe @rhatdan WDYT?

@heyakyra commented Jan 6, 2022

create an issue asking for read-only access to container storage

I did my best; thanks for all the guidance and explanations.

@rhatdan (Member) commented Jan 6, 2022

I would be fine with that, but we would be limiting the size of the overlay.

@giuseppe (Member) commented Jan 7, 2022

This is a loop: we already create the overlay in storage. The reason is to make it easier to maintain and to have common cleanup for all the overlays.

I think we could add a tmpfs in between when the source is already on an overlay. Would that make it work? @giuseppe @rhatdan WDYT?

Now we can specify upperdir and workdir. Wouldn't that help in this case as well?
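i.e. something along these lines (untested; the upper/work paths are made up for illustration):

$ mkdir -p /tmp/ctr-upper /tmp/ctr-work
$ podman run --security-opt label=disable \
      -v /var/lib/containers:/var/lib/containers:O,upperdir=/tmp/ctr-upper,workdir=/tmp/ctr-work \
      quay.io/podman/stable podman ps -a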

@flouthoc (Collaborator) commented Jan 7, 2022

@giuseppe ah yes, once #12712 gets merged this should get easier.

@github-actions bot added the locked - please file new issue/PR label Sep 21, 2023
@github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023