In some build setups, the build is triggered by Jenkins, which itself runs in a Docker container. For testcontainers this means that a dockerized Jenkins runs unit tests which in turn start Docker containers.
To be able to start a Docker container from within a Docker container, the currently proposed "solution" (it is really just a workaround) is to mount the file /var/run/docker.sock and the binary /usr/bin/docker into the Jenkins container.
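For illustration, this is roughly what such a Jenkins invocation looks like (image name and mount paths are examples, simplified from a real setup):

```sh
# Mount the host's Docker socket and client binary into the Jenkins
# container so processes inside it talk to the host's Docker daemon.
# Image name and paths are illustrative.
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  jenkins
```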
This works fine with testcontainers: it detects a working docker.sock and proceeds to start containers. However, if those containers expose ports, testcontainers waits for each port to become available. In the scenario described above it polls localhost:port whenever Docker communication via a UNIX socket has been detected, implying that if a socket is used, the Docker daemon must also be running on localhost.
This implication does not hold when doing Docker-in-Docker via a mounted socket file: the socket belongs to the daemon on the Docker host, so the containers started through it are siblings of the Jenkins container and publish their ports on that host, not on localhost inside the Jenkins container.
As a result, testcontainers times out waiting for the exposed port to become available.
Our current workaround is not to mount the socket and the binary, but instead to install the Docker client into the container and pass the environment variables DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH to the Jenkins container.
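Sketched out, that alternative looks roughly like this (host, port and certificate paths are placeholders):

```sh
# Point the Docker client (and thereby testcontainers) at a remote
# daemon via environment variables instead of a mounted socket.
# All values below are placeholders.
docker run -d \
  -e DOCKER_HOST=tcp://docker-host.example:2376 \
  -e DOCKER_TLS_VERIFY=1 \
  -e DOCKER_CERT_PATH=/certs \
  -v /path/to/client-certs:/certs:ro \
  jenkins
```

With DOCKER_HOST set to a tcp:// URL, testcontainers can derive the host to poll from that URL instead of assuming localhost, which is why this variant does not run into the timeout.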
Would it be possible to implement some kind of switch that can influence the "wait for port to become available" behavior?
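For example (the variable name below is purely hypothetical, only meant to illustrate the idea), an override that tells testcontainers which host to poll for exposed ports:

```sh
# Hypothetical switch, not an existing feature: override the host that
# testcontainers polls when Docker is reached via a UNIX socket.
TESTCONTAINERS_HOST_OVERRIDE=docker-host.example mvn test
```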