Kubernetes Support Revisited #1135

Closed · deas opened this issue Jan 12, 2019 · 82 comments

@deas commented Jan 12, 2019

Testcontainers depends on Docker, and raw Docker is an issue in a Kubernetes-managed environment (e.g. Jenkins X). It ends up either using the /var/run/docker.sock escape hatch or dind. Both approaches have issues. Would you consider aiming at adding native Kubernetes support? Native Kubernetes support would even make Docker optional in a Kubernetes environment.

See also: #449

@jstrachan commented:

It's easy to create containers as Pods in Kubernetes using a Java client for Kubernetes, like this one:
https://github.com/fabric8io/kubernetes-client
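
For illustration, a minimal sketch of what starting a throwaway Pod with the fabric8 client could look like (the image, namespace and exact builder calls below are assumptions based on the fabric8 6.x API, not anything Testcontainers provides today):

import java.util.concurrent.TimeUnit;

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class PodSketch {
    public static void main(String[] args) {
        // Hypothetical sketch: start a throwaway Redis pod for a test, wait for it, then delete it.
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            Pod pod = new PodBuilder()
                    .withNewMetadata().withName("redis-it").endMetadata()
                    .withNewSpec()
                        .addNewContainer()
                            .withName("redis")
                            .withImage("redis:6-alpine")
                            .addNewPort().withContainerPort(6379).endPort()
                        .endContainer()
                    .endSpec()
                    .build();

            client.pods().inNamespace("default").resource(pod).create();
            client.pods().inNamespace("default").withName("redis-it")
                  .waitUntilReady(60, TimeUnit.SECONDS);

            // ... run tests against the pod here ...

            client.pods().inNamespace("default").withName("redis-it").delete();
        }
    }
}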

@bsideup (Member) commented Jan 12, 2019

Hi @deas & @jstrachan,

First of all, I'm sure @rnorth and @kiview have something to add and might have an opposite opinion. Please don't read my answer as a strong position of the whole team; it is just my perspective on it :)


We, of course, want to support as many platforms as possible! But we also need to keep focus and understand the problem we're solving.

Native k8s support is something we hear quite often. But does it actually solve the problem? Or is it the problem?

Just a theoretical example: if we focus on integrating Testcontainers with runc (without Docker daemon) and root-less containers, or Kata Containers, it will work locally, on CI environments, and on k8s. Because by doing it we will solve the problem of a Docker daemon requirement.

But if we add k8s support (worth mentioning that k8s is not a container engine like Docker, but an orchestrator), we will have to support two completely different ways of spinning up containers in one code base.
If somebody wants to volunteer to contribute & support it, that could be an option, but so far nobody has, which means that our small team would have to develop & support both "ways".

So, do we solve a problem by supporting k8s APIs or add another one?

@jstrachan

> it’s easy to create containers as Pods in kubernetes using a java client for kubernetes

That's true. But starting containers is a very small part of Testcontainers.
We could adapt the DSL to some extent (although we do use some Docker-specific APIs in a few places). But the number of limitations is huge: networks can't be done the Docker way, file mounting will not work as expected (or not at all), and there are many more tiny details which are hidden behind Testcontainers, accumulated over years of development.
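
For example, networks and file copying in the current (Docker-backed) Java DSL look roughly like this; a minimal sketch with arbitrary image and file names, illustrating features that are expressed in terms of Docker concepts rather than k8s objects:

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.Network;
import org.testcontainers.utility.DockerImageName;
import org.testcontainers.utility.MountableFile;

public class DockerSpecificFeatures {
    public static void main(String[] args) {
        // A shared Docker network plus a file copied into the container before startup --
        // both map directly onto Docker API calls.
        Network network = Network.newNetwork();

        try (GenericContainer<?> db = new GenericContainer<>(DockerImageName.parse("postgres:13-alpine"))
                .withNetwork(network)
                .withNetworkAliases("db")
                .withEnv("POSTGRES_PASSWORD", "test")
                .withCopyFileToContainer(
                        MountableFile.forClasspathResource("init.sql"),
                        "/docker-entrypoint-initdb.d/init.sql")
                .withExposedPorts(5432)) {
            db.start();
            // ... run tests against db.getHost() / db.getMappedPort(5432) ...
        }
    }
}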


To keep the conversation going, I suggest we first define the problem and find how we can quickly solve it :)

@kiview (Member) commented Jan 12, 2019

I agree with @bsideup regarding splitting the actual issue at hand:

  1. Supporting container engines beside Docker
  2. Supporting orchestrators

1 is definitely something I'd like to keep in mind when going forward with Testcontainers (at least having an internal software architecture that allows for other engines). As far as I understand, this would solve the issues with Jenkins X, wouldn't it?

2 is something I could see as a different module (like we already have now with docker-compose support), but probably not something that will be built into Testcontainers core, like the container engine abstraction.

@deas (Author) commented Jan 13, 2019

Ok, so here is my story.

Testcontainers was saving my ass wrt integration testing, which for me requires a bunch of containers (on the dev box). Unfortunately, Kubernetes introduced new challenges. I have only had a quick glimpse, and I know pretty much nothing about Testcontainers other than that it uses docker-compose under the covers.

At first I thought it should pretty much boil down to swapping docker-compose with kubernetes equivalent calls. Well in fact, they already made docker-compose play with kubernetes: https://github.com/docker/compose-on-kubernetes . Helm may be another alternative to spin things up.

Other than spinning things up, I guess you guys do various Docker calls to check the state of containers and do other things, right? Hence, it appears there is other functionality which would need to be implemented if we are aiming at parity with docker-compose.

@bsideup (Member) commented Jan 13, 2019

Testcontainers does not use Docker Compose at all :) We have a module for it, but the core and every other container are implemented with Docker API and only it :)
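
For reference, a minimal sketch of the core Java DSL (with an arbitrary image); everything below is translated into Docker API calls under the hood:

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class CoreApiSketch {
    public static void main(String[] args) {
        // create/start/inspect all happen via the Docker API, without any Compose involvement
        try (GenericContainer<?> redis = new GenericContainer<>(DockerImageName.parse("redis:6-alpine"))
                .withExposedPorts(6379)) {
            redis.start();
            System.out.println("Redis reachable at " + redis.getHost() + ":" + redis.getMappedPort(6379));
        }
    }
}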

@kiview (Member) commented Jan 13, 2019

@deas I kind of like your narrative here:

> At first I thought it should pretty much boil down to swapping docker-compose with kubernetes equivalent calls. Well in fact, they already made docker-compose play with kubernetes: https://github.com/docker/compose-on-kubernetes . Helm may be another alternative to spin things up.

This is exactly something I have wanted to look into for some time now (but sadly haven't had the time yet): a common abstraction between Docker Compose, Docker Swarm Mode and Kubernetes, relying on the existing abstraction built into recent Docker Compose versions.

As @bsideup has mentioned, the current Docker Compose support is already implemented as its own module and we might be able to come up with something similar for the other orchestrators.

However, Docker Compose support already doesn't have real feature parity compared to using Testcontainers directly, since we are adding an additional layer of indirection, so I would expect the same for other orchestrator implementations.

@deas (Author) commented Jan 13, 2019

Don't want to open a can of worms, but is it actually still reasonable to work on the container/pod/microvm level in Testcontainers?

@kiview (Member) commented Jan 13, 2019

I don't really get this question, could you clarify a bit more?

People have started to use Testcontainers for all kinds of use cases and we are really happy to see this. We also try to support as many of them as reasonably possible.

However, we also have to look at the history and main use case of Testcontainers, which is IMO (the opinion of the other team members might differ) white box integration testing. This is where Testcontainers is strongest and where I also see the most development happening (and it's also the most common way people start to use Testcontainers).

Then there is an increasing number of users who use Testcontainers for black box integration testing as well as for system testing or acceptance testing. This is also great and definitely a desired use case. However, the maximum size of those tests and systems should be such that they are still runnable on a local developer machine. That's how we mainly think about Testcontainers: a tool to give developers fast feedback while developing, with the added benefit of also having the same tests executed in your CI environment without any additional setup or environment duplication necessary (and we do our best here to support as many CI systems as possible).

I'm not sure if this answers your question, but it explains why I generally struggle to see the need for actual Kubernetes integration. Still we are generally open to ideas and would greatly appreciate contributions from the Kubernetes community in order to tackle those topics.

Regarding your other question:

> Other than spinning things up, I guess you guys do various docker calls to check the state of containers and do other things, right? Hence, appears there is other functionality which should be implemented if we are aiming at parity with docker-compose.

We are using the Docker API for multiple different features, like file mounting/copying, executing commands in containers, networking, etc.
Another big part of the Testcontainers UX are WaitStrategies (blocking test execution until the application in the container is ready, not just until the container is running), but since they can also work in a black-box way, this concept could probably be adapted for orchestrators.
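
To illustrate, a wait strategy in the Java DSL looks roughly like this (a minimal sketch; the image and endpoint are arbitrary):

import java.time.Duration;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.utility.DockerImageName;

public class WaitStrategySketch {
    public static void main(String[] args) {
        // Block until the application answers HTTP 200, not merely until the container is running.
        try (GenericContainer<?> web = new GenericContainer<>(DockerImageName.parse("nginx:1.21-alpine"))
                .withExposedPorts(80)
                .waitingFor(Wait.forHttp("/").forStatusCode(200))
                .withStartupTimeout(Duration.ofSeconds(60))) {
            web.start();
            System.out.println("Ready at http://" + web.getHost() + ":" + web.getMappedPort(80));
        }
    }
}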

@deas (Author) commented Jan 14, 2019

Again, I have just been jumping into Testcontainers, so you can be sure I am missing a bit.

My use case was: run Maven integration tests against a composite of services. I dropped in a docker-compose.yaml and everything was fine on my dev box. I use that very same file to spin up the composite and work on the system locally. I was very surprised how easily my problems were solved so far.

Hence, I was wondering why there is a need to deal with the parts (containers/pods/microVMs) of that composite. And in fact WaitStrategies and things along those lines fall into that category. I'm still not sure whether some of that functionality could also be covered by a tool at the composite level (e.g. docker-compose or Helm).

@postulka commented:

We would also like to see a module for the Kubernetes orchestrator, similar to the Docker Compose module ... maybe support for Helm? Our use case is that we want to run some integration/automation tests, and for that we need to spin up quite a few different services, some of which can be quite heavy (not all the processes are microservices). Depending on the test, the environment we need to spin up can get quite complex, and therefore it is not very feasible to run it on a single machine. Having support for Kubernetes would resolve this for us, because Kubernetes would scale the cluster up and down and distribute the resources as needed.

@tonicsoft commented:

For our use case (GitLab CI builds on AWS using Kubernetes runners with Docker installed) I believe it should be enough to add the --network=host CLI parameter to all Docker calls (or the equivalent, if the CLI is not used by Testcontainers). Is this something that is currently possible, or that would be less controversial to add? Perhaps it merits a separate feature request?
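
For what it's worth, something along these lines can already be expressed per container in the Java DSL; a minimal, untested sketch (host networking comes with its own caveats and only works on Linux hosts):

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class HostNetworkSketch {
    public static void main(String[] args) {
        // Ask Docker for host networking instead of the default bridge network.
        try (GenericContainer<?> redis = new GenericContainer<>(DockerImageName.parse("redis:6-alpine"))
                .withNetworkMode("host")) {
            redis.start();
            // With host networking, the service listens directly on the host, e.g. localhost:6379.
        }
    }
}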

@jzabroski commented Dec 25, 2019

> Just a theoretical example: if we focus on integrating Testcontainers with [runc](https://github.com/opencontainers/runc) (without Docker daemon) and root-less containers, or Kata Containers, it will work locally, on CI environments, and on k8s. Because by doing it we will solve the problem of a Docker daemon requirement.

No, you got it wrong. Don't focus on integrating anything just yet. Focus on designing an API that people can use to plug-in whatever integrations they want. Lay out a roadmap so that people understand what needs to be done. Then they will do it. The average person probably wants to help, but doesn't even know where to look. Adding a "help-wanted" tag does nothing to facilitate, mentor, and advocate for that average person to go help you. What really is needed is not help, but guidance/resources to get started. Otherwise, you might as well tag this "serenity-now" for those of us with brittle integration tests and searching for solutions.

And this sort of roadmap should be done at a level higher than testcontainers-java or testcontainers-go. It's literally the blueprint for the whole project.

@imochurad commented:

Our company would like to use TestContainers, but lack of Kubernetes support is an issue.

@bountin commented Apr 25, 2020

I'd be interested in what the expectation behind "Kubernetes support" is: is it just about scheduling a pod on a k8s cluster, similar to what Testcontainers does now with running a Docker container? Or is it more about deploying and removing arbitrary k8s objects (e.g. secrets) in order to, for instance, test k8s integrations?

@bert-laverman commented Aug 27, 2020

Kubernetes support will be very important once k8s 1.18 becomes mainstream, because it will by default no longer offer Docker access to the underlying container platform. Instead, it will go directly to containerd. So any build platform on k8s will not be able to easily run Maven jobs that depend on Testcontainers.

@stormobile commented:

The problem with Testcontainers and K8s is that Testcontainers gives developers an easy way to couple deployment entities together when there is no actual need or requirement to do so. It puts quite a lot of strain on infrastructure/ops teams, as Testcontainers adoption forces them to deploy larger VMs/workers that cost more and are harder to utilize efficiently. For example, some of our teams run a test scenario where they need:

  1. A service written in Java
  2. Kafka
  3. Oracle

They are hooked on Testcontainers, which spawns everything inside the same root container (or Pod in the case of K8s), thus making scheduling harder and driving vertical node-size scaling (instead of horizontal infrastructure scaling). This becomes even more of a problem with managed cloud runners like the ones provided by GitHub/GitLab, as they are small (c4r8 tops).

It would be great if Testcontainers adopted the benefits of modern orchestration (mainly K8s) by providing an abstraction layer that preserves the simplicity of configuration/test delivery for developers but adds the ability to decouple deployment entities. K8s already features service discovery and isolation (including the new hierarchical namespaces from 1.19) that should make this possible.

@ralph089 commented Aug 28, 2020

I have found an interesting project that addresses this problem. Has anyone tried this out yet? https://github.com/JeanBaptisteWATENBERG/junit5-kubernetes

Nevertheless, Kubernetes integration in the Testcontainers project itself would be better suited, because a developer could then work locally via Docker and in the CI environment via Kubernetes without having to use different libraries/APIs.

@rnorth (Member) commented Aug 28, 2020

Allowing Testcontainers to work atop a Kubernetes or Docker backend has long been something we would quite like to be able to do.

The question is time. We have a huge amount of other work that we need to do, and Kubernetes support is only a benefit to a subset of users. @bsideup, @kiview and I work on this almost entirely in our personal time, which is limited.

The extent of changes would be so big that I don't think this is something we can throw open to the community to work on either. Thinking about the PR review volume, plus setting up/owning test infrastructure, and then support, it would still place a very heavy burden on us as the core team.

Realistically I think the only practical way forward would be if a company, or group of companies, would sponsor development of this feature. If it's a feature worth having, then I'd hope that this would be a reasonable proposal, and it's one we could explore further. Otherwise, I'm afraid it's likely going to remain as one of those things that we'd like to do, but don't have the capacity for.

@jzabroski commented Aug 28, 2020

> For example some of our teams run the test scenario where they need:
>
>   1. A service written in Java
>   2. Kafka
>   3. Oracle
>
> This becomes even more of a problem with managed cloud runners like the ones provided by Github/Gitlab as they are small (c4r8 tops).

So... in your example, if you're using GitLab or GitHub, how do you spin up a CI build that launches Oracle, Kafka and your java service? I didn't understand how Kubernetes support solves your problem.

@stormobile commented:

> So... in your example, if you're using GitLab or GitHub, how do you spin up a CI build that launches Oracle, Kafka and your java service? I didn't understand how Kubernetes support solves your problem.

We have to run big workers in CI, but Testcontainers could provide the abstraction layer to specify how to spawn services in a target K8s cluster (much like how it is done with Compose). All of this could be done in CI itself with dynamic environment preparation (with YAML, Helm and all the other possible K8s deployment options), but the thing about the Testcontainers approach is that all the same stuff is done in code, which is the main value of the project (not the ability to spawn containers itself).

@jzabroski commented:

Is the k8s cluster a virtual setup within the CI build server, or are you deploying the CI build to a k8s cluster with physical nodes? Based on your answer, you didn't answer my question. It matters in terms of who the customers are for your feature request. If the issue is truly that you can't properly integrate everything in the same CI build, then that's "Enterprise solution" territory and not an open source project.

That said, you can look at C# and .net: Micronetes and TestEnvironment.Docker

These are a bit closer to the tech stack you want.

@Asgoret commented Sep 2, 2020

Hi to all!
It may be little out of scope, so I'm sorry for that ;D
My developers came to me with a problem: Testcontainers can't connect to docker.sock due to security restrictions. We have vanilla Jenkins as a CI/CD tool and use DinD on bare-metal slaves, and we are also moving to generated slaves in an OKD/K8s cluster. Is there any way to use Testcontainers in a secure mode, without giving the Testcontainers containers high privileges?

cc @rnorth @jstrachan @bsideup

@rnorth (Member) commented Sep 2, 2020

@Asgoret it's not an out of scope question - that's the topic of this issue 😄 . Basically, Testcontainers requires a docker daemon in order to be able to launch containers. We don't have a way to launch containers unless you can provide some kind of docker daemon.

@czunker commented Aug 19, 2021

Hi,

Thanks a lot. The issues #700 and #1135 helped me get Testcontainers running in Jenkins with dind-rootless inside Kubernetes.
In the hope that this might help someone else, here is a stripped-down example of a Jenkins build job pod definition:

apiVersion: v1
kind: Pod
spec:
  securityContext:
    fsGroup: 1000
    runAsGroup: 1000
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: dind
      image: docker:20.10-dind-rootless
      imagePullPolicy: Always
      env:
        - name: DOCKER_TLS_CERTDIR
          value: ""
      securityContext:
        # still needed: https://docs.docker.com/engine/security/rootless/#rootless-docker-in-docker
        privileged: true
        readOnlyRootFilesystem: false
    - name: gradle
      image: openjdk:11
      imagePullPolicy: Always
      env:
        - name: DOCKER_HOST
          value: tcp://localhost:2375
        # needed to get it working with dind-rootless
        # more details: https://www.testcontainers.org/features/configuration/#disabling-ryuk
        - name: TESTCONTAINERS_RYUK_DISABLED
          value: "true"
      command:
        - cat
      tty: true
      volumeMounts:
        - name: docker-auth-cfg
          mountPath: /home/ci/.docker
      securityContext:
        capabilities:
          drop:
            - ALL
        allowPrivilegeEscalation: false
        privileged: false
        readOnlyRootFilesystem: false
  volumes:
    - name: docker-auth-cfg
      secret:
        secretName: docker-auth

@DRoppelt commented Aug 24, 2021

@czunker

Your file, especially the hint to https://www.testcontainers.org/features/configuration/#disabling-ryuk, has helped a lot.

I think I got it now, based on your sample. But instead of disabling Ryuk completely, the Docker socket can be made available to the containers that need it, so Ryuk keeps working.

The disadvantage of TESTCONTAINERS_RYUK_DISABLED=true is that you cannot reuse a pod for another job, as the containers spawned by Testcontainers are still running. This only works if the pod is thrown away each time.

> If your environment already implements automatic cleanup of containers after the execution, but does not allow starting privileged containers, you can turn off the Ryuk container by setting TESTCONTAINERS_RYUK_DISABLED environment variable to true.

In our setup, we want to re-use pods (with a limit of idling X minutes), since all builds download a bunch of stuff via maven.

By running top within the "dind" container, I found this:

Mem: 7732036K used, 8666016K free, 1508K shrd, 863852K buff, 5010748K cached
CPU:   0% usr   0% sys   0% nic 100% idle   0% io   0% irq   0% sirq
Load average: 1.43 1.42 0.90 3/730 2782
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
   88    85 rootless S    1535m   9%   0   0% dockerd --host=unix:///run/user/1000/docker.sock --host=tcp://0.0.0.0:2376 --tlsverify --tlscacert /certs/server/ca.pem --tlscert /certs/server/cert.pem --tlskey /certs/server/key.pem
   96    88 rootless S    1297m   8%   1   0% containerd --config /run/user/1000/docker/containerd/containerd.toml --log-level info
   59     1 rootless S     695m   4%   1   0% /proc/self/exe --net=vpnkit --mtu=1500 --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run -p 0.0.0.0:2376:2376/tcp docker-init -- dockerd --host=unix:///run/user/1000/docker.sock --ho
    1     0 rootless S     694m   4%   1   0% rootlesskit --net=vpnkit --mtu=1500 --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run -p 0.0.0.0:2376:2376/tcp docker-init -- dockerd --host=unix:///run/user/1000/docker.sock --host=
   68     1 rootless S     127m   1%   1   0% vpnkit --ethernet /tmp/rootlesskit148726098/vpnkit-ethernet.sock --mtu 1500 --host-ip 0.0.0.0
 2776     0 rootless S     1660   0%   1   0% /bin/sh
 2782  2776 rootless R     1588   0%   0   0% top
   85    59 rootless S      992   0%   1   0% docker-init -- dockerd --host=unix:///run/user/1000/docker.sock --host=tcp://0.0.0.0:2376 --tlsverify --tlscacert /certs/server/ca.pem --tlscert /certs/server/cert.pem --tlskey /certs/server/key.pem

This shows us a socket located at /run/user/1000/docker.sock.

Which means that:

a) adding an emptyDir volume at /run/user/1000 creates a shared folder at that location that "dind" can share with the other containers
b) setting TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE=/run/user/1000/docker.sock (see https://www.testcontainers.org/features/configuration/#customizing-docker-host-detection) makes this socket available for Ryuk to manage containers.

Also, in my environment the "just disable TLS for Docker" approach was not acceptable, as I had already dared to ask to run privileged containers. With the following, TLS can be used without issues:

a) a shared emptyDir volume at /certs, as dind-rootless will write /certs/*, especially the client-relevant certs at /certs/client/*
b) DOCKER_HOST=tcp://localhost:2376 (for the client), as the TLS-encrypted port is 2376, not 2375
c) DOCKER_TLS_VERIFY=1 (for the client), as otherwise certs are not used when talking to the daemon
d) DOCKER_CERT_PATH=/certs/client (for the client), as otherwise the certs are looked up somewhere in the user's home directory

@masinger commented Aug 27, 2021

I'm currently trying to hide the static com.github.dockerjava dependency behind a custom facade. Then it would be possible to implement modular container providers (e.g. Kubernetes, containerd, ...), which could even be maintained by independent projects.

Would that be a feasible approach?
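
To make the idea concrete, a hypothetical facade could start out as small as this (names and methods are purely illustrative and not part of Testcontainers), with a Docker-backed implementation delegating to dockerjava and a Kubernetes-backed one translating the same spec into a Pod plus a Service:

import java.util.List;
import java.util.Map;

// Purely illustrative sketch of a provider facade; none of these types exist in Testcontainers.
public interface ContainerProvider {

    RunningContainer start(ContainerSpec spec);

    // A deliberately small spec: image, exposed ports, env.
    // Everything engine-specific stays inside the implementation module.
    record ContainerSpec(String image, List<Integer> exposedPorts, Map<String, String> env) { }

    interface RunningContainer extends AutoCloseable {
        String host();
        int mappedPort(int containerPort);
        String logs();
        @Override
        void close();
    }
}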

@kiview (Member) commented Aug 27, 2021

I would not expect that you find a serviceable and working abstraction based on the dockerjava abstractions (which are ultimately the Docker API), that transparently works for all the possible providers for all features that are provided by Testcontainers.

However, feel free to explore the approach further and share your findings 🙂

@masinger commented:

@kiview Yeah, this would definitely be an issue. But I think it would be acceptable to let Testcontainers itself define the set of required and therefore supported container functionalities.

@joyrex2001 commented:

> I'm currently trying to hide the static com.github.dockerjava dependency behind a custom facade. Then it would be possible to implement modular container providers (e.g. Kubernetes, containerd, ...), which could even be maintained by independent projects.

I have done something similar, only one layer down, implementing the Docker API instead (kubedock). I got reasonable results and solved a few of the challenges you will encounter. Maybe this is helpful for your project too :-)

@masinger commented Aug 29, 2021

I was able to make some progress. Even though I wasn't able to implement/test every feature provided by Testcontainers, most of the basic functionalities and tests are working right now (see gif below).

Some findings

  • ExposedPort and some other network-related functionalities currently depend on a K8s NodePort service, created for each container. But one could implement different "exposure strategies" utilizing techniques like kubectl port-forward or other service types like LoadBalancer, etc.
  • The required changes will definitely break the public API. The impact could be minimized by sticking to the dockerjava API as closely as possible. Ideally this would only require dependent projects to "reimport" the used types and do some minor renaming.
  • One Cassandra test is currently failing, because it uses a configuration file statically binding to Docker's bridge address (172.17.0.2).
  • It's a little tiresome to hide all dockerjava functionalities behind an interface, even though it could be worth the effort.

Overall it seems like this could be a viable approach.

Not (yet) fully tested/implemented:

  • Building images (I started to experiment with a dynamically spawned Kaniko build agent - looks promising)
  • Docker Compose
  • Inter-container networking

@masinger commented Sep 6, 2021

In case someone wants to try it out, I'm happy to get some feedback: https://github.com/masinger/testcontainers-java/blob/master/docs/features/kubernetes.md

@guhilling commented Sep 14, 2021

> Also, consider asking your sys admin (or any other person responsible for your Kubernetes installation) whether they would be okay giving access to k8s API from inside the CI tasks, he-he :)

Actually, that wouldn't be a problem. CI tasks need to spin up pods and even set up storage (as Tekton does) anyway.

@lynch19 commented Jul 23, 2022

@masinger This seems like an awesome direction. What's the status of this?

@bsideup NOTE - until there's k8s support, this project is pretty unusable for MOST of the Java developers, since they work with k8s.

@lynch19 commented Jul 26, 2022

@kiview What's your opinion regarding the awesome project of @masinger, and what's the main direction in which Testcontainers is heading regarding this issue?

This super important and anticipated issue has been inactive for some months now. Has there been any decision or progress on the subject?

@steve-todorov commented:

@lynch19 this issue has been here for 4 years. If you scroll back through the comments you will clearly see they don't have any plans to fix it. The main argument is that the core team doesn't want to build the necessary abstraction to have different providers (i.e. K8s, Docker, Podman, etc.) due to some challenges and incompatibilities :)

It is highly unlikely this will change any time soon. And this is the reason why our team is not using it either.

@kiview (Member) commented Jul 27, 2022

@lynch19 The project/fork by @masinger is public and open source, just try it out and see if it fits your needs.

While I understand that this can seem to be a very important issue for individuals, we don't see it as important for the Testcontainers community as a whole, given current priorities and project focus. We thank everyone for sharing their feedback and suggestions, and there is a possibility we will explore further abstractions over Docker in the future. However, as of today, we can't give any more concrete info.

I will close this issue for now to communicate our current intent, but that does not mean it won't get revisited in the future.

kiview closed this as not planned on Jul 27, 2022
@jaybi4 commented Dec 23, 2022

Sorry to jump in this late; we've just started using Testcontainers. The first thing to say about the project is that you're doing a really good job. It is nice and useful 👏.

Right now we use GitLab CI and Testcontainers is running well. However, all our deployments run on k8s and at some point in the future we would like to move to k8s runners. So, knowing you don't plan to integrate with it makes me a bit uneasy.

Quoting @kiview (in an old comment, sorry):

> However, the maximum size of those tests and systems should be as such, that they are still runnable on a local developer machine

This makes me understand that you see k8s as something not used locally. And nowadays you're mostly right. However, there are two strong reasons why this is changing pretty fast:

  • Docker Desktop being paid, making people look for alternatives
  • Many companies moving to k8s for deployment

In my opinion the second point is very important. One of the reasons Docker was so beneficial was that it prevented the "works on my machine" problem. However, if companies move to k8s (as said, this is happening quickly) and developers keep working on Docker, we could experience "works on my machine" again. In my understanding, that's why we are starting to see many alternatives to Docker Desktop based on k8s, and even Docker Desktop supports having a local k8s cluster. And this is why I'm advocating switching to k8s locally. You get two benefits from this: preventing "works on my machine" and improving developer familiarity with k8s. My perception is that this trend will increase.

Take these insights just as added reasons to consider supporting k8s.

Thanks.

@sharkymcdongles commented Dec 23, 2022 via email

@kiview (Member) commented Dec 23, 2022

Hey @jaybi4, thanks for sharing your view.

I don't see real issues with the plans and considerations you have outlined here. You can use k8s as the executor for your GitLab CI and still use Testcontainers, since Testcontainers works equally well with a remote Docker daemon. And there are many ways to provide a remote Docker daemon for your Testcontainers workloads (you might also want to check out https://www.testcontainers.cloud/, which greatly mitigates any kind of "works on my machine" issues).

In addition, the alternatives to Docker Desktop we see emerging tend to provide Docker-compatible REST APIs (Colima, Rancher Desktop, Podman Desktop). While they don't provide a 100% compatible API in all cases, they have been doing a good job of improving in the recent past. That's also the reason why Testcontainers works with those Docker Desktop alternatives if configured correctly.

> Many companies moving to k8s for deployment

I think this is not a factor that influences which API to use for creating and instrumenting ephemeral integration testing environments. We also see users switching to Cloud IDEs (such as Codespaces), how would they work with Testcontainers if Testcontainers uses k8s as its container runtime environment? How would CIs such as GitHub Actions work? In my opinion, this would simply create an even bigger external dependency (having k8s available instead of having a Docker daemon available). Or we would need to develop another abstraction layer, which allows either Docker or k8s as the container runtime. And while this might be theoretically possible, there are big risks of an impedance mismatch in concepts between Docker and k8s and it would be a considerable engineering effort.

I'd also like to point out that Testcontainers supports testing of k8s components, through our k3s module or https://github.com/dajudge/kindcontainer.

So when we talk about k8s support, it is often difficult to understand what is meant by k8s support, depending on the context.

@jaybi4 commented Dec 23, 2022

Thanks a lot @sharkymcdongles, I'll keep this for future me 😁

@kiview thanks for your complete answer. Maybe the fact that I haven't yet tried to integrate Testcontainers and k8s is what makes my issues not real. In that case, thanks again for the clarification. I'll forward any issues/concerns I face when I integrate Testcontainers and k8s.

@sharkymcdongles commented:

@kiview what people mean by k8s support is deploying containers directly into k8s and running the tests there, rather than via a local Docker daemon. So instead of docker compose up, you would deploy multiple containers as pods, or as a single pod, to k8s, where the tests would run.

One option (this is just a very basic example I wrote in a minute, not fully fleshed out, and provided as a simple illustration): Testcontainers runs outside the cluster and uses a kubeconfig or other auth against the kube API of a cluster, which then deploys all of the various containers needed to run the tests. In this case there is no Docker daemon involved, meaning it will work with any container runtime interface, since Kubernetes would handle deploying and running the containers. You also wouldn't need the socat container, since calls could go over the cluster-internal networking, or via localhost if the jobs are spun up in a single pod.

Even if Kubernetes support like the above isn't on the radar, I would at least think native support for containerd would be a nice addition, since Docker is losing ground more and more every day.

@kiview (Member) commented Dec 23, 2022

> what people mean by k8s support would mean deploying containers directly into k8s and running tests there rather than via a local docker daemon

This is what some people mean, while others simply mean being able to run Testcontainers-based tests in their k8s-powered CI; the reality is much more complex 😉 (as is an implementation of k8s as the container runtime, as others have also found out in the past).

> since docker is losing out more and more everyday

Can you back this up with data in the context of development (not operations)? It does not really reflect what I perceive as a Testcontainers maintainer, supporting a wide range of different users.

I hope this answer and the previous answers by me and @bsideup in this thread help to understand the view of the Testcontainers project on this topic. We don't plan to dive into more discussions around this in the short-term future.

@sharkymcdongles commented Dec 23, 2022

> Testcontainers based tests in their k8s powered CI, the reality is much more complex

This is already achievable, as I showed above, and works fine. It would just be nice not to need privileged containers to do it, but that isn't a problem caused by Testcontainers.

> (as is an implementation of k8s as the container runtime, as also others have found out in the past).

You shouldn't need to do anything with the k8s container runtime to implement this. To do it you would generate objects to pass to the kube API; there shouldn't be any need to touch the container runtime directly. The most complicated issue would be adjusting the library to generate manifests instead of talking directly to the Docker socket, because this would be fully new code and not reusable or even repurposable. I suppose you could make a shim to translate the Docker plans into k8s manifests to make it easier, and then it is just spec transformation instead of actual logic. The way to verify containers and fetch logs and metrics would also need its own adjustment.

But yes it is a larger effort than a quick one.

> Can you back this up with data in the context of development (not operations)? It does not really reflect what I perceive as a Testcontainers maintainer, supporting a wide range of different users.

It isn't about operations so much as it is about the ecosystem around containers in general. Many Linux distros are phasing out or dumping Docker completely, e.g. Fedora, RHEL, and CentOS:
https://access.redhat.com/solutions/3696691

When you install "docker" on newer versions, you actually get Podman with an aliased wrapper to mimic Docker. Ubuntu seems to be following suit as well. Kubernetes also dropped Docker support completely and now uses containerd or CRI-O. In general, we will see this trend continue, especially now that Kubernetes and GCP are pushing non-Docker setups heavily.

Another reason why switching makes sense is performance. Docker is more of a shim/API for talking to containerd, which then creates cgroups and processes. With Podman and more current implementations, this entire shim layer is removed, allowing for direct communication with containerd, meaning pulls are quicker, containers perform better and containers boot up faster. By some metrics, you can see 30% compute performance increases. I can try to dig some up after the holiday if you want, because I am on mobile right now and heading out for the weekend.

@kiview

@seveneves (Contributor) commented Dec 28, 2022

@sharkymcdongles

> Config I use: http://pastie.org/p/2yokK0akSbbjDOsrevOo7r

The link doesn't work. Can you please repost it as code on GitHub? It'd be a great resource for someone like me moving to GitLab.

@sharkymcdongles commented:

> @sharkymcdongles
>
> Config I use: http://pastie.org/p/2yokK0akSbbjDOsrevOo7r
>
> The link doesn't work. Can you please repost it as code on GitHub? It'd be a great resource for someone like me moving to GitLab.

Sorry, I thought I set it to never expire. I put it there because for some reason code formatting wasn't working on GitHub. It may still not work, but here are my Helm values for the gitlab-runner Helm chart:

checkInterval: 10
concurrent: 30
fullnameOverride: gitlab-runner
gitlabUrl: https://git.x.com
image:
  image: library/gitlab-runner
  registry: X
imagePullPolicy: IfNotPresent
logFormat: json
logLevel: error
metrics:
  enabled: true
podSecurityContext:
  fsGroup: 65533
  runAsUser: 100
probeTimeoutSeconds: 5
rbac:
  clusterWideAccess: false
  create: false
  podSecurityPolicy:
    enabled: false
    resourceNames:
    - gitlab-runner
  serviceAccountName: gitlab-runner
resources:
  limits:
    memory: 512Mi
  requests:
    cpu: 200m
    memory: 512Mi
runnerRegistrationToken: RUNNER_TOKEN
runners:
  config: |-
    [[runners]]
      name = "infrastructure"
      output_limit = 20480
      request_concurrency = 30
      environment = ["FF_USE_FASTZIP=true"]
      builds_dir = "/builds"
      [runners.cache]
        Type = "gcs"
        Path = "cache"
        Shared = true
        [runners.cache.gcs]
          BucketName      = "gitlabbucketxxx69"
      [runners.custom_build_dir]
        enabled = true
      [runners.kubernetes]
        host = ""
        bearer_token_overwrite_allowed = false
        namespace = "gitlab-runner"
        namespace_overwrite_allowed = ""
        privileged = true
        cpu_request = "500m"
        memory_limit = "4Gi"
        memory_request = "4Gi"
        memory_limit_overwrite_max_allowed = "24Gi"
        memory_request_overwrite_max_allowed = "24Gi"
        service_cpu_request = "100m"
        service_memory_limit = "8Gi"
        service_memory_request = "8Gi"
        service_memory_limit_overwrite_max_allowed = "12Gi"
        service_memory_request_overwrite_max_allowed = "12Gi"
        helper_cpu_request = "250m"
        helper_memory_limit = "2Gi"
        helper_memory_request = "256Mi"
        helper_memory_limit_overwrite_max_allowed = "4Gi"
        helper_memory_request_overwrite_max_allowed = "4Gi"
        image_pull_secrets = ["secret"]
        poll_timeout = 900
        pull_policy = "if-not-present"
        service_account = "gitlab-runner"
        service_account_overwrite_allowed = ""
        pod_annotations_overwrite_allowed = ""
        [runners.kubernetes.node_selector]
          runner = "true"
        [runners.kubernetes.affinity]
          [runners.kubernetes.affinity.pod_anti_affinity]
            [[runners.kubernetes.affinity.pod_anti_affinity.required_during_scheduling_ignored_during_execution]]
              topology_key = "kubernetes.io/hostname"
              [runners.kubernetes.affinity.pod_anti_affinity.required_during_scheduling_ignored_during_execution.label_selector]
                [[runners.kubernetes.affinity.pod_anti_affinity.required_during_scheduling_ignored_during_execution.label_selector.match_expressions]]
                  key = "job_name"
                  operator = "In"
                  values = ["build","release"]
        [runners.kubernetes.pod_annotations]
          "cluster-autoscaler.kubernetes.io/safe-to-evict" = "false"
        [runners.kubernetes.pod_labels]
          "job_id" = "${CI_JOB_ID}"
          "job_name" = "${CI_JOB_NAME}"
          "ci_commit_sha" = "${CI_COMMIT_SHA}"
          "ci_project_path" = "${CI_PROJECT_PATH}"
        [runners.kubernetes.pod_security_context]
        [runners.kubernetes.volumes]
          [[runners.kubernetes.volumes.empty_dir]]
            name = "build-folder"
            mount_path = "/builds"
            medium = "Memory"
          [[runners.kubernetes.volumes.empty_dir]]
            name = "buildah-containers"
            mount_path = "/var/lib/containers"
          [[runners.kubernetes.volumes.empty_dir]]
            name = "docker-certs"
            mount_path = "/certs/client"
            medium = "Memory"
          [[runners.kubernetes.volumes.empty_dir]]
            name = "docker"
            mount_path = "/var/lib/docker"
        [runners.kubernetes.dns_config]
  executor: kubernetes
  locked: false
  name: infrastructure
  protected: false
  runUntagged: true
  tags: infrastructure
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
  privileged: false
  readOnlyRootFilesystem: false
  runAsNonRoot: true
terminationGracePeriodSeconds: 300
unregisterRunner: true

The mounts are integral to this working because of overlay2. You also need to ensure you use ubuntu containerd nodes and the ubuntu image for the runner image due to the kernel patch for overlay2.

Then you can just run jobs as normal by setting the service image to dind-rootless instead of dind e.g.

stages:
  - build

before_script:
  - docker info

build:
  stage: build
  image: docker:20.10.5
  services:
    - docker:20.10.5-dind-rootless
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
  script:
    - docker ps

@spuranam commented Mar 28, 2023

Anyone tried https://github.com/joyrex2001/kubedock

@rcgeorge23 commented:

> Anyone tried https://github.com/joyrex2001/kubedock

Yes, we’re using kubedock pretty successfully for testing (reasonably large London fintech). We are still ironing out some issues, but generally it seems pretty stable.

@olivierboudet commented:

> Anyone tried https://github.com/joyrex2001/kubedock
>
> Yes, we’re using kubedock pretty successfully for testing (reasonably large London fintech). We are still ironing out some issues, but generally it seems pretty stable.

I also use it for a single project, with no issues.

@jonesbusy commented:

> Anyone tried https://github.com/joyrex2001/kubedock
>
> Yes, we’re using kubedock pretty successfully for testing (reasonably large London fintech). We are still ironing out some issues, but generally it seems pretty stable.
>
> I also use it for a single project, with no issues.

No issues with Testcontainers for Java and Node, but I was not able to make it work for .NET. It looks like kubedock doesn't create the bind sidecar to mount the socket for the Ryuk reaper, and this only happens for the .NET container.

If anyone is facing the same... I'm not sure where the problem is

@ketanbijwe commented:

> Hi,
>
> Thanks a lot. The issues #700 and #1135 helped me get Testcontainers running in Jenkins with dind-rootless inside Kubernetes. In the hope that this might help someone else, here is a stripped-down example of a Jenkins build job pod definition: [pod definition quoted in full in @czunker's comment above]

This solved my problem of running Testcontainers for the tests (in Maven) on EKS 1.24.
