Option to specify host for communication with ResourceReaper container #678

Closed
BenasB opened this issue Nov 16, 2022 · 11 comments

Labels
question Have you tried our Slack workspace (https://testcontainers.slack.com)?

Comments

@BenasB
Contributor

BenasB commented Nov 16, 2022

Is your feature request related to a problem? Please describe.
Right now (by default), when you start any test container, a resource reaper container is started automatically beforehand. Starting this container uses the same IDockerEndpointAuthenticationConfiguration as the container you're trying to start, which specifies the endpoint/hostname; that is fine in my scenario. However, when actually communicating with the resource reaper container (sending data over TCP), the library reuses the same host name as in the container-starting step. That does not work for me, because that endpoint is used only to create containers and talk to the Docker API (I communicate with the actual containers in my tests through a different host name).

Describe the solution you'd like
It would be great if a user could specify the host name on which to communicate with the resource reaper container. I am just not sure how best to propose it.

  • There's ResourceReaperPublicHostPort already, so a ResourceReaperHost setting would fit next to it, although I'm not the biggest fan of this static setup.
  • A .With builder configuration (e.g. WithResourceReaperCommunicationHost), although that goes against the fact that the resource reaper is shared between the containers (see the sketch after this list).
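To make the two options more concrete, here is a purely hypothetical sketch; neither ResourceReaperHost nor WithResourceReaperCommunicationHost exists in the library today, and the host name is made up:

```csharp
// Hypothetical sketch only: neither API below exists; it just illustrates the two proposals.

// Option 1: a static setting next to the existing ResourceReaperPublicHostPort,
// applied to every Resource Reaper connection in the test session.
TestcontainersSettings.ResourceReaperHost = "docker-hub-vm.example.com"; // hypothetical property

// Option 2: a per-builder configuration (awkward because the reaper is shared between containers).
var container = new TestcontainersBuilder<TestcontainersContainer>()
  .WithImage("alpine:3.17")
  .WithResourceReaperCommunicationHost("docker-hub-vm.example.com") // hypothetical method
  .Build();
```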

Describe alternatives you've considered
I initially thought that specifying .WithHostname when creating my container might help, but TestcontainersContainer.Hostname (which is used to determine the host for resource reaper communication) has nothing to do with that. Furthermore, it would only affect the actual test container, not the resource reaper container, since the reaper is created here and its configuration can't be reached from the outside.

@HofmeisterAn
Collaborator

that endpoint is used only to create containers/communicate with the Docker API (I communicate with the actual containers in my tests through a different host name).

Can you explain your use case in more detail? It sounds like a real edge case. Why do you split or distinguish the container communication? Why can't you treat the Resource Reaper communication as Docker API communication? What is the advantage?

@HofmeisterAn HofmeisterAn added the question Have you tried our Slack workspace (https://testcontainers.slack.com)? label Nov 17, 2022
@BenasB
Contributor Author

BenasB commented Nov 17, 2022

Indeed, it is an edge case.

I'm working in an enclaved network (private cloud) where there are a lot of security concerns (for example all traffic between resources must be explicitly allowed by firewall rules, encrypted traffic between enclaves).

I have a VM with an exposed Docker API that acts as a "hub" for running test containers. Communication to this "hub" comes from different places (e.g. developer machines, build agents) which don't have Docker on them, hence the "hub" approach. The Docker API is exposed with additional authentication, and the VM is put behind a load balancer (we can actually have multiple VMs in this "hub"), which also gives us SSL offloading – basically, communication with the Docker API is secured/restricted and limited to a specific port. Communication with the containers, on the other hand, is far less strict (since they're short-lived and only used for integration testing) – we communicate with the VM directly. This is why we differentiate between API and container communication.

Hopefully this explains my situation a bit.
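For illustration, roughly how that split looks from the test code, with made-up host names (the Docker API goes through the load balancer, container traffic goes to the VM directly):

```csharp
using DotNet.Testcontainers.Builders;
using DotNet.Testcontainers.Containers;

// Minimal sketch: "docker-hub-lb" and "docker-hub-vm" are made-up host names.
var container = new TestcontainersBuilder<TestcontainersContainer>()
  .WithDockerEndpoint("tcp://docker-hub-lb.example.com:2376") // Docker API: secured, behind the load balancer
  .WithImage("postgres:15")
  .WithEnvironment("POSTGRES_PASSWORD", "postgres")
  .WithPortBinding(5432, true)
  .Build();

await container.StartAsync();

// Container traffic bypasses the load balancer and goes to the VM directly.
var connectionString =
  $"Host=docker-hub-vm.example.com;Port={container.GetMappedPublicPort(5432)};Username=postgres;Password=postgres";
```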

@BenasB
Contributor Author

BenasB commented Nov 18, 2022

I forgot to mention that we've had this setup running successfully for quite some time now, although we had the Resource Reaper disabled and, up until now, were using a pretty basic cron job to clean up the containers. We'd like to take advantage of the Resource Reaper.

Thinking about this further, it would make even more sense if I could start the testcontainers/moby-ryuk container manually, keep it always running, and then on the code side have something like WithResourceReaperEndpoint that would skip the usual Resource Reaper container start-up process and go straight to communicating with the given endpoint. I'm just not sure if testcontainers/moby-ryuk could run continuously like that (maybe it's designed to be throwaway too). How does that sound?
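Purely as a sketch of the idea (the method and the endpoint below are made up and don't exist in the library):

```csharp
// Hypothetical: attach to an already running Ryuk instance instead of starting one.
var container = new TestcontainersBuilder<TestcontainersContainer>()
  .WithImage("alpine:3.17")
  .WithResourceReaperEndpoint("tcp://docker-hub-vm.example.com:8080") // made-up method and endpoint
  .Build();
```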

@HofmeisterAn
Collaborator

Thinking about this further, it would even make more sense if I could start the testcontainers/moby-ryuk container manually

You can start your own instances of Ryuk, but Ryuk requires a network connection to stay alive.

I'm just not sure if testcontainers/moby-ryuk could be running continuously like that

Keeping Ryuk running is probably not difficult – it just requires a durable connection – but it won't remove resources then. Ryuk removes the labeled resources when the connection drops (i.e. when the test session completes). Furthermore, you need to make sure that Ryuk runs on the same Docker host as your resources (not sure if that works with the load balancer in between).
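For illustration, the keep-alive interaction with Ryuk looks roughly like this. This is a simplified sketch, not the library's internal implementation; the host, port, and label value are made up:

```csharp
using System.Net.Sockets;
using System.Text;

// Connect to Ryuk, register a label filter, and keep the socket open.
// When the last connection drops, Ryuk prunes everything matching the filter and exits.
using var client = new TcpClient();
await client.ConnectAsync("docker-hub-vm.example.com", 8080);

using var stream = client.GetStream();
var filter = "label=org.testcontainers.resource-reaper-session=my-session-id\n";
await stream.WriteAsync(Encoding.UTF8.GetBytes(filter));

// Ryuk acknowledges each filter line with "ACK".
var buffer = new byte[16];
var read = await stream.ReadAsync(buffer);
Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, read));

// ... run the tests while the connection stays open ...
// Disposing the TcpClient closes the connection; Ryuk then removes the labeled resources.
```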

The configuration is pretty complex. I am still thinking about what support for such a complex edge case might look like. For now, I really think you will have much less pain if you simply allow TCP traffic to Ryuk.

@kiview
Member

kiview commented Nov 21, 2022

Sorry, I still don't understand your use case here @BenasB. You said:

Communication with the containers on the other hand is way less strict (since they're short lived and only used for integration testing)

The Ryuk container is also short-lived; it is bound to the lifecycle of the test session (by design). Every interaction with it should look similar to any other container interaction. So why do you have to distinguish here?

@BenasB
Contributor Author

BenasB commented Nov 21, 2022

Hi @kiview, yes, that's exactly right – the Ryuk container is also short-lived, and I would like it to be treated like other test containers in my system. The distinction is not between Ryuk and the other test containers, but between the Docker API (the requests to start/stop/orchestrate containers) and the test containers themselves (Ryuk included).

Right now, starting the Ryuk container (a request to the Docker API) and maintaining the connection (a request to the Ryuk container) use the same hostname/endpoint.

Edit: as for the design of Ryuk itself, I now understand that it is not intended to be reused between test processes (which is what I thought, in my 2nd comment, might be achievable).

@HofmeisterAn
Collaborator

@kiview and I had a quick chat; you are probably looking for something like TESTCONTAINERS_HOST_OVERRIDE. The environment variable allows overriding the getHost() result in Java, which is the equivalent of our IDockerContainer.Hostname property. As long as the hostname is the same for the entire test session, this should work. We need to extend CustomConfiguration then.
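Once supported, usage could look roughly like this. This is a sketch under the assumption that the .NET implementation mirrors the Java behavior; the host name is made up, and in practice the variable would be set in the environment before the test run rather than in code:

```csharp
// Assumed behavior once TESTCONTAINERS_HOST_OVERRIDE is supported in .NET.
// Normally set outside the process; set here only for illustration.
Environment.SetEnvironmentVariable("TESTCONTAINERS_HOST_OVERRIDE", "docker-hub-vm.example.com");

var container = new TestcontainersBuilder<TestcontainersContainer>()
  .WithImage("alpine:3.17")
  .Build();

// With the override in place, Hostname (and the Resource Reaper connection)
// would use the configured value instead of the Docker endpoint's host.
Console.WriteLine(container.Hostname);
```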

@BenasB
Contributor Author

BenasB commented Nov 24, 2022

Furthermore, you need to make sure that Ryuk runs on the same Docker host as your resources (not sure if that works with the load balancer in between).

This is a good point, that is not guaranteed right now.

TESTCONTAINERS_HOST_OVERRIDE could be an option if we reorganized the system to start containers only on the same Docker host. Also, as far as I understand, it would overwrite the Hostname for all containers, which is also not ideal in a multi-Docker-host scenario (unless there were a per-container setting like WithHostnameOverride).

All in all, I think our edge-case setup is a bit too complex, and we'll have to do without Ryuk for now. Thanks for your time (and for your effort on this library in general) @HofmeisterAn and @kiview. I will be closing this issue now.

@BenasB BenasB closed this as completed Nov 24, 2022
@HofmeisterAn
Collaborator

Also, as far as I understand, it would overwrite the Hostname for all containers, which is also not ideal in a multi-Docker-host scenario (unless there were a per-container setting like WithHostnameOverride).

Indeed. That is something your load balancer needs to take care of, I guess. I will add the custom configuration TESTCONTAINERS_HOST_OVERRIDE today anyway, to align with the other languages, in case you would like to test it.

@BenasB
Contributor Author

BenasB commented Nov 25, 2022

I can test it – I can see that 2.3.0-beta.3547636764 was released very recently. I will let you know the results.

@HofmeisterAn
Collaborator

HofmeisterAn commented Nov 25, 2022

The snapshot just contains the custom configuration, not the necessary Resource Reaper adjustments yet. I will do those next week and get back to you.

HofmeisterAn added a commit that referenced this issue Nov 28, 2022
…VERRIDE and TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE
HofmeisterAn added a commit that referenced this issue Nov 28, 2022
…VERRIDE and TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE
HofmeisterAn added a commit that referenced this issue Nov 28, 2022
…VERRIDE and TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE
HofmeisterAn added a commit that referenced this issue Nov 30, 2022
…VERRIDE and TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE (#695)