E2E tests: use testcontainers? #202

Closed
rzadp opened this issue Feb 20, 2023 · 8 comments · Fixed by #317

@rzadp
Contributor

rzadp commented Feb 20, 2023

Following this suggestion, I spent some time trying to use testcontainers-node for the E2E tests instead of the current shell script setup.

However, I ran into several issues with it:

  • I could not use a docker-compose file with supplied environment variables. I suspect this bug, uncovered just 2 days ago, is at fault.
  • Somehow, the Docker build cache of the bot and server images is not reused between test runs, so every test run waits for their yarn install steps.
  • The container logs are not separated. Every container just logs into the testcontainers:containers DEBUG scope, so it is not easy to dig into the logs of a single container.
  • The setup and development do not seem to be getting any simpler.

I think that testcontainers might be useful but not in this particular repo. Or maybe I approached it incorrectly.

I have pushed my WIP changes to the rzadp/testcontainers branch, if anyone would like to take a look.

I'm creating this issue as a discussion point.

@cristianrgreco

Hi @rzadp, is there something in particular that's not working with testcontainers-node, or that I can help with?

I could not use a docker-compose file with supplied environment variables. I suspect testcontainers/testcontainers-node#459, uncovered just 2 days ago, is at fault.

The bug you reference is simply an API change in Docker between docker-compose v1 and v2; it isn't internal to testcontainers-node, so you can reference the container however you wish (environment.getContainer('service_1') or environment.getContainer('service-1')). The referenced bug only affects testcontainers-node's internal pipelines.
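
For illustration, a minimal sketch of resolving a compose service under either naming convention (the compose file location and service name below are just placeholders):

import path from "path";
import { DockerComposeEnvironment } from "testcontainers";

const environment = await new DockerComposeEnvironment(
  path.resolve(__dirname, ".."), // directory containing the compose file
  "docker-compose.yml"
).up();

// docker-compose v1 names containers "<service>_1", v2 uses "<service>-1";
// use whichever matches the Docker setup running the tests.
const bot = environment.getContainer("bot_1");
// const bot = environment.getContainer("bot-1");
console.log(`bot container: ${bot.getName()}`);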

Somehow, the Docker build cache of the bot and server images is not reused between test runs, so every test run waits for their yarn install steps.

Is this related to testcontainers? If you'd like the image to be kept after a build, you can disable the resource reaper by setting this env variable: TESTCONTAINERS_RYUK_DISABLED=true. Note though that without manual cleanup you may run out of disk.
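
A minimal sketch of wiring that into a test setup file (the file name is hypothetical; this assumes the variable is set before the first container is started):

// testcontainers-env.ts (hypothetical file, loaded before the tests run)
// Disabling Ryuk keeps built images (and their layer cache) around between runs.
// Note: without manual cleanup, unused images will accumulate on disk.
process.env.TESTCONTAINERS_RYUK_DISABLED = "true";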

The container logs are not separated. Every container just logs into the testcontainers:containers DEBUG scope, so it is not easy to dig into the logs of a single container.

All the logs should reference the ID of the container. You can also get your own log stream with await container.logs() and interact with them however you wish.
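
For example, a single container's stream can be consumed and tagged directly (the image, command, and tag prefix below are placeholders):

import { GenericContainer } from "testcontainers";

const container = await new GenericContainer("alpine")
  .withCommand(["sh", "-c", "echo hello && sleep 60"])
  .start();

// Tag each line so this container's output is easy to tell apart.
const stream = await container.logs();
stream
  .on("data", line => console.log(`[my-service] ${line}`))
  .on("err", line => console.error(`[my-service] ${line}`))
  .on("end", () => console.log("[my-service] stream closed"));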

The setup and development do not seem to be getting any simpler.

In what way? Is this something I can help with?

@rzadp
Contributor Author

rzadp commented Mar 1, 2023

@cristianrgreco Thanks for reaching out!

  1. About the docker-compose v1 vs v2 bug - actually I realize now it was a different thing.

My issue was referencing a non-existent docker-compose.yml file, e.g.:

await new DockerComposeEnvironment(
  path.resolve(__dirname, "./anything"),
  "doesnt-exist-docker-compose.yml"
).up()

Then I was getting an error:

ERROR Failed to up DockerCompose environment: spawn docker-compose ENOENT

The error message led me to believe that the system couldn't find the docker-compose binary (so I thought it had something to do with v1 vs v2), but the actual issue was that the docker-compose.yml file could not be found.

It would be nice if the error message pointed that out.
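
In the meantime, a fail-fast existence check before calling up() gives a clearer error; this is only a workaround sketch, reusing the placeholder paths from above:

import fs from "fs";
import path from "path";
import { DockerComposeEnvironment } from "testcontainers";

const composeDir = path.resolve(__dirname, "./anything");
const composeFile = "doesnt-exist-docker-compose.yml";

// Surface a missing compose file explicitly instead of the misleading ENOENT.
if (!fs.existsSync(path.join(composeDir, composeFile))) {
  throw new Error(`Compose file not found: ${path.join(composeDir, composeFile)}`);
}

const environment = await new DockerComposeEnvironment(composeDir, composeFile).up();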

  2. Build cache

If you'd like the image to be kept after a build you can disable the resource reaper by setting this env variable: TESTCONTAINERS_RYUK_DISABLED=true

Oh, I wasn't aware of this ryuk resource reaper running in the background; that explains a lot.
Disabling it helps.

Isn't the default delay of 10s way too small? If I'm developing and testing a containerized app locally, I would be making changes and re-running tests many times, and after every 10 seconds I would have to wait for a full rebuild.
Shouldn't the default delay be measured in minutes or hours, not seconds? Or maybe I'm missing something here.

  3. The container logs are not separated.

All the logs should reference the ID of the container.
You can also get your own log stream with await container.logs() and interact with them however you wish.

True, but I don't see a way to dig into the logs of one container without changing code. It would be nice to have something like DEBUG=testcontainers:containers:my-service-name to dig into one.
In my test case I have 4 different containers, one of which is very noisy, so this is a problem.
When debugging a test run, I would like to be able to have either 4 separate terminal windows streaming logs from the different containers, or 4 separate files where the logs are appended.
I guess the latter can be quite simply implemented using the await container.logs() stream.


  4. The setup and development do not seem to be getting any simpler.

In what way? Is this something I can help with?

Well, nothing in particular; having already implemented the E2E tests without testcontainers, trying to switch to testcontainers felt like doing the same work while fighting a different set of idiosyncrasies.

But I do see the appeal of the library; I will definitely give it another try, maybe in a different repo when starting from scratch.

@cristianrgreco

Sure, thanks for the reply!

It would be nice if the error message pointed that out.

Yes, that makes sense; I'll look into it.

Isn't the default delay of 10s way too small? If I'm developing and testing a containerized app locally, I would be making changes and re-running tests many times, and after every 10 seconds I would have to wait for a full rebuild.

I'm not sure exactly what you mean. The resource reaper starts per "session", so any containers/images will be removed by the resource reaper 10s after the node process shuts down. If you start a new process, a new reaper is started and handles the resources created within that session. Please let me know if something is unclear.

True, but I don't see a way to dig into the logs of one container without changing code. It would be nice to have something like DEBUG=testcontainers:containers:my-service-name to dig into one.

I see. I will look into this - other testcontainers languages support providing a logger to a container, perhaps something like this would help?

@rzadp
Contributor Author

rzadp commented Mar 21, 2023

I'm not sure exactly what you mean. The resource reaper starts per "session", so any containers/images will be removed by the resource reaper 10s after the node process shuts down. If you start a new process, a new reaper is started and handles the resources created within that session. Please let me know if something is unclear.

That is clear.
Cleaning up started Docker containers, networks, etc. totally makes sense, because they should be re-created from scratch for the next test run anyway in order to have a clean state.

But removing Docker images, with their layer cache, so quickly is a little problematic for me, because with every test run I have to wait for a full rebuild of my images, which can take some time.

other testcontainers languages support providing a logger to a container, perhaps something like this would help?

Yeah, that sounds good. Or simply providing a debugging namespace string (like testcontainers:containers:my-service-name) to a container.

@mordamax assigned rzadp and mutantcornholio and unassigned rzadp · Apr 5, 2023
mutantcornholio added a commit that referenced this issue Jul 10, 2023
Key points here:
* application container is built manually, in order to reuse cache
* changed startup, so web part would be last to start, and listening to
web port might be considered as "ready" event. This change will be
essential for proper service replication.
Things that are now being waited for are matrix sync and `isReady` on
polkadotjs api

Fixes #202
mutantcornholio added a commit that referenced this issue Jul 10, 2023
Key points here:
* application container is built manually, in order to reuse cache
* changed startup, so web part would be last to start, and listening to
web port might be considered as "ready" event. This change will be
essential for proper service replication.
Things that are now being waited for are matrix sync and `isReady` on
polkadotjs api
* had to create additional Dockerfile for synapse, as I couldn't solve
permissions issue through testcontainers. However, building that is
super fast, fine to do every time

Fixes #202
@mutantcornholio
Contributor

@cristianrgreco So, I've managed to get it running.

I was also struggling with getting output for a specific container, and I went with saving output to files, something like this:

import fs from "fs/promises";
import { Readable } from "stream";

/** @example
 * const container = await new GenericContainer("alpine")
 *   .withLogConsumer(logConsumer("test-app")) // logs are in `test-app.log`
 *   .start();
 */
function logConsumer(name: string): (stream: Readable) => Promise<void> {
  return async (stream: Readable) => {
    const logsfile = await fs.open(`${name}.log`, "w");
    stream.on("data", line => logsfile.write(`[${new Date().toISOString()}] ${line}`));
    stream.on("err", line => logsfile.write(`[${new Date().toISOString()}] ${line}`));
    stream.on("end", () => {
      logsfile.write("Stream closed\n");
      logsfile.close();
    });
  };
}

Maybe some log consumers should be provided by testcontainers, like wait strategies?

A thing that I couldn't solve with testcontainers was a permission problem in the image.
What I needed was to take a base image from the internet, copy files into it, and run chown on them. I ended up with a Dockerfile, and then building it with .fromDockerfile().
Is this something that's possible to do with testcontainers natively?
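
For reference, a sketch of that Dockerfile route (the build context, Dockerfile name, and image tag are placeholders):

import path from "path";
import { GenericContainer } from "testcontainers";

// Build the image from a local Dockerfile; naming it makes reuse easier to track.
const image = await GenericContainer.fromDockerfile(
  path.resolve(__dirname, "synapse"), // build context containing the Dockerfile
  "Dockerfile"
).build("e2e-synapse");

const container = await image.start();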

@cristianrgreco

cristianrgreco commented Jul 12, 2023

Hi both. Though I don't yet have answers, I wanted to give an update:

Removing Docker images, with their layer cache, so quickly is a little problematic for me, because with every test run I have to wait for a full rebuild of my images, which can take some time.

I agree. I am asking the other Testcontainers maintainers for opinions, but I think that if you build an image and give it a name (GenericContainer.fromDockerfile("...").build("my-custom-image")), then it is safe to assume you want to reuse it, and it should be excluded from cleanup.

Maybe some log consumers should be provided by testcontainers, like wait strategies?

It is a good idea, but I haven't heard of any other demand for it. There likely exist many libraries which can pipe a stream to a file easily, so I don't think it makes sense to add it to Testcontainers.

What I needed was to take a base image from the internet, copy files into it, and run chown on them. I ended up with a Dockerfile, and then building it with .fromDockerfile().
Is this something that's possible to do with testcontainers natively?

To make sure I understand, you cannot use Testcontainers' withCopyFilesToContainer method, because you need to copy the files over with specific permissions - is this correct? If yes, I have also asked internally what can be done here.

I'll respond here once I have more info.

@mutantcornholio
Contributor

To make sure I understand, you cannot use Testcontainers' withCopyFilesToContainer method, because you need to copy the files over with specific permissions - is this correct? If yes, I have also asked internally what can be done here.

Yeah, an ability to set permissions/owner/group for the files that are being copied would solve it.
Also, withCopyFilesToContainer doesn't seem to work with directories:
https://github.com/testcontainers/testcontainers-node/blob/main/src/generic-container/generic-container.ts#L386-L390
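
For context, a minimal sketch of the copy API being discussed, which at this point takes individual source/target file pairs with no mode, ownership, or directory support (paths and image are placeholders):

import path from "path";
import { GenericContainer } from "testcontainers";

const container = await new GenericContainer("alpine")
  .withCopyFilesToContainer([
    // Each entry copies a single file; directories and permissions are not supported here.
    { source: path.resolve(__dirname, "homeserver.yaml"), target: "/data/homeserver.yaml" },
  ])
  .withCommand(["sleep", "infinity"])
  .start();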

@cristianrgreco

cristianrgreco commented Jul 12, 2023

Thanks both. These all sound good and I've confirmed them with the other Testcontainers maintainers.
I've created 3 issues/enhancements; if you're interested, please feel free to enable notifications on them:

  1. Support copying directories to the container testcontainers/testcontainers-node#602
  2. Add ability to exclude build images from being cleaned up testcontainers/testcontainers-node#601
  3. Specify mode when copying files/content to container testcontainers/testcontainers-node#600
