E2E tests: use testcontainers #202
Hi @rzadp, is there something in particular that's not working with testcontainers-node, or that I can help with?
The bug you reference is simply an API change in Docker between docker-compose v1 and v2; it isn't internal to testcontainers-node, and as such you can reference the container however you wish (
Is this related to testcontainers? If you'd like the image to be kept after a build, you can disable the resource reaper by setting this env variable:
All the logs should reference the ID of the container. You can also get your own log stream with
In what way / is this something I can help with?
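For reference on the reaper switch alluded to above: in testcontainers-node the resource reaper (Ryuk) is controlled by an environment variable. A sketch assuming the documented `TESTCONTAINERS_RYUK_DISABLED` name; worth verifying against the library version in use:

```shell
# Assumption: TESTCONTAINERS_RYUK_DISABLED is the reaper kill-switch in the
# installed testcontainers-node version — check its docs before relying on it.
export TESTCONTAINERS_RYUK_DISABLED=true

# yarn test   # run the E2E suite; containers/images now survive the session
echo "reaper disabled: $TESTCONTAINERS_RYUK_DISABLED"
```

With the reaper off, cleanup becomes the test suite's responsibility, so this is best limited to local iteration.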
@cristianrgreco Thanks for reaching out!
My issue was referencing a non-existent
Then I was getting an error:
The error message led me to believe that the system couldn't find the
It would be nice if the error message pointed that out.
Oh, I wasn't aware of this
Isn't the default delay of 10s way too small? If I'm developing and testing a containerized app locally, I'm making changes and re-running the tests many times, and after every 10s I would have to wait for a full build.
True, but I don't see a way to dig into the logs of one container without changing code. It would be nice to have something like
Well, there is nothing in particular; it's just that I had already implemented the E2E tests without
But I see the appeal of the lib, and I will definitely give it another try, maybe in a different repo when starting from scratch.
Sure, thanks for the reply!
Yes makes sense, I'll look into it.
I'm not sure exactly what you mean. The resource reaper starts per "session", so any containers/images will be removed by the resource reaper after 10s once the node process shuts down. If you start a new process, a new reaper is started and will handle resources created within that session. Please let me know if anything is unclear.
I see. I will look into this. The other Testcontainers languages support providing a logger to a container; perhaps something like this would help?
That is clear. But removing docker images with their layer cache so quickly is a little problematic for me, because on every run of the tests I have to wait for a full re-build of my images, which can take some time.
Yeah, that sounds good. Or simply providing a debugging namespace string (like
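For context on the namespace idea: today all container output is interleaved under a single debug scope (the `testcontainers:containers` scope mentioned in the issue text below), enabled through the `DEBUG` environment variable of the `debug` package. A sketch; confirm the scope name against your version:

```shell
# Assumption: testcontainers-node logs via the `debug` package under the
# `testcontainers:containers` scope, as described in the issue text.
export DEBUG="testcontainers:containers"

# yarn test   # all containers' logs are printed, interleaved in one scope
echo "$DEBUG"
```

A per-container namespace, as suggested above, would let `DEBUG` filter down to a single container instead.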
Key points here:

* application container is built manually, in order to reuse cache
* changed startup so the web part is last to start, and listening on the web port can be considered the "ready" event. This change will be essential for proper service replication. The things now being waited for are the matrix sync and `isReady` on the polkadot.js API
* had to create an additional Dockerfile for synapse, as I couldn't solve a permissions issue through testcontainers. However, building it is super fast, so it's fine to do every time

Fixes #202
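The manual-build plus port-ready startup described above could look roughly like this with testcontainers-node. A sketch only: the `test-app` image name and port 3000 are placeholders, `Wait.forListeningPorts` is one of the library's documented wait strategies, and running it requires a Docker daemon:

```typescript
import { GenericContainer, Wait } from "testcontainers";

async function startApp() {
  // Assumes the image was pre-built by hand (e.g. `docker build -t test-app .`)
  // so its layer cache is reused instead of rebuilding on every test run.
  const container = await new GenericContainer("test-app")
    .withExposedPorts(3000)
    // Treat "listening on the web port" as the ready signal, as in the PR.
    .withWaitStrategy(Wait.forListeningPorts())
    .start();
  return container;
}
```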
@cristianrgreco So, I've managed to get it running. I was also struggling with getting output for a specific container, and I went with saving output to files, something like this:

```typescript
import * as fs from "fs/promises";
import { Readable } from "stream";

/**
 * @example
 * const container = await new GenericContainer("alpine")
 *   .withLogConsumer(logConsumer("test-app")) // logs end up in `test-app.log`
 *   .start();
 */
function logConsumer(name: string): (stream: Readable) => Promise<void> {
  return async (stream: Readable) => {
    const logsfile = await fs.open(`${name}.log`, "w");
    stream.on("data", line => logsfile.write(`[${new Date().toISOString()}] ${line}`));
    stream.on("error", err => logsfile.write(`[${new Date().toISOString()}] ${err}`));
    stream.on("end", () => {
      logsfile.write("Stream closed\n");
      logsfile.close();
    });
  };
}
```

Maybe some log consumers should be provided by testcontainers, like wait strategies? One thing that I couldn't solve with testcontainers was a permission problem in the image.
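For what it's worth, a consumer of this shape can be exercised without Docker by feeding it a stream by hand. A minimal sketch using a synchronous file-append variant (the `demo` file name is arbitrary; events are emitted synchronously so the file can be read back immediately):

```typescript
import { Readable } from "stream";
import { writeFileSync, appendFileSync, readFileSync } from "fs";

// Synchronous variant of the log consumer above, so the demo is deterministic.
function logConsumer(name: string): (stream: Readable) => void {
  const file = `${name}.log`;
  writeFileSync(file, ""); // truncate any previous run
  return (stream: Readable) => {
    stream.on("data", line => appendFileSync(file, `[${new Date().toISOString()}] ${line}`));
    stream.on("error", err => appendFileSync(file, `[${new Date().toISOString()}] ${err}\n`));
    stream.on("end", () => appendFileSync(file, "Stream closed\n"));
  };
}

// Feed it a stream by hand instead of a container's log stream.
const stream = new Readable({ read() {} });
logConsumer("demo")(stream);
stream.emit("data", "first line\n");
stream.emit("end");

console.log(readFileSync("demo.log", "utf8").includes("Stream closed")); // prints true
```

In real use the container's log stream emits the events; emitting them manually here just makes the behaviour observable in one synchronous pass.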
Hi both. Though I don't yet have answers, I wanted to give an update:
I agree. I am asking the other Testcontainers maintainers for opinions, but I think that if you build an image and give it a name (
It is a good idea, but I haven't heard of any other demand for it. There likely exist many libraries which can pipe a stream to a file easily, so I don't think it makes sense to add it to Testcontainers.
To make sure I understand: you cannot use Testcontainers'
I'll respond here once I have more info.
Yeah, the ability to set permissions/owner/group for the files that are being copied would solve it.
Thanks both. These all sound good, and I've confirmed them with the other Testcontainers maintainers.
Following this suggestion, I spent some time trying to use `testcontainers-node` for the E2E tests instead of the current shell script setup. However, I ran into several issues with it:

* The resource reaper removes the built images after each session, so every test run repeats the full image build, including the `yarn install` steps.
* The logs of all containers are mixed together in the `testcontainers:containers` DEBUG scope, so it is not possible to easily dig into the logs of a single container.

I think that `testcontainers` might be useful, but not in this particular repo. Or maybe I approached it incorrectly.

I have pushed my WIP changes to the rzadp/testcontainers branch, if anyone would like to take a look.
I'm creating this issue as a discussion point.