Run docker-compose down before build? #228
I'm getting errors like the following, but I only run one agent per host. I'm pretty sure a prior build got cancelled and didn't properly clean up the running containers. Can an option be added to kill any containers from previous jobs? Or is there a more robust way to force cleanup to happen after the build completes?

[…]
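One possible stop-gap, given that only one agent runs per host, is an agent-level `pre-command` hook that removes anything still running before each job starts. This is a sketch, assuming any running container on the host is a leftover, not a feature of the plugin:

```bash
#!/bin/bash
# Hypothetical pre-command hook for a one-agent-per-host setup: any
# container still running when a new job starts is treated as a leftover
# from a previous (possibly cancelled) build.
set -euo pipefail

leftovers="$(docker ps --quiet)"
if [[ -n "$leftovers" ]]; then
  echo "--- Removing leftover containers from a previous job"
  # Word-splitting of the ID list is intended here.
  docker rm --force $leftovers
fi
```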
Hi @ianwremmel. Sorry you've been hitting that. Are you seeing this with the latest version of the agent? We had some signal handling bugs in previous versions, which might have prevented the plugin from having a chance to clean up properly.
I'm seeing it on v3.0.3
Thanks @ianwremmel! I can’t find 3.0.3 in https://github.com/buildkite/agent/releases 🤔
Oh, sorry, I misread: that's the version of the docker-compose plugin. I'm using version […]
Ah cool, thanks for finding out the exact agent version @ianwremmel! Between the hook and this function, we should be cleaning everything up: […]
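The function linked in that comment didn't survive the copy, but the shape of the cleanup being described is roughly the following sketch. The project-name derivation and file names here are assumptions for illustration, not the plugin's actual source:

```bash
#!/bin/bash
# Sketch of a pre-exit cleanup pass. The project name derivation below is
# an assumption; the real plugin computes its own per-job project name.
set -uo pipefail

project="buildkite${BUILDKITE_JOB_ID//-/}"
compose_file="${COMPOSE_FILE:-docker-compose.yml}"

# Kill and remove this job's containers, then tear down networks and
# orphans, so a finished or cancelled build leaves nothing running.
docker-compose -f "$compose_file" -p "$project" kill
docker-compose -f "$compose_file" -p "$project" rm --force -v
docker-compose -f "$compose_file" -p "$project" down --remove-orphans
```

Note that if the project name is per-job (as assumed here), this teardown only covers the current job's containers; leftovers from an earlier cancelled job carry a different project name and would survive it, which matches what the issue describes.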
The -1 (agent lost) log output usually shows up if the instance gets forcefully terminated (or a spot instance doesn't gracefully terminate). If you head to the timeline tab on the job with the […]

For the job that had the original […], I wonder why it's trying to bind the docker host's port for the redis instance, rather than just using the internal networking between containers? What's the […]?

Sorry for all the questions! Hopefully something will lead us to a clue, because we should already be doing a […]
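For context on that port-binding question: publishing a host port in the compose file is what makes leftover containers collide across builds. A hypothetical docker-compose.yml fragment (not taken from this issue's pipeline) illustrating the difference:

```yaml
# Hypothetical fragment; not the reporter's actual compose file.
version: '3'
services:
  redis:
    image: redis
    # Publishing a host port means a leftover redis container from a
    # cancelled job will make the next build fail with
    # "port is already allocated":
    #   ports:
    #     - "6379:6379"
    # Other services on the same compose network can already reach this
    # container at redis:6379 without any host binding.
  app:
    build: .
    depends_on:
      - redis
    environment:
      REDIS_URL: redis://redis:6379
```

Without a `ports:` entry, concurrent or leftover containers can't collide on a host port, and compose still wires the services together on its private network.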
Has any headway been made on this issue? I'm also running into something similar.
Just commenting to say I've started running into this on my pipeline again, after a relatively long time not seeing it, and now it's happening on a majority of builds.
Hi @glittershark! Which agent version are you running? We made several changes to exit status handling and directory cleanup in the latest releases. Could you confirm whether this is still happening on the latest version (v3.33.3)?
Based on the fact that it has not been reported or upvoted in almost a year, I will proceed to close this, but feel free to re-open or create a new issue if this is still happening.
yeah, this seems to have cleared itself up for us 🤔