[WIP] Fix premature logouts during parallel pushes #142
Conversation
Hi @huyz,

From the looks of it, the code that handles logging out was committed all the way back in 2014 and hasn't changed since. While I could not find a justification in the introducing commit or the related issue, I presume the reason we want to log out is to remove the authentication information from the Docker config file. Failing to do so is a potential security risk, since the credentials remain on disk after logging in through `docker login`.

Now regarding this problem: I understand this is an issue when running multiple builds in parallel, as the first one to finish its post-processing step logs out for every other post-processor running simultaneously. A cleaner way to handle this would be to log out only once all the builds finish (irrespective of success or failure), but this is non-trivial with the current architecture.

Given the security risk of not removing the credentials here, I would suggest introducing an option in the configs to not invoke `docker logout`. What do you think? Let me know if you want us to pick up this development; otherwise I'll wait for your reroll.

Thanks for first reporting the problem and suggesting this fix, we appreciate the involvement!
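The suggested opt-out could look something like the sketch below. Note that `keep_logged_in` is an invented, illustrative name, not an existing setting of the `docker-push` post-processor; the `login*` options shown alongside it are real.

```hcl
post-processor "docker-push" {
  login          = true
  login_username = var.registry_user
  login_password = var.registry_password

  # Hypothetical option: skip the docker logout at the end of this
  # post-processor, so builds running in parallel are not logged out
  # prematurely by whichever build finishes first.
  keep_logged_in = true
}
```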
Actually, rather than adding a config option, the quickest win would be to add a warning to that effect in the documentation, which I can file as a separate PR.

Since this PR was never intended to implement a proper logout (only once when all post-processors were done), which I figured could be difficult to implement, I'll close this PR.

Btw, here are a couple of interesting notes, for anyone who wants to pick up from here:
To summarize, for my use case, handling
Regarding the locking statement: there is indeed a mutex that is locked on login and released on logout, so while the documentation is accurate with respect to how the plugin itself works, it is probably outdated. My assumption is that this model worked as advertised in older versions of Packer, when plugins did not exist and the code handling builders/provisioners/post-processors was embedded in Packer itself. It doesn't work anymore now that plugins are separate entities booted by Packer.

With how things work nowadays, each build gets its own instance of the plugin, which is why the lock/unlock approach does not prevent logging in/out during parallel builds: each instance has its own memory space, and therefore its own mutex. This does point out that we should think about that piece of code, though, since it might have been relevant some time ago but is not anymore.

Again, thanks for the investigative work here @huyz. We'll see what we can do about this; glad you found a way to make it work for your use case and for making the implications clearer.
@lbajolet-hashicorp Ah very interesting. Thanks for the explanation.
It seems the only thing to do here is just to remove the `docker logout`. This way, the `docker-push` works regardless of the number of post-processors run in parallel.

I don't think that the `docker logout` is needed, but perhaps I'm just not aware of the negative consequences of skipping it.

If a user really wanted to log out at the end of the `packer build`, they could just do so manually. So maybe a documentation change may be warranted?

Closes #141