containerized iamlive proxy doesn't generate --output-file on SIGHUP nor on exit #57
Since my goal here is actually to monitor, I will report back later if I have the same issue with the file not getting dumped, as another data point. The Terraform project I'm running takes about 20 minutes to run and is creating and bootstrapping an EKS cluster.
Hey @timblaktu, thanks for raising this and for providing your detailed setup. I've heard from others that containerising can significantly mess with the signal processing. I also haven't been able to nail down the true solution here, however there was an addition that may help you if you use it. Let me know how you go.
Thanks for the tip, @iann0036. So far, no joy using it. Still no output file appearing in the specified location. But now I have some ideas:
@iann0036 re: signal handling in iamlive (or any) containers, I have learned that signals issued to containers by the docker daemon are delivered only to the container's PID 1 process. In my case, I defined a docker entrypoint script, so that script, rather than iamlive, was running as PID 1.
So, it's clear that I was never seeing an output file because the signals were stopping at my entrypoint shell and never reaching the iamlive process. I didn't really want the added complexity of signal handlers in my entrypoint script, but I preferred that over installing another package (gosu) onto my container, so I decided to just trap the 4 signals in my entrypoint and forward them to iamlive.
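For reference, a minimal sketch of such a trap-and-forward entrypoint. The child command and the exact set of four signals the author trapped did not survive the page extraction, so `sleep 2` stands in for the iamlive invocation and `HUP INT QUIT TERM` is an assumption:

```shell
#!/usr/bin/env sh
# Trap-and-forward entrypoint sketch. In the real container the child
# would be the iamlive command; "sleep 2" is a stand-in so the script
# is runnable anywhere.
sleep 2 &                     # stand-in for: iamlive --mode proxy ... &
child=$!

forward() {                   # relay one signal to the child process
  kill -s "$1" "$child" 2>/dev/null
}

for sig in HUP INT QUIT TERM; do
  # The trap body is a double-quoted string, so $sig expands now,
  # giving each signal its own forwarding trap.
  trap "forward $sig" "$sig"
done

# A trapped signal interrupts wait; loop until the child has really exited.
while kill -0 "$child" 2>/dev/null; do
  wait "$child"
done
```

The key detail is the final loop: `wait` returns early whenever a trapped signal arrives, so the entrypoint must re-enter `wait` until the child is genuinely gone, otherwise the first forwarded signal would also terminate PID 1.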
I'm definitely now getting an …
@iann0036 I have since discovered NO_PROXY, and as such have less reason to "control" iamlive with signals from my client process. I'd like to keep this issue open until I resolve it, but for now I wanted to suggest that the README include NO_PROXY in the proxy-mode instructions alongside HTTP_PROXY, to cover situations where the monitored client process sends requests to endpoints other than the AWS API.
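Such a README addition might look like the following sketch; the container name `iamlive`, port `10080`, and the CA bundle path are assumptions taken from this thread's setup, not documented values:

```shell
# Route the monitored client's AWS traffic through the iamlive proxy.
export HTTP_PROXY=http://iamlive:10080
export HTTPS_PROXY=http://iamlive:10080
# Exempt non-AWS destinations (and the EC2 metadata endpoint) so other
# traffic from the monitored process bypasses the proxy entirely.
export NO_PROXY=localhost,127.0.0.1,169.254.169.254
# Trust iamlive's generated CA for the intercepted TLS connections.
export AWS_CA_BUNDLE=/home/appuser/.iamlive/ca.pem
```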
Have you tried adding tini to your container with iamlive? https://computingforgeeks.com/use-tini-init-system-in-docker-containers/
@jgrumboe No, I've never used --init or the tini init system. Thanks. Perhaps I'll try it as a sanity check, since it looks like a C implementation of what I've done in bash (probably with some error...). Its -g option makes it forward SIGHUP (and most other interesting ones) to the child process group. Reading the docs, I don't see any way to pass tini args when using docker --init, so I'd probably have to call tini from my entrypoint (literally replacing my bash script with tini). Thanks again.
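That explicit invocation could be sketched as below; it assumes tini is installed in the image, and the iamlive flags and path are illustrative, not the author's actual command line:

```shell
#!/usr/bin/env sh
# Hand PID 1 to tini; -g forwards received signals to the whole child
# process group, which docker run --init (tini without args) does not do.
exec tini -g -- iamlive \
  --mode proxy \
  --output-file "${IAMLIVE_SHARED_PATH:-/home/appuser/.iamlive}/iamlive.log"
```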
You're welcome. 👍 Maybe you can report back your findings.
Suggestion, probably aimed towards @iann0036 - would it make sense to add a Dockerfile to this repo, with the findings and ideas presented here? There could even be a workflow which builds and pushes an image to ghcr.io, along with the rest of the binary releases.
Thank you for creating this amazing project! My iamlive container, running v0.49.0, is now successfully proxying AWS CLI requests, as proven by its stdout captured in the following docker log entry, output in response to `aws sts get-caller-identity --debug --profile <myprofile>`:

…but it is not dumping this text into its `--output-file`, neither at graceful exit nor on SIGHUP.
The iamlive container is based on this one, and executes iamlive in its entrypoint as:
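The entrypoint command itself did not survive the page extraction; a plausible sketch, using flags from the iamlive README and the paths described below (the bind address and port are assumptions):

```shell
#!/usr/bin/env sh
# Hypothetical iamlive invocation -- the author's real command was lost
# in the excerpt.
exec iamlive \
  --mode proxy \
  --bind-addr 0.0.0.0:10080 \
  --ca-bundle "${IAMLIVE_SHARED_PATH}/ca.pem" \
  --ca-key "${IAMLIVE_SHARED_PATH}/ca.key" \
  --output-file "${IAMLIVE_SHARED_PATH}/iamlive.log"
```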
The `${IAMLIVE_SHARED_PATH}` folder (actually `/home/appuser/.iamlive`) is the container mount point for a named docker volume that is shared with another "client container" that is being monitored for AWS API calls. Below is the relevant excerpt from the docker compose config that orchestrates these two containers. (All I've removed from the above are many superfluous and noisy variable definitions.)
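The compose excerpt also did not survive the page extraction; the following is a hypothetical reconstruction of the topology as described, where everything beyond the `main`/`iamlive` service names and the shared mount path is assumed:

```yaml
# Hypothetical reconstruction -- not the author's actual config.
services:
  iamlive:
    image: iamlive:local
    init: true                      # tini as PID 1, forwards signals
    stop_grace_period: 30s          # time to write --output-file on stop
    volumes:
      - iamlive-shared:/home/appuser/.iamlive
  main:
    image: client:local
    depends_on:
      - iamlive
    environment:
      HTTP_PROXY: http://iamlive:10080
      HTTPS_PROXY: http://iamlive:10080
      AWS_CA_BUNDLE: /home/appuser/.iamlive/ca.pem
    volumes:
      - iamlive-shared:/home/appuser/.iamlive
volumes:
  iamlive-shared:
```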
Mutual access to the shared volume has already been proven working correctly, since:

- the `IAMLIVE_SHARED_PATH` mount point in the `iamlive` container is where iamlive dumps the required `ca.pem` and `ca.key` files, and
- the `IAMLIVE_SHARED_PATH` mount point in the `main` (client) container is where the client application instructs the AWS CLI to read the certificate bundle from.

As mentioned at the top, I've also proven that this scheme of two applications/containers is working in terms of networking and application configuration. The `iamlive` proxy is receiving the `sts get-caller-identity` request and dumping to stdout a policy document correctly containing a `sts:GetCallerIdentity` action.

### The issue
I've yet to see an `iamlive.log` file get dumped.

### Use case 1: `iamlive` exits

At first, I had the `main` container sleep several seconds after the successful `aws sts get-caller-identity` transaction, and then exit. Because `main` `depends_on` `iamlive`, `main` is stopped first, then `iamlive`. Here, I expected that the `iamlive` application would be sent, and would catch, `SIGTERM`, and then run this code to write (and flush??) `GetPolicyDocument()`'s return value to the `outputFileFlag`. Since the file path being written is in a mounted folder backed by a docker volume on the host, I expected the file, if written, to persist until the next container run. (There is nothing in this system that deletes files in that shared volume folder.)

### Use case 2: `SIGHUP`
Next, I modified the project to enable the client application running in the `main` container to send UNIX process signals to other containers, specifically so that it could send `SIGHUP` to the `iamlive` container as a way to force it to dump the policy to disk before exiting. For the curious, this required:

- sharing with `main` the path to the host's docker daemon socket via a host docker volume, and
- ensuring `main` has the correct permissions to access that socket, and
- `POST`ing to `http://localhost/containers/<target_container_id>/kill?signal=SIGHUP`
When testing this, however, everything worked just fine (the `POST` gets a `204 No Content` response, which is the expected "successful" result for this API call), except that the `iamlive.log` file did not get dumped. I confirmed that I was using the correct docker daemon API and the correct `target_container_id` by removing the `?signal=SIGHUP` part of the URL, which sends `SIGKILL` by default, and observing the `iamlive` container exit immediately after the request was `POST`ed from the client application running in the `main` container.

### Summary
So, this feels like a bug, but I could also use some help troubleshooting it from the `iamlive` side, so please send me any ideas you have on troubleshooting techniques for this app. I've not seen any debugging or verbose mode, nor have I looked at the source code much yet, but I am now stumped and receptive to any help. I realize that this usage mode is unusual (most people seem to monitor AWS CLI activity from the host system rather than from another container), but that is why I explained myself so thoroughly. Still, let me know if you need any more info to help.

Thanks again for creating this amazing project!