ECS Exec proposal #2589

Closed
fierlion opened this issue Aug 24, 2020 · 1 comment

fierlion commented Aug 24, 2020

Containers Roadmap link: aws/containers-roadmap#187

Presently, without workarounds, the only way to access containers on ECS EC2 is to have the privileges and credentials to SSH onto the container instance, plus user-level access to the Docker daemon to docker exec into a container directly. On Fargate, such container-level exec access is not currently supported. Sometimes it's difficult to get a full picture of an application from the CloudWatch logs alone, and there are times when direct container access is necessary.

We’re planning on adding an ECS exec capability, which will make it possible to gain immediate access to containers running as part of ECS EC2 and Fargate tasks.

ECS Exec Solution Proposal

In order to support exec functionality for ECS, we’ll use the SSM Agent and its session management capabilities.

We plan on bind-mounting the SSM Agent and its dependencies (https://github.com/aws/amazon-ssm-agent) into the container at container start. A customer's execute-command session will be linked directly to their container. We'll also mount the SSM Agent logs from inside the container to a unique directory on the EC2 host.

                             <-HOST->
                             
                             aws ecs run-task ------------------|
                                                                V
                                                           _____________
                                                           |           |
      ___customer_task___________    |---------------------| ECS Agent |
      | _______________________ |    |(1.mount)            |           |    
      | | customer container  | |    V                     -------------
      | | /exec-deps/    *<---|-|--/SSM Agent (Exec Agent)
      | |                *<---|-|--/session-worker & logger
      | |                *<---|-|--/certs
      | |                *<---|-|--/configuration
      | |                     | |
      | |/var/log/amazon/ssm/-|-|->/var/log/ecs/execAgent/<containerID>/
      | ----------------------- |
      ---------------------------
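
To make the mount layout above concrete, here is a minimal sketch in Go (the language of the ECS Agent), using the Docker SDK's mount types. The host staging directory is hypothetical, and the in-container targets are the subject-to-change paths shown in the diagram; this is an illustration, not the agent's actual implementation.

    package main

    import (
        "fmt"

        "github.com/docker/docker/api/types/mount"
    )

    // execAgentMounts sketches the two bind mounts described above: the SSM Agent
    // and its dependencies mounted into the container, and the agent's log
    // directory mounted out to a per-container path on the EC2 host.
    func execAgentMounts(containerID string) []mount.Mount {
        return []mount.Mount{
            {
                // SSM Agent binary, session worker & logger, certs, configuration.
                Type:     mount.TypeBind,
                Source:   "/var/lib/ecs/execute-command/deps", // hypothetical host staging dir
                Target:   "/exec-deps",
                ReadOnly: true,
            },
            {
                // Logs written to /var/log/amazon/ssm inside the container land in
                // a unique per-container directory on the host.
                Type:   mount.TypeBind,
                Source: fmt.Sprintf("/var/log/ecs/execAgent/%s", containerID),
                Target: "/var/log/amazon/ssm",
            },
        }
    }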

In this scenario, the ECS Agent takes on the same supervisor role over the containers it owns that an init process (systemd, upstart) plays on the host, watching over the SSM Agent and its dependent processes. This responsibility includes starting the process with docker exec and ensuring that startup succeeds; watching over the process while it's expected to be running and restarting it on failure; and stopping the process when required.
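
As a rough illustration of that supervision loop, the sketch below uses the Docker Go SDK to start the agent process with docker exec, poll it, and restart it if it exits. The binary path, poll interval, and error handling are assumptions for the sake of the example.

    package main

    import (
        "context"
        "fmt"
        "time"

        "github.com/docker/docker/api/types"
        "github.com/docker/docker/client"
    )

    // superviseExecAgent launches the SSM Agent inside a container via docker exec,
    // watches it, and relaunches it on exit. Supervision ends when the task is
    // stopped (the context is cancelled).
    func superviseExecAgent(ctx context.Context, cli *client.Client, containerID string) error {
        for {
            // Start the exec agent process inside the customer container.
            execResp, err := cli.ContainerExecCreate(ctx, containerID, types.ExecConfig{
                Cmd:    []string{"/exec-deps/amazon-ssm-agent"}, // hypothetical path under the bind mount
                Detach: true,
            })
            if err != nil {
                return fmt.Errorf("create exec agent process: %w", err)
            }
            if err := cli.ContainerExecStart(ctx, execResp.ID, types.ExecStartCheck{Detach: true}); err != nil {
                return fmt.Errorf("start exec agent process: %w", err)
            }

            // Watch the process while it's expected to be running.
            for running := true; running; {
                select {
                case <-ctx.Done():
                    return ctx.Err() // task is stopping; stop supervising
                case <-time.After(10 * time.Second):
                }
                inspect, err := cli.ContainerExecInspect(ctx, execResp.ID)
                if err != nil {
                    return fmt.Errorf("inspect exec agent process: %w", err)
                }
                running = inspect.Running
            }
            // The process exited: loop around and restart it.
        }
    }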

So what happens to my container?

If I enable this feature on my task/service, all containers launched in that task/service will have the SSM Agent dependencies directory /exec-deps (subject to change) mounted at startup, as well as the log directory mount /var/log/ecs/execAgent/<containerID>/ (subject to change) added on the host.

An ECS exec command from my laptop would establish a Session Manager session directly inside my container.
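
For illustration only, such a command might look like the following from the AWS CLI; the exact command name and flags are not finalized in this proposal, and the cluster, task, and container values are placeholders.

    aws ecs execute-command \
        --cluster my-cluster \
        --task <task-id> \
        --container my-container \
        --interactive \
        --command "/bin/sh"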

@fierlion

moved to containers-roadmap aws/containers-roadmap#1050
