[EKS + Fargate] [request]: Multiple pods in the same microVM #751

@bencompton

Description

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Tell us about your request

Currently, Fargate can schedule all containers from one pod in the same microVM. I would like the ability to schedule multiple pods in the same microVM. This might be accomplished via a pod affinity group annotation of some sort (e.g., specifying the same affinity group for multiple pods would schedule all containers for those pods in the same microVM).
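As a rough sketch of what such an annotation might look like — the annotation key `eks.amazonaws.com/affinity-group` and its semantics are hypothetical, not an existing Fargate feature, and the image is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: external-dns
  annotations:
    # Hypothetical annotation: all Fargate pods that carry the same
    # affinity-group value would be co-scheduled into one microVM.
    eks.amazonaws.com/affinity-group: idle-operators
spec:
  containers:
    - name: external-dns
      image: registry.k8s.io/external-dns/external-dns:v0.14.0
```

Each pod would keep its own spec and lifecycle; only microVM placement would be shared.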

Which service(s) is this request for?

Fargate with EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?

My workload includes many operator containers that consume little memory and are idle most of the time. For example, I run ExternalDNS, Bitnami Sealed Secrets, and several other containers that only really do work during a deployment. All of these mostly idle "operator" containers together represent more cost with Fargate than my actual user-facing applications. If I could consolidate all of these mostly idle containers into a smaller number of microVMs instead of wasting a whole microVM for each of them, Fargate would be much more compelling. As it stands now, Fargate doesn't make economic sense for me because I would be spending more to run mostly idle infra containers than I'm spending to run my actual applications.

EDIT:

I would also think that supporting multiple pods per microVM would pave the way for supporting DaemonSets on Fargate, which would be useful for cases like Fluent Bit, where the only option today is running it as a sidecar in the same pod.

Are you currently working around this issue?

Not using Fargate due to this issue. Some work-arounds that come to mind are:

  • Use small EC2 workers for these mostly idle workloads and Fargate for the rest
  • Group these mostly idle containers into one pod (yuck)
  • Encourage the maintainers of these "mostly idle" projects to switch to a more event-driven architecture that runs short-lived jobs instead of running perpetually (probably not happening soon)
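The second workaround (grouping the idle operators into one pod) would look roughly like the sketch below; container names and images are illustrative. The obvious downside is coupled lifecycles: the containers share one pod spec, so redeploying or crashing one disrupts the whole group.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: idle-operators
spec:
  containers:
    # All mostly-idle operators packed into a single pod, so Fargate
    # places them in one microVM instead of one microVM each.
    - name: external-dns
      image: registry.k8s.io/external-dns/external-dns:v0.14.0
    - name: sealed-secrets-controller
      image: docker.io/bitnami/sealed-secrets-controller:latest
```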

Metadata

Assignees

No one assigned

    Labels

    EKS (Amazon Elastic Kubernetes Service), Fargate (AWS Fargate), Proposed (Community submitted issue)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests