Discussion: Alternate containerization strategies for workers #2037

Closed · topherbullock opened this issue Feb 14, 2018 · 3 comments

Comments

@topherbullock (Member) commented Feb 14, 2018

A Modest Proposal

With the consolidation of some worker operations to a new worker component (for #1959), it makes sense to start treating workers as a separate interface which encapsulates a great deal more than just registration.

Currently we treat workers as "a registered Garden and BaggageClaim" (from an architectural standpoint), rather than as a separate "thing which can create containers and volumes". This new "worker" component will eventually be handling GC of containers and volumes, so it will need to know at least how to delete containers and volumes... which opens the door to this:

Workers should be the interface for the ATC to say "hey I need some work done", where "work" involves containers and volumes.

For now, let's stick to the topic of Containers.

The ATC should not care how the work gets done; the worker should deal with the details of containerization.

There have been a lot of conversations and suggestions around how to better deploy Concourse on, and leverage, K8s, Nomad, and other container orchestration technologies, rather than deploying workers into containers and (yo dawg) creating Garden containers inside those containers.

Here are some terms for the types of containers Concourse creates, just to start off with a common vernacular:

  • Task Containers
  • Resource Check Containers
  • Resource Fetching Containers (includes image_resource)
  • Resource Put Containers
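
To make that concrete, here is a minimal sketch in Go of a worker interface that abstracts the container types above. This is hypothetical; none of these names are Concourse's actual API, and it exists only to anchor the discussion:

```go
// Package worker sketches a hypothetical containerization interface.
package worker

import "context"

// WorkloadType enumerates the kinds of containers Concourse creates.
type WorkloadType int

const (
	TaskWorkload WorkloadType = iota
	ResourceCheckWorkload
	ResourceFetchWorkload // includes image_resource fetches
	ResourcePutWorkload
)

// ContainerSpec describes what the ATC needs, not how to provide it.
type ContainerSpec struct {
	Type     WorkloadType
	Platform string
	Tags     []string
	TeamID   int
}

// Container is deliberately minimal; Garden, K8s, or Nomad backends
// would each implement it differently.
type Container interface {
	Handle() string
	Run(ctx context.Context, path string, args []string) (exitStatus int, err error)
}

// Worker is the ATC's entire view of containerization.
type Worker interface {
	CreateContainer(ctx context.Context, spec ContainerSpec) (Container, error)
	FindContainer(ctx context.Context, handle string) (Container, bool, error)
	DestroyContainer(ctx context.Context, handle string) error
}
```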

Hopefully with some discussion we can come to a collective agreement about what types of "work to be done" should be abstracted by the worker, the interface a worker should have for containerization, and the options for various containerization "strategies" beyond Garden.

@evanphx (Contributor) commented Feb 15, 2018

Having basically done this exercise to get nomad-atc (https://github.com/nomad-ci/nomad-atc) going, I'm happy to give notes on what a new, more abstract interface might look like.

Probably the biggest thing is removing everything related to scheduling. No more calls to try to find the worker for a container, etc. Basically the interface should just ask for a container and the concrete implementation should do all that finding.
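
To illustrate, here is a hypothetical call site reusing the Worker/ContainerSpec/Container types from the sketch in the issue description; worker placement never appears in the ATC's code:

```go
// runCheck asks the abstract interface for a container; the concrete
// implementation (Garden pool, Nomad, K8s, ...) decides where it runs.
func runCheck(ctx context.Context, cluster Worker, spec ContainerSpec) (int, error) {
	container, err := cluster.CreateContainer(ctx, spec)
	if err != nil {
		return 0, err
	}
	// /opt/resource/check is the conventional entrypoint for Concourse
	// resource check containers.
	return container.Run(ctx, "/opt/resource/check", nil)
}
```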

One interesting and deep interface is the Volume abstraction. It is hit during execution, and the ATC also has scheduling expectations around it, since the ATC decides whether it needs to stream a volume to a new location. All of that needs to move behind a more abstract interface.
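
As a hypothetical sketch (illustrative names, building on the earlier sketch), the streaming decision could move behind the interface like this:

```go
// Volume hides where the bits actually live.
type Volume interface {
	Handle() string
	// StreamTo makes this volume's contents available in dst. Whether
	// anything is copied (bind mount, COW clone, network stream) is the
	// implementation's decision, not the ATC's.
	StreamTo(ctx context.Context, dst Volume) error
}
```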

@topherbullock (Member, Author) commented

@evanphx Yeah, there are a lot of fuzzy boundaries in the responsibilities of ATCs and workers, and a lot of scheduling context within the ATC. The ATC currently cares more deeply than it should about which worker containers and volumes live on; the "random" scheduling strategy helps this a bit, but there are a lot of steps to getting a worker, as you've identified.

Regarding the domain of the ATC and workers:
My thought around this is to sharpen the currently fuzzy boundaries between the ATC's knowledge of the state of the workers and the real operations on workers. As a first step, I think the ATC's interface to a specific worker should be limited to directly saying "hey worker, I need a container for x, y, or z", or even more abstractly, "I need you to do work of type x, y, or z", where the "container" or "volume" aspect doesn't matter. Let's call the things the ATC needs from a worker a "workload" to keep it intentionally abstract.
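
As a hypothetical sketch of that framing (reusing WorkloadType from the sketch in the issue description), the most abstract form might look like:

```go
// Workload is everything the ATC knows how to ask for; "container" and
// "volume" never appear in the signature.
type Workload struct {
	Type   WorkloadType
	Inputs []string // names of the artifacts the workload consumes
}

// WorkloadResult is all the ATC gets back.
type WorkloadResult struct {
	ExitStatus int
}

type WorkloadRunner interface {
	RunWorkload(ctx context.Context, workload Workload) (WorkloadResult, error)
}
```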

A good starting point for me is looking at the existing worker.Client, which has some reasonable (albeit poorly named) responsibilities as an interface to a worker, like FindOrCreateContainer, FindContainerByHandle, and LookupVolume, but those are all muddied by database operations which mix the "conceptual" ATC view of the world with the "concrete" state of the worker. The domain of the client to a single worker should be limited to the specific operations that the ATC knows it requires of that worker.
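
One hypothetical way to untangle that (illustrative names, not the real worker package) is to keep the database-facing view and the concrete per-worker operations in separate interfaces:

```go
// ContainerRepository is the ATC's "conceptual" view, backed by the database.
type ContainerRepository interface {
	FindContainerMetadata(handle string) (workerName string, found bool, err error)
}

// SingleWorkerClient is the "concrete" state of one worker: only the
// operations the ATC knows it requires of that worker.
type SingleWorkerClient interface {
	FindOrCreateContainer(ctx context.Context, spec ContainerSpec) (Container, error)
	FindContainerByHandle(ctx context.Context, handle string) (Container, bool, error)
}
```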

Regarding scheduling:
The worker.Client also has operations like Satisfying, AllSatisfying, and RunningWorkers, which result from this interface being overloaded: it is also used to survey an entire cluster of workers and determine their capabilities. The selection of a specific worker, or the narrowing of the pool based on constraints, should live in one place; we shouldn't perform this shotgun surgery, whittling down the list of workers as the ATC goes about running a particular workload.

The important bits that the scheduling and worker-selection aspects really cover are the following (see the sketch after this list):
a. 'tags' and 'platform' can be used to define which subset of workers the user wants to run a specific workload.
b. workers can be exclusive to a specific team or part of a global pool, so some teams can only run workloads on a specific subset.
c. workers advertise support for different resource types, so certain resource workloads can only run on workers that support them.
d. all of the above apply to a single workload, so we need to hunt down the one worker that has tag 'a', is in team 'b', and supports resource 'c'.
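
A hypothetical sketch of putting all of that in one place (reusing Worker from the earlier sketch; names are illustrative):

```go
// WorkerSpec carries every constraint from (a)-(d) at once.
type WorkerSpec struct {
	Platform     string
	Tags         []string
	TeamID       int
	ResourceType string
}

// Pool narrows the cluster down in a single step, instead of whittling
// the list of workers across many call sites in the ATC.
type Pool interface {
	Satisfying(ctx context.Context, spec WorkerSpec) (Worker, error)
}
```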

The more clearly we define the interface between the ATC and workers, and the less the ATC cares about the specifics of the workload to run, the closer we get to offloading the scheduling concerns from the ATC as well. As a first pass, I think it's safe to treat a multi-host container orchestrator as a single "worker" for now, and use that to drive out the worker client's responsibilities.

@ddadlani (Contributor) commented

Some work on this has already begun. Closing this in favor of #3695

Closed and moved from Icebox to Done in the Runtime project on Jul 30, 2019.