Add content about replicationController to document use patterns #492
Currently, replication controllers define an exact replica count. Are there disadvantages to letting a replication controller define more of a policy instead: at least X, at most X, or exactly X?
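To make the question concrete, here's a minimal sketch of what such a policy could look like; these names are hypothetical, not an API proposal:

```go
// Hypothetical policy shape, purely for discussion.
type ReplicationPolicy struct {
	AtLeast int // never let the number of running replicas drop below this
	AtMost  int // never let the number of running replicas exceed this
}

// "Exactly X" is then just the case where both bounds agree.
func Exactly(x int) ReplicationPolicy {
	return ReplicationPolicy{AtLeast: x, AtMost: x}
}
```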
Since replication controllers aren't really aware of overlapping sets, I guess you can't use such a policy to guide allocation between two pools; they would just end up fighting each other unproductively.
I'm thinking about this from the mechanics of doing a deployment by composing replication controllers, with controllers as they are today.
The thought around atLeast was about explicitly representing the invariant you want during a deployment that keeps the service available.
I'm not sure that's better... the plain for loop for deletes seems more reasonable. Not a huge deal; I'm just trying to work through how we'd model common deployment cases for end users.
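For comparison, a rough sketch of that plain for loop for deletes, with `listPods` and `deletePod` as hypothetical stand-ins for the real API calls:

```go
type Pod struct{ ID string }

// Hypothetical stand-ins for the apiserver calls.
func listPods(selector string) ([]Pod, error) { return nil, nil }
func deletePod(id string) error               { return nil }

// scaleDown deletes pods one at a time until only `target` remain.
func scaleDown(selector string, target int) error {
	pods, err := listPods(selector)
	if err != nil {
		return err
	}
	for i := len(pods) - 1; i >= target; i-- {
		if err := deletePod(pods[i].ID); err != nil {
			return err
		}
	}
	return nil
}
```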
Overlapping replicationController sets: I think we should validate that overlap isn't possible when a replicationController is created. To determine this analytically, without looking at the combinations of labels actually attached to pods, every replicationController label selector would need at least one key in common, with a value different from that of every other replicationController.
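A sketch of that check, assuming some agreed-upon common key (the key name and function are made up):

```go
// mayOverlap reports whether two label selectors could select the same pods,
// judging only by the designated common key. If either selector omits the
// key we can't prove disjointness analytically, so we conservatively report
// a possible overlap.
func mayOverlap(a, b map[string]string, commonKey string) bool {
	va, aok := a[commonKey]
	vb, bok := b[commonKey]
	if !aok || !bok {
		return true
	}
	return va == vb
}
```

On creation we'd then reject the new controller if `mayOverlap` returns true against any existing controller's selector.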
Policy: As @brendandburns mentioned, we intended the policy to go in a separate auto-scaling policy entity, in a higher-level layer / separate component.
As @lavalamp pointed out, it is possible to create a new replicationController and gradually scale it up while scaling the original down.
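That flow might look roughly like this; `resize` is a hypothetical helper that sets a controller's replica count:

```go
// rollingDeploy shifts capacity from oldRC to newRC one replica at a time,
// so the combined replica count never drops below the original total.
func rollingDeploy(oldRC, newRC string, total int) error {
	for i := 1; i <= total; i++ {
		if err := resize(newRC, i); err != nil {
			return err
		}
		if err := resize(oldRC, total-i); err != nil {
			return err
		}
	}
	return nil
}

// resize is a hypothetical stand-in for updating a controller's replica count.
func resize(rc string, replicas int) error { return nil }
```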
Or, one could do the Asgard trick of creating a new replicationController and then shifting traffic to it. It's possible to add/remove instances from a service by tweaking their labels. For instance, one could toggle the value of an in_service=true/false label.
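Toggling the label could be as simple as this sketch (`updatePodLabels` is a hypothetical helper):

```go
import "strconv"

// setInService flips the in_service label; a service whose selector includes
// in_service=true will then pick the pod up or drop it accordingly.
func setInService(podID string, inService bool) error {
	return updatePodLabels(podID, map[string]string{
		"in_service": strconv.FormatBool(inService),
	})
}

// updatePodLabels is a hypothetical stand-in for a pod-label update call.
func updatePodLabels(podID string, labels map[string]string) error { return nil }
```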
The update demo changes the template of a single replication controller and then kills instances one by one until they have all been replaced. I don't like that, due to the risk of unpredictable replacements and the complexity involved in rollbacks. Splitting out the template (#170) would address the rollback issue, but not the unpredictability. It seems to me that the atMost approach suffers from the same risk.
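For contrast, the update demo's flow is roughly this (`setTemplate` and `killPod` are hypothetical helpers):

```go
// updateInPlace points the single controller at a new template, then kills
// pods one at a time so the controller recreates each from the new template.
// Which replacement lands where is up to the system, hence the
// unpredictability concern above.
func updateInPlace(rc, newTemplate string, podIDs []string) error {
	if err := setTemplate(rc, newTemplate); err != nil {
		return err
	}
	for _, id := range podIDs {
		if err := killPod(id); err != nil {
			return err
		}
	}
	return nil
}

// Hypothetical stand-ins for the real API calls.
func setTemplate(rc, template string) error { return nil }
func killPod(id string) error               { return nil }
```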
That said, I see what you mean that atLeast/atMost are not auto-scaling policies, but invariants to maintain. I could imagine that flexibility being useful. For example, a worker pool manager could use it to opportunistically request resources, where some pod requests may fail. Or, perhaps we need to temporarily create a new pod for some reason (CRIU-based migration?) but don't want an old one to be killed.
So, I like the idea, just not the use case. :-)
FWIW, exactly X could be implemented by combining atLeast and atMost.
Agreed that without a kill policy on the replication controller, the use cases described don't work. Is a kill policy something you've discussed for the replication controller (oldest first, pods that don't match the current template first, pods on the most overloaded hosts)?
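To sketch what those could look like, a kill policy could just be an ordering over candidate pods; the types and policies below are hypothetical, mirroring the examples above:

```go
import (
	"sort"
	"time"
)

type podInfo struct {
	ID              string
	Created         time.Time
	MatchesTemplate bool
}

// A KillPolicy returns candidate pods in kill-preference order.
type KillPolicy func(pods []podInfo) []podInfo

// oldestFirst prefers killing the longest-running pods.
func oldestFirst(pods []podInfo) []podInfo {
	sort.Slice(pods, func(i, j int) bool {
		return pods[i].Created.Before(pods[j].Created)
	})
	return pods
}

// staleTemplateFirst prefers pods that no longer match the current template.
func staleTemplateFirst(pods []podInfo) []podInfo {
	sort.SliceStable(pods, func(i, j int) bool {
		return !pods[i].MatchesTemplate && pods[j].MatchesTemplate
	})
	return pods
}
```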
@smarterclayton I agree that kill policy would be useful and all of the policies you mention are reasonable in particular scenarios. However, I see this as a rich, unbounded policy space, much like scheduling. IIRC, Joe and/or Brendan proposed kill policies early on and I asked for them to be removed.
Instead, I propose that we leave the default policy entirely up to the system, much like scheduling, so we can take as many factors into account as we like. Beyond that, I'd like to enable the user to implement an arbitrary policy using event hooks (#140). For instance, I could imagine POSTing to a URL whenever a pod needed to be killed. The user's hook would have N seconds to respond (perhaps configurable), after which the replication controller would kill its default choice.
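A rough sketch of that hook flow, with the URL, payload shape, and timeout all assumed for illustration:

```go
import (
	"io"
	"net/http"
	"strings"
	"time"
)

// chooseVictim POSTs the controller's default choice to the user's hook and
// waits up to `timeout` for a reply naming the pod to kill instead. On
// timeout, error, or an empty reply, the default choice stands.
func chooseVictim(hookURL, defaultPod string, timeout time.Duration) string {
	client := &http.Client{Timeout: timeout}
	body := strings.NewReader(`{"candidate": "` + defaultPod + `"}`)
	resp, err := client.Post(hookURL, "application/json", body)
	if err != nil {
		return defaultPod // hook unreachable or too slow: kill the default
	}
	defer resp.Body.Close()
	choice, err := io.ReadAll(resp.Body)
	if err != nil || len(choice) == 0 {
		return defaultPod
	}
	return string(choice)
}
```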