Placement Pools #21
Looks good. CC has a first-class notion of stacks. How would this play out? Right now you have to tell CC up front the stacks in play, which is annoying. Changes to your "tags" would require a restage? Will there be a way to see all the tags exposed in Diego?
@fraenkel In order to model stacks as placement pools we would need to change the placement pool concept to allow for pools the user can set and pools the operator controls, plus some way to do conflict resolution: a developer can specify a stack in today's world, but the placement pools for an app are going to be derived from the space it's in, as written by @onsi.
@jbayer Sure. Which will mean that we will remove any PP tag that begins with stack:, so they cannot do something stupid.
@fraenkel - a reply point-by-point:
From Diego's perspective you cannot modify tags on a DesiredLRP; you must create a new one. We are free to decide how CC interacts with this API. We don't have a strong distinction in CC today between "I've made a change that requires a restage", "I've made a change that requires a restart", and "I've made a change that requires neither". This is a source of great complexity and confusion between the CLI and CC, and it is unfortunate. Strictly speaking, changing tags would only require a restart; in today's world, however, I think we tend to express that as a full-blown restage.
Who's the "we" in that sentence? Diego knows nothing about stacks, but you do have to provide a rootfs when you deploy the Cell. I imagine we would tag Cells based on what rootfs they have, and using "stack:lucid64" and "stack:trusty64" seems like a natural tagging that jibes with CC.
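To make that concrete, placement then reduces to a subset check: a Cell is eligible only if its tag set contains every tag the DesiredLRP requires. A hypothetical sketch (the names and shapes here are illustrative, not the actual Diego types):

```go
package main

import "fmt"

// cellMatches reports whether a Cell's tag set satisfies every tag a
// DesiredLRP requires. Illustrative only; not the real Diego scheduler.
func cellMatches(cellTags map[string]bool, required []string) bool {
	for _, tag := range required {
		if !cellTags[tag] {
			return false
		}
	}
	return true
}

func main() {
	lucidCell := map[string]bool{"stack:lucid64": true}
	trustyCell := map[string]bool{"stack:trusty64": true}

	fmt.Println(cellMatches(lucidCell, []string{"stack:lucid64"}))  // true
	fmt.Println(cellMatches(trustyCell, []string{"stack:lucid64"})) // false
}
```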
I don't have any strong opinions here. It makes sense to me that they be as parallel as possible (except, perhaps, that we wouldn't distinguish between staging and running).
As for seeing all the tags exposed in Diego: the Receptor API's /v1/cells endpoint would show you the tags on each Cell.
I said this in the writeup: Finally: I imagine stack will remain a first-class concept in the CC. Either the CC-Bridge or the CC itself will need to fold the stack into the PP by appending it to the Require field. Telling CC up front about the stacks is annoying, I agree - but if you give users the ability to specify arbitrary stacks then having validations (by e.g. providing stacks up-front) is probably necessary. It's a little ugly but I think this picture could work:
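The "fold the stack into the PP" step above could be as simple as appending a stack: tag to the Require list before the DesiredLRP is handed to Diego. A hypothetical sketch of what the CC-Bridge might do (function and field names are assumptions, not the real CC code):

```go
package main

import "fmt"

// foldStackIntoPool appends a "stack:<name>" tag to a placement pool's
// Require list, as the CC or CC-Bridge might do before sending the
// DesiredLRP to Diego. Hypothetical sketch.
func foldStackIntoPool(require []string, stack string) []string {
	out := append([]string{}, require...) // don't mutate the caller's slice
	return append(out, "stack:"+stack)
}

func main() {
	pool := []string{"zone:prod"}
	fmt.Println(foldStackIntoPool(pool, "trusty64")) // [zone:prod stack:trusty64]
}
```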
The one sore point is again stacks. If PPs push me toward a Cell which provides a stack, switching PPs could push me to a Cell with a completely different stack. I would assume in this case we would restage? This could only happen using the Diego APIs and not CC.
From CC, a PP contains a set of "tags". One of those tags could be "stack:trusty". We didn't say you couldn't have a tag that begins with stack:.
I am a bit concerned about the usability of PPs. It seems like it's for admins only. I guess I would like to see a set of use cases that we are trying to solve, to verify that admins are truly the only ones involved. I can imagine use cases where that isn't true, e.g. special hardware, updated software, etc., that a space/app developer would like to experiment or test against. It would seem that it's doable with admins only, just a bigger pain, since the Org manager has to create the space and then the CF operator needs to apply PPs to it. I believe you answered what I wrote in your last section above.
/v1/cells won't cut it, not when you have 400+ cells. But I was just curious.
I think stack and PP are separate concepts in CC and in CC's API. Under the hood, CC combines stack + PP to produce the final PP that's sent to Diego. CC could have rules around disallowing PP entries that begin with stack:. With this we would have the flexibility in CC to do things like:
Does that make sense? I can update the doc to clarify this once we get some consensus.
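In code, the "disallow user-supplied stack: entries" rule could be a simple validation before CC merges the pool with the derived stack. A hypothetical sketch (not the real CC validation code):

```go
package main

import (
	"fmt"
	"strings"
)

// validatePoolTags rejects user-supplied placement-pool tags that begin
// with the reserved "stack:" prefix, since CC derives that tag itself.
// Hypothetical sketch of the rule discussed above.
func validatePoolTags(tags []string) error {
	for _, t := range tags {
		if strings.HasPrefix(t, "stack:") {
			return fmt.Errorf("tag %q is reserved: stack is derived by CC", t)
		}
	}
	return nil
}

func main() {
	fmt.Println(validatePoolTags([]string{"zone:prod"}))               // <nil>
	fmt.Println(validatePoolTags([]string{"stack:lucid64"}) != nil)    // true
}
```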
I agree that there are some use cases where this would be developer-driven (your example of hardware stacks is a good one). The primary concern, though, is one of security/SLA-like behavior: things like separating prod from staging environments, etc.
Yeah, but
I think we do need to prevent stupidity and block stack:-prefixed tags.
This one brings forth a whole set of issues. As we have already discussed, there's the admin-vs-user question. But if you dig deeper, you now have a potential restage. As an end user, do I even know what the tags of the currently running app are? Yes, we have them in Diego-land, but from CC we have nothing. I re-read your statement all the way above and was wondering how anyone could tell whether they should restart/restage their app due to tag changes.
You are correct: building the complete list from /v1/cells is trivial enough. Let's see what people really need.
I've updated the proposal further clarifying how stacks & placement pools relate. I'd like to e-mail vcap-dev with this soon. On this:
I wonder if this should really be a restart. Worst case we say that a PP ∆ always triggers a restage. Slightly better: we teach the CC which tags require a restage vs. not (could see that getting hairy real quick, though). I imagine we ship this, then get feedback around how people are actually using it, and then we'll know ;)
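The "teach the CC which tags require a restage" idea boils down to a lookup of tag prefixes whose change invalidates the staged droplet; everything else only needs a restart. A hypothetical sketch (the prefix set is invented for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// restagePrefixes lists tag prefixes whose change invalidates the
// droplet. Only "stack:" is listed here, purely as an illustration.
var restagePrefixes = []string{"stack:"}

// restageRequired reports whether changing the given tag should trigger
// a full restage rather than a plain restart. Hypothetical sketch.
func restageRequired(changedTag string) bool {
	for _, p := range restagePrefixes {
		if strings.HasPrefix(changedTag, p) {
			return true
		}
	}
	return false // everything else only needs a restart
}

func main() {
	fmt.Println(restageRequired("stack:trusty64")) // true: droplet is stack-specific
	fmt.Println(restageRequired("zone:prod"))      // false: a restart suffices
}
```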
I agree, but we do need to come to terms on this issue. I just want us to resolve it as part of completing this proposal.
I can see a few options:
I prefer this option. Any downsides to it?
I, too, prefer option 2. We haven't had more than one stack until now.
OK, option 2 it is -- which is already what I have in the doc. I'll try to emphasize the point when I e-mail vcap-dev.
@onsi - If a Task or LRP does not specify I'm assuming the intention is to eventually paint Windows cells with a
@flavorjones - yes, if you have a mixed-OS collection of Cells you'll almost certainly have to provide a constraint, but Diego's not going to enforce that. Windows would be painted with the
Hey all, I've thought about this a bunch more and I think I have a better way of dealing with Stack and supporting multiple rootfses (something we're going to need to make the transition to Diego + cflinuxfs2 smoother). I've updated the proposal here. I quite like where things ended up: rootfs is much better defined, and placement pools aren't polluted by stacks. The only remaining awkward bit is the translation in NSYNC/Stager from "stack" to "rootfs", but I think that's OK. Thoughts?
I prefer the Rep get the info from Garden; that way we have an easier time keeping things in sync. The Rep already needs Garden up and running before it does anything, so we might as well pull some data from it as well. The preloaded stuff makes sense since we actually match on the rootfs name.

Today we assume Docker is Linux. If Docker becomes available on Windows, what is the RootFSProvider called? DockerWin?

I am leaning toward Option A because I don't see how Option B really plays out. It's easy to marshal this stuff but difficult to unmarshal, since we have no idea what it is unless we carry the type and then somehow map it to something concrete.

I was wondering why we bother with 2 fields. Couldn't we just get away with a URL? Docker is already handled, and for the other it would just be something like preload:///cflinux2.
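The single-URL idea could be made concrete by letting the scheme name the provider and the rest name the image. A hypothetical sketch using Go's standard net/url parsing (the scheme names are assumptions, not Diego's actual ones):

```go
package main

import (
	"fmt"
	"net/url"
)

// providerFor derives the rootfs provider from a single URL: the scheme
// names the provider, the remainder names the image. Hypothetical
// sketch of the "just use a URL" suggestion.
func providerFor(rootfs string) (provider, name string, err error) {
	u, err := url.Parse(rootfs)
	if err != nil {
		return "", "", err
	}
	return u.Scheme, u.Host + u.Path, nil
}

func main() {
	p, n, _ := providerFor("preload:///cflinux2")
	fmt.Println(p, n) // preload /cflinux2

	p, n, _ = providerFor("docker://docker.io/library/ubuntu")
	fmt.Println(p, n) // docker docker.io/library/ubuntu
}
```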
I think ppc would necessarily have a different rootfs. Re docker:
I like the idea of just having the scheme -- will update.
I'm thinking of killing off
Met with @dieucao @zrob @ematpl today. Couple of tweaks. We're going to go with A. One can set the default staging
Stories are in. I'm closing it out.
https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/accepted_proposals/placement_pools.md
is the MVP proposal for placement pools. I'll post details to vcap-dev after a round of initial internal feedback!