
Add ability to orchestrate multiple (depending) deployments #8507

Closed
livelace opened this issue Apr 14, 2016 · 13 comments

@livelace

Main discussion - openshift/jenkins-plugin#33

@ironcladlou

If your concern is the updater scaling up without regard to pod readiness, we'll need to take the discussion to origin/kubernetes, because the rolling updater progresses scale-ups based on the replica count of the RC, which doesn't take readiness into account.
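The distinction above can be sketched in a few lines. This is a hypothetical illustration, not the actual Kubernetes rolling-updater code; the `Pod`, `replica_count`, and `ready_count` names are invented for the example:

```python
# Hypothetical sketch (assumption: NOT the real rolling updater) showing why
# progressing on replica count alone ignores pod readiness.

from dataclasses import dataclass

@dataclass
class Pod:
    phase: str   # e.g. "Running", "Failed"
    ready: bool  # result of the readiness probe

def replica_count(pods):
    # How the updater effectively progresses: every pod owned by the RC
    # counts toward the replica total, regardless of its state.
    return len(pods)

def ready_count(pods):
    # A readiness-aware check would only advance on pods that pass probes.
    return sum(1 for p in pods if p.phase == "Running" and p.ready)

pods = [Pod(phase="Failed", ready=False)]
print(replica_count(pods))  # 1 -- the updater would keep scaling up
print(ready_count(pods))    # 0 -- a readiness-aware check would block
```

With a single failed pod, the replica count says "1 running" while the readiness count says "0 ready", which is exactly the gap this thread is about.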

@0xmichalis
Contributor

@smarterclayton I remember you were saying the rolling updater shouldn't care about readiness when scaling up.

@smarterclayton
Contributor

Growth should not be readiness limited. Deployments must satisfy their readiness criteria to succeed and ensure that failed deployments are recognized.

@bparees
Contributor

bparees commented Apr 15, 2016

The fundamental issue in my mind is that the replication controller reports an active count of 1 despite the fact that the only pod that exists is in a Failed state.

@livelace
Author

Hello. Any news about this?

@0xmichalis
Contributor

Not really. Unless somebody else wants to take a stab at it (which will first require discussion upstream), I am not going to get to this for some time (most probably not before 1.3).

@0xmichalis 0xmichalis changed the title Doesn't detect failed replication controller/deployment configuration Add ability to orchestrate multiple (depending) deployments Jun 8, 2016
@0xmichalis
Contributor

@GrahamDumpleton also had a request about orchestrating multiple depending deployments

@livelace
Author

livelace commented Jun 8, 2016

It's great :)

@0xmichalis
Contributor

AppController seems like it's trying to solve a problem related to this issue:

https://www.youtube.com/watch?v=BXRToNV4Rdw
kubernetes/kubernetes#29453
https://github.com/Mirantis/k8s-AppController

@smarterclayton
Contributor

smarterclayton commented Aug 25, 2016 via email

@livelace
Author

I'm glad these mechanisms are appearing. In the video, an interesting question was asked about conditions between namespaces.

@0xmichalis
Contributor

We need to see a concrete proposal before we get too excited - they're not doing a good job of following the Kube process yet, so it's hard to say how it will fall out.

Having dependencies as a third-party object is a nice idea. Maybe something we can think about more when we introduce a graph API? cc: @deads2k

@deads2k
Contributor

deads2k commented Aug 26, 2016

> Having dependencies as a third-party object is a nice idea. Maybe something we can think about more when we introduce a graph API? cc: @deads2k

Depends on how tightly you want to control ACLs. Assuming you want correct data and you can't leak information, you cannot pre-compute all possible dependency graphs to show to a user. Inside a project it's probably possible, but since projects in OpenShift depend on external projects for things like images and builds, an editor in project A does not have the same access as another editor in A who also has access to B. They should get different graphs.

Precomputation is an interesting thought for a few common cases, but stitching would still be required.
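The ACL point above can be made concrete with a tiny sketch. This is a hypothetical illustration, not OpenShift's authorization code; the `visible_graph` function, the edge representation, and the project names are all invented for the example:

```python
# Hypothetical sketch (assumption: invented API, not OpenShift's) showing why
# the same dependency graph must be filtered per user: two editors of project A
# with different access to project B should see different graphs.

def visible_graph(edges, accessible_projects):
    # Keep only edges whose endpoints both live in projects the user can see.
    return [
        (src, dst) for src, dst in edges
        if src[0] in accessible_projects and dst[0] in accessible_projects
    ]

# Nodes are (project, resource) pairs: project A's build pulls an image
# from project B, and A's deployment depends on A's build.
edges = [
    (("A", "build"), ("B", "imagestream")),
    (("A", "deployment"), ("A", "build")),
]

print(visible_graph(edges, {"A"}))        # editor with access to A only
print(visible_graph(edges, {"A", "B"}))   # editor with access to A and B
```

The first editor sees only the intra-project edge, while the second sees the cross-project dependency as well, which is why a single pre-computed graph can't serve both without leaking information.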
