a new DeploymentConfig with replicas=1 creates a ReplicationController with replicas=0 #9216
BTW if I manually scale the RC it works fine and scales up a new pod. I'm just not sure of the magic needed to make the DC create an RC with replicas=1 to start with?
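(For reference, a sketch of the manual scaling described above, with a hypothetical rc name - the first deployment of a DC named mydc produces an rc named mydc-1:)

```sh
# Scale the rc directly, bypassing the deployer (hypothetical rc name)
oc scale rc mydc-1 --replicas=1
```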
I tried
If it's any help, here's the actual YAML used to create the DC, without the status stuff etc.
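(The YAML itself was not preserved in this copy of the thread. A minimal DeploymentConfig along the lines being discussed - all names hypothetical, not the reporter's actual file - might look something like:)

```sh
oc create -f - <<'EOF'
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: mydc
spec:
  replicas: 1
  selector:
    app: mydc
  triggers:
  - type: ConfigChange
  template:
    metadata:
      labels:
        app: mydc
    spec:
      containers:
      - name: mydc
        image: example/myimage:latest   # placeholder image
EOF
```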
Here's the log of the deploy pod:
In case this helps:
There is nothing special about your deployment config. I ran the template you provided and managed to get that pod up, although I don't have the image you are pointing to, so I get ImagePullBackOff.
The rc is always created with zero replicas and then handed off to the deployer pod, which is responsible for scaling it up. There are a couple of cases where we can create an rc with replicas (#8315) but I don't think that's an issue atm. Are you able to run other pods in your environment? Can you post the output of
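(A sketch of the usual checks being asked for here, with hypothetical names - the first deployment of a DC named mydc creates an rc mydc-1 and a deployer pod mydc-1-deploy:)

```sh
oc get pods               # can pods run in this namespace at all?
oc get rc                 # was the rc created, and at what replica count?
oc logs mydc-1-deploy     # what did the deployer pod itself say?
```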
Nvmd, you can run pods. You can also manually scale, as you said. The deployer pod should be able to scale your rc for you. Are you running anything else in the same namespace that may be interfering with the deployer?
No, it's in a namespace by itself with nothing else running at all. FWIW others have managed to get it to scale up too; I've just no clue at all why the deployer times out for me and does nothing.
I suspect there's some issue - but the error message is totally hidden?
The timeout message comes from the scaler - agreed that it's super-cryptic and needs fixing.
Yeah, if I knew why it's timing out and what THE condition is, it'd really help ;)
@jstrachan I opened kubernetes/kubernetes#27048 upstream for this and backported it in Origin in #9228. Can you pull my branch, rebuild both OpenShift and the deployer image, and retest in your environment? It should help you debug your issue. Or you could wait for the upstream pull to merge (not anytime soon due to the 1.3 code freeze upstream).
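(A rough sketch of the rebuild being asked for, using the build scripts that shipped in the openshift/origin repo of that era - the remote and branch names below are hypothetical, and the script names are worth verifying against your checkout:)

```sh
# Fetch and check out the fix branch (hypothetical remote/branch names)
git fetch kargakis
git checkout -b scaler-debug kargakis/scaler-debug

hack/build-release.sh     # cross-compile the openshift binaries into a release layout
hack/build-images.sh      # build the origin docker images (deployer included) from that release
```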
@Kargakis thanks! So I tried building your branch. It failed after making the binaries (though I've never tried building OpenShift from source before). I replaced the binaries in the vagrant VM I'm using to run Origin from a binary distro and tried again. Here are the logs of the deployer:
I'm guessing, though, that my issue is that the
Aha - I forgot to use:
build working much better now ;)...
@Kargakis I've replaced the binaries and have local docker images of your branch, but it seems that if I restart OpenShift and create pods it's still using the previous versions, e.g. it's using
Is there some way to make the new OpenShift build use the locally built docker images of things like the pod & deployer?
@jstrachan I usually run
I've got the images, I just couldn't figure out how to make the new binaries use the newly built docker images (which have the label
let's see if that helps...
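(For context on the image-selection question above: one knob that existed in Origin of that era was the master's image format setting, which controls which image names/tags infra components like the deployer are pulled from. A hedged sketch - the flag names were on `openshift start` around 1.x, so verify against your version:)

```sh
# Point the master at locally built :latest images instead of release-versioned tags
openshift start --images='openshift/origin-${component}:latest' --latest-images=true
```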
@Kargakis yay! Your branch gave me a reason it didn't work; here are the logs from the deployer:
Cool! The scaler has been ignoring all errors except invalid errors, when it should ignore only update conflicts.
@Kargakis thanks for your help!
Thanks to @Kargakis and @jimmidyson we've figured out what went wrong. It turns out the namespace I was trying to use the DeploymentConfig inside was created via the Kubernetes Namespace REST API rather than the OpenShift Project REST API, so the necessary deployer RoleBinding wasn't created - hence the issue! If I zapped the project and recreated it via
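(Sketched with hypothetical names, the two usual ways out of that state:)

```sh
# 1) Recreate the namespace through the project-request flow, which installs the
#    default rolebindings (this is what zapping and recreating the project did)
oc delete project myproject
oc new-project myproject

# 2) Or grant the deployer role to the namespace's deployer service account by hand
oc policy add-role-to-user system:deployer -z deployer -n myproject
```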
Great! @deads2k do we need to start warning about missing rolebindings in
FWIW I've added the lazy creation of the
@jimmidyson suggested a nicer fix: not to create a Project via the Project REST API, but to use ProjectRequest instead, which works much better now; I don't have to manually add any RoleBindings any more. Kinda confusing REST API, mind you! :) It'd be less confusing to return a 404 on create Namespace or Project (with a comment to mention ProjectRequest), as they generally don't work too well if folks wanna use a DeploymentConfig or S2I.
@jstrachan "Normal" (non-cluster admin) users don't have access to the namespaces endpoint or the create project endpoint - can only create project via projectsrequests endpoint AFAIK - so shouldn't be a big problem (most users shouldn't be cluster-admins). |
Most users can't see rolebindings. I'm fine with checking them as long as we don't display any messages if they don't have the power to see them. Could you key off of an annotation in the namespace instead? Everyone could see that.
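(The suggestion sketched: any user can read their own namespace, so a client could inspect its annotations - project-request-created namespaces carry openshift.io/* annotations such as openshift.io/requester - instead of querying rolebindings it may not be allowed to see:)

```sh
oc get namespace myproject -o yaml   # look under metadata.annotations (hypothetical name)
```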
thanks @Kargakis!
How do I create a DeploymentConfig with replicas=1 so that it actually creates a ReplicationController with replicas > 0?
This one had me confused for a while; I figured OpenShift was broken ;)
Version
Steps To Reproduce
Here's the YAML I'm using to create a DC
Current Result
Here's the DC
Expected Result
Additional Information
I don't see any warnings/errors/events in OpenShift itself, the DC, the RC, or the deploy pod to indicate why it's not deciding to scale up the RC.
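(For anyone hitting this later, a sketch of where such events would normally surface, with hypothetical names:)

```sh
oc get events                # namespace-wide events
oc describe dc mydc          # events recorded against the deployment config
```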