cannot deploy on OpenShift 4.x - no logs to tell why #346
@jmazzitelli my take is that you need to add a line like --loglevel debug to the container arguments.
Well, that added one more line in the output :)
UPDATE 1: Ahh... if I give a bad command line argument, I am given the usage syntax - I see "trace" is another level. I will try that:
UPDATE 2: That doesn't help track down the problem either:
I'm beginning to wonder if this example deployment.yaml is even correct. It doesn't have any arguments - so what is this supposed to run?
@jmazzitelli would you mind sharing your YAML file here, please? What is strange are these lines:
@jmazzitelli did you also look at the pod logs? You are only referring to the output here. I guess this should be something like:
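For example, something along the lines of the standard kubectl/oc log commands (the pod name below is a placeholder; the "ldap" namespace is taken from the original issue description):

```sh
# Fetch logs from the current container instance
oc logs <openldap-pod-name> -n ldap
# For a CrashLoopBackOff pod, the previous (crashed) instance is often more useful
oc logs <openldap-pod-name> -n ldap --previous
```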
I just set my arg to be --loglevel=debug. Anyway, as the original issue description explains, the yaml is basically the same as the current example yaml in this repo except I mount all the volumes to an emptyDir. I install this very simply with oc create -f ldap-deployment.yaml -n ldap.
Here are some details:
You can see the logs are very small - just two lines. If I add that arg of "--loglevel=debug" I just get that third line I showed earlier. It just seems like nothing is running.
@obourdon here's my pod yaml after being deployed:
results in this: pod.yaml.log. Notice the container status - docker.io/osixia/openldap:1.2.4 finished with an exit code of 0:
@jmazzitelli as you can see in your previous post, there is no = between --loglevel and debug - these are 2 separate arguments, and therefore the args list needs two separate entries, as in the sketch below. Sorry, I do not have access to an OpenShift platform, but this works perfectly well on my local K8s.
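A minimal sketch of that args list in the deployment yaml (the container name is illustrative; the image tag is the one mentioned earlier in the thread):

```yaml
# Under the Deployment's pod spec: two separate list entries,
# not a single "--loglevel=debug" string.
containers:
  - name: openldap
    image: osixia/openldap:1.2.4
    args: ["--loglevel", "debug"]
```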
Yes, I did turn on trace before. That was my "UPDATE 2" of comment #346 (comment). Here it is again when I enable trace via:
@BertrandGouny any ideas what could be the problem here?
I am having exactly the same error trying podman. The container gets created with status Exited.
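For reference, the exited container's status and last output can be inspected with podman's usual commands (the container name openldap is an assumption):

```sh
# Show the container even though it has exited, including its exit code
podman ps -a --filter name=openldap
# Dump whatever the container wrote before it exited
podman logs openldap
```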
uid 1000 has access to volumes
when permissions were not set correctly (container did not have access to volumes):
I just installed podman to evaluate how easy it is going to be to migrate out of docker... it looks like I will have to read some friendly podman manuals...
I gave up - I ended up using https://github.com/openshift/openldap instead.
So, in my case it was SELinux after all... so, permissions... and when running it as non-root (a big selling point of podman), the volume mounts need to end with :Z. The working podman command line that I used:
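A sketch along those lines (the host paths and port mappings are assumptions; the essential part is the :Z suffix on each volume):

```sh
# :Z makes podman relabel each volume with a private SELinux label
# that the container process is allowed to access.
podman run -d --name openldap \
  -p 389:389 -p 636:636 \
  -v /home/user/ldap/database:/var/lib/ldap:Z \
  -v /home/user/ldap/config:/etc/ldap/slapd.d:Z \
  osixia/openldap:1.2.4
```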
Directory permissions I used for volumes:
However I was unable to make it work as non-root (-u 1000); maybe it has something to do with the User not being defined in the container...
or SELinux policies:
or maybe I just need to read the friendly manual...
My use case is different - this issue was about running LDAP inside an OpenShift cluster, not within just docker or podman. In that case, permissions on the volumes should not be an issue. There is something else going on that I could not figure out while running in OpenShift.
Please refer to helm/charts#16098
I've just hit this issue also. Guess I'll use the openshift ldap since this is not solved.
I hit exactly this issue on OCP; it can be solved by granting the openldap pod's service account the anyuid SCC. I guess the arbitrary UID that OCP assigns breaks this container. The command is:
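Presumably the usual SCC grant, along these lines (the service account "default" and namespace "ldap" are placeholders; this requires cluster-admin rights):

```sh
# Allow pods using this service account to run with any UID,
# instead of the arbitrary UID OpenShift normally assigns.
oc adm policy add-scc-to-user anyuid -z default -n ldap
```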
But we don't have admin permissions on our cluster (and it is not obvious whom to ask for them). Has someone succeeded in running this as a non-root user?
I have a CRC VM running OpenShift 4.1.6.
I create a namespace "ldap" and then use a modified example kubernetes yaml to create an LDAP deployment. I say modified because I can't use hostPath (not allowed when deployed in this CRC openshift cluster), so I just changed the volumes to mount to an emptyDir, as sketched below. I assume empty directories are ok, and that this should just start with all defaults and no initial data in the LDAP directory.
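A sketch of that volume change (the volume names here are illustrative, not necessarily the ones used in the repo's example yaml):

```yaml
# hostPath volumes from the example replaced with empty, pod-local directories
volumes:
  - name: ldap-data
    emptyDir: {}
  - name: ldap-config
    emptyDir: {}
```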
Everything else is the same as the current example yaml.
When I run oc create -f ldap-deployment.yaml -n ldap, the pod tries to start but fails. But the problem is I have no way of knowing why. I see the pod status of "CrashLoopBackOff". When I look at the logs, all I see are two lines. If I edit the Deployment such that the env var LDAP_LOG_LEVEL has a value of "-1" (which should enable all debugging according to Section 5.2.1.2 here: https://www.openldap.org/doc/admin24/slapdconf2.html), I still only see those 2 lines.
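The LDAP_LOG_LEVEL change described above, as it would sit in the Deployment's container spec:

```yaml
# "-1" should enable all slapd debug output per the OpenLDAP admin guide
env:
  - name: LDAP_LOG_LEVEL
    value: "-1"
```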
So, in short, trying to install on OpenShift is failing and I've no idea why. 2 questions: