We are currently working on deploying centralised logging in OpenShift V3.1 and running into an issue when trying to access the Kibana console. We are running OpenShift on a VM using Vagrant. Unfortunately I cannot give a link to the github repository as the repository is private and not yet open source (it will be in the near future). If it would help I can try to create a zip for testing purposes.
These are the instructions we have followed to configure the centralised logging. These are pretty much the same as aggregate_logging.html:
$ oc login --username=system:admin
$ oadm new-project logging
$ oc project logging
$ oadm policy add-role-to-user admin test -n logging
$ openssl genrsa -out key.pem 2048
$ openssl req -new -key key.pem -out csr.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:SE
State or Province Name (full name) []:Stockholm
Locality Name (eg, city) [Default City]:Stockholm
Organization Name (eg, company) [Default Company Ltd]:Red Hat
Organizational Unit Name (eg, section) []:AeroGear
Common Name (eg, your name or your server's hostname) []:kibana.local.feedhenry.io
Email Address []:daniel.bevenius@gmail.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

$ openssl req -x509 -days 365 -key key.pem -in csr.pem -out certificate.pem
$ oc secrets new logging-deployer kibana.crt=certificate.pem kibana.key=key.pem
$ oc create -f - <<API
apiVersion: v1
kind: ServiceAccount
metadata:
  name: logging-deployer
secrets:
- name: logging-deployer
API
$ oc policy add-role-to-user edit \
    system:serviceaccount:logging:logging-deployer
$ oadm policy add-scc-to-user privileged system:serviceaccount:logging:aggregated-logging-fluentd
$ oadm policy add-cluster-role-to-user cluster-reader \
    system:serviceaccount:logging:aggregated-logging-fluentd
$ oc create -n openshift -f /usr/share/openshift/examples/infrastructure-templates/enterprise/logging-deployer.yaml
$ docker pull registry.access.redhat.com/openshift3/logging-deployment:3.1.0
$ oc process logging-deployer-template -n openshift \
    -v KIBANA_HOSTNAME=kibana.local.feedhenry.io,ES_CLUSTER_SIZE=1,PUBLIC_MASTER_URL=https://local.feedhenry.io:8443,MASTER_URL=https://kubernetes.default.svc.cluster.local:8443 \
    | oc create -f -
$ oc process logging-support-template | oc create -f -
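As a quick sanity check (not part of the original instructions), the generated self-signed certificate can be inspected to confirm the CN and validity window. A minimal sketch using a throwaway key, with the interactive prompts replaced by `-subj`:

```shell
# Generate a throwaway key/CSR/certificate (mirrors the steps above,
# but non-interactively via -subj) and inspect the result.
openssl genrsa -out /tmp/key.pem 2048
openssl req -new -key /tmp/key.pem -out /tmp/csr.pem \
    -subj "/C=SE/ST=Stockholm/L=Stockholm/O=Red Hat/OU=AeroGear/CN=kibana.local.feedhenry.io"
openssl req -x509 -days 365 -key /tmp/key.pem -in /tmp/csr.pem -out /tmp/certificate.pem
# Print the subject and validity window of the certificate
openssl x509 -in /tmp/certificate.pem -noout -subject -dates
```

The CN printed here must match the KIBANA_HOSTNAME passed to the deployer template below, otherwise browsers will reject the route's certificate.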
After giving the pods some time to start up, running `oc get all` produces:
Accessing https://kibana.local.feedhenry.io redirects to the OpenShift Console login screen, and after entering credentials we end up in a redirect loop back to the same login screen.
We have followed the troubleshooting section and tried the suggestions there, but without success.
Please let me know if there is any additional information that I can provide.
Thanks!
Motivation:
When running in a Vagrant VM we noticed that we were hitting a redirect
loop when trying to access the Kibana console. We created an issue for
this in openshift-docs (see the Issue section below).
When processing the template we specify the following:
$ oc process logging-deployer-template -n openshift \
-v KIBANA_HOSTNAME=kibana.local.feedhenry.io,ES_CLUSTER_SIZE=1, \
PUBLIC_MASTER_URL=https://local.feedhenry.io:8443, \
MASTER_URL=https://kubernetes.default.svc.cluster.local:8443 \
| oc create -f -
Notice that we have specified a MASTER_URL with an explicit port. But when
running describe on the pod, the URL appears without the port (so it
defaults to 443):
$ oc describe po logging-kibana-6-9aac9
...
Environment Variables:
OAP_BACKEND_URL: http://localhost:5601
OAP_AUTH_MODE: oauth2
OAP_TRANSFORM: user_header,token_header
OAP_OAUTH_ID: kibana-proxy
OAP_MASTER_URL: https://kubernetes.default.svc.cluster.local
It looks like the environment variable OAP_MASTER_URL is never set when
deployment/templates/kibana.yaml is processed, so the default value
specified in that file is used, which is:
name: OAP_MASTER_URL
value: "https://kubernetes.default.svc.cluster.local"
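For context, such a default is only overridden if the env entry references a template parameter and the processing step supplies a value. A minimal sketch of the parameter side (an assumption about the template layout, not the actual kibana.yaml):

```yaml
# Hypothetical template parameter (names assumed); running
# `oc process ... -v OAP_MASTER_URL=...` would override this default.
parameters:
- name: OAP_MASTER_URL
  value: "https://kubernetes.default.svc.cluster.local"
```

The env entry would then read `value: "${OAP_MASTER_URL}"`, so a supplied value, including the port, reaches the kibana-proxy container.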
Modifications:
Added OAP_MASTER_URL to the run.sh script when processing
templates/kibana.yaml
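A minimal sketch of the kind of change (the actual run.sh in the deployer image differs; the variable name and the echoed command are assumptions for illustration):

```shell
# Hypothetical excerpt from run.sh: forward MASTER_URL into the Kibana
# template so kibana-proxy gets the port-qualified URL instead of the
# template's port-less default.
master_url=${MASTER_URL:-https://kubernetes.default.svc.cluster.local:8443}

# The real script would pipe the processed template into `oc create`;
# echoed here so the sketch is runnable without a cluster.
echo "oc process -f templates/kibana.yaml -v OAP_MASTER_URL=${master_url}"
```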
Result:
We can now access the Kibana console via the OpenShift console and also
directly.
Issue:
openshift/openshift-docs#1457