oc logs fails with forbidden even when run as system:admin #17069

Closed
lucastheisen opened this issue Oct 27, 2017 · 4 comments
Labels: component/auth, component/cli, kind/bug, lifecycle/rotten, priority/P2

Comments

@lucastheisen

I am unable to retrieve logs using oc logs:

root@deathkube1.mitre.org(Cent7) ~$ oc whoami
system:admin
root@deathkube1.mitre.org(Cent7) ~$ oc logs po/portal-s2i-base-centos7-3-build
Error from server: Get https://deathkube2.mitre.org:10250/containerLogs/foo/portal-s2i-base-centos7-3-build/docker-build: Forbidden
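
One way to narrow down where the Forbidden originates is to query the same kubelet endpoint directly with the master's kubelet client certificate. This is only a sketch; the certificate paths are the stock OpenShift 3.x defaults under /etc/origin/master and may differ on this cluster:

# Run on the master. If this also returns Forbidden, the kubelet itself is
# rejecting the master's client credentials on port 10250.
# (add -k if the node's serving certificate is not signed by this CA)
curl --cacert /etc/origin/master/ca.crt \
     --cert /etc/origin/master/master.kubelet-client.crt \
     --key /etc/origin/master/master.kubelet-client.key \
     https://deathkube2.mitre.org:10250/containerLogs/foo/portal-s2i-base-centos7-3-build/docker-build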

They can still be read using docker logs on the node where they live:

root@deathkube1.mitre.org(Cent7) ~$ ssh deathkube2.mitre.org \
>     docker logs $(\
>         ssh deathkube2.mitre.org docker ps -a | \
>             grep s2i-base-centos7-3-build | \
>             grep docker-build | \
>             awk '{ print $1 }'\
>     )
Cloning "git@gitlab.mitre.org:org-mitre-caasd/portal-s2i-base-centos7.git" ...
error: build error: Warning: Permanently added 'gitlab.mitre.org,129.83.10.102' (RSA) to the list of known hosts.
GitLab: The project you were looking for could not be found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

The log here shows an error, as expected, but the point is that I can actually get the log through docker but not through oc...
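
Since the node answers docker directly but rejects the API server's request, it may also be worth checking which client certificate the master presents to kubelets and what identity it carries. Again just a sketch; kubeletClientInfo and these paths are the stock 3.x defaults and could differ here:

# show the CA/cert/key the master uses when talking to kubelets on :10250
grep -A5 kubeletClientInfo /etc/origin/master/master-config.yaml

# inspect the identity (subject) and validity of that client certificate
openssl x509 -noout -subject -dates -in /etc/origin/master/master.kubelet-client.crt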

Version
root@deathkube1.mitre.org(Cent7) ~$ oc version
oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://deathkube1.mitre.org:8443
kubernetes v1.6.1+5115d708d7
Steps To Reproduce
  1. Create something (e.g., a build) on OpenShift
  2. Check its logs with oc logs
Current Result
Error from server: Get https://deathkube2.mitre.org:10250/containerLogs/foo/portal-s2i-base-centos7-3-build/docker-build: Forbidden
Expected Result
Cloning "git@gitlab.mitre.org:org-mitre-caasd/portal-s2i-base-centos7.git" ...
error: build error: Warning: Permanently added 'gitlab.mitre.org,129.83.10.102' (RSA) to the list of known hosts.
GitLab: The project you were looking for could not be found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
@lucastheisen (Author)

Just finished running diagnostics:

[Note] Determining if client configuration exists for client/cluster diagnostics
Info:  Successfully read a client config file at '/root/.kube/config'
Info:  Using context for cluster-admin access: 'foo/deathkube1-mitre-org:8443/system:admin'
[Note] Performing systemd discovery

[Note] Running diagnostic: ConfigContexts[default/deathkube1-mitre-org:8443/system:admin]
       Description: Validate client config context is complete and has connectivity

Info:  For client config context 'default/deathkube1-mitre-org:8443/system:admin':
       The server URL is 'https://deathkube1.mitre.org:8443'
       The user authentication is 'system:admin/deathkube1-mitre-org:8443'
       The current project is 'default'
       Successfully requested project list; has access to project(s):
         [ci default foo kube-public kube-system logging openshift openshift-infra]

[Note] Running diagnostic: DiagnosticPod
       Description: Create a pod to run diagnostics from the application standpoint

WARN:  [DCli2006 from diagnostic DiagnosticPod@openshift/origin/pkg/diagnostics/client/run_diagnostics_pod.go:135]
       Timed out preparing diagnostic pod logs for streaming, so this diagnostic cannot run.
       It is likely that the image 'openshift/origin-deployer:v3.6.0' was not pulled and running yet.
       Last error: (*errors.StatusError[2]) Get https://deathkube4.mitre.org:10250/containerLogs/foo/pod-diagnostic-test-pdvnj/pod-diagnostics?follow=true&limitBytes=1024000: Forbidden

[Note] Running diagnostic: NetworkCheck
       Description: Create a pod on all schedulable nodes and run network diagnostics from the application standpoint

ERROR: [DNet2008 from diagnostic NetworkCheck@openshift/origin/pkg/diagnostics/network/run_pod.go:148]
       [Logs for network diagnostic pod on node "deathkube4.mitre.org" failed: Get https://deathkube4.mitre.org:10250/containerLogs/network-diag-ns-cbm3z/network-diag-pod-2mss6/network-diag-pod-2mss6?follow=true&limitBytes=1024000: Forbidden, Logs for network diagnostic pod on node "deathkube3.mitre.org" failed: Get https://deathkube3.mitre.org:10250/containerLogs/network-diag-ns-cbm3z/network-diag-pod-49clk/network-diag-pod-49clk?follow=true&limitBytes=1024000: Forbidden, Logs for network diagnostic pod on node "deathkube2.mitre.org" failed: Get https://deathkube2.mitre.org:10250/containerLogs/network-diag-ns-cbm3z/network-diag-pod-5nnnk/network-diag-pod-5nnnk?follow=true&limitBytes=1024000: Forbidden]

Info:  Additional info collected under "/tmp/openshift" for further analysis

[Note] Skipping diagnostic: AggregatedLogging
       Description: Check aggregated logging integration for proper configuration
       Because: No LoggingPublicURL is defined in the master configuration

[Note] Running diagnostic: ClusterRegistry
       Description: Check that there is a working Docker registry

WARN:  [DClu1010 from diagnostic ClusterRegistry@openshift/origin/pkg/diagnostics/cluster/registry.go:260]
       Failed to read the logs for the "docker-registry-1-jzcl7" pod belonging to
       the "docker-registry" service. This is not a problem by itself but
       prevents diagnostics from looking for errors in those logs. The
       error encountered was:
       (*errors.StatusError) Get https://deathkube4.mitre.org:10250/containerLogs/default/docker-registry-1-jzcl7/registry: Forbidden

ERROR: [DClu1019 from diagnostic ClusterRegistry@openshift/origin/pkg/diagnostics/cluster/registry.go:343]
       Diagnostics created a test ImageStream and compared the registry IP
       it received to the registry IP available via the docker-registry service.

       docker-registry      : 172.30.60.244:5000
       ImageStream registry : docker-registry.default.svc:5000

       They do not match, which probably means that an administrator re-created
       the docker-registry service but the master has cached the old service
       IP address. Builds or deployments that use ImageStreams with the wrong
       docker-registry IP will fail under this condition.

       To resolve this issue, restarting the master (to clear the cache) should
       be sufficient. Existing ImageStreams may need to be re-created.

[Note] Running diagnostic: ClusterRoleBindings
       Description: Check that the default ClusterRoleBindings are present and contain the expected subjects

Info:  clusterrolebinding/cluster-readers has more subjects than expected.

       Use the `oadm policy reconcile-cluster-role-bindings` command to update the role binding to remove extra subjects.

Info:  clusterrolebinding/cluster-readers has extra subject {ServiceAccount default router    }.

Info:  clusterrolebinding/admin has more subjects than expected.

       Use the `oadm policy reconcile-cluster-role-bindings` command to update the role binding to remove extra subjects.

Info:  clusterrolebinding/admin has extra subject {User  ltheisen    }.
Info:  clusterrolebinding/admin has extra subject {User  dneu    }.
Info:  clusterrolebinding/admin has extra subject {ServiceAccount openshift-infra superman    }.

Info:  clusterrolebinding/system:openshift:controller:pv-recycler-controller has more subjects than expected.

       Use the `oadm policy reconcile-cluster-role-bindings` command to update the role binding to remove extra subjects.

Info:  clusterrolebinding/system:openshift:controller:pv-recycler-controller has extra subject {ServiceAccount openshift-infra pv-manager    }.

Info:  clusterrolebinding/cluster-admin has more subjects than expected.

       Use the `oadm policy reconcile-cluster-role-bindings` command to update the role binding to remove extra subjects.

Info:  clusterrolebinding/cluster-admin has extra subject {User  ltheisen    }.

[Note] Running diagnostic: ClusterRoles
       Description: Check that the default ClusterRoles are present and contain the expected permissions

[Note] Running diagnostic: ClusterRouterName
       Description: Check there is a working router

WARN:  [DClu2008 from diagnostic ClusterRouter@openshift/origin/pkg/diagnostics/cluster/router.go:194]
       Failed to read the logs for the "router-4-7gdz2" pod belonging to
       the router deployment. This is not a problem by itself but prevents
       diagnostics from looking for errors in those logs. The error encountered
       was:
       (*errors.StatusError) Get https://deathkube4.mitre.org:10250/containerLogs/default/router-4-7gdz2/router: Forbidden

[Note] Running diagnostic: MasterNode
       Description: Check if master is also running node (for Open vSwitch)

Info:  Found a node with same IP as master: deathkube1.mitre.org

[Note] Skipping diagnostic: MetricsApiProxy
       Description: Check the integrated heapster metrics can be reached via the API proxy
       Because: The heapster service does not exist in the openshift-infra project at this time,
       so it is not available for the Horizontal Pod Autoscaler to use as a source of metrics.

[Note] Running diagnostic: NodeDefinitions
       Description: Check node records on master

WARN:  [DClu0003 from diagnostic NodeDefinition@openshift/origin/pkg/diagnostics/cluster/node_definitions.go:113]
       Node deathkube1.mitre.org is ready but is marked Unschedulable.
       This is usually set manually for administrative reasons.
       An administrator can mark the node schedulable with:
           oadm manage-node deathkube1.mitre.org --schedulable=true

       While in this state, pods should not be scheduled to deploy on the node.
       Existing pods will continue to run until completed or evacuated (see
       other options for 'oadm manage-node').

[Note] Running diagnostic: RouteCertificateValidation
       Description: Check all route certificates for certificates that might be rejected by extended validation.

[Note] Running diagnostic: ServiceExternalIPs
       Description: Check for existing services with ExternalIPs that are disallowed by master config

[Note] Running diagnostic: AnalyzeLogs
       Description: Check for recent problems in systemd service logs

Info:  Checking journalctl logs for 'origin-node' service
Info:  Checking journalctl logs for 'docker' service

[Note] Running diagnostic: MasterConfigCheck
       Description: Check the master config file

WARN:  [DH0005 from diagnostic MasterConfigCheck@openshift/origin/pkg/diagnostics/host/check_master_config.go:52]
       Validation of master config file '/etc/origin/master/master-config.yaml' warned:
       assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console
       assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console
       auditConfig.auditFilePath: Required value: audit can not be logged to a separate file

[Note] Running diagnostic: NodeConfigCheck
       Description: Check the node config file

Info:  Found a node config file: /etc/origin/node/node-config.yaml

[Note] Running diagnostic: UnitStatus
       Description: Check status for related systemd units

ERROR: [DS3002 from diagnostic UnitStatus@openshift/origin/pkg/diagnostics/systemd/unit_status.go:55]
       systemd unit origin-node depends on unit iptables, which is not loaded.

       iptables is used by nodes for container networking.
       Connections to a container will fail without it.
       An administrator probably needs to install the iptables unit with:

         # yum install iptables

       If it is already installed, you may need to reload the definition with:

         # systemctl reload iptables

[Note] Summary of diagnostics execution (version v3.6.0+c4dd4cf):
[Note] Warnings seen: 5
[Note] Errors seen: 3

All the warnings and the first error appear to be related to the inability to read the logs. The second error I am unsure of, and I tried the suggested remedy (restarting the master) but it made no difference. The third error, about iptables, seems odd given that the documentation's suggested configuration is to use firewalld:

While iptables is the default firewall, firewalld is recommended for new installations.
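
As a quick sanity check on that last point, something like the following (a rough sketch, assuming the stock Origin 3.6 RPM unit names) shows which firewall unit is actually active and whether origin-node really declares a dependency on iptables:

# which firewall service is running on this node?
systemctl is-active iptables firewalld

# does the origin-node unit actually pull in iptables?
systemctl list-dependencies origin-node | grep -Ei 'iptables|firewalld'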

@pweil- added the component/auth, component/cli, kind/bug, and priority/P2 labels on Oct 30, 2017
@openshift-bot (Contributor)

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label on Feb 24, 2018
@openshift-bot (Contributor)

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 26, 2018
@openshift-bot (Contributor)

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
