
Kubernetes API Version detection unauthorized #5574

Closed
MrBones757 opened this issue Dec 30, 2019 · 7 comments

Comments

@MrBones757

ISSUE TYPE
  • Bug Report
SUMMARY

The Kubernetes installer change for detecting the API version fails against clusters with an authenticated API
(e68d576#diff-497ab200c9d7364c4f7d16bb0dccea4a)

ENVIRONMENT
  • AWX version: 9.1.0
  • AWX install method: kubernetes
  • Ansible version: 2.8.3
  • Operating System: Ubuntu Bionic (Install Host)
  • Web Browser: N/A
STEPS TO REPRODUCE

Run an install against a Kubernetes cluster with a secured (authenticated) API.
(I'm running a CIS-hardened Rancher (RKE) install with Kubernetes version 1.14.x.)

EXPECTED RESULTS

The Kubernetes server version is obtained correctly.

ACTUAL RESULTS

the task: "Get kube version from api server"
in kubernetes/tasks/main.yml
failed with a 401 due to the uri not providing credentials.

ADDITIONAL INFORMATION

A potential fix is to fall back to the old method:

- name: Get kube version from api server
  block:
    - name: Attempt URI Version
      uri:
        url: "{{ kube_server | trim }}/version"
        validate_certs: false
      register: kube_version

    - name: Extract server version from command output
      set_fact:
        kube_api_version: "{{ kube_version.json.gitVersion[1:] }}"
  rescue:
    - name: Get Kubernetes Config
      command: |
        {{ kubectl_or_oc }} version -o json
      register: kube_version

    - name: Extract server version from command output
      set_fact:
        kube_api_version: "{{ (kube_version.stdout | from_json).serverVersion.gitVersion[1:] }}"

Alternatively, the uri task could be changed to provide authentication.
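
For the second option, a minimal sketch of what an authenticated uri call could look like, assuming the installer has a bearer token available in a variable such as kube_api_token (a hypothetical name, not something the role defines today):

- name: Get kube version from api server
  uri:
    url: "{{ kube_server | trim }}/version"
    validate_certs: false
    headers:
      # kube_api_token is hypothetical; the installer would first need to
      # obtain a token, e.g. from the active kubeconfig context.
      Authorization: "Bearer {{ kube_api_token }}"
  register: kube_version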

@MrBones757
Author

If you need any additional information, please let me know and I will update/amend the issue.

@smuth4

smuth4 commented Jan 1, 2020

I suspect you're running into the same issue I posted here: #5388 (comment)

@ryanpetrello
Contributor

@shanemcd you have any ideas here?

@shanemcd
Member

shanemcd commented Jan 3, 2020

This is more difficult than it should be because OpenShift seems to have decided against adding support for oc version -o json. The current implementation uses the approach suggested here.

I was attempting to identify a solution that is k8s-variant agnostic, but that may not be possible unless there is another way I'm not aware of.

For the vanilla Kubernetes side, I think we should use kubectl version -o json. For the OpenShift side, we can either keep doing what we are doing now with the uri module (/version does not seem to require auth for OpenShift), or switch to inspecting the output of oc version with something like sed or awk. Example output:

$ oc version
oc v3.11.154
kubernetes v1.11.0+d4cacc0
features: Basic-Auth

Server https://console.ocp3.example.com:8443
openshift v3.11.154
kubernetes v1.11.0+d4cacc0

I worry that this formatting will change between versions of oc, so we should be extra careful if we go that route.
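
If we did go the parsing route, it would not have to be sed or awk; a regex inside Ansible could pull the version out. A rough sketch only, which assumes the server's "kubernetes v..." line is always the last such line (as in the output above) and would break if the formatting changes:

- name: Get oc version output
  command: "{{ kubectl_or_oc }} version"
  register: oc_version_out

- name: Extract server kube version from oc output
  set_fact:
    # Take the last "kubernetes v..." line (the server section) and strip the prefix.
    kube_api_version: "{{ oc_version_out.stdout_lines | select('match', '^kubernetes v') | list | last | regex_replace('^kubernetes v', '') }}"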

Once we identify how to get the bits we need, we'll want to remove these lines and add the variant-specific tasks to kubernetes.yml and openshift.yml.
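
For what it's worth, the variant-specific tasks could look roughly like the sketch below (reusing the kube_server and kubectl_or_oc variables that main.yml already defines); treat this as an outline rather than a final implementation:

# kubernetes.yml: ask kubectl, which supports JSON output
- name: Get kube version from kubectl
  command: "{{ kubectl_or_oc }} version -o json"
  register: kube_version_cmd

- name: Extract server version from command output
  set_fact:
    kube_api_version: "{{ (kube_version_cmd.stdout | from_json).serverVersion.gitVersion[1:] }}"

# openshift.yml: keep the uri call, since /version does not seem to require auth there
- name: Get kube version from api server
  uri:
    url: "{{ kube_server | trim }}/version"
    validate_certs: false
  register: kube_version

- name: Extract server version from response
  set_fact:
    kube_api_version: "{{ kube_version.json.gitVersion[1:] }}"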

@alanbchristie

It is annoying that oc version has no JSON output like kubectl, and relying on stdout comes with its own risks. As I'm not sure there's a common command, maybe the way out of this hole is to use two small logic blocks as you suggest ... one that uses the uri module for OpenShift and one that uses kubectl for Kubernetes?

I cannot commit any time to a formal solution, but the workaround, for those working with Kubernetes who are hitting this problem, is to replace the following three tasks (from installer/roles/kubernetes/tasks/main.yml): -

- name: Get kube version from api server
  uri:
    url: "{{ kube_server | trim }}/version"
    validate_certs: false
  register: kube_version

- name: Extract server version from command output
  set_fact:
    kube_api_version: "{{ kube_version.json.gitVersion[1:] }}"

- name: Determine StatefulSet api version
  set_fact:
    kubernetes_statefulset_api_version: "{{ 'apps/v1' if kube_api_version is version('1.9', '>=') else 'apps/v1beta1' }}"

With (for Kubernetes v1.9 or later): -

- name: Determine StatefulSet api version
  set_fact:
    kubernetes_statefulset_api_version: apps/v1

Or (for kubernetes v1.8 or earlier): -

- name: Determine StatefulSet api version
  set_fact:
    kubernetes_statefulset_api_version: apps/v1beta1

The logic block is just trying to determine the StatefulSet API version for the deployed Kubernetes. With the above hack in place, the deployment runs to completion.
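
If in doubt about which variant applies, the apps API group versions the cluster actually serves can be checked with kubectl before editing the tasks:

$ kubectl api-versions | grep '^apps/'

If apps/v1 appears in the output, use the first replacement; otherwise use apps/v1beta1.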

@shanemcd
Member

shanemcd commented Jan 6, 2020

I just put up #5597. Would appreciate some extra eyes / test runs.

@MrBones757
Author

This issue is resolved by the changes in the above PR.
