
Jinja extrapolation for openshift_hosted_... vars doesn't work #5657

Closed
tomassedovic opened this issue Oct 4, 2017 · 4 comments

@tomassedovic
Contributor

Description

The openshift_hosted_* variables need to be hard coded even when declared in group_vars/OSEv3.yml, which normally allows Jinja templating.

See below for more details, but it seems the openshift_facts role is doing something really odd with the way it processes the openshift_hosted_* vars. I tried to look into it but got lost, so even an explanation of how it all works would help me.

My ultimate goal is to be able to use a lookup plugin to specify the volume details for the hosted registry, but this issue seems to affect more than just that use case.
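
Roughly, the lookup-based values I'm after look like this in OSEv3.yml (the hardcoded values in the reproduction below are simplified stand-ins for these; cinder_hosted_registry_name and cinder_hosted_registry_size_gb are defined elsewhere in my variables):

openshift_hosted_registry_storage_openstack_volumeID: "{{ lookup('os_cinder', cinder_hosted_registry_name).id }}"
openshift_hosted_registry_storage_volume_size: "{{ cinder_hosted_registry_size_gb }}Gi"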

Version
  • Ansible version per ansible --version: ansible 2.3.0.0

  • The output of git describe: openshift-ansible-3.6.22-1

I tried this on master (openshift-ansible-3.7.0-0.141.0) as well, but I can't even get that far because of an unrelated error. Running the setup module after the deployment seems to show it's broken there as well.

Steps To Reproduce
  1. Create a group_vars/OSEv3.yml file alongside your inventory/hosts file.
  2. Put the openshift-ansible configuration (the OSEv3 vars) into the OSEv3.yml file. Here's a simplified version of mine:
openshift_deployment_type: origin
openshift_master_default_subdomain: "apps.openshift.example.com"

openshift_cloudprovider_kind: openstack
openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}"
openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}"
openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}"
openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_TENANT_NAME') }}"

openshift_hosted_registry_storage_kind: openstack
openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem: xfs
openshift_hosted_registry_storage_openstack_volumeID: '85ed1c24-a877-4bf3-9623-a1ec7dba1fb4'
openshift_hosted_registry_storage_volume_size: '{{ 5 + 5 }}Gi'

(it uses a Cinder-backed hosted registry, but the issue here is how openshift-ansible sets the local facts rather than the openstack/cinder integration)

  3. Put Jinja code in the openshift_hosted_registry_storage_volume_size variable in OSEv3.yml (the {{ 5 + 5 }}Gi value here is just an example; I'm using something more complex, but it illustrates the issue).

  4. Run the byo playbook:

ansible-playbook  -i inventory/ openshift-ansible/playbooks/byo/config.yml
Expected Results

The playbook should finish successfully, creating a Cinder-backed registry.

Observed Results

The playbook fails during the PV creation:

TASK [openshift_persistent_volumes : Create PersistentVolumes] ********************************************************************************************************************************
Tuesday 03 October 2017  15:54:03 +0200 (0:00:00.482)       0:09:35.462 *******
fatal: [master-0.openshift.example.com]: FAILED! => {"changed": false, "cmd": ["oc", "create", "-f", "/tmp/openshift-ansible-VX52l1a/persistent-volumes.yml", "--config=/tmp/openshift-ansible-VX52l1a/admin.kubeconfig"], "delta": "0:00:00.164953", "end": "2017-10-03 13:54:03.952274", "failed": true, "failed_when_result": true, "rc": 1, "start": "2017-10-03 13:54:03.787321", "stderr": "Error from server (BadRequest): PersistentVolume in version \"v1\" cannot be handled as a PersistentVolume: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'", "stderr_lines": ["Error from server (BadRequest): PersistentVolume in version \"v1\" cannot be handled as a PersistentVolume: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'"], "stdout": "", "stdout_lines": []}

The PV config file from the error message (/tmp/openshift-ansible-VX52l1a/persistent-volumes.yml) reveals that openshift_hosted_registry_storage_volume_size was included verbatim, not templated:

$ ansible -i inventory/ masters -m command -a 'cat /tmp/openshift-ansible-VX52l1a/persistent-volumes.yml'
master-0.openshift.example.com | SUCCESS | rc=0 >>
---
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: "registry-volume"
  spec:
    capacity:
      storage: "{# 5 + 5 #}Gi"
    accessModes:
    - ReadWriteOnce
    cinder:
      fsType: xfs
      volumeID: 85ed1c24-a877-4bf3-9623-a1ec7dba1fb4

(note the storage line: it should say 10Gi instead)
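
For comparison, the hardcoded workaround mentioned in the description is simply this in OSEv3.yml:

openshift_hosted_registry_storage_volume_size: '10Gi'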

The values seem to be coming from the local facts set by the openshift_facts role:

$ ansible -i inventory/ masters -m setup -a 'filter=ansible_local'                                                  
master-0.openshift.example.com | SUCCESS => {
    "ansible_facts": {
        "ansible_local": {
            "openshift": {
                ...
                "hosted": {
                    "registry": {
                        "storage": {
                            "access": {
                                "modes": [
                                    "ReadWriteOnce"
                                ]
                            }, 
                            "kind": "openstack", 
                            "openstack": {
                                "filesystem": "xfs", 
                                "volumeID": "{{ lookup(\"os_cinder\", cinder_hosted_registry_name).id }}"
                            }, 
                            "volume": {
                                "size": "{{ cinder_hosted_registry_size_gb }}Gi"
                            }
                        }, 
                        "wait": true
                    }, 
                    "router": {
                        "wait": true
                    }
                }, 
                ...

It seems the role that processes the openshift_hosted_* variables is accessing the raw values from the group_vars file as if it were reading and parsing the YAML itself. I didn't even know Ansible lets you access the unprocessed values.

Note that when I add a debug: var=openshift_hosted_registry_storage_volume_size task to the playbooks, it shows the correct value (10Gi) and Ansible/openshift-ansible processes all the other vars (e.g. the openstack creds environment lookups) as expected.
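
The debug task I'm referring to is just this (a minimal sketch, dropped into the playbook for the same host):

- name: Show the volume size as Ansible templates it
  debug:
    var: openshift_hosted_registry_storage_volume_size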

Additional Information
$ cat /etc/redhat-release 
Fedora release 24 (Twenty Four)

The inventory just lists the nodes and groups; the actual configuration happens through `group_vars`.
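
For clarity, the layout is roughly:

inventory/
    hosts           <- nodes and group membership only
    group_vars/
        OSEv3.yml   <- all the openshift_* configuration shown above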
@DenverJ
Contributor

DenverJ commented Nov 28, 2017

I am also seeing this problem. Please let me know if you would prefer I open a separate issue instead. I have the following in my inventory:

openshift_registry_base_hostname="openshift-registry"
openshift_hosted_registry_routehost="{{ openshift_registry_base_hostname }}.{{ ansible_domain }}"

When this goes into openshift_facts it does not get interpolated. Then, when the same variable name is set from the openshift facts, it gets set to the raw string. Here is a simplified playbook showing the problem.

- hosts: master1.domain
  tasks:
    - name: Show openshift_hosted_registry_routehost
      debug:
        var: openshift_hosted_registry_routehost

    - name: Set hosted facts
      openshift_facts:
        role: hosted
        openshift_env: "{{ hostvars
                           | oo_merge_hostvars(vars, inventory_hostname)
                           | oo_openshift_env }}"

    - name: set openshift_hosted facts
      set_fact:
        openshift_hosted_registry_routecertificates: "{{ ('routecertificates' in openshift.hosted.registry.keys()) | ternary(openshift.hosted.registry.routecertificates, {}) }}"
        openshift_hosted_registry_routehost: "{{ openshift.hosted.registry.routehost }}"

    - name: Show openshift_hosted_registry_routehost
      debug:
        var: openshift_hosted_registry_routehost

And the output.

PLAY [master1.domain] **************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************************************
ok: [master1.domain]

TASK [Show openshift_hosted_registry_routehost] ************************************************************************************************************************
ok: [master1.domain] => {
    "openshift_hosted_registry_routehost": "openshift-registry.domain"
}

TASK [Set hosted facts] ************************************************************************************************************************************************
ok: [master1.domain]

TASK [set openshift_hosted facts] **************************************************************************************************************************************
ok: [master1.domain]

TASK [Show openshift_hosted_registry_routecertificates] ****************************************************************************************************************
ok: [master1.domain] => {
    "openshift_hosted_registry_routehost": "{{ openshift_registry_base_hostname }}.{{ ansible_domain }}"
}

@sdodson
Member

sdodson commented Nov 28, 2017

/assign michaelgugino

@michaelgugino
Contributor

FYI, the proper term for this problem is 'templating'. It happens when a plugin (module, action, etc.) uses a variable that is not passed in as a parameter. This is an issue that plagues us in a variety of places.
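
To illustrate the difference (a rough sketch, not actual openshift-ansible code; some_module is hypothetical):

- name: templated - the value is a task parameter, so Ansible renders it before the plugin sees it
  some_module:
    size: "{{ openshift_hosted_registry_storage_volume_size }}"

- name: untemplated - the plugin digs the variable out of hostvars/task_vars itself
  some_module: {}
  # inside the plugin, the value it reads may still be the raw string
  # "{{ 5 + 5 }}Gi" unless the plugin templates it explicitly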

@michaelgugino
Contributor

This has really led down the rabbit hole: #6306

Cleaning up some openshift_facts items first, that will probably clear most of this right up.
