
ansible-playbook fails on CentOS 7 during the task [openshift_version : For an RPM install, abort when the release requested does not match the available version.] #12

Closed
tasdikrahman opened this issue Aug 15, 2017 · 15 comments

Comments

@tasdikrahman

OS

[root@metrics-store openshift-ansible]# cat /etc/*elease
CentOS Linux release 7.3.1611 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

CentOS Linux release 7.3.1611 (Core)
CentOS Linux release 7.3.1611 (Core)
[root@metrics-store openshift-ansible]#
[root@metrics-store openshift-ansible]# rpm -q openshift-ansible
openshift-ansible-3.6.173.0.3-1.el7.noarch

Traceback

TASK [openshift_version : For an RPM install, abort when the release requested does not match the available version.] *********
task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/main.yml:182
fatal: [localhost]: FAILED! => {
    "assertion": "openshift_version.startswith(openshift_release) | bool",
    "changed": false,
    "evaluated_to": false,
    "failed": true
}

MSG:

You requested openshift_release 1.5, which is not matched by
the latest OpenShift RPM we detected as origin-3.6.0
on host localhost.
We will only install the latest RPMs, so please ensure you are getting the release
you expect. You may need to adjust your Ansible inventory, modify the repositories
available on the host, or run the appropriate OpenShift upgrade playbook.

	to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP ********************************************************************************************************************
localhost                  : ok=68   changed=6    unreachable=0    failed=1


Failure summary:

  1. Host:     localhost
     Play:     Determine openshift_version to configure on first master
     Task:     openshift_version : For an RPM install, abort when the release requested does not match the available version.
     Message:  You requested openshift_release 1.5, which is not matched by
               the latest OpenShift RPM we detected as origin-3.6.0
               on host localhost.
               We will only install the latest RPMs, so please ensure you are getting the release
               you expect. You may need to adjust your Ansible inventory, modify the repositories
               available on the host, or run the appropriate OpenShift upgrade playbook.
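
For reference, the failing task boils down to an Ansible assert on the expression shown in the JSON above. A minimal sketch of the check, reconstructed from the "assertion" and MSG fields in the output (not the verbatim task from openshift-ansible):

# sketch of roles/openshift_version/tasks/main.yml:182, reconstructed
# from the assertion string and message in the failure output above
- name: For an RPM install, abort when the release requested does not match the available version.
  assert:
    that:
    - openshift_version.startswith(openshift_release) | bool
    msg: >-
      You requested openshift_release {{ openshift_release }}, which is not matched by
      the latest OpenShift RPM we detected as origin-{{ openshift_version }}.

In other words, the detected openshift_version (3.6.0, from the available origin RPMs) does not start with the requested openshift_release (1.5, from the aio-15 inventory), so the play aborts.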

I did the following before running the playbook:

[root@metrics-store ~]# curl https://raw.githubusercontent.com/ViaQ/Main/master/vars.yaml.template > vars.yaml.template
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1235  100  1235    0     0   2153      0 --:--:-- --:--:-- --:--:--  2151
[root@metrics-store ~]# pwd
/root
[root@metrics-store ~]# curl https://raw.githubusercontent.com/ViaQ/Main/master/ansible-inventory-origin-15-aio > ansible-inventory
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   764  100   764    0     0    506      0  0:00:01  0:00:01 --:--:--   506
[root@metrics-store ~]#
@richm
Member

richm commented Aug 15, 2017

@tasdikrahman can you try it with the code/instructions from #9?

@richm
Member

richm commented Aug 15, 2017

@tasdikrahman Note that in order to download the files referenced in the PR, you have to use e.g.

curl https://raw.githubusercontent.com/richm/Main/4d8552891ed57f6c0a6496396835824c31a8aa23/ansible-inventory-origin-36-aio

since the files updated by the PR have not been published yet.
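
For example, to fetch the updated inventory and the matching vars template from that commit into the same local names used at the top of this issue (assuming you want both files):

curl https://raw.githubusercontent.com/richm/Main/4d8552891ed57f6c0a6496396835824c31a8aa23/vars.yaml.template > vars.yaml.template
curl https://raw.githubusercontent.com/richm/Main/4d8552891ed57f6c0a6496396835824c31a8aa23/ansible-inventory-origin-36-aio > ansible-inventory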

@tasdikrahman
Author

tasdikrahman commented Aug 17, 2017

Hey @richm. Thanks for the speedy reply. I followed the instructions and code from your PR. The initial error didn't occur again, but now I am getting a memory error, with this traceback:

CHECK [memory_availability : localhost] ***************************************************************************************
fatal: [localhost]: FAILED! => {
    "changed": true,
    "checks": {
        "disk_availability": {},
        "docker_image_availability": {
            "changed": true
        },
        "docker_storage": {
            "skipped": true,
            "skipped_reason": "Disabled by user request"
        },
        "memory_availability": {
            "failed": true,
            "msg": "Available memory (1.8 GiB) is too far below recommended value (7.0 GiB)"
        },
        "package_availability": {
            "changed": false,
            "invocation": {
                "module_args": {
                    "packages": [
                        "PyYAML",
                        "bash-completion",
                        "bind",
                        "ceph-common",
                        "cockpit-bridge",
                        "cockpit-docker",
                        "cockpit-system",
                        "cockpit-ws",
                        "dnsmasq",
                        "docker",
                        "etcd",
                        "firewalld",
                        "flannel",
                        "glusterfs-fuse",
                        "httpd-tools",
                        "iptables",
                        "iptables-services",
                        "iscsi-initiator-utils",
                        "libselinux-python",
                        "nfs-utils",
                        "ntp",
                        "openssl",
                        "origin",
                        "origin-clients",
                        "origin-master",
                        "origin-node",
                        "origin-sdn-ovs",
                        "pyparted",
                        "python-httplib2",
                        "yum-utils"
                    ]
                }
            }
        },
        "package_version": {
            "skipped": true,
            "skipped_reason": "Disabled by user request"
        }
    },
    "failed": true,
    "playbook_context": "install"
}

MSG:

One or more checks failed

	to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP ********************************************************************************************************************
localhost                  : ok=97   changed=6    unreachable=0    failed=1


Failure summary:

  1. Host:     localhost
     Play:     Verify Requirements
     Task:     openshift_health_check
     Message:  One or more checks failed
     Details:  check "memory_availability":
               Available memory (1.8 GiB) is too far below recommended value (7.0 GiB)

The execution of "playbooks/byo/config.yml"
includes checks designed to fail early if the requirements
of the playbook are not met. One or more of these checks
failed. To disregard these results, you may choose to
disable failing checks by setting an Ansible variable:

   openshift_disable_check=memory_availability

Failing check names are shown in the failure details above.
Some checks may be configurable by variables if your requirements
are different from the defaults; consult check documentation.
Variables can be set in the inventory or passed on the
command line using the -e flag to ansible-playbook.

My configuration for vars.yaml is taken from https://github.com/richm/Main/blob/4d8552891ed57f6c0a6496396835824c31a8aa23/vars.yaml.template, and the origin repo file content in /etc/yum.repos.d/viaq.repo is taken from https://github.com/richm/Main/blob/4d8552891ed57f6c0a6496396835824c31a8aa23/ansible-inventory-origin-36-aio.

I have installed openshift-ansible using yum:

# rpm -q openshift-ansible
openshift-ansible-3.6.173.0.3-1.el7.noarch
# free -m
              total        used        free      shared  buff/cache   available
Mem:           1839          84         361          19        1393        1539
Swap:             0           0           0

Should I increase the RAM of the VM where I am installing this? If yes, can I get away with around 4 GiB? 7 GiB would be very hard for me to get, but if it requires that much, then there is no other choice.

Also, to give you a little context, I am trying to set up the data flow below, from the docs:

[image: ovirt metrics data flow diagram]

@richm
Member

richm commented Aug 17, 2017

@tasdikrahman I don't think it will work with anything less than 8 GB of RAM and 4 processors. This is a full, all-in-one installation of OpenShift Origin and an EFK stack.

This is quite different from the previous RHV metrics setup, which only required a small, embedded PostgreSQL database...

@tasdikrahman
Author

tasdikrahman commented Aug 17, 2017

@richm thanks for the speedy response. I was having a chat with stLuke about this. He asked me if we could skip the memory check somehow. If that can be done, what would be the bare minimum server hardware requirements for getting this set up? Thanks

@richm
Member

richm commented Aug 17, 2017

@tasdikrahman I'm not sure what the bare minimum requirements are. I suppose you could edit the Ansible inventory file and skip the checks by adding them to the list in openshift_disable_check, something like the sketch below.
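
A sketch only, with the check name taken from the failure details above; it uses the same comma-separated format the aio inventories already use:

[OSEv3:vars]
# keep whatever is already in the list and append the failing check
openshift_disable_check="package_version,docker_storage,memory_availability"

The variable can also be passed on the ansible-playbook command line, e.g. -e openshift_disable_check=memory_availability.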

@tasdikrahman
Author

tasdikrahman commented Aug 17, 2017

@richm I did get hold of an 8 GB machine. I am now past that error and stuck at this one, with the traceback:

TASK [openshift_master_certificates : Lookup default group for ansible_ssh_user] **************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:166
fatal: [localhost]: FAILED! => {
    "failed": true
}

MSG:

the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'ansible_ssh_user' is undefined

The error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml': line 166, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:


- name: Lookup default group for ansible_ssh_user
  ^ here


	to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP ************************************************************************************************************************************************************************************
localhost                  : ok=248  changed=30   unreachable=0    failed=1


Failure summary:

  1. Host:     localhost
     Play:     Configure masters
     Task:     openshift_master_certificates : Lookup default group for ansible_ssh_user
     Message:  the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'ansible_ssh_user' is undefined

               The error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml': line 166, column 3, but may
               be elsewhere in the file depending on the exact syntax problem.

               The offending line appears to be:


               - name: Lookup default group for ansible_ssh_user
                 ^ here

Inside vars.yaml, which is passed to the playbook, I already have the variable ansible_ssh_user: root.

Not sure what gave rise to this error.

@richm
Member

richm commented Aug 17, 2017

@tasdikrahman can you paste your exact ansible-playbook command line and attach the vars.yaml you used?

@tasdikrahman
Author

@richm The command used to run the playbook is:

[root@metrics-store openshift-ansible]# pwd
/usr/share/ansible/openshift-ansible
[root@metrics-store openshift-ansible]# ANSIBLE_LOG_PATH=/tmp/ansible.log ansible-playbook -vvv -e /root/vars.yaml -i /root/ansible-inventory playbooks/byo/config.yml

And the vars.yaml file contents are:

[root@metrics-store ~]# cat vars.yaml
# either root, or the user created in provisioning step which can use passwordless ssh
ansible_ssh_user: root

# no if root, yes otherwise
ansible_become: no

# the public FQDN of the machine assigned during provisioning
openshift_public_hostname: "{{ ansible_fqdn }}"

# the public IP address, the IP address used in your internal DNS or host look up for browsers and other external client programs
openshift_public_ip: "{{ ansible_default_ipv4.address }}"

# the public subdomain to use for all of the external facing logging services
# by default it is the same as the public hostname
openshift_master_default_subdomain: "{{ openshift_public_hostname }}"

# list of names of additional namespaces to create for mux
# These are in YAML list format.  Each namespace name must be in Kubernetes
# namespace identifier format, which must match the following regular expression:
# ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$
# that is, begin with alphanum, followed by alphanums or dashes, and ending
# with an alphanum.  With OpenShift 3.6 and later, there is a 63 character
# limit on the namespace name.
#openshift_logging_mux_namespaces:
#- this-is-a-namespace
#- another-namespace

# the private IP address, if your machine has a different public and private IP address
openshift_ip: "{{ ansible_default_ipv4.address }}"

# the private hostname of the machine that will be used inside the cluster, if different
# than the openshift_public_hostname
openshift_hostname: "{{ openshift_public_hostname }}"

# the public URL for OpenShift UI access
openshift_logging_master_public_url: https://{{ openshift_public_hostname }}:8443

# the public hostname for Kibana browser access
openshift_logging_kibana_hostname: kibana.{{ openshift_master_default_subdomain }}

# the public hostname for Elasticsearch direct API access
openshift_logging_es_hostname: es.{{ openshift_master_default_subdomain }}

# the public hostname for common logging ingestion - the fluentd secure_forward listener
openshift_logging_mux_hostname: mux.{{ openshift_master_default_subdomain }}

# mux tuning parameters
openshift_logging_mux_cpu_limit: 500m
#openshift_logging_mux_memory_limit: 2Gi
#openshift_logging_mux_buffer_queue_limit: 1024
openshift_logging_mux_buffer_size_limit: 16m
#openshift_logging_mux_replicas: 1
[root@metrics-store ~]#

@tasdikrahman
Author

@richm I have installed it using:

# yum install openshift-ansible \
  openshift-ansible-callback-plugins openshift-ansible-filter-plugins \
  openshift-ansible-lookup-plugins openshift-ansible-playbooks \
  openshift-ansible-roles

And the contents of /etc/yum.repos.d/viaq.repo are:

[root@metrics-store ~]# cat /etc/yum.repos.d/viaq.repo
[centos-openshift-origin]
name=CentOS OpenShift Origin
baseurl=http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
enabled=1
gpgcheck=1
gpgkey=https://tdawson.fedorapeople.org/centos/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-common-candidate]
name=CentOS OpenShift Common Candidate
baseurl=https://cbs.centos.org/repos/paas7-openshift-common-candidate/x86_64/os/
enabled=0
gpgcheck=0

[centos-openshift-origin14-candidate]
name=CentOS OpenShift Origin14 Candidate
baseurl=http://cbs.centos.org/repos/paas7-openshift-origin14-candidate/x86_64/os/
enabled=1
gpgcheck=0
gpgkey=https://tdawson.fedorapeople.org/centos/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin15-candidate]
name=CentOS OpenShift Origin15 Candidate
baseurl=http://cbs.centos.org/repos/paas7-openshift-origin15-candidate/x86_64/os/
enabled=1
gpgcheck=0
gpgkey=https://tdawson.fedorapeople.org/centos/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin36-candidate]
name=CentOS OpenShift Origin36 Candidate
baseurl=http://cbs.centos.org/repos/paas7-openshift-origin36-candidate/x86_64/os
enabled=1
gpgcheck=0
[root@metrics-store ~]#

@tasdikrahman
Author

@richm Also, here is the Ansible inventory:

[root@metrics-store ~]# cat /root/ansible-inventory

[OSEv3:children]
nodes
masters

[OSEv3:vars]
ansible_connection=local
openshift_release=v3.6
openshift_hosted_logging_deploy=true
openshift_logging_install_logging=true
short_version=3.6
openshift_image_tag=latest
oreg_url=openshift/origin-${component}:latest
openshift_deployment_type=origin
openshift_logging_namespace=logging
openshift_master_identity_providers=[{'challenge': 'true', 'login': 'true', 'kind': 'AllowAllPasswordIdentityProvider', 'name': 'allow_all'}]
openshift_logging_es_cluster_size=1
openshift_logging_image_prefix=docker.io/openshift/origin-
openshift_logging_image_version=latest
deployment_type=origin
openshift_logging_es_allow_external=True
openshift_logging_use_mux=True
openshift_logging_mux_allow_external=True
openshift_logging_mux_client_mode=maximal
openshift_check_min_host_memory_gb=7
openshift_check_min_host_disk_gb=14
openshift_disable_check="package_version,docker_storage"

[nodes]
localhost storage=True openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True

[masters]
localhost storage=True openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True
[root@metrics-store ~]#

@tasdikrahman
Author

Ohh, I was missing the @ when passing the vars file to -e. So the command has become:

ANSIBLE_LOG_PATH=/tmp/ansible.log ansible-playbook -vvv -e @/root/vars.yaml -i /root/ansible-inventory playbooks/byo/config.yml
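
For anyone else hitting this, the behavior is standard ansible-playbook: an -e argument beginning with @ is read as a vars file, while anything else is treated as inline key=value/JSON vars, so the earlier command never actually loaded vars.yaml (which is why ansible_ssh_user was undefined):

ansible-playbook -e /root/vars.yaml ...    # treated as a literal string; vars.yaml is NOT loaded
ansible-playbook -e @/root/vars.yaml ...   # vars.yaml is parsed as YAML and its variables are loaded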

But I am now stuck with this traceback:

TASK [openshift_node_dnsmasq : fail] **********************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_node_dnsmasq/tasks/no-network-manager.yml:2
fatal: [localhost]: FAILED! => {
    "changed": false,
    "failed": true
}

MSG:

Currently, NetworkManager must be installed and enabled prior to installation.

	to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP ************************************************************************************************************************************************************************************
localhost                  : ok=440  changed=66   unreachable=0    failed=1


Failure summary:

  1. Host:     localhost
     Play:     Configure nodes
     Task:     openshift_node_dnsmasq : fail
     Message:  Currently, NetworkManager must be installed and enabled prior to installation.

@richm

@tasdikrahman
Author

tasdikrahman commented Aug 17, 2017

An observation here is that the playbook doesn't check whether NetworkManager is running or masked.

In my case, NetworkManager.service was masked, and I had to unmask it and then do a systemctl start NetworkManager (exact commands below). That fixed the above issue. Maybe we can have some checks for this while running the playbook.
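
For anyone hitting the same thing, this is roughly what I ran:

# NetworkManager.service was masked on this host
systemctl unmask NetworkManager
systemctl start NetworkManager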

@richm
Member

richm commented Aug 17, 2017

@tasdikrahman OK, please file a bug against openshift-ansible to check for and/or enable NetworkManager.

Can we close this issue?

@tasdikrahman
Author

@richm Yes, I am closing this issue. Thanks for the help.
