
Install failure, error message: "assertion": "ansible_swaptotal_mb == 0" #2031

Closed
wumingpu opened this issue Dec 6, 2017 · 11 comments

Comments

wumingpu commented Dec 6, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Environment:

  • Cloud provider or hardware configuration:
    My localhost VM

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Linux 4.4.0-62-generic x86_64
    NAME="Ubuntu"
    VERSION="16.04.2 LTS (Xenial Xerus)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 16.04.2 LTS"
    VERSION_ID="16.04"
    HOME_URL="http://www.ubuntu.com/"
    SUPPORT_URL="http://help.ubuntu.com/"
    BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
    VERSION_CODENAME=xenial
    UBUNTU_CODENAME=xenial

  • Version of Ansible (ansible --version):
    ansible 2.4.2.0
    config file = /root/.kubespray/ansible.cfg
    configured module search path = [u'/root/.kubespray/library']
    ansible python module location = /usr/lib/python2.7/dist-packages/ansible
    executable location = /usr/bin/ansible
    python version = 2.7.12 (default, Nov 20 2017, 18:23:56) [GCC 5.4.0 20160609]

Kubespray version (commit) (git rev-parse --short HEAD):
c2347db

Network plugin used:
flannel

Copy of your inventory file:

[kube-master]
master

[all]
master ansible_host=10.22.1.180 ansible_user=root ip=10.22.1.180
node1 ansible_host=10.22.1.181 ansible_user=root ip=10.22.1.181
node2 ansible_host=10.22.1.182 ansible_user=root ip=10.22.1.182
node3 ansible_host=10.22.1.183 ansible_user=root ip=10.22.1.183

[k8s-cluster:children]
kube-node
kube-master

[kube-node]
node1
node2
node3

[etcd]
master

Command used to invoke ansible:
time kubespray deploy --verbose -u root -k /root/.ssh/id_rsa -n flannel

Output of ansible run:

task path: /root/.kubespray/roles/kubernetes/preinstall/tasks/verify-settings.yml:76
Wednesday 06 December 2017 16:51:43 +0800 (0:00:00.161) 0:00:24.714 ****
fatal: [master]: FAILED! => {
"assertion": "ansible_swaptotal_mb == 0",
"changed": false,
"evaluated_to": false
}
fatal: [node1]: FAILED! => {
"assertion": "ansible_swaptotal_mb == 0",
"changed": false,
"evaluated_to": false
}
fatal: [node2]: FAILED! => {
"assertion": "ansible_swaptotal_mb == 0",
"changed": false,
"evaluated_to": false
}
fatal: [node3]: FAILED! => {
"assertion": "ansible_swaptotal_mb == 0",
"changed": false,
"evaluated_to": false
}
to retry, use: --limit @/root/.kubespray/cluster.retry


< PLAY RECAP >

    \   ^__^
     \  (oo)\_______
        (__)\       )\/\
            ||----w |
            ||     ||

localhost : ok=2 changed=0 unreachable=0 failed=0
master : ok=17 changed=0 unreachable=0 failed=1
node1 : ok=15 changed=0 unreachable=0 failed=1
node2 : ok=15 changed=0 unreachable=0 failed=1
node3 : ok=15 changed=0 unreachable=0 failed=1

Anything else do we need to know:

jicki commented Dec 7, 2017

cd /tmp

rm -rf node*

Run on every server:

swapoff -a

After that finishes, re-run the deploy.
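The advice above amounts to: delete the cached fact files on the Ansible control machine, run swapoff on every node, then re-deploy. As a small runnable sketch, here is a helper that checks the same condition the failing assert tests (the function name and file argument are illustrative, not part of kubespray):

```shell
# swap_active FILE — exit 0 if a /proc/swaps-style FILE lists any active
# swap device (i.e. has lines beyond the header row). /proc/swaps with
# only its header line is what makes ansible_swaptotal_mb come out as 0.
swap_active() {
  [ "$(tail -n +2 "$1" | wc -l)" -gt 0 ]
}

# On each node, as root:
#   swapoff -a                      # disable all swap immediately
#   swap_active /proc/swaps || echo "swap is off"
```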

@ipeacocks

I can confirm this issue. Swap on the VM is turned off:

# free
              total        used        free      shared  buff/cache   available
Mem:        1796368      544636      110716        8484     1141016     1070604
Swap:

but kubespray's Ansible installer doesn't think so:

TASK [kubernetes/preinstall : Stop if swap enabled] *************************************************************************************************************
Sunday 10 December 2017  03:26:41 +0200 (0:00:00.057)       0:00:12.176 ******* 
fatal: [k8s-s1.me]: FAILED! => {
    "assertion": "ansible_swaptotal_mb == 0",
    "changed": false,
    "evaluated_to": false
}
fatal: [k8s-m1.me]: FAILED! => {
    "assertion": "ansible_swaptotal_mb == 0",
    "changed": false,
    "evaluated_to": false
}
fatal: [k8s-m2.me]: FAILED! => {
    "assertion": "ansible_swaptotal_mb == 0",
    "changed": false,
    "evaluated_to": false
}

Ubuntu 16.04, baremetal installation.

@ipeacocks

Fully recreated the VMs, and the issue went away. I suppose it was due to cached Ansible facts or something like that.

jicki commented Dec 11, 2017

@ipeacocks rm -rf /tmp/node*

@ipeacocks

@jicki, yes, I've tried that but it didn't help me.

ipeacocks commented Dec 11, 2017

I suppose this is the correct path for cleaning cached facts:

/home/<your_username>/.ansible

Or, even better, regenerate facts on each run, but that requires code changes.
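For context, the /tmp/node* files exist because the project's ansible.cfg enables file-based fact caching; a configuration along these lines (values here are illustrative, check the repo's actual ansible.cfg) produces one JSON fact file per host under /tmp:

```ini
[defaults]
gathering = smart              ; skip re-gathering when cached facts look fresh
fact_caching = jsonfile        ; persist facts as /tmp/<hostname> JSON files
fact_caching_connection = /tmp
fact_caching_timeout = 86400   ; seconds; a stale entry explains the bogus swap value
```

With smart gathering and a long timeout, a host that had swap on during the first run keeps reporting a non-zero ansible_swaptotal_mb until the cache entry is removed or expires.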

@wumingpu
Author

@jicki Thanks, the problem is solved. I first ran rm -rf /tmp/node* on the master, then ran swapoff -a on each VM; after re-running, the error was gone. Thank you.
@ipeacocks The issue is resolved: I ran "rm -rf /tmp/node*" on the master and then "swapoff -a" on each VM. After re-running the deploy command, I no longer hit this issue. You can try running "swapoff -a" on each VM.

hobbytp commented Dec 29, 2017

Yes, I hit the same issue; my VMs run CentOS. After removing /tmp/node* on the Ansible server and disabling swap (removing the swap entry from /etc/fstab, running swapoff -a, and rebooting), the issue was gone. Thanks for sharing the info.

@junaid-ali
Contributor

Removing /tmp/node* and disabling swap worked for me. @hobbytp we don't need to reboot, since swapoff -a disables swap immediately. For persistence, we do need to update /etc/fstab.
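A sketch of that persistence step (the function name is mine; back up /etc/fstab and run as root when applying this to the real file — the sed uses GNU-style in-place editing):

```shell
# comment_out_swap FILE — comment out every active swap entry in an
# fstab-style FILE so swap stays off after the next reboot.
comment_out_swap() {
  sed -i -e '/^[^#].*[[:space:]]swap[[:space:]]/ s/^/#/' "$1"
}

# Typical use on a node:
#   swapoff -a                    # takes effect immediately
#   comment_out_swap /etc/fstab   # keeps swap off across reboots
```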

@junaid-ali
Contributor

I am a beginner at writing playbooks, but there should be a cleaner way here than reading facts from the /tmp directory.
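One cleaner option (a sketch, not the actual kubespray code) is to re-run the setup module explicitly right before the assertion, so the check always sees freshly gathered facts regardless of any cache:

```yaml
- hosts: k8s-cluster
  tasks:
    - name: Refresh facts, bypassing stale cached values
      setup:

    - name: Stop if swap enabled
      assert:
        that: ansible_swaptotal_mb == 0
```

Running setup updates ansible_swaptotal_mb from the live host, so a node where swapoff -a has just been run passes the assert without anyone deleting files under /tmp.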

@Ghostbaby

/tmp/node* is not on each node; it is on the server where kubespray runs.
