I have an Ansible role in which I include your role a number of times to run long-running containers on a dedicated machine. I developed everything against Vagrant + the Ansible provisioner. Everything works like a charm.
Then came the time to roll it out on the AWS EC2 production machine. Bam, it broke while setting up the first Docker systemd service with this error (when running Ansible with -vvv):
fatal: [10.130.20.211]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "daemon_reload": true,
            "enabled": true,
            "masked": false,
            "name": "elasticsearch_container.service",
            "no_block": false,
            "state": "{'code': 16, 'name': 'running'}",
            "user": false
        }
    },
    "msg": "value of state must be one of: reloaded, restarted, started, stopped, got: {'code': 16, 'name': 'running'}"
}
So the given state became {'code': 16, 'name': 'running'}. Googling this message structure always directs me to boto, the Python AWS SDK. While I don't really know how Ansible's internals work, my assumption is that something from the boto usage leaks into the namespace of the Ansible role variables; the value above could easily be the state of the EC2 instance that gets checked first.
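To illustrate the suspected collision (this is a hypothetical sketch I made up; the play, host group, and role name are not from the actual setup), a variable named state defined at a higher-precedence level than role defaults would shadow the role's own state default:

```yaml
# Hypothetical sketch: a play-level (or inventory/fact-derived) variable
# named `state` takes precedence over a role default `state: started`,
# so the role's systemd task receives the dict instead of a service state.
- hosts: docker_hosts
  vars:
    # e.g. an EC2 instance state dict as returned by boto
    state: { code: 16, name: running }
  roles:
    - role: docker-systemd-container   # defaults/main.yml defines: state: started
```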
To test this, I renamed the role's default variable state to service_state and updated its usages in install.yml. I ran my playbook again and it completed successfully.
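For reference, the rename boils down to something like the following (a sketch from memory; the task and variable names are illustrative, not copied from the role):

```yaml
# defaults/main.yml — before: `state: started`; after:
service_state: started

# install.yml — reference the renamed, less collision-prone variable
- name: Ensure the container systemd service is in the desired state
  systemd:
    name: "{{ service_name }}.service"
    state: "{{ service_state }}"
    enabled: true
    daemon_reload: true
```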
As I don't know Ansible well enough, I created this ticket here first. It might be that this is valid behavior under Ansible's Variable Precedence rules, but it might also be that this collision is a core Ansible bug. In that case, I hope we can investigate together before filing a bug against the Ansible core project.