
Using container enabled roles with ansible-container #671

Closed
mchassy opened this issue Jul 29, 2017 · 9 comments · Fixed by #678

mchassy commented Jul 29, 2017

ISSUE TYPE
  • Bug Report
container.yml
version: "2"
settings:
  conductor:
    base: "centos:7"
services:
  mongo:
    from: "centos:7"
    roles:
    - role: setup_container_role
      which_container: "mongo"
    working_dir: "/opt"
    user: orson
    entrypoint: [/entrypoint.sh]
    command: [/usr/bin/dumb-init, sleep, infinity]
  tomee:
    from: "centos:7"
    roles:
    - role: setup_container_role
      which_container: "tomee"
    working_dir: "/opt"
    user: orson
    entrypoint: [/entrypoint.sh]
    command: [/usr/bin/dumb-init, sleep, infinity]

registries:
  docker:
    url: https://index.docker.io/v1/
    # url: https://cloud.docker.com/api/app/v1/service/
    namespace: orsontestdata
  # Add optional registries used for deployment. For example:
  #  google:
  #    url: https://gcr.io
  #    namespace: my-cool-project-xxxxxx

OS / ENVIRONMENT
$ ansible-container --debug version
Ansible Container, version 0.9.2rc0
Darwin, MMac.local, 16.6.0, Darwin Kernel Version 16.6.0: Fri Apr 14 16:21:16 PDT 2017; root:xnu-3789.60.24~6/RELEASE_X86_64, x86_64
2.7.12 (v2.7.12:d33e0cf91556, Jun 26 2016, 12:10:39)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] /Users/markchassy/ansible/bin/python
{
  "ContainersPaused": 0,
  "Labels": null,
  "CgroupDriver": "cgroupfs",
  "ContainersRunning": 0,
  "ContainerdCommit": {
    "Expected": "cfb82a876ecc11b5ca0977d1733adbe58599088a",
    "ID": "cfb82a876ecc11b5ca0977d1733adbe58599088a"
  },
  "InitBinary": "docker-init",
  "NGoroutines": 30,
  "Swarm": {
    "ControlAvailable": false,
    "NodeID": "",
    "Error": "",
    "RemoteManagers": null,
    "LocalNodeState": "inactive",
    "NodeAddr": ""
  },
  "LoggingDriver": "json-file",
  "OSType": "linux",
  "HttpProxy": "",
  "Runtimes": {
    "runc": {
      "path": "docker-runc"
    }
  },
  "DriverStatus": [
    [
      "Root Dir",
      "/var/lib/docker/aufs"
    ],
    [
      "Backing Filesystem",
      "extfs"
    ],
    [
      "Dirs",
      "21"
    ],
    [
      "Dirperm1 Supported",
      "true"
    ]
  ],
  "OperatingSystem": "Alpine Linux v3.5",
  "Containers": 0,
  "HttpsProxy": "",
  "BridgeNfIp6tables": true,
  "MemTotal": 4139130880,
  "SecurityOptions": [
    "name=seccomp,profile=default"
  ],
  "Driver": "aufs",
  "IndexServerAddress": "https://index.docker.io/v1/",
  "ClusterStore": "",
  "InitCommit": {
    "Expected": "949e6fa",
    "ID": "949e6fa"
  },
  "Isolation": "",
  "SystemStatus": null,
  "OomKillDisable": true,
  "ClusterAdvertise": "",
  "SystemTime": "2017-07-29T11:46:06.483740859Z",
  "Name": "moby",
  "CPUSet": true,
  "RegistryConfig": {
    "AllowNondistributableArtifactsCIDRs": [],
    "Mirrors": [],
    "IndexConfigs": {
      "docker.io": {
        "Official": true,
        "Name": "docker.io",
        "Secure": true,
        "Mirrors": []
      }
    },
    "AllowNondistributableArtifactsHostnames": [],
    "InsecureRegistryCIDRs": [
      "127.0.0.0/8"
    ]
  },
  "DefaultRuntime": "runc",
  "ContainersStopped": 0,
  "NCPU": 4,
  "NFd": 18,
  "Architecture": "x86_64",
  "KernelMemory": true,
  "CpuCfsQuota": true,
  "Debug": true,
  "ID": "AC3E:EYB4:UGTT:FYMR:33IQ:ACRY:ZNCO:EK6S:HHK4:OBMR:3RCV:PWSL",
  "IPv4Forwarding": true,
  "KernelVersion": "4.9.36-moby",
  "BridgeNfIptables": true,
  "NoProxy": "*.local, 169.254/16",
  "LiveRestoreEnabled": false,
  "ServerVersion": "17.06.0-ce",
  "CpuCfsPeriod": true,
  "ExperimentalBuild": true,
  "MemoryLimit": true,
  "SwapLimit": true,
  "Plugins": {
    "Volume": [
      "local"
    ],
    "Network": [
      "bridge",
      "host",
      "ipvlan",
      "macvlan",
      "null",
      "overlay"
    ],
    "Authorization": null,
    "Log": [
      "awslogs",
      "fluentd",
      "gcplogs",
      "gelf",
      "journald",
      "json-file",
      "logentries",
      "splunk",
      "syslog"
    ]
  },
  "Images": 20,
  "DockerRootDir": "/var/lib/docker",
  "NEventsListener": 1,
  "CPUShares": true,
  "RuncCommit": {
    "Expected": "2d41c047c83e09a6d61d464906feb2a2f3c52aa4",
    "ID": "2d41c047c83e09a6d61d464906feb2a2f3c52aa4"
  }
}
{
  "KernelVersion": "4.9.36-moby",
  "Arch": "amd64",
  "BuildTime": "2017-06-23T21:51:55.152028673+00:00",
  "ApiVersion": "1.30",
  "Version": "17.06.0-ce",
  "MinAPIVersion": "1.12",
  "GitCommit": "02c1d87",
  "Os": "linux",
  "Experimental": true,
  "GoVersion": "go1.8.3"
}
SUMMARY

I am working on a common deployment of my application for both VM installs and container installs. The VM Ansible playbook is tested and working. In the VM deployment, I have a common setup role which executes as root, then separate roles for mongo and tomee which run as the application user created by the initial setup role.

There is one tomee-specific action in the setup: installing the right version of Java. I do this in the setup role because you need to be root to install Java.

I would like to do the same thing in the container deployment. This means I would run the setup role for each container, passing the name of the container as a variable so that Java gets installed for the tomee container. I also need to execute container-specific actions in the mongo and tomee roles.

From the documentation, I understood two things about roles and container-enabled roles:

  • Any tasks in meta/container.yml will be executed in addition to the tasks in tasks/main.yml
  • I can pass a parameter to a role

So I created a very simple role which basically creates 4 files.

In tasks/main.yml I create 2 files:

  • one which is always created
  • one which is always created, named after the container name I passed as a parameter

In meta/container.yml I create 2 files:

  • one which should always be created when the role is run with ansible-container
  • one which should always be created when the role is run with ansible-container, named after the container name I passed as a parameter
STEPS TO REPRODUCE
ansible-container build
EXPECTED RESULTS
  • 2 containers: orson-tomee and orson-mongo
  • each should contain 4 files in /home/orson
  • 2 of those files should be created with the name of the container which was passed as a parameter
ACTUAL RESULTS
  • The tasks in meta/container.yml are never executed, so only 2 files are created
  • The named file is called mongo for both services
  • The role executes twice, but only one image is built. The same image is created with 2 tags: one for mongo and one for tomee.

Here is setup_container_role/tasks/main.yml

---
- name: Install dumb init
  get_url:
    dest: /usr/bin/dumb-init
    url: https://github.com/Yelp/dumb-init/releases/download/v1.0.2/dumb-init_1.0.2_amd64
    mode: 0775
    validate_certs: no
- name: Make orson group
  group: name=orson state=present
- name: Make orson user
  user: name=orson state=present createhome=yes home=/home/orson/ uid=1000 group=orson
- name: I do this all the time
  file: path=/home/orson/allthetime.txt state=touch mode=0777
- name: I do this based on a variable
  file: path="/home/orson/{{ which_container }}.txt" state=touch mode=0777
- name: place entrypoint template
  template: src=entrypoint.sh dest=/entrypoint.sh mode=0777

Here is setup_container_role/meta/container.yml

---
- name: I do this when I am a container
  file: path=/home/orson/containertime.txt state=touch mode=0777
- name: I do this when I am a container, based on a variable
  file: path="/home/orson/container_{{ which_container }}.txt" state=touch mode=0777

Here you can see that the same image was simply tagged as both mongo and tomee:

$ docker images
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
orson-mongo                 20170729114110      9b3f3e004d42        35 minutes ago      193MB
orson-mongo                 latest              9b3f3e004d42        35 minutes ago      193MB
orson-tomee                 latest              9b3f3e004d42        35 minutes ago      193MB
orson-conductor             latest              e1a4800b1090        35 minutes ago      553MB

chouseknecht commented Aug 1, 2017

@mchassy

I think what you're saying is that the meta/container.yml contains the following:

roles:
  - role: some-role
    which_container: tomee

That's my interpretation of the phrase "The tasks in meta/container.yml".

When the project's container.yml is aggregated with the role's meta/container.yml, what's in container.yml is given precedence. So the roles directive in container.yml would replace the roles directive in meta/container.yml.

If that's indeed what you're trying to do, then obviously it's not going to work as you want. The only way to add additional tasks outside of the role's tasks/main.yml would be to use an include, or to use dependencies in the role's meta/main.yml file.
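
As a rough sketch of those two options (the task file container_tasks.yml and the companion role container_extras are hypothetical names, purely for illustration):

# Option 1: include an extra task file from setup_container_role/tasks/main.yml
- include: container_tasks.yml   # hypothetical task file shipped with the role

# Option 2: declare a dependency in setup_container_role/meta/main.yml
dependencies:
  - role: container_extras       # hypothetical companion role
    which_container: "{{ which_container }}"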

You could also make conditional tasks or includes in tasks/main.yml based on environment variables, or based on the host (or service) name.
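
For instance, a minimal sketch of a task conditioned on the service name (the Java package is just an illustrative assumption):

- name: Install Java only for the tomee service
  yum: name=java-1.8.0-openjdk state=present
  when: inventory_hostname == "tomee"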


mchassy commented Aug 4, 2017

Chris,

Thanks. This solves one of my 2 issues. I had in fact misunderstood the use of meta/container.yml.
So now I can use when: ansible_connection == "docker" and that works perfectly fine.
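
For example, a sketch reusing one of the tasks from my meta/container.yml above, moved into tasks/main.yml:

- name: I do this when I am a container
  file: path=/home/orson/containertime.txt state=touch mode=0777
  when: ansible_connection == "docker"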

The other issue is passing the which_container variable, and that still does not seem to be working.
In my container.yml, I have:

services:
  mongo:
    from: "centos:7"
    roles:
    - role: setup_role
      vars:
        - which_container: "mongo"
    working_dir: "/opt"
    user: orson
    entrypoint: [/entrypoint.sh]
    command: [/usr/bin/dumb-init, sleep, infinity]
  tomee:
    from: "centos:7"
    roles:
    - role: setup_role
      vars:
        - which_container: "tomee"
    working_dir: "/opt"
    user: orson
    entrypoint: [/entrypoint.sh]
    command: [/usr/bin/dumb-init, sleep, infinity]

But the value of mongo is passed both times, and I end up with one image which has simply been tagged with 2 names.


j00bar commented Aug 4, 2017

Sounds like a bug with our role cache detection logic?

@chouseknecht

@mchassy

I was about to say that everything looks fine on my side, and you're crazy! Turns out you're not crazy. I was able to duplicate what you're seeing.

Here's my container.yml, which is very similar to yours. I removed the vars: bit, thinking that might be the cause of the problem. Unfortunately it's not.

version: '2'
settings:
  conductor:
    base: 'centos:7'

services:
  mongo:
    from: "centos:7"
    roles:
    - role: setup_role
      which_container: "mongo"
    working_dir: "/opt"
    user: orson
    entrypoint: [/entrypoint.sh]
    command: [/usr/bin/dumb-init, sleep, infinity]
  tomee:
    from: "centos:7"
    roles:
    - role: setup_role
      which_container: "tomee"
    working_dir: "/opt"
    user: orson
    entrypoint: [/entrypoint.sh]
    command: [/usr/bin/dumb-init, sleep, infinity]

Here's the tasks/main.yml from my local setup role:

- name: Show which_container
  debug: var=which_container

- name: Show current host
  debug: var=inventory_hostname

And here's the playbook output from ansible-container --debug build

ok: [mongo]
META: ran handlers

TASK [setup_role : Show which_container] ***************************************
task path: /src/roles/setup_role/tasks/main.yml:1
ok: [mongo] => {
    "which_container": "mongo"
}

TASK [setup_role : Show current host] ******************************************
task path: /src/roles/setup_role/tasks/main.yml:4
ok: [mongo] => {
    "inventory_hostname": "mongo"
}
META: ran handlers
META: ran handlers

PLAY RECAP *********************************************************************
mongo                      : ok=3    changed=0    unreachable=0    failed=0

And here are my built images:

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
tstfoo-mongo        20170804182951      4b83b7096ee7        41 seconds ago      193MB
tstfoo-mongo        latest              4b83b7096ee7        41 seconds ago      193MB
tstfoo-tomee        latest              4b83b7096ee7        41 seconds ago      193MB
tstfoo-conductor    latest              ae4ba36c5345        51 seconds ago      546MB

Same as what you're seeing. One image has been tagged for both services. It built the mongo service, and then decided it could simply use that image for the tomee service.

Seems our build cache does not account for variables. It only considers the tasks.


mchassy commented Aug 4, 2017

@chouseknecht
Glad to know I'm not crazy ... at least not crazy like that ;-)
For the moment, I think I should be able to get done what I need using:
when: ansible_connection == "docker"
But I can see other circumstances where I would want to pass the service name to the role, so as to set up a generic container in slightly different ways.

@chouseknecht

Working on a quick hack to fix this...


chouseknecht commented Aug 4, 2017

@j00bar

See if my fix makes sense. It at least fixes this particular case. We probably can't account for all variables, but it's something.


mchassy commented Aug 5, 2017

@chouseknecht
I think that I have found the root cause of this (at least from the user's point of view).
Look at the syntax for listing the roles:

    roles:
    - role: setup_role

That is incorrect. It should be:

    roles:
      - role: setup_role

There is no error, but ansible-container is obviously parsing the roles incorrectly when the incorrect syntax is used. If you use the correct syntax, the two containers are created distinctly, as they should be.


mchassy commented Aug 12, 2017

@chouseknecht and @j00bar
Just wanted to thank you guys for the great support on your product.
I now have a single set of roles which can be used to deploy to a VM using ansible-playbook or to containers using ansible-container. By correctly using roles and just a couple of vars, the path to deployment becomes simple. Now I just need to convince my dev team to start using this ;-)
