ecs_service does not support the platform-version option #70625

Closed
danlange opened this issue Jul 14, 2020 · 3 comments
Labels
affects_2.9, aws, bug, cloud, collection, collection:community.aws, module, needs_collection_redirect, python3, support:community, traceback

Comments

danlange commented Jul 14, 2020

SUMMARY

The ecs_service module does not expose the --platform-version option of "aws ecs create-service".
ECS requires "--platform-version 1.4.0" to create a service using ECS Fargate that mounts an EFS volume.
See: https://docs.aws.amazon.com/AmazonECS/latest/userguide/platform_versions.html
Please expose this option so I can use ecs_service instead of shelling out to "aws ecs create-service".
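
For illustration, a task along these lines is what the request amounts to. This is only a sketch: the platform_version parameter name is an assumption mirroring the CLI flag (it does not exist in the module today), and the cluster, task definition, subnet, and security group values are placeholders.

- name: Create a Fargate service pinned to platform version 1.4.0
  ecs_service:
    state: present
    name: example-service
    cluster: example-cluster
    task_definition: example-task
    desired_count: 1
    launch_type: FARGATE
    platform_version: "1.4.0"          # hypothetical option requested by this issue
    network_configuration:
      assign_public_ip: yes
      subnets:
        - subnet-0123456789abcdef0     # placeholder
      security_groups:
        - sg-0123456789abcdef0         # placeholder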

ISSUE TYPE
  • Bug Report
COMPONENT NAME

ecs_service

ANSIBLE VERSION
ansible 2.9.10
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.8/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.3 (default, May 15 2020, 01:53:50) [GCC 9.3.0]
CONFIGURATION
OS / ENVIRONMENT

Dockerfile:

FROM docker:latest

RUN set -xe \
    && apk add --no-cache --virtual .build-deps \
        autoconf \
        cmake \
        file \
        g++ \
        gcc \
        libc-dev \
        openssl-dev \
        python3-dev \
        libffi-dev \
        make \
        pkgconf \
        re2c

RUN apk add --no-cache --virtual .persistent-deps \
    bash \
    wget \
    unzip \
    vim \
    jq \
    git \
    py-pip \
    libffi \
    curl \
    openssl \
    groff \
    less \
    python3 \
    && pip install --upgrade \
        awscli \
        ansible \
        boto \
        boto3 \
        botocore \
        docker \
        pip \
    && mkdir /devops \
    && apk del .build-deps

COPY ./hosts /etc/ansible/

WORKDIR /devops

# Set up the application directory
VOLUME ["/devops"]

# Setup user home
VOLUME ["/root"]

CMD ["/bin/bash"]

./hosts:

localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3
STEPS TO REPRODUCE

The following playbook creates a private cloud running an SSH container as an ECS Fargate service, with an EFS volume mounted.

- hosts: localhost
  connection: local
  gather_facts: False

  vars:
    vpc_name: "Example VPC"
    vpc_cidr: "10.20.0.0/16"

    subnet_name: "Example Subnet"
    subnet_cidr: "10.20.0.0/24"
    subnet_az: "{{ AWS_REGION }}a"

    registry_id: "<Substitute your own registry ID here.>"


  tasks:
    - name: Check that required environment variables are set.
      fail: msg="Must set {{ item }}"
      when: "lookup('env', item) == ''"
      with_items:
        - AWS_ACCESS_KEY_ID
        - AWS_SECRET_ACCESS_KEY
        - AWS_REGION

    - name: Set local variables corresponding to environment variables
      set_fact:
        AWS_REGION: "{{ lookup('env', 'AWS_REGION') }}"

    - name: "VPC '{{ vpc_name }}' with CIDR {{ vpc_cidr }}"
      ec2_vpc_net:
        name: "{{ vpc_name }}"
        cidr_block: "{{ vpc_cidr }}"
        tenancy: default
      register: vpc

    - name: Subnet
      ec2_vpc_subnet:
        state: present
        az: "{{ subnet_az }}"
        vpc_id: "{{ vpc.vpc.id }}"
        cidr: "{{ subnet_cidr }}"
        tags:
          Name: "{{ subnet_name }}"
      register: subnet

    - name: Internet gateway
      ec2_vpc_igw:
        vpc_id: "{{ vpc.vpc.id }}"
      register: igw

    - name: Route table
      ec2_vpc_route_table:
        vpc_id: "{{ vpc.vpc.id }}"
        subnets:
          - "{{ subnet.subnet.id }}"
        routes:
          - dest: 0.0.0.0/0
            gateway_id: "{{ igw.gateway_id }}"
      register: route_table

    - name: ECS Cluster
      shell: "aws ecs create-cluster --cluster-name 'example-cluster' --region {{ AWS_REGION }} --capacity-providers FARGATE"
      changed_when: false

    - name: ECR Credentials
      shell: "aws ecr get-login --registry-ids 171421899218 --region {{ AWS_REGION }} --no-include-email | awk '{print $6}'"
      register: ecr_credentials
      changed_when: false

    - name: Stub EFS client security group
      ec2_group:
        name: "example EFS client security group"
        description: Allow example EFS client to make NFS connections
        vpc_id: "{{ vpc.vpc.id }}"
      register: example_efs_client_security_group

    - name: EFS server security group
      ec2_group:
        name: "example EFS server security group"
        description: Allow example EFS server to receive NFS connections
        vpc_id: "{{ vpc.vpc.id }}"
        rules:
          - proto: tcp
            ports: 2049
            group_id: "{{ example_efs_client_security_group.group_id }}"
            rule_desc: Allow Inbound NFS traffic
      register: example_efs_server_security_group

    - name: Fix example EFS client security group
      ec2_group:
        name: "example EFS client security group"
        description: Allow example EFS client to make NFS connections
        vpc_id: "{{ vpc.vpc.id }}"
        rules_egress:
          - proto: tcp
            ports: 2049
            group_id: "{{ example_efs_server_security_group.group_id }}"
            rule_desc: Allow Outbound NFS traffic
      register: example_efs_client_security_group

    - name: example EFS
      efs:
        state: present
        name: "example-efs"
        tags:
          Name: "example-efs"
        targets:
            - subnet_id: "{{ subnet.subnet.id }}"
              security_groups: [ "{{ example_efs_server_security_group.group_id }}" ]
      register: example_efs

    - name: SSH security group
      ec2_group:
        name: "SSH security group"
        description: Allow SSH access
        vpc_id: "{{ vpc.vpc.id }}"
        rules:
          - proto: tcp
            ports:
            - 22
            cidr_ip: 0.0.0.0/0
            rule_desc: allow all on port 22 tcp
      register: ssh_security_group

    - name: ECR image repository for ssh image
      ecs_ecr: name=gotechnies/alpine-ssh
      register: image_repository

    - name: Log in to ECR repository
      docker_login:
        registry: "{{ image_repository.repository.repositoryUri }}"
        username: "AWS"
        password: "{{ ecr_credentials.stdout }}"
        reauthorize: yes
      register: ecr_repository
      changed_when: false

    - name: Upload SSH image to ECR 
      docker_image:
        name: gotechnies/alpine-ssh
        source: pull
        repository: "{{ image_repository.repository.repositoryUri }}"
        tag: latest
        push: yes

    - name: ssh task
      ecs_taskdefinition:
        state: present
        family: ssh
        launch_type: FARGATE
        cpu: "256"
        memory: "0.5GB"
        network_mode: awsvpc
        execution_role_arn: "arn:aws:iam::{{ registry_id }}:role/ecsTaskExecutionRole"
        containers:
        - name: ssh
          essential: true
          image: "{{ image_repository.repository.repositoryUri }}:latest"
          mountPoints:
            - containerPath: /example
              sourceVolume: example-efs
          portMappings:
          - containerPort: 22
            hostPort:      22
        volumes:
          - name: example-efs
            efsVolumeConfiguration:
              fileSystemId: "{{ example_efs.efs.file_system_id }}"
              transitEncryption: DISABLED
      register: ssh_task_definition

    - name: SSH service
      ecs_service:
        state: present
        name: ssh-service
        cluster: "example-cluster"
        task_definition: "{{ ssh_task_definition.taskdefinition.taskDefinitionArn }}"
        desired_count: 1
        launch_type: FARGATE
        network_configuration:
          assign_public_ip: yes
          subnets:
          - "{{ subnet.subnet.id }}"
          security_groups:
          - "{{ ssh_security_group.group_id }}"
          - "{{ example_efs_client_security_group.group_id }}"

EXPECTED RESULTS

The playbook completes successfully. You can SSH into the public IP address of the deployed service (as root/root) and interact with the EFS volume mounted at /example.

Note that you can see the expected results by replacing the last task of the playbook with this workaround:

- name: SSH service
  shell: |
    aws ecs create-service --cluster 'example-cluster' --service-name 'ssh-service' --region '{{ AWS_REGION }}' --platform-version '1.4.0' --task-definition '{{ ssh_task_definition.taskdefinition.taskDefinitionArn }}' --desired-count 1 --launch-type FARGATE --network-configuration '{ "awsvpcConfiguration": { "subnets":["{{ subnet.subnet.id }}"], "securityGroups": ["{{ ssh_security_group.group_id }}", "{{ example_efs_client_security_group.group_id }}"], "assignPublicIp": "ENABLED"}}'
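
Once the workaround has created the service, the platform version it actually runs on can be read back with the AWS CLI. A minimal sketch, reusing the cluster and service names from the playbook above and assuming describe-services reports platformVersion for the service:

- name: Confirm the platform version of the deployed service
  shell: "aws ecs describe-services --cluster 'example-cluster' --services 'ssh-service' --region '{{ AWS_REGION }}' --query 'services[0].platformVersion' --output text"
  register: ssh_service_platform_version
  changed_when: false

- name: Show the platform version
  debug:
    msg: "ssh-service is running on platform version {{ ssh_service_platform_version.stdout }}"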
ACTUAL RESULTS

The last step, the creation of the service itself, fails because the default Fargate platform version (1.3.0) does not support EFS volumes. The error message is:

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: botocore.errorfactory.PlatformTaskDefinitionIncompatibilityException: An error occurred (PlatformTaskDefinitionIncompatibilityException) when calling the CreateService operation: One or more of the requested capabilities are not supported.
fatal: [localhost]: FAILED! => {"boto3_version": "1.14.19", "botocore_version": "1.17.19", "changed": false, "error": {"code": "PlatformTaskDefinitionIncompatibilityException", "message": "One or more of the requested capabilities are not supported."}, "msg": "Couldn't create service: An error occurred (PlatformTaskDefinitionIncompatibilityException) when calling the CreateService operation: One or more of the requested capabilities are not supported.", "response_metadata": {"http_headers": {"connection": "close", "content-length": "132", "content-type": "application/x-amz-json-1.1", "date": "Tue, 14 Jul 2020 01:12:04 GMT", "x-amzn-requestid": "277dd316-c404-4d99-8d09-c4dbc205b62c"}, "http_status_code": 400, "request_id": "277dd316-c404-4d99-8d09-c4dbc205b62c", "retry_attempts": 0}}

ansibot commented Jul 14, 2020

Files identified in the description:

If these files are incorrect, please update the component name section of the description or use the !component bot command.



ansibot commented Jul 14, 2020

@danlange, just so you are aware we have a dedicated Working Group for aws.
You can find other people interested in this in #ansible-aws on Freenode IRC
For more information about communities, meetings and agendas see https://github.com/ansible/community


ansibot added the affects_2.9, aws, bug, cloud, collection, collection:community.aws, module, needs_collection_redirect, needs_triage, python3, support:community, traceback labels Jul 14, 2020
Akasurde (Member) commented

Thank you very much for your interest in Ansible. This plugin/module is no longer maintained in this repository and has been migrated to https://github.com/ansible-collections/community.aws

Migrated this issue to the above repository: ansible-collections/community.aws#136.

If you have further questions, please stop by IRC or the mailing list.

sivel removed the needs_triage label Jul 14, 2020
ansible locked and limited conversation to collaborators Aug 11, 2020