ISSUE TYPE
Feature Idea
COMPONENT NAME
ACI
ANSIBLE VERSION
v2.6
SUMMARY
The general idea is that the ACI modules would feel more native and better integrated with how Ansible works. The information/credentials needed to connect to the APIC would be stored in the inventory (using ansible_host, ansible_port, ansible_user and ansible_password), and each playbook task would only include the parameters required for its specific purpose.
Other benefits of using an ACI connection plugin include:
- It would manage the connection and could handle HTTP errors more gracefully
- On connection problems it could rebuild the session transparently
- During maintenance or APIC cluster issues the connection plugin would switch between APICs (providing high availability)
- It would centralize connection information per node or per group, keeping credentials out of playbooks
- It would avoid long runs of consecutive authentication API calls, which can trigger connection throttling and cause playbook failures
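To make the failover and session-rebuild behaviour concrete, here is a minimal sketch in plain Python. This is not the Ansible connection plugin API; the class name, the injected `login` callable, and the use of `ConnectionError` are all illustrative assumptions about how such a plugin could cycle through APICs and transparently re-authenticate.

```python
# Hypothetical sketch, not the actual plugin: a session manager that tries
# each APIC in turn and rebuilds the session on failure, illustrating the
# high-availability and session-rebuild points above.

class ApicSessionManager:
    """Keep one logged-in session, cycling through a list of APIC hosts."""

    def __init__(self, hosts, login):
        # 'login' is an injected callable(host) -> session object;
        # it raises ConnectionError when that APIC is unreachable.
        self.hosts = list(hosts)
        self.login = login
        self.session = None

    def connect(self):
        # Try every APIC once; the first successful login wins.
        errors = []
        for host in self.hosts:
            try:
                self.session = self.login(host)
                return self.session
            except ConnectionError as exc:
                errors.append((host, str(exc)))
        raise ConnectionError("all APICs unreachable: %r" % errors)

    def request(self, send):
        # Reuse the cached session; on a connection error, rebuild the
        # session transparently (possibly on another APIC) and retry once.
        if self.session is None:
            self.connect()
        try:
            return send(self.session)
        except ConnectionError:
            self.connect()
            return send(self.session)
```

Because the login logic is injected, the failover behaviour can be exercised without a real APIC: pass a `login` callable that fails for the first host and succeeds for the second, and `connect()` returns a session against the second APIC.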
Currently we do:
- hosts: apic_cluster01
  tasks:
  - aci_tenant:
      hostname: 10.1.2.1
      username: admin
      password: SecretPassword
      tenant: customer-xyz
      description: Customer XYZ
      state: present
  - aci_vrf:
      hostname: 10.1.2.1
      username: admin
      password: SecretPassword
      tenant: customer-xyz
      vrf: lab
      description: Lab VRF
      policy_control_preference: enforced
      policy_control_direction: ingress
  - aci_bd:
      hostname: 10.1.2.1
      username: admin
      password: SecretPassword
      tenant: customer-xyz
      vrf: lab
      bd: app01
      enable_routing: yes
  - aci_bd_subnet:
      hostname: 10.1.2.1
      username: admin
      password: SecretPassword
      tenant: customer-xyz
      bd: app01
      gateway: 10.10.10.1
      mask: 24
      scope: private
...
A typical playbook would then be much more concise and readable:
- hosts: apic_cluster01
  tasks:
  - aci_tenant:
      tenant: customer-xyz
      description: Customer XYZ
      state: present
  - aci_vrf:
      tenant: customer-xyz
      vrf: lab
      description: Lab VRF
      policy_control_preference: enforced
      policy_control_direction: ingress
  - aci_bd:
      tenant: customer-xyz
      vrf: lab
      bd: app01
      enable_routing: yes
  - aci_bd_subnet:
      tenant: customer-xyz
      bd: app01
      gateway: 10.10.10.1
      mask: 24
      scope: private
...
The inventory for an ACI cluster would then look like:
all:
  apic_cluster01:
    ansible_host: [ 10.1.2.1, 10.1.2.2, 10.1.2.3 ]
    ansible_connection: aci
    ansible_user: admin
    ansible_password: SuperSecret
    proxy_env:
      http_proxy: http://proxy.example.com:8080
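The same connection details could equally live in a group_vars file instead of the inventory itself. The following is a hypothetical layout, not part of the proposal: the file name and the ansible-vault variable are assumptions, shown only to illustrate the "keeping credentials out of playbooks" benefit.

```yaml
# group_vars/apic_cluster01.yml -- hypothetical layout, assuming the proposed
# 'aci' connection plugin reads the standard ansible_* connection variables.
ansible_connection: aci
ansible_host: [ 10.1.2.1, 10.1.2.2, 10.1.2.3 ]
ansible_user: admin
ansible_password: "{{ vault_apic_password }}"  # secret kept in ansible-vault
proxy_env:
  http_proxy: http://proxy.example.com:8080
```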
This relates to #33887