salt-formulas/salt-formula-salt


Usage

Salt is a new approach to infrastructure management. Easy enough to get running in minutes, scalable enough to manage tens of thousands of servers, and fast enough to communicate with them in seconds.

Salt delivers a dynamic communication bus for infrastructures that can be used for orchestration, remote execution, configuration management and much more.

Sample Metadata

Salt Master

Salt master with base formulas and pillar metadata back end:

tests/pillar/master_single_pillar.sls
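
For orientation, a condensed sketch of such a pillar, assuming the environment/formula layout used by this formula; hostnames, formula names, and the repository address are placeholders (see the referenced file for the full example):

salt:
  master:
    enabled: true
    command_timeout: 5
    worker_threads: 2
    base_environment: prd
    environment:
      prd:
        formula:
          service01:           # placeholder formula installed from git
            source: git
            address: 'git@git.domain.com/service01-formula.git'
            revision: master
          service02:           # placeholder formula installed from a package
            source: pkg
            name: salt-formula-service02
    pillar:
      engine: salt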

Salt master with reclass ENC metadata back end:

tests/pillar/master_single_reclass.sls
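
A minimal sketch of a reclass-backed master, assuming the reclass model lives under /srv/salt/reclass (the path is illustrative):

salt:
  master:
    enabled: true
    pillar:
      engine: reclass
      data_dir: /srv/salt/reclass   # illustrative path to the reclass model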

Salt master with Architect ENC metadata back end:

Salt master with multiple ext_pillars:

Salt master with API:

tests/pillar/master_api.sls
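
A minimal sketch of the API pillar (the bind address and port are illustrative):

salt:
  api:
    enabled: true
    bind:
      address: 0.0.0.0   # illustrative bind address
      port: 8000         # illustrative port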

Salt master with defined user ACLs:

tests/pillar/master_acl.sls
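
A sketch of a user ACL definition, assuming the formula's salt:master:user mapping; the username and function patterns are placeholders:

salt:
  master:
    user:
      jdoe:                # placeholder username
        enabled: true
        permissions:
        - 'test.ping'      # placeholder allowed functions
        - 'pkg.*'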

Salt master with preset minions:
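
A minimal sketch, assuming a salt:master:minions list (the node name is a placeholder):

salt:
  master:
    enabled: true
    minions:
    - name: 'node1.system.location.domain.com'   # placeholder minion ID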

Salt master with pip based installation (optional):
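
A minimal sketch, assuming a salt:master:source block selecting the pip engine (the version pin is illustrative):

salt:
  master:
    enabled: true
    source:
      engine: pip
      version: 2016.3.0rc2   # illustrative version pin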

Install formula through system package management:

The keystone formula is installed at its latest version, and the formulas without a version attribute are installed in a single call to the aptpkg module. If the version attribute is present, the sls iterates over the formulas and installs the specific version or removes the formula, as appropriate. The version attribute may take one of these values: [latest|purged|removed|<VERSION>].
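
For illustration, a sketch matching the description above: keystone pinned to latest, one formula pinned to a specific version, and one with no version attribute (formula names and the version string are placeholders):

salt:
  master:
    environment:
      prd:
        keystone:
          source: pkg
          name: salt-formula-keystone
          version: latest                        # handled per-formula by the sls
        nova:
          source: pkg
          name: salt-formula-nova
          version: 2016.12.1+201612221534.pika   # placeholder version pin
        cinder:
          source: pkg
          name: salt-formula-cinder              # no version: installed in one aptpkg call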

Clone master branch of keystone formula as local feature branch:

Salt master with specified formula refs (for example, for Gerrit review):

Salt master logging configuration:

Salt minion logging configuration:

Salt master with logging handlers:

Salt engine definition for saltgraph metadata collector:

Salt engine definition for Architect service:

Salt engine definition for sending events from docker events:

Salt master peer setup for remote certificate signing:

Salt master backup configuration:

Configure verbosity of state output (used for salt command):
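
A minimal sketch, assuming the formula passes the value through to Salt's state_output master option (changes is one of Salt's standard values, alongside full, terse, and mixed):

salt:
  master:
    state_output: changes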

Pass pillar render error to minion log:
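
A minimal sketch using Salt's pillar_safe_render_error master option (mirroring it at this pillar path is an assumption):

salt:
  master:
    pillar_safe_render_error: False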

Note

When set to False, this option is useful for debugging. However, it is not recommended for any production environment, as the rendered error may contain templating data, such as passwords, that the minion should not expose.

Enable Windows repository support:

Configure a gitfs_remotes resource:
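
A sketch under the assumption that the formula maps a dictionary of named remotes onto Salt's gitfs_remotes option; the remote name, URL, and branch are placeholders:

salt:
  master:
    gitfs_remotes:
      salt_formulas:        # hypothetical remote name
        url: https://github.com/salt-formulas/salt-formula-salt.git
        enabled: true
        params:
          base: master      # branch served as the base environment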

Read more about gitfs resource options in the official Salt documentation.

Event/Reactor systems

Salt to synchronize node pillar and modules after start:
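
A sketch of the reactor mapping, assuming reactor definitions live under salt:master:reactor and the formula ships a node start reactor sls (the sls path is an assumption):

salt:
  master:
    reactor:
      'salt/minion/*/start':
      - salt://salt/reactor/node_start.sls   # assumed reactor sls shipped by the formula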

Trigger basic node install:
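
Similarly, a sketch mapping an install event onto a reactor (the tag and sls path are assumptions):

salt:
  master:
    reactor:
      'salt/minion/install':
      - salt://salt/reactor/node_install.sls   # assumed reactor sls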

Sample event to trigger the node installation:
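
For example, fired with salt-call (the tag must match the reactor mapping above):

salt-call event.send 'salt/minion/install'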

Run any defined orchestration pipeline:
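
A sketch, assuming an orchestration start tag mapped onto a reactor that runs the pipeline named in the event data (the sls path is an assumption):

salt:
  master:
    reactor:
      'salt/orchestrate/start':
      - salt://salt/reactor/orchestrate_start.sls   # assumed reactor sls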

Event to trigger the orchestration pipeline:
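
For example (the orchestrate file name is a placeholder):

salt-call event.send 'salt/orchestrate/start' "{'orchestrate': 'salt/orchestrate/infra_install.sls'}"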

Synchronise modules and pillars on minion start:

Add and/or remove the minion key:
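
A sketch of the reactor mappings (tags and sls paths are assumptions):

salt:
  master:
    reactor:
      'salt/key/create':
      - salt://salt/reactor/key_create.sls   # assumed reactor sls
      'salt/key/remove':
      - salt://salt/reactor/key_remove.sls   # assumed reactor sls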

Event to trigger the key creation:
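
For example (the node ID and host are placeholders; the extra orchestration parameters described in the note below can be appended to the same data dictionary):

salt-call event.send 'salt/key/create' "{'node_id': 'id-of-minion', 'node_host': '172.16.10.100'}"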

Note

You can pass additional orch_pre_create, orch_post_create, orch_pre_remove, or orch_post_remove parameters to the event to call extra orchestrate files. This can be useful, for example, for registering or unregistering nodes in monitoring alarms or dashboards.

The key creation event needs to be run from a machine other than the one being registered.

Event to trigger the key removal:
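
For example:

salt-call event.send 'salt/key/remove'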

Control VM provisioning:

_param:
  private-ipv4: &private-ipv4
  - id: private-ipv4
    type: ipv4
    link: ens2
    netmask: 255.255.255.0
    routes:
    - gateway: 192.168.0.1
      netmask: 0.0.0.0
      network: 0.0.0.0
virt:
  disk:
    three_disks:
      - system:
          size: 4096
          image: ubuntu.qcow
      - repository_snapshot:
          size: 8192
          image: snapshot.qcow
      - cinder-volume:
          size: 2048
  nic:
    control:
    - name: nic01
      bridge: br-pxe
      model: virtio
    - name: nic02
      bridge: br-cp
      model: virtio
    - name: nic03
      bridge: br-store-front
      model: virtio
    - name: nic04
      bridge: br-public
      model: virtio
    - name: nic05
      bridge: br-prv
      model: virtio
      virtualport:
        type: openvswitch

salt:
  control:
    enabled: true
    virt_enabled: true
    size:
      medium_three_disks:
        cpu: 2
        ram: 4
        disk_profile: three_disks
    cluster:
      mycluster:
        domain: neco.virt.domain.com
        engine: virt
        # Cluster global settings
        rng: false
        enable_vnc: True
        seed: cloud-init
        cloud_init:
          user_data:
            disable_ec2_metadata: true
            resize_rootfs: True
            timezone: UTC
            ssh_deletekeys: True
            ssh_genkeytypes: ['rsa', 'dsa', 'ecdsa']
            ssh_svcname: ssh
            locale: en_US.UTF-8
            disable_root: true
            apt_preserve_sources_list: false
            apt:
              sources_list: ""
              sources:
                ubuntu.list:
                  source: ${linux:system:repo:ubuntu:source}
                mcp_saltstack.list:
                  source: ${linux:system:repo:mcp_saltstack:source}
        node:
          ubuntu1:
            provider: node01.domain.com
            image: ubuntu.qcow
            size: medium
            img_dest: /var/lib/libvirt/ssdimages
            # Node settings override cluster global ones
            enable_vnc: False
            rng:
              backend: /dev/urandom
              model: random
              rate:
                period: '1800'
                bytes: '1500'
            # Custom per-node loader definition (e.g. for AArch64 UEFI)
            loader:
              readonly: yes
              type: pflash
              path: /usr/share/AAVMF/AAVMF_CODE.fd
            machine: virt-2.11  # Custom per-node virt machine type
            cpu_mode: host-passthrough
            cpuset: '1-4'
            mac:
              nic01: AC:DE:48:AA:AA:AA
              nic02: AC:DE:48:AA:AA:BB
            # netconfig affects: hostname during boot
            # manual interfaces configuration
            cloud_init:
              network_data:
                networks:
                - <<: *private-ipv4
                  ip_address: 192.168.0.161
              user_data:
                salt_minion:
                  conf:
                    master: 10.1.1.1
          ubuntu2:
            seed: qemu-nbd
            cloud_init:
              enabled: false

There are two methods to seed an initial Salt minion configuration to libvirt VMs: mount a disk and update its filesystem, or create a ConfigDrive with a cloud-init config. This is controlled by the "seed" parameter on the cluster and node levels. When set to True or "qemu-nbd", the old method of mounting a disk is used. When set to "cloud-init", the new method is used. When set to False, no seeding happens. The default value is True, meaning the "qemu-nbd" method is used. This is done for backward compatibility and may change in the future.

The recommended method is to use cloud-init. It is controlled by the "cloud_init" dictionary on the cluster and node levels. Node-level parameters are merged on top of cluster-level parameters. The Salt minion config is populated automatically based on the VM name and the config settings of the minion that is actually executing the state. To override them, add a "salt_minion" section into the "user_data" section as shown above. It is possible to disable cloud-init by setting "cloud_init.enabled" to False.

To enable Redis plugin for the Salt caching subsystem, use the below pillar structure:
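
A minimal sketch, assuming the formula maps a salt:master:cache dictionary onto Salt's redis cache options (host, port, and db are placeholders):

salt:
  master:
    cache:
      plugin: redis
      host: localhost   # placeholder redis host
      port: 6379
      db: '0'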

Jinja options

Use the following options to update the default Jinja renderer options. Salt recognizes Jinja options for templates and for sls files.
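
A sketch under the assumption that the formula passes these dictionaries through to Salt's jinja_env (templates) and jinja_sls_env (sls files) options; the pillar path and the prefixes shown are illustrative:

salt:
  master:
    jinja_sls_env:                 # options applied when rendering sls files
      line_statement_prefix: '%'
      line_comment_prefix: '##'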

For full list of options, see Jinja documentation: http://jinja.pocoo.org/docs/api/#high-level-api

With the line_statement_prefix and line_comment_prefix options enabled, the following code statements are valid:
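
For illustration, a state snippet written with line statements and line comments instead of {% ... %} and {# ... #} blocks (the package list is a placeholder):

% set pkgs = ['salt-master', 'salt-minion']
## iterate over the placeholder package list
% for pkg in pkgs:
{{ pkg }}:
  pkg.installed
% endfor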

Encrypted pillars

Note

NACL and the configuration below are available in Salt versions above 2017.7.

External resources:

Configure salt NACL module:

NACL encrypt secrets:
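
For example, generating a keypair and encrypting a value locally with Salt's nacl execution module (the file paths are illustrative):

salt-call --local nacl.keygen sk_file=/etc/salt/pki/master/nacl
salt-call --local nacl.enc 'my_secret_value' pk_file=/etc/salt/pki/master/nacl.pub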

NACL encrypted values on pillar:

Use Boxed syntax NACL[CryptedValue=] to encode value on pillar:
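
For example (the ciphertext is a placeholder standing in for the output of nacl.enc):

my_app:
  password: NACL[dGhpcyBpcyBhIHBsYWNlaG9sZGVy=]   # placeholder ciphertext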

NACL large files:

NACL within template/native pillars:

Salt Syndic

The master of masters:
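
A minimal sketch using Salt's order_masters option (exposing it at this pillar path is an assumption):

salt:
  master:
    enabled: true
    order_masters: True   # allow this master to control lower syndics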

Lower syndicated master:
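
A minimal sketch, assuming a salt:syndic block pointing at the master of masters (the hostname is a placeholder):

salt:
  syndic:
    enabled: true
    master:
      host: master-master.domain.com   # placeholder master-of-masters host
    timeout: 5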

Syndicated master with multiple master of masters:

Salt Minion

By default, the minion ID triggers a dependency on the linux formula, as it uses the FQDN configured from the linux.system.name and linux.system.domain pillars. To override it, provide the exact minion ID you require. The same can be set for the master ID rendered in master.conf.

Simplest Salt minion setup with central configuration node:

tests/pillar/minion_master.sls
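
A minimal sketch of that pillar (the master hostname is a placeholder):

salt:
  minion:
    enabled: true
    master:
      host: config01.dc01.domain.com   # placeholder master host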

Multi-master Salt minion setup:

tests/pillar/minion_multi_master.sls
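
A minimal sketch, assuming a masters list instead of a single master block (hostnames are placeholders):

salt:
  minion:
    enabled: true
    masters:
    - host: config01.dc01.domain.com   # placeholder master hosts
    - host: config02.dc01.domain.com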

Salt minion with salt mine options:

tests/pillar/minion_mine.sls

Salt minion with graphing dependencies:

tests/pillar/minion_graph.sls

Salt minion behind HTTP proxy:
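
A minimal sketch, assuming a proxy block under salt:minion (host and port are placeholders):

salt:
  minion:
    proxy:
      host: 127.0.0.1   # placeholder proxy host
      port: 3128        # placeholder proxy port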

Salt minion with a non-default HTTP backend specified. The default tornado backend does not respect HTTP proxy settings set as environment variables; overriding it is useful for cases where you need to set no_proxy lists.
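
A sketch under the assumption that the formula exposes a backend key under salt:minion for this purpose:

salt:
  minion:
    backend: urllib2   # assumed key; switches away from the default tornado backend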

Salt minion with PKI certificate authority (CA):

tests/pillar/minion_pki_ca.sls

Salt minion using PKI certificate:

tests/pillar/minion_pki_cert.sls

Salt minion trusting CA certificates issued by the Salt CA on a specific host (i.e., the salt-master node):
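
A minimal sketch, assuming a trusted_ca_minions list naming the CA-hosting minion (the minion ID is a placeholder):

salt:
  minion:
    trusted_ca_minions:
    - cfg01   # placeholder ID of the node running the salt CA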

Salt Minion Proxy

Salt proxy pillar:
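
A sketch under the assumption that proxied devices are declared in a proxy_minion block of the managing minion's pillar; the device name and engine are placeholders:

salt:
  minion:
    proxy_minion:
      master: localhost
      device:
        csr1000v.mydomain.local:   # placeholder device
          enabled: true
          engine: napalm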

Note

This is the pillar of the real salt-minion.

Proxy pillar for IOS device:
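
A sketch of a napalm-based proxy pillar for IOS (the host and credentials are placeholders):

proxy:
  proxytype: napalm
  driver: ios
  host: csr1000v.mydomain.local   # placeholder device host
  username: admin                 # placeholder credentials
  passwd: secret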

Note

This is the pillar of a node that is not able to run salt-minion itself.

Proxy pillar for JunOS device:
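
The JunOS variant differs mainly in the driver and, typically, the NETCONF port (the host and credentials are placeholders):

proxy:
  proxytype: napalm
  driver: junos
  host: vsrx01.mydomain.local   # placeholder device host
  username: root                # placeholder credentials
  passwd: secret
  optional_args:
    port: 830                   # NETCONF port used by the junos driver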

Note

This pillar applies to a node that cannot run salt-minion itself.

Salt SSH

Salt SSH with sudoer using key:

tests/pillar/master_ssh_minion_key.sls

Salt SSH with sudoer using password:

tests/pillar/master_ssh_minion_password.sls

Salt SSH with root using password:

tests/pillar/master_ssh_minion_root.sls

Salt control (cloud/kvm/docker)

Salt cloud with local OpenStack provider:

tests/pillar/control_cloud_openstack.sls

Salt cloud with Digital Ocean provider:

tests/pillar/control_cloud_digitalocean.sls

Salt virt with KVM cluster:

tests/pillar/control_virt.sls

Salt virt with custom destination for image file:

tests/pillar/control_virt_custom.sls

Usage

Working with salt-cloud:
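
For example, deploying a cloud map in parallel (the map path is a placeholder):

salt-cloud -m /path/to/cloud.map -P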

Debug LIBCLOUD for salt-cloud connection:
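
For example, using libcloud's LIBCLOUD_DEBUG environment variable together with verbose salt-cloud logging (the provider name is a placeholder):

export LIBCLOUD_DEBUG=/dev/stderr
salt-cloud --list-sizes my_openstack_provider --log-level all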

Read more

salt-cloud

Documentation and Bugs