
Enhance the proxy_napalm_wrap decorator to allow "proxyless" execution #48290

Merged
3 commits merged into saltstack:develop on Jun 26, 2018

@mirceaulinic (Member) commented Jun 25, 2018

Salt proxies have usually been designed around a single methodology: for each
device you aim to manage, start one Proxy Minion. The NAPALM modules were no
exception; however, beginning with #38339 (therefore starting with the Nitrogen
release), the functionality has been enhanced so that the code can also be
executed when the regular Minion can be installed directly on the network
hardware.

There is another use case, let's call it "proxyless", where we don't
particularly need to start a Proxy process for each device, but rather simply
invoke arbitrary functions. These changes make this possible: with one single
Minion (whether Proxy or regular), one is able to execute Salt commands going
through the NAPALM library to connect to the remote network device, specifying
the connection details on the command line and/or in the opts/pillar, e.g.
``salt server1 bgp.neighbors driver=junos host=1.2.3.4 username=salt``.

If the ``server1`` Minion has the following block in the Pillar (for example):

```yaml
napalm:
  driver: junos
  username: salt
```

The user would only need to provide the rest of the credentials (depending on
each individual case):

``salt server1 bgp.neighbors host=1.2.3.4``
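The merge described above (pillar defaults, overridden by whatever the user passes on the CLI) can be sketched as follows. This is a hypothetical illustration, not the actual Salt code; the function name `build_connection_opts` is invented for the example.

```python
def build_connection_opts(pillar, **cli_kwargs):
    """Merge CLI-supplied connection details over the ``napalm``
    pillar defaults (hypothetical sketch, not the real decorator)."""
    # Start from the defaults declared in the ``napalm`` pillar block.
    conn_opts = dict(pillar.get('napalm', {}))
    # Anything passed explicitly on the command line wins.
    conn_opts.update({k: v for k, v in cli_kwargs.items() if v is not None})
    return conn_opts

# The pillar provides driver and username; the user only supplies the host:
pillar = {'napalm': {'driver': 'junos', 'username': 'salt'}}
conn = build_connection_opts(pillar, host='1.2.3.4')
# conn == {'driver': 'junos', 'username': 'salt', 'host': '1.2.3.4'}
```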


Having the following configuration in the opts/pillar, for example:

```yaml
napalm:
  username: salt
  password: test

napalm_inventory:
  1.2.3.4:
    driver: eos
  edge01.bzr01:
    driver: junos
```

With the above available in the opts/pillar for a Minion, say ``server1``, the
user would be able to execute ``salt 'server1' bgp.neighbors host=1.2.3.4`` or
``salt 'server1' net.arp host=edge01.bzr01``, etc.
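The inventory lookup this enables can be illustrated with a short sketch, mirroring the ``napalm_opts.update(inventory_opts)`` merge reviewed below. The function name `resolve_device_opts` is invented for the example; it is not the actual Salt helper.

```python
import copy

def resolve_device_opts(config, host):
    """Hypothetical sketch: merge per-host ``napalm_inventory`` details
    over the shared ``napalm`` defaults for the given host."""
    # Deep-copy so mutating the merged dict never touches the original config.
    napalm_opts = copy.deepcopy(config.get('napalm', {}))
    inventory_opts = config.get('napalm_inventory', {}).get(host, {})
    napalm_opts.update(inventory_opts)
    napalm_opts['host'] = host
    return napalm_opts

config = {
    'napalm': {'username': 'salt', 'password': 'test'},
    'napalm_inventory': {
        '1.2.3.4': {'driver': 'eos'},
        'edge01.bzr01': {'driver': 'junos'},
    },
}
opts_for_eos = resolve_device_opts(config, '1.2.3.4')
# {'username': 'salt', 'password': 'test', 'driver': 'eos', 'host': '1.2.3.4'}
```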

mirceaulinic added some commits Jun 25, 2018

Enhance the proxy_napalm_wrap decorator to allow proxyless execution
Add the possibility to have the credentials for all devices into napalm_inventory

This is going to ease the CLI usage, so everything can be reduced to simply
specifying the host of the device, the rest of the connection details being
resolved from the ``napalm`` and ``napalm_inventory`` configuration described
above.

@rallytime rallytime requested a review from saltstack/team-core Jun 25, 2018

```python
napalm_opts.update(inventory_opts)
log.debug('Merging the config for %s with the details found in the napalm inventory:', host)
log.debug(napalm_opts)
opts = opts.copy()  # make sure we don't override the original
```

@gtmanfred (Contributor) commented Jun 25, 2018:

you will probably want to do a ``copy.deepcopy(opts)`` here instead, to get any nested options.

@mirceaulinic (Author, Member) commented Jun 26, 2018:

Good catch, thanks! When I wrote this I was focused only on that chunk of config, but indeed the other nested keys would be affected without ``copy.deepcopy``.
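The distinction gtmanfred points out can be demonstrated with a short example: ``dict.copy()`` creates a new top-level dict but shares the nested dicts, so mutating a nested value through the copy also mutates the original, whereas ``copy.deepcopy`` copies the whole structure.

```python
import copy

# Shallow copy: the nested 'napalm' dict is shared with the original.
opts = {'napalm': {'driver': 'junos'}, 'id': 'server1'}
shallow = opts.copy()
shallow['napalm']['driver'] = 'eos'
print(opts['napalm']['driver'])  # 'eos' -- the original was mutated too

# Deep copy: nested dicts are duplicated, so the original stays intact.
opts = {'napalm': {'driver': 'junos'}, 'id': 'server1'}
deep = copy.deepcopy(opts)
deep['napalm']['driver'] = 'eos'
print(opts['napalm']['driver'])  # 'junos' -- original untouched
```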

@mirceaulinic (Author, Member) commented Jun 26, 2018:

Added bf7baae to fix this.

@gtmanfred (Contributor) commented Jun 25, 2018:

Other than one small change, this LGTM

@rallytime rallytime merged commit 0bb4ece into saltstack:develop Jun 26, 2018

5 of 10 checks passed:

- jenkins/PR/salt-pr-linode-ubuntu16-py3 (Linode Ubuntu16.04 - PY3 #10993): ABORTED
- jenkins/PR/salt-pr-rs-cent7-n (RS CentOS 7 #20076): ABORTED
- codeclimate: 2 issues to fix
- default: Build finished.
- jenkins/PR/salt-pr-linode-cent7-py3 (Linode CentOS 7 - PY3 #6023): FAILURE
- WIP: ready for review
- jenkins/PR/salt-pr-clone (Clone #26227): SUCCESS
- jenkins/PR/salt-pr-docs-n (Docs #18274): SUCCESS
- jenkins/PR/salt-pr-linode-ubuntu14-n (Linode Ubuntu14.04 #23951): SUCCESS
- jenkins/PR/salt-pr-lint-n (Code Lint #22909): SUCCESS