napalm grains not available during template rendering #181

Closed
syntax-terr0r opened this issue Oct 14, 2020 · 10 comments · Fixed by #193
Labels: bug (Something isn't working), pending triage

Comments


syntax-terr0r commented Oct 14, 2020

Describe the bug
Grains collected by the napalm grains module (salt/grains/napalm.py) are not available during rendering of pillar SLS files.

Steps To Reproduce
I have a minion with the following grains:

# salt-sproxy cr-testing* grains.get id
cr-testing01.lab1:
    cr-testing01.lab1

# salt-sproxy cr-testing* grains.get model
cr-testing01.lab1:
    VMX

adding a test pillar:

# cat /srv/pillar/shared/test.sls
foo: {{ grains.get('id') }}
bar: {{ grains.get('model') }}

# cat /srv/pillar/top.sls
base:
  '*':
    - 'shared.general'
  'cr-testing01.lab1':
    - shared.test

result:

# salt-sproxy cr-testing* pillar.get foo
cr-testing01.lab1:
    cr-testing01.lab1

# salt-sproxy cr-testing* pillar.get bar
cr-testing01.lab1:
    None

Expected behavior
The "bar" pillar key is expected to contain the value of grains['model'].
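
For reference, given the grains shown above, the expected output would be:

# salt-sproxy cr-testing* pillar.get bar
cr-testing01.lab1:
    VMX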

Versions Report

Salt Version:
           Salt: 3001.1
    Salt SProxy: 2020.7.0

Dependency Versions:
        Ansible: Not Installed
           cffi: 1.14.2
       dateutil: 2.7.3
      docker-py: Not Installed
          gitdb: 2.0.5
      gitpython: 2.1.11
         Jinja2: 2.10
     junos-eznc: 2.5.3
       jxmlease: Not Installed
        libgit2: 0.27.7
       M2Crypto: Not Installed
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.5.6
         NAPALM: 3.1.0
       ncclient: 0.6.9
        Netmiko: 3.2.0
       paramiko: 2.7.2
      pycparser: 2.19
       pycrypto: 2.6.1
   pycryptodome: 3.6.1
         pyeapi: 0.8.3
         pygit2: 0.27.4
       PyNetBox: Not Installed
          PyNSO: Not Installed
         Python: 3.7.3 (default, Jul 25 2020, 13:03:44)
   python-gnupg: Not Installed
         PyYAML: 3.13
          PyZMQ: 17.1.2
            scp: 0.13.2
          smmap: 2.0.5
        textfsm: 1.1.0
        timelib: Not Installed
        Tornado: 4.5.3
            ZMQ: 4.3.1

System Versions:
           dist: debian 10 buster
         locale: utf-8
        machine: x86_64
        release: 4.19.0-10-amd64
         system: Linux
        version: Debian GNU/Linux 10 buster

Additional context
It seems that the napalm grains (salt/grains/napalm.py) are only merged into the grains dict after the pillars have been rendered:

2020-10-13 12:34:05,142 [salt.template    :127 ][DEBUG   ][14313] Rendered data from file: /srv/pillar/shared/test.sls:
[...]
2020-10-13 12:34:08,780 [salt.loaded.ext.runners.proxy:665 ][DEBUG   ][14313] Caching Grains for cr-testing01.lab1
2020-10-13 12:34:08,780 [salt.loaded.ext.runners.proxy:666 ][DEBUG   ][14313] OrderedDict([('foo', 'bar'), ('cwd', '/root'), ('ip_gw', True), ('ip4_gw', '62.138.167.49'), ('ip6_gw', False), ('dns', {'nameservers': ['80.237.128.144', '80.237.128.145', '8.8.8.8'], 'ip4_nameservers': ['80.237.128.144', '80.237.128.145', '8.8.8.8'], 'ip6_nameservers': [], 'sortlist': [], 'domain': '', 'search': ['bb.gdinf.net', 'bb.godaddy.com', 'lab.mass.systems', 'cse.mass.systems', 'mass.systems', 'intern.hosteurope.de', 'hosteurope.de'], 'options': []}), ('fqdns', []), ('machine_id', 'f6183af91209426f812aca156ae54f5a'), ('master', 'salt'), ('hwaddr_interfaces', {'lo': '00:00:00:00:00:00', 'eth0': '02:ce:0a:5d:c0:49'}), ('id', 'cr-testing01.lab1'), ('kernelparams', [('BOOT_IMAGE', '/boot/vmlinuz-4.19.0-10-amd64'), ('root', None), ('ro', None), ('quiet', None)]), ('locale_info', {}), ('num_gpus', 0), ('gpus', []), ('kernel', 'proxy'), ('nodename', 'salt-gbone.lab.mass.systems'), ('kernelrelease', 'proxy'), ('kernelversion', 'proxy'), ('cpuarch', 'x86_64'), ('osrelease', 'proxy'), ('os', 'junos'), ('os_family', 'proxy'), ('osfullname', 'proxy'), ('osarch', 'x86_64'), ('mem_total', 0), ('virtual', 'LXC'), ('ps', 'ps -efHww'), ('osrelease_info', ('proxy',)), ('osfinger', 'proxy-proxy'), ('path', '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'), ('systempath', ['/usr/local/sbin', '/usr/local/bin', '/usr/sbin', '/usr/bin', '/sbin', '/bin']), ('pythonexecutable', '/usr/bin/python3'), ('pythonpath', ['/usr/lib/python3/dist-packages/git/ext/gitdb', '/usr/local/bin', '/usr/lib/python37.zip', '/usr/lib/python3.7', '/usr/lib/python3.7/lib-dynload', '/usr/local/lib/python3.7/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3/dist-packages/gitdb/ext/smmap']), ('pythonversion', [3, 7, 3, 'final', 0]), ('saltpath', '/usr/lib/python3/dist-packages/salt'), ('saltversion', '3001.1'), ('saltversioninfo', [3001, 1]), ('zmqversion', '4.3.1'), ('disks', []), ('ssds', ['sdb', 'sda']), ('shell', '/bin/bash'), ('username', None), ('groupname', 'root'), ('pid', 14313), ('gid', 0), ('uid', 0), ('zfs_support', False), ('zfs_feature_flags', False), ('host', 'cr-testing01.lab1'), ('hostname', 'cr-testing01.lab1'), ('interfaces', ['ge-0/0/0', 'lc-0/0/0', 'pfe-0/0/0', 'pfh-0/0/0', 'ge-0/0/1', 'ge-0/0/2', 'ge-0/0/3', 'ge-0/0/4', 'ge-0/0/5', 'ge-0/0/6', 'ge-0/0/7', 'ge-0/0/8', 'ge-0/0/9', 'cbp0', 'demux0', 'dsc', 'em1', 'esi', 'fxp0', 'gre', 'ipip', 'irb', 'jsrv', 'lo0', 'lsi', 'mtun', 'pimd', 'pime', 'pip0', 'pp0', 'rbeb', 'tap', 'vtep']), ('model', 'VMX'), ('optional_args', {'config_lock': False, 'keepalive': 5}), ('serial', 'VM5B598A6585'), ('uptime', 2505569), ('vendor', 'Juniper'), ('version', '17.4R1.16')])

After modifying test.sls like so:

# cat /srv/pillar/shared/test.sls
foo: {{ grains.get('id') }}
bar: {{ grains.get('model') }}

{% for k in grains.keys() %}
{%- do salt.log.error(k) -%}
{% endfor %}

these are the grain keys available at SLS rendering time:

# salt-sproxy cr-testing* pillar.get bar
[ERROR   ] cwd
[ERROR   ] ip_gw
[ERROR   ] ip4_gw
[ERROR   ] ip6_gw
[ERROR   ] dns
[ERROR   ] fqdns
[ERROR   ] machine_id
[ERROR   ] master
[ERROR   ] hwaddr_interfaces
[ERROR   ] id
[ERROR   ] kernelparams
[ERROR   ] locale_info
[ERROR   ] num_gpus
[ERROR   ] gpus
[ERROR   ] kernel
[ERROR   ] nodename
[ERROR   ] kernelrelease
[ERROR   ] kernelversion
[ERROR   ] cpuarch
[ERROR   ] osrelease
[ERROR   ] os
[ERROR   ] os_family
[ERROR   ] osfullname
[ERROR   ] osarch
[ERROR   ] mem_total
[ERROR   ] virtual
[ERROR   ] ps
[ERROR   ] osrelease_info
[ERROR   ] osfinger
[ERROR   ] path
[ERROR   ] systempath
[ERROR   ] pythonexecutable
[ERROR   ] pythonpath
[ERROR   ] pythonversion
[ERROR   ] saltpath
[ERROR   ] saltversion
[ERROR   ] saltversioninfo
[ERROR   ] zmqversion
[ERROR   ] disks
[ERROR   ] ssds
[ERROR   ] shell
[ERROR   ] username
[ERROR   ] groupname
[ERROR   ] pid
[ERROR   ] gid
[ERROR   ] uid
[ERROR   ] zfs_support
[ERROR   ] zfs_feature_flags
cr-testing01.lab1:
    None

As you can see, it's missing the following keys:

- host
- hostname
- interfaces
- model
- optional_args
- serial
- uptime
- vendor
- version

which, as far as I can see, are all gathered in napalm.py.

@syntax-terr0r syntax-terr0r added bug Something isn't working pending triage labels Oct 14, 2020
syntax-terr0r (Author)

Hey there,
is there anything else I can provide/do in order to get this resolved?
Thanks!

mirceaulinic (Owner)

I'll let you know if I need anything else when I have time to look into it, @syntax-terr0r. In the meantime, feel free to give it a try yourself. :)

mirceaulinic (Owner)

@syntax-terr0r can you check #187 and confirm it solves the issue?

mirceaulinic added a commit that referenced this issue Oct 28, 2020
Issue #181: Re-compile the Pillar after establishing the connection
mirceaulinic (Owner)

The fix is included in 2020.10.0, just released on PyPI: https://salt-sproxy.readthedocs.io/en/latest/releases/2020.10.0.html. If there are any issues, feel free to reopen.
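
For anyone following along, a typical way to pick up the new release (assuming a pip-based installation) would be:

pip install --upgrade salt-sproxy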


syntax-terr0r commented Oct 29, 2020

Hi there,
I was finally able to test with the fix in place, and unfortunately I can only report a partial fix.
With the same setup as above I now get the model returned by pillar.get, but the collection of the napalm grains still happens after the template rendering.

That means the model grain is still not available during template rendering.
As you can see below, at first only the default grains are available; the templates are rendered with those, which consequently fails because model is not available.
Only after that are the napalm grains added, and they are therefore printed below the error from the template rendering.

root@salt-gbone:~# salt-sproxy cr-testing* grains.get model
[ERROR   ] default
[ERROR   ] cwd
[ERROR   ] ip_gw
[ERROR   ] ip4_gw
[ERROR   ] ip6_gw
[ERROR   ] dns
[ERROR   ] fqdns
[ERROR   ] machine_id
[ERROR   ] master
[ERROR   ] hwaddr_interfaces
[ERROR   ] id
[ERROR   ] kernelparams
[ERROR   ] locale_info
[ERROR   ] num_gpus
[ERROR   ] gpus
[ERROR   ] kernel
[ERROR   ] nodename
[ERROR   ] kernelrelease
[ERROR   ] kernelversion
[ERROR   ] cpuarch
[ERROR   ] osrelease
[ERROR   ] os
[ERROR   ] os_family
[ERROR   ] osfullname
[ERROR   ] osarch
[ERROR   ] mem_total
[ERROR   ] virtual
[ERROR   ] ps
[ERROR   ] osrelease_info
[ERROR   ] osfinger
[ERROR   ] path
[ERROR   ] systempath
[ERROR   ] pythonexecutable
[ERROR   ] pythonpath
[ERROR   ] pythonversion
[ERROR   ] saltpath
[ERROR   ] saltversion
[ERROR   ] saltversioninfo
[ERROR   ] zmqversion
[ERROR   ] disks
[ERROR   ] ssds
[ERROR   ] shell
[ERROR   ] username
[ERROR   ] groupname
[ERROR   ] pid
[ERROR   ] gid
[ERROR   ] uid
[ERROR   ] zfs_support
[ERROR   ] zfs_feature_flags
[ERROR   ] Rendering exception occurred
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/salt/utils/templates.py", line 498, in render_jinja_tmpl
    output = template.render(**decoded_context)
  File "/usr/local/lib/python3.7/dist-packages/jinja2/environment.py", line 1090, in render
    self.environment.handle_exception()
  File "/usr/local/lib/python3.7/dist-packages/jinja2/environment.py", line 832, in handle_exception
    reraise(*rewrite_traceback_stack(source=source))
  File "/usr/local/lib/python3.7/dist-packages/jinja2/_compat.py", line 28, in reraise
    raise value.with_traceback(tb)
  File "<template>", line 1, in top-level template code
jinja2.exceptions.UndefinedError: 'salt.utils.odict.OrderedDict object' has no attribute 'model'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/salt/utils/templates.py", line 260, in render_tmpl
    output = render_str(tmplstr, context, tmplpath)
  File "/usr/local/lib/python3.7/dist-packages/salt/utils/templates.py", line 505, in render_jinja_tmpl
    raise SaltRenderError("Jinja variable {}{}".format(exc, out), buf=tmplstr)
salt.exceptions.SaltRenderError: Jinja variable 'salt.utils.odict.OrderedDict object' has no attribute 'model'
[CRITICAL] Rendering SLS 'shared.target_software' failed, render error:
Jinja variable 'salt.utils.odict.OrderedDict object' has no attribute 'model'
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/salt/utils/templates.py", line 498, in render_jinja_tmpl
    output = template.render(**decoded_context)
  File "/usr/local/lib/python3.7/dist-packages/jinja2/environment.py", line 1090, in render
    self.environment.handle_exception()
  File "/usr/local/lib/python3.7/dist-packages/jinja2/environment.py", line 832, in handle_exception
    reraise(*rewrite_traceback_stack(source=source))
  File "/usr/local/lib/python3.7/dist-packages/jinja2/_compat.py", line 28, in reraise
    raise value.with_traceback(tb)
  File "<template>", line 1, in top-level template code
jinja2.exceptions.UndefinedError: 'salt.utils.odict.OrderedDict object' has no attribute 'model'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/salt/pillar/__init__.py", line 884, in render_pstate
    **defaults
  File "/usr/local/lib/python3.7/dist-packages/salt/template.py", line 101, in compile_template
    ret = render(input_data, saltenv, sls, **render_kwargs)
  File "/usr/local/lib/python3.7/dist-packages/salt/renderers/jinja.py", line 79, in render
    **kws
  File "/usr/local/lib/python3.7/dist-packages/salt/utils/templates.py", line 260, in render_tmpl
    output = render_str(tmplstr, context, tmplpath)
  File "/usr/local/lib/python3.7/dist-packages/salt/utils/templates.py", line 505, in render_jinja_tmpl
    raise SaltRenderError("Jinja variable {}{}".format(exc, out), buf=tmplstr)
salt.exceptions.SaltRenderError: Jinja variable 'salt.utils.odict.OrderedDict object' has no attribute 'model'
[CRITICAL] Pillar render error: Rendering SLS 'shared.target_software' failed. Please see master log for details.
[ERROR   ] default
[ERROR   ] cwd
[ERROR   ] ip_gw
[ERROR   ] ip4_gw
[ERROR   ] ip6_gw
[ERROR   ] dns
[ERROR   ] fqdns
[ERROR   ] machine_id
[ERROR   ] master
[ERROR   ] hwaddr_interfaces
[ERROR   ] id
[ERROR   ] kernelparams
[ERROR   ] locale_info
[ERROR   ] num_gpus
[ERROR   ] gpus
[ERROR   ] kernel
[ERROR   ] nodename
[ERROR   ] kernelrelease
[ERROR   ] kernelversion
[ERROR   ] cpuarch
[ERROR   ] osrelease
[ERROR   ] os
[ERROR   ] os_family
[ERROR   ] osfullname
[ERROR   ] osarch
[ERROR   ] mem_total
[ERROR   ] virtual
[ERROR   ] ps
[ERROR   ] osrelease_info
[ERROR   ] osfinger
[ERROR   ] path
[ERROR   ] systempath
[ERROR   ] pythonexecutable
[ERROR   ] pythonpath
[ERROR   ] pythonversion
[ERROR   ] saltpath
[ERROR   ] saltversion
[ERROR   ] saltversioninfo
[ERROR   ] zmqversion
[ERROR   ] disks
[ERROR   ] ssds
[ERROR   ] shell
[ERROR   ] username
[ERROR   ] groupname
[ERROR   ] pid
[ERROR   ] gid
[ERROR   ] uid
[ERROR   ] zfs_support
[ERROR   ] zfs_feature_flags
[ERROR   ] host
[ERROR   ] hostname
[ERROR   ] interfaces
[ERROR   ] model
[ERROR   ] optional_args
[ERROR   ] serial
[ERROR   ] uptime
[ERROR   ] vendor
[ERROR   ] version
cr-testing01.lab1:
    VMX

These are the contents of target_software.sls:

# cat /srv/pillar/shared/target_software.sls
{% if grains['model'] == 'VMX' %}
target_software_version: 17.4
{% endif %}

Unfortunately, I don't understand enough about the flow of salt-sproxy to determine where this would need to be fixed.

syntax-terr0r (Author)

I have to correct myself: I wasn't aware that not only are the pillars now re-compiled, but the templates are also rendered again after the connection to the minion has been made.

I do get the expected output now.
However, I still get the error messages from the failed first template rendering:

# salt-sproxy cr-testing01* pillar.get target_software_version
[ERROR   ] Rendering exception occurred
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/salt/utils/templates.py", line 498, in render_jinja_tmpl
    output = template.render(**decoded_context)
  File "/usr/local/lib/python3.7/dist-packages/jinja2/environment.py", line 1090, in render
    self.environment.handle_exception()
  File "/usr/local/lib/python3.7/dist-packages/jinja2/environment.py", line 832, in handle_exception
    reraise(*rewrite_traceback_stack(source=source))
  File "/usr/local/lib/python3.7/dist-packages/jinja2/_compat.py", line 28, in reraise
    raise value.with_traceback(tb)
  File "<template>", line 1, in top-level template code
jinja2.exceptions.UndefinedError: 'salt.utils.odict.OrderedDict object' has no attribute 'model'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/salt/utils/templates.py", line 260, in render_tmpl
    output = render_str(tmplstr, context, tmplpath)
  File "/usr/local/lib/python3.7/dist-packages/salt/utils/templates.py", line 505, in render_jinja_tmpl
    raise SaltRenderError("Jinja variable {}{}".format(exc, out), buf=tmplstr)
salt.exceptions.SaltRenderError: Jinja variable 'salt.utils.odict.OrderedDict object' has no attribute 'model'
[CRITICAL] Rendering SLS 'shared.target_software' failed, render error:
Jinja variable 'salt.utils.odict.OrderedDict object' has no attribute 'model'
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/salt/utils/templates.py", line 498, in render_jinja_tmpl
    output = template.render(**decoded_context)
  File "/usr/local/lib/python3.7/dist-packages/jinja2/environment.py", line 1090, in render
    self.environment.handle_exception()
  File "/usr/local/lib/python3.7/dist-packages/jinja2/environment.py", line 832, in handle_exception
    reraise(*rewrite_traceback_stack(source=source))
  File "/usr/local/lib/python3.7/dist-packages/jinja2/_compat.py", line 28, in reraise
    raise value.with_traceback(tb)
  File "<template>", line 1, in top-level template code
jinja2.exceptions.UndefinedError: 'salt.utils.odict.OrderedDict object' has no attribute 'model'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/salt/pillar/__init__.py", line 884, in render_pstate
    **defaults
  File "/usr/local/lib/python3.7/dist-packages/salt/template.py", line 101, in compile_template
    ret = render(input_data, saltenv, sls, **render_kwargs)
  File "/usr/local/lib/python3.7/dist-packages/salt/renderers/jinja.py", line 79, in render
    **kws
  File "/usr/local/lib/python3.7/dist-packages/salt/utils/templates.py", line 260, in render_tmpl
    output = render_str(tmplstr, context, tmplpath)
  File "/usr/local/lib/python3.7/dist-packages/salt/utils/templates.py", line 505, in render_jinja_tmpl
    raise SaltRenderError("Jinja variable {}{}".format(exc, out), buf=tmplstr)
salt.exceptions.SaltRenderError: Jinja variable 'salt.utils.odict.OrderedDict object' has no attribute 'model'
[CRITICAL] Pillar render error: Rendering SLS 'shared.target_software' failed. Please see master log for details.
cr-testing01.lab1:
    17.4

I'm also not exactly sure why the grains are not available for the first rendering. I was under the impression that the grains were being cached, so at least on the second run all grains should be available when the template is first rendered, shouldn't they?

mirceaulinic (Owner)

Hey @syntax-terr0r - can you share your pillar file tree? I can see the error, but I also need to see what you have to understand the issue.

@mirceaulinic mirceaulinic reopened this Oct 30, 2020
syntax-terr0r (Author)

Hey @mirceaulinic,
here is my pillar tree.
I should probably let you know that I am still in the evaluation phase of our Salt installation, trying to figure out what we can and want to do with it and how best to do it.
That being said, there is a lot of material in the pillar tree that I am not actively using at the moment; these are leftovers from earlier tests and things intended for upcoming use cases.

# tree /srv/pillar/
/srv/pillar/
|-- device
|   |-- cr-testing01_lab1
|   |   |-- config
|   |   |   `-- bgppeers.sls
|   |   `-- init.sls
[…]
|   `-- cr-foobar_lab1
|       |-- config
|       `-- init.sls
|-- ext
|   `-- init.sls
|-- pfl-master.sls
|-- shared
|   |-- config
|   |   |-- chassis
|   |   |   `-- base.sls
|   |   |-- groups
|   |   |   |-- aggregates.sls
|   |   |   |-- bleedoff.sls
|   |   |   |-- dualRE.sls
|   |   |   |-- ISIS.sls
|   |   |   |-- RSVP-FRR.sls
|   |   |   `-- sampling.sls
|   |   `-- system
|   |       |-- login.sls
|   |       |-- ntp.sls
|   |       `-- syslog.sls
|   |-- general_info.sls
|   |-- roles.sls
|   |-- saltcontrol.sls
|   |-- target_software.sls
|   |-- test.sls
|   `-- textfsm_mapping.sls
`-- top.sls

From that list, this is what's active at the moment:

# cat /srv/pillar/top.sls
base:
  '*':
    - shared.general_info
#    - shared.target_software
  cr-testing01.lab1:
    - device.cr-testing01_lab1
    - shared.target_software
[...]
  cr-foobar.lab1:
    - device.cr-foobar.lab1

The general_info.sls only generates some information based on the ID of the minion (like metro area and location); it's probably not relevant here.
The offending SLS is target_software.sls, where I set the target software version of the devices based on their model:

# cat /srv/pillar/shared/target_software.sls
{% if grains['model'] == 'VMX' %}
target_software_version: 17.4
{% endif %}

Apart from that, there is only the init.sls in the device subfolder, which holds the proxy pillar.
The other pillars are not being rendered at the moment.

Does that help? Please let me know if you need any additional info.

Thanks!

mirceaulinic (Owner)

Hi @syntax-terr0r. I see what happens: the error is purely cosmetic, and you can prevent it by using if grains.get('model') instead, which works whether you have cached data or not. However, when relying on cached data, it wouldn't hurt to look it up during the initial pillar compilation: #193.
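
For illustration, a minimal sketch of target_software.sls with the suggested defensive lookup (same logic as before, just using grains.get so a missing grain evaluates as false instead of raising an UndefinedError):

# cat /srv/pillar/shared/target_software.sls
{% if grains.get('model') == 'VMX' %}
target_software_version: 17.4
{% endif %}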

syntax-terr0r (Author)

Hi @mirceaulinic,
yes, you are right, grains.get('model') should work now that the pillar is compiled a second time.
I had that in there initially but changed it when it wasn't working; at that point the template was only rendered once, before the grain was available, so target_software_version was always empty.
To track down the issue I changed it to the above to actually get an error message.

I will change it back to get rid of the error message now.
Thank you!

mirceaulinic added a commit that referenced this issue Nov 2, 2020
Issue #181: Use cached Grains for the initial Pillar compilation