TypeError: string indices must be integers, not str #21480
@msciciel Hmm... Any other information you can provide? You say it only happens occasionally and on different boxes each time? Any similarities between these boxes? And only when you run state.highstate? Any other configs you've changed? @basepi or @jfindlay or @cachedout Any of you heard of anything similar to this lately? |
I think this is a duplicate of #20777. See also, #18729 (comment). |
It looks random. I have a check of the last highstate run and it reports this error. I do not run highstate manually. If the check reports that the error occurred and I then log into the machine and run highstate, it's always ok. It always happens on a few boxes out of 1600. #20777 - my master is on version 2014.7.1. Is this related to pillar data or maybe formulas? I have pillar data updated from a git repo at a 5-minute interval. |
Didn't see this issue until I upgraded to RC2.
Traceback (most recent call last):
File "/opt/blue-python/2.7/lib/python2.7/site-packages/salt/master.py", line 1415, in run_func
ret = getattr(self, func)(load)
File "/opt/blue-python/2.7/lib/python2.7/site-packages/salt/master.py", line 1270, in _syndic_return
self._return(ret)
File "/opt/blue-python/2.7/lib/python2.7/site-packages/salt/master.py", line 1244, in _return
self.opts, load, event=self.event, mminion=self.mminion)
File "/opt/blue-python/2.7/lib/python2.7/site-packages/salt/utils/job.py", line 55, in store_job
load.update({'user': ret_['user']})
TypeError: string indices must be integers, not str
Also, salt-run jobs.lookup_jid doesn't return anything now, but I can see the ret events are flowing in.
Salt: 2015.2.0
Python: 2.7.9 (default, Dec 30 2014, 13:07:54)
Jinja2: 2.7.3
M2Crypto: 0.22
msgpack-python: 0.4.2
msgpack-pure: Not Installed
pycrypto: 2.6.1
libnacl: Not Installed
PyYAML: 3.11
ioflo: Not Installed
PyZMQ: 14.4.1
RAET: Not Installed
ZMQ: 4.0.5
Mako: Not Installed |
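The crash in store_job above is easy to reproduce in isolation: `load` is expected to be a dict, but when it arrives as a plain string, indexing it with a string key raises exactly this TypeError. A minimal stand-alone sketch (this `store_job` is a simplified stand-in, not Salt's actual function):

```python
def store_job(load, ret_):
    # Simplified stand-in for salt/utils/job.py's store_job, NOT the real
    # function: it only illustrates the guard the real code was missing.
    # Payloads that fail to deserialize upstream can arrive as raw strings.
    if not isinstance(load, dict):
        raise TypeError("expected dict payload, got %s" % type(load).__name__)
    load.update({'user': ret_['user']})
    return load

# Indexing a string with a string key reproduces the exact crash above:
try:
    "corrupted-payload"['user']
except TypeError as exc:
    print(exc)  # e.g. "string indices must be integers, not str" on Python 2

print(store_job({'jid': '20150219'}, {'user': 'root'}))
```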
I am getting these errors, too. It seems mostly random. I have been commenting things out of my highstate to try to determine what could be happening. It does NOT occur if I run state.highstate on one single minion. I have also NOT seen it occur if I don't use any formulas. However, if I have any formulas active and I am calling state.highstate on two or more minions, I often (not always) see the error on the first one or several minions, while the last one or several minions complete successfully.
Data failed to compile:
----------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/state.py", line 2853, in call_highstate
top = self.get_top()
File "/usr/lib/python2.7/dist-packages/salt/state.py", line 2355, in get_top
tops = self.get_tops()
File "/usr/lib/python2.7/dist-packages/salt/state.py", line 2228, in get_tops
saltenv
File "/usr/lib/python2.7/dist-packages/salt/fileclient.py", line 147, in cache_file
return self.get_url(path, '', True, saltenv)
File "/usr/lib/python2.7/dist-packages/salt/fileclient.py", line 522, in get_url
return self.get_file(url, dest, makedirs, saltenv)
File "/usr/lib/python2.7/dist-packages/salt/fileclient.py", line 992, in get_file
if not data['data']:
TypeError: string indices must be integers, not str
Versions:
Salt: 2014.7.2
Python: 2.7.6 (default, Mar 22 2014, 22:59:56)
Jinja2: 2.7.2
M2Crypto: 0.21.1
msgpack-python: 0.3.0
msgpack-pure: Not Installed
pycrypto: 2.6.1
libnacl: Not Installed
PyYAML: 3.10
ioflo: Not Installed
PyZMQ: 14.0.1
RAET: Not Installed
ZMQ: 4.0.4
Mako: 0.9.1
Debian source package: 2014.7.2+ds-1trusty2 |
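The `if not data['data']:` crash in fileclient.get_file has the same shape: the master's reply should be a dict with a `data` key, but an empty or raw-string reply gets indexed anyway. A hedged sketch of a caller-side guard (`handle_reply` is a hypothetical helper, not Salt's API):

```python
def handle_reply(data):
    # Hypothetical guard: treat anything that is not a dict as "no data"
    # instead of letting data['data'] raise on a string reply.
    if not isinstance(data, dict):
        return None
    return data.get('data')

print(handle_reply(''))  # None: an empty reply no longer crashes
print(handle_reply({'data': b'file contents'}))
```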
I am now seeing the same error when I run state.highstate with just one minion--it just seems less frequent. |
After reviewing the code in the about-to-be-released update to 2014.7, I believe this issue has been resolved. I am marking this as Fixed Pending Verification until somebody can verify a fix in the next release. |
I am getting the same error too, on versions 2014.7.2 and 2015.0.2rc1, but I think I found a workaround: set the saltenv on every command, even for base. I noticed that the problem appears when GitFS and the git ext_pillar are activated and some branches exist on the repositories. For example: |
I have ext pillar disabled and error sometimes occurs. |
We are seeing this all the time ever since we upgraded to 2014.7.2, quite annoying |
2014.7.4 has been tagged and we're just waiting on packagers. If someone wants to install from source or pip in the meantime and see if this is resolved, that would be lovely. Otherwise we'll wait until the official announcement. |
I am trying to solve this issue as well. I made a state that basically just did a file.managed and nothing more. From a client I did the following:
All went fine the first 14 times, after that it broke:
Here is my very simple sls file:
|
Forgot the versions:
|
I think the problem is really due to GitFS and the list of envs. If you enable GitFS and have multiple repos, you have to create the same branches on all your git repos, or else restrict the envs in the master config (see the example below): gitfs_env_blacklist:
- release*
gitfs_env_whitelist:
- base
- bamboo
If you want, you can check all declared GitFS env names in the file /var/cache/salt/master/gitfs/envs.p |
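The whitelist/blacklist behavior described above can be sketched with shell-style glob matching. This is an illustration of the filtering logic, assuming fnmatch-style patterns like the `release*` entry above — it is not Salt's actual implementation:

```python
from fnmatch import fnmatch

def filter_envs(envs, whitelist=(), blacklist=()):
    # Keep an env only if it matches the whitelist (when one is set)
    # and does not match any blacklist pattern.
    kept = []
    for env in envs:
        if whitelist and not any(fnmatch(env, pat) for pat in whitelist):
            continue
        if any(fnmatch(env, pat) for pat in blacklist):
            continue
        kept.append(env)
    return kept

print(filter_envs(['base', 'bamboo', 'release-1.0'],
                  whitelist=['base', 'bamboo'],
                  blacklist=['release*']))  # ['base', 'bamboo']
```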
We are using 1 repo as gitfs_remote. |
Running the same loop with saltenv=base did not solve the problem. Still broke after about 20 cycles. |
@PitrJ It looks like you're on 2014.7.2, where we had some gitfs regressions. These have now been fixed in 2014.7.4, which was recently tagged and is now in the packaging stage. I'd recommend giving that tag a try, if you're able. If not, please let us know if you still hit this issue once the packages for 2014.7.4 are available and you've had a chance to retest. |
@rallytime Which particular issue are you referring to, and in which commit was it fixed? Given that .4 won't make it into the repositories for some time, I might consider building and distributing a locally maintained version or fix it in whatever way is appropriate, but for that I'd need to know more details. |
@BABILEN Unfortunately, I don't have a particular commit or pull request number offhand. I was going off of @cachedout's comment above about reviewing the code and marking this as fixed pending verification. I think really we are just wondering if someone can test this on 2014.7.4, where the tag (v2014.7.4) is pushed to the saltstack repo, or on packages once they come out. We'll certainly leave this open until we have confirmed that this was fixed, or revisit it if more work needs to be done. |
2014.7.4 works fine, thank you very much!
|
Awesome! Thanks for testing this everyone! With at least 4 confirmations that this has been fixed, I feel comfortable closing this issue. If this pops up again after upgrading to 2014.7.4, please leave a comment and we can re-address the issue. |
I just encountered this error message as well, running under 2014.7.4. TL;DR: If you see this error, check file permissions. Running
The state in question:
Relevant snippet of
Versions:
However, when I removed
So, after close inspection of the filenames, and much copying & pasting of paths, I discovered that it was actually the file permissions of
So I guess the ultimate upshot here is that the core issue behind this ticket has not really been remedied: whatever is triggering this particular error message needs to be more careful about not masking other, more useful error messages. |
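Since the root cause in this case was an unreadable file, a quick sanity check is to walk the file roots and flag anything the current user can't read. A minimal stdlib sketch (the helper name is illustrative, and `os.access` checks the user running the script, so run it as the salt master's user):

```python
import os

def unreadable_files(root):
    # Walk the tree and collect paths the current user cannot read.
    # Note: run this as the same user the salt-master runs as, since
    # os.access answers for the current process's credentials.
    bad = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if not os.access(path, os.R_OK):
                bad.append(path)
    return bad

# e.g. print(unreadable_files('/srv/salt')) on the master
```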
@raumkraut, thanks for the report. I'll be looking into this. |
I can reproduce this by establishing a ZMQ channel to the master and then bringing down the master and then bringing it back up. After doing so, all calls to the channel's send method will return an empty string, since the channel no longer has the right AES key. This is fixed in #23154 |
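Given that diagnosis — a channel with a stale AES key returns an empty string instead of a dict — a caller-side guard looks like the sketch below. `safe_send` and `StaleChannel` are hypothetical stand-ins for illustration, not Salt's channel API:

```python
def safe_send(channel, payload):
    # Any reply that is not a dict means the session is no longer valid
    # (e.g. the master restarted and the AES key is stale); surface that
    # instead of letting callers index a string.
    reply = channel.send(payload)
    if not isinstance(reply, dict):
        raise RuntimeError("stale channel: master returned %r" % (reply,))
    return reply

class StaleChannel:
    def send(self, payload):
        return ''  # what a channel with the wrong AES key hands back

try:
    safe_send(StaleChannel(), {'cmd': 'ping'})
except RuntimeError as exc:
    print(exc)
```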
Any progress with the fix? |
@msciciel Does the pull request mentioned above resolve this issue for you? |
Is 2014.7.5 or 2015.5.0 the fixed version? |
@msciciel It looks like the fix didn't make it into an official release yet. The pull request was merged into 2015.5 right after we cut 2015.5.2. Therefore, the fix will be available when 2015.5.3 is released. We can definitely keep this open until you've had a chance to test it. |
Ok. I'll verify after the release of 2015.5.3. I saw today that the problem still exists. |
A similar error appeared in archive.extracted after upgrading Salt to 2015.5.3.
How can this be fixed? It breaks our deployment system! =( |
@yermulnik Do you have a similar stacktrace as the others in this thread? Or is this specifically only happening when using an |
That's all I get. No stacktrace. I've encountered it only when using |
@yermulnik Does it happen each time you run the state, or only intermittently? If it's consistently failing each time, it might be a different bug. If not, then something is still amiss here. |
Yes, it happens each time I run the state. And it appeared since I had upgraded to 2015.5.3 yesterday. |
@yermulnik Would you mind opening a new issue? That sounds different to me from the other reports, and that way we can get some more eyes on it. Thanks! |
Ok, no problem: #26090 |
@msciciel Just a follow-up ping here - were you able to confirm if this bug is fixed for you or not on a 2015.5.3 or newer release of salt? |
I now have all minions on version 2015.5.8 and haven't seen this problem since the upgrade. |
Just to confirm, I ran a few tests with Salt 2015.5.3 in a simple VM, and the issue I was encountering earlier does also appear to have been fixed. |
Awesome! Thank you for confirming. |
Occasionally during highstate I get an error (it looks like it occurs on a random machine at a random time):
Any ideas what is wrong?