[BUG] Possible concurrency issue with reactor/orchestrate running on master only #57626
Comments
To rule out the 3 tasks during troubleshooting, I've replaced them with a single task that writes to a log file, and I still see the same issue.
Interesting, thanks for the report @dpizzle. Unfortunately I don't have much experience with reactors, so I'll ask whether anyone on @saltstack/team-core is able to assist here. 😄
Following up on this one to see if anyone has had a chance to upgrade to 3002 and verify whether that release resolves this issue.
Closing this due to no response. Please open a new issue if PR #56513 does not resolve the issue.
Description
I have a Salt master configured with 3 reactor/orchestrate pairs. The master listens for messages from network devices via napalm-logs, then performs 3 tasks:

- updates a MariaDB database with the event details
- updates the database with the host details
- runs a Python script which connects to the network device and collects further information to add to the database
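The three tasks above can be sketched as an orchestration SLS. This is a minimal illustration only; the file path, target, query, and script name are hypothetical, not taken from the report:

```yaml
# Hypothetical /srv/salt/orch/handle_event.sls (names are illustrative)
record_event:
  salt.function:
    - name: mysql.query          # write the event/host details to MariaDB
    - tgt: saltmaster
    - arg:
      - network_db
      - "INSERT INTO events (host, message) VALUES ('...', '...')"

collect_device_info:
  salt.function:
    - name: cmd.run              # run the collection script against the device
    - tgt: saltmaster
    - arg:
      - /usr/local/bin/collect_device_info.py
    - require:
      - salt: record_event
```

Each reactor-triggered event would run one instance of this orchestration, so a burst of napalm-logs messages results in several orchestrate runner jobs in flight at once.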
As the number of messages has increased, I'm noticing that more of the jobs are failing with either of these 2 responses. The errors are sporadic and not tied to specific network devices.
Can `state.orchestrate` only run one job at a time, so that if a second `state.orchestrate` is called from the reactor while the first is still running, the second job will fail?

Setup
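For context, the reactor/orchestrate wiring described above would look roughly like the following. The event tag, file paths, and SLS names are assumptions for illustration, not the reporter's actual configuration:

```yaml
# Hypothetical /etc/salt/master.d/reactor.conf
# Map napalm-logs events on the master's event bus to a reactor SLS.
reactor:
  - 'napalm/syslog/*':
    - /srv/reactor/handle_event.sls
```

```yaml
# Hypothetical /srv/reactor/handle_event.sls
# Each matching event starts a state.orchestrate runner job on the master.
invoke_orchestrate:
  runner.state.orchestrate:
    - args:
      - mods: orch.handle_event
      - pillar:
          event_data: {{ data | json }}
```

With this shape, overlapping events cause concurrent `state.orchestrate` runner jobs, which is the scenario the question above is asking about.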
Versions Report

salt --versions-report

(Provided by running salt --versions-report. Please also mention any differences in master/minion versions.)

Additional Information
I've upgraded to 3000.3 and still experience the same behaviour.