jobs.list_job is slow with big highstate output #18518

Closed

Lothiraldan opened this issue Nov 26, 2014 · 8 comments
Labels: Bug (broken, incorrect, or confusing behavior), Core (relates to code central or existential to Salt), P3 (Priority 3), severity-low (4th level, cosmetic problems, workaround exists), stale
Milestone: Approved

Comments

@Lothiraldan
Contributor

I have a big highstate job (the job cache entry is about 2.4 MB). When I try to get its details using salt-run jobs.list_job, it takes a significant amount of time (~17 seconds on a MacBook Air).

I tried to dig into the issue: the jobs.list_job function always displays the output (https://github.com/saltstack/salt/blob/v2014.7.0/salt/runners/jobs.py#L102). If I comment out this line, it takes only 2 seconds to return the result.
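
For reference, the tail of the function has roughly this shape (paraphrased from the 2014.7 source, not copied verbatim; names are approximate), and the display_output call is where the time goes:

    # Paraphrased shape of jobs.list_job in salt/runners/jobs.py (2014.7).
    # The job-cache lookup itself is quick; the unconditional render is not.
    import salt.output

    def list_job(jid, ext_source=None):
        ret = {'jid': jid}
        # ... populate ret with the job's returns from the master job cache ...
        salt.output.display_output(ret, 'yaml', __opts__)  # expensive for multi-MB returns
        return ret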

When using the API, displaying the output in the console seems useless; it should be optional.

I've also noticed that this function has changed in develop (https://github.com/saltstack/salt/blob/develop/salt/runners/jobs.py#L128-L132), so I don't know if this issue is already solved on the develop branch.

@Lothiraldan
Contributor Author

Just saw that salt-api uses jobs.lookup_jid, but it's the same problem: jobs.lookup_jid also always displays the output.

@rallytime added the Bug (broken, incorrect, or confusing behavior) and severity-low (4th level, cosmetic problems, workaround exists) labels Nov 26, 2014
@rallytime added this to the Approved milestone Nov 26, 2014
@rallytime
Contributor

Thanks for the report - we'll look into it!

@whiteinge
Contributor

You're right about the changes in develop. Runners handling their own output has always been a pain-point. (Your use-case is an interesting one I haven't seen before.) Runners in the develop branch are now event-based and no longer handle their own output; the salt-run CLI command will do that from now on (and rightly so!).

That change will go live in the next major feature release (code-named Boron), which is likely a few months out.

A quick workaround for the version of Salt you're on now is to copy the jobs runner into a custom runner, say quietjobs.py (see the runner_dirs option in your master config), and strip out the print statements. Then call your custom runner via the API instead:

curl [...] -d client=runner -d fun=quietjobs.list_jobs
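
If hand-editing a full copy of the module is too fiddly, something like this sketch could work instead -- it delegates to the stock jobs runner and throws away whatever the runner prints to the console (untested; it assumes RunnerClient.cmd(fun, arg) behaves this way on 2014.7 and that __opts__ is injected into runner modules as usual):

    # quietjobs.py -- put this file in a directory listed under the
    # runner_dirs option in the master config.
    import os
    import sys

    import salt.runner


    def _quiet(fun, arg):
        '''Call a runner function with stdout pointed at /dev/null.'''
        client = salt.runner.RunnerClient(__opts__)
        saved, sys.stdout = sys.stdout, open(os.devnull, 'w')
        try:
            return client.cmd(fun, arg)
        finally:
            sys.stdout.close()
            sys.stdout = saved


    def list_jobs():
        '''Same as jobs.list_jobs, minus the console output.'''
        return _quiet('jobs.list_jobs', [])


    def list_job(jid):
        '''Same as jobs.list_job, minus the console output.'''
        return _quiet('jobs.list_job', [jid])

Called through salt-api this returns the same data structure as the stock runner, just without the master-side rendering step.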

@Lothiraldan
Contributor Author

@whiteinge, the workaround should work, but I'm developing a web GUI for SaltStack (https://github.com/tinyclues/saltpad) and I can't ask every user to install a custom runner. I may drop a note about it in the README.

@whiteinge
Contributor

Oh, neat. Yeah, that won't work for a project like this. :-/ A note in the README with a link to this issue would be good though -- huge job caches are common but not the norm.

Speaking of the README, the salt-api interface should be consistent between 2014.1.x and 2014.7. If you can pinpoint where the difference is, I'd love to investigate.


@Lothiraldan
Contributor Author

I just tried with both versions of the API today and couldn't reproduce the problem; I'll open an issue if I manage to reproduce it.

@daveneeley
Contributor

I see this problem in 2015.5.0. This is from extracting really large archives.

72 MB: salt-run jobs.list_job 20150517235434286789 > /tmp/20150517235434286789
68 MB: salt-run jobs.lookup_jid 20150517235434286789 > /tmp/20150517235434286789_jid

All the output from the extraction steps makes it really hard to tell which steps failed.

@pass-by-value added the P3 (Priority 3) and Core (relates to code central or existential to Salt) labels Jun 2, 2015
stale bot commented Nov 10, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

stale bot added the stale label Nov 10, 2017
stale bot closed this as completed Nov 17, 2017