Parallelize perfparser #394
hey @the8472 - this request is super unhelpful as stated. I would also like to parallelize it, but the problem is simply not easily amenable to parallelization: sampling events need to be handled in order. The best we could do is potentially parallelize the unwinding for separate processes. Patches welcome I guess, but again - not an easy task.
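For illustration, a minimal Python sketch (hypothetical, not perfparser's actual C++ code) of the shape that idea could take: the expensive unwinding runs concurrently, while results are re-emitted in the original event order, which is the ordering constraint mentioned above. All names here are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sample events: (sequence, pid, raw_stack).
events = [(0, 101, "raw-a"), (1, 202, "raw-b"),
          (2, 101, "raw-c"), (3, 202, "raw-d")]

def unwind(raw_stack):
    """Stand-in for the expensive DWARF unwinding of one sample."""
    return "unwound(" + raw_stack + ")"

# Unwind samples concurrently; Executor.map() yields results in
# submission order, so the ordered-event constraint is preserved
# when the results are re-merged with their metadata.
with ThreadPoolExecutor(max_workers=4) as pool:
    frames = list(pool.map(unwind, (raw for _, _, raw in events)))

ordered = [(seq, pid, f) for (seq, pid, _), f in zip(events, frames)]
print(ordered[0])  # (0, 101, 'unwound(raw-a)')
```

The per-pid variant suggested above would partition `events` by pid before submitting, since unwinding state is per-process.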
I'll see if I can get a flamegraph of the parsing that's slow for me. Would that help?
I don't know anything about the perf data format. Is it not separable into large chunks that can be processed independently and then joined? I had that impression because it occasionally prints the message that some chunks were lost and it's able to recover from that.
That may help in at least one of my cases, where I was profiling a benchmark suite which spawned ~10k short-lived, separately compiled processes.
hey @the8472, thanks for the flamegraphs, but as-is this is still totally unactionable for me. The flamegraphs only show that DWARF resolution of inline frames is slow, as well as repeated mmap/munmap on your system. The latter is surprising, but I'm unsure I can do anything about it. The former is less surprising, as DWARF inline frame resolution is generally slow - if it's needed a lot, it's simply slow. Why it is so much slower in your case than in other scenarios I have looked at so far, I cannot say.

Without a way for me to reproduce this issue, I cannot look into it. If you want me to do anything about this, you will need to either document how you record the perf file so that I can reproduce the issue, or upload the perf.data file along with all the binaries, libraries, and debug files it references - this can easily become a very large tarball requiring multiple gigabytes of storage space. Upload it to a file host of your choice and share a link here. Note that this is obviously only an option if the binaries involved are open source; if they include proprietary code, there's nothing I can do about this task.
This will require beefy hardware or patience:
thanks, I'll see when I can find the time to replicate that environment. But for now, note that you should change your perf invocation to leverage leader sampling. Right now, you program the PMU to sample both instructions and cycles independently at ~97 samples per second. This is probably not what you want - instead, group the events with cycles as the leader, e.g. -e "{cycles,instructions}:S". This will sample on cycles and, whenever that happens, also record the instruction count.
Hm, I cannot compile the rust compiler; it seems to not honor PATH but instead looks only in the first entry for ar.
I think the build tools try to infer the ar path based on the C compiler path. If the default logic doesn't work, you can explicitly set its path in the build configuration.
@the8472: Would you mind recording as per the suggestion above and rechecking the loading with a current appimage?
Which suggestion are you referring to?
Using this leads to tiny profiles (less than a megabyte) instead of gigabytes, and opening such a profile crashes hotspot.
I can't reproduce that with some simple examples:

perf record --call-graph dwarf -F 97 -e "{cycles,instructions}:S" true
perf record --call-graph dwarf -F 97 true

Both open fine (same for a bigger application). Can you please share the perf file that crashes hotspot for you (if not already tested, please check the latest appimage) and how you recorded it.
I'm rerunning my steps from #394 (comment) only modifying the record command:
It crashes both
I think the crashes should be resolved in newer hotspot. But the fact that the data files are tiny sounds like a kernel bug - is that still the case now?
Rechecked: the appimage from October has that fault as above after loading for a long time; using the appimage from today, hotspot opens directly, no crash any longer. For the original case we likely still need a more reproducible setup :-/
Can you explain what is not reproducible about #394 (comment)?
It's just hard to set up, but I think it should be enough. I just need to find the time to replicate your setup, which is a hurdle for me. Last time I tried, I failed, and now I just need to try again with your suggestion on how to work around that issue.
Would a Dockerfile containing all the steps to reproduce it help? It'll be a bit tricky to also get the GUI to run in docker, but I think it should be possible.
A Dockerfile would help; I can then access the recorded data from outside and use hotspot to analyze it. I.e. the docker image just needs rust + perf and no UI at all, I think.
FROM archlinux
RUN pacman -Sy && pacman -S --noconfirm git perf base-devel
WORKDIR /opt
RUN git clone --depth 1 https://github.com/rust-lang/rust --branch master --single-branch rust
WORKDIR /opt/rust
RUN cp src/bootstrap/defaults/config.compiler.toml config.toml
CMD ./x test ui --stage 1 -- "does not exist" && perf record -m 128 -F59 --call-graph dwarf -e cycles:u ./x test ui --stage 1
The privileged flag is required to run perf inside the container.
Alright, I looked at this a bit now. It might theoretically help for this specific problem if we could parallelize the perfparser analysis at the pid level, since there are so many pids in the above recording. Generally, though, there is a ton of inherent overhead that I would like to see removed or reduced, but I fear elfutils is not prepared for that - most notably, we load the same libraries repeatedly, and during unwinding we seem to spend most of the time finding CFI data and building some elfutils-specific cache, only to throw it away and rebuild it for the following process in quick succession. I.e. the problem really is that there are thousands of short-running processes here, which is apparently the worst case for this analysis.
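A rough sketch of why keeping that state around would help here (illustrative Python with invented names, not elfutils' real API): with thousands of short-lived processes running the same few binaries, per-binary unwind state cached by binary path is built once and reused, instead of being rebuilt and discarded per pid.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def load_unwind_state(binary_path):
    """Stand-in for the expensive per-binary work (finding and parsing
    CFI data etc.), keyed by binary path rather than rebuilt per process."""
    return "unwind-state-for:" + binary_path

# Thousands of short-lived processes, all running the same binary.
processes = [(pid, "/usr/bin/rustc") for pid in range(10_000)]

for pid, binary in processes:
    state = load_unwind_state(binary)  # parsed once, then cache hits

info = load_unwind_state.cache_info()
print(info.hits, info.misses)  # 9999 1
```

In the real parser a bounded cache with an eviction policy would be needed, since keeping every binary's unwind tables alive could cost a lot of memory.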
Is your feature request related to a problem? Please describe.
Opening large perf recordings (e.g. of a test-suite spawning many binaries) takes minutes; most of the time is spent executing a single hotspot-perfparser thread.

Describe the solution you'd like
Anything that makes it significantly faster, parallelizing the work being a candidate.
Describe alternatives you've considered
I have tried lowering the sampling rate and profiling fewer tests at a time, but that merely reduced it from lunch-break to coffee-break time.
Additional context
Tested on hotspot 1.3.0