Fix crash when importing big linux perf tool files #435
Conversation
Nice! A few things inline that need changing, then assuming CI passes this should be good to go.
Also, just as a sanity check: this does now load the massive file you referenced in the issue, right?
I'm also curious how quickly such a large file loads, and what the performance bottlenecks of the load are.
(force-pushed 8778315 to 19186d1)
Yes, it works OK now.
Currently, importing files generated by the linux perf tool in which some blocks exceed V8's string length limit can crash the application. This issue is similar to the one in jlfwong#385. This PR fixes it by changing parseEvents to work directly with lines instead of chunking lines into blocks first. Fixes jlfwong#433
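To illustrate the idea, here is a minimal sketch of line-by-line event parsing. This is not speedscope's actual parseEvents; the PerfEvent shape and the blank-line block delimiter are simplifying assumptions. The point is that each intermediate string stays one line long, so no string can grow toward V8's maximum string length the way a concatenated multi-line block can.

```typescript
// Simplified stand-in for parseEvents: consume a perf-script-style dump
// one line at a time instead of joining each event block into one big string.

interface PerfEvent {
  command: string
  stack: string[] // outermost frame last
}

function parseEvents(lines: string[]): PerfEvent[] {
  const events: PerfEvent[] = []
  let current: PerfEvent | null = null

  for (const raw of lines) {
    const line = raw.trimEnd()
    if (line.length === 0) {
      // A blank line terminates the current event block
      if (current) events.push(current)
      current = null
    } else if (current === null) {
      // First line of a block is the event header, e.g. "node 1234 100.0: cycles:"
      current = {command: line.split(/\s+/)[0], stack: []}
    } else {
      // Subsequent indented lines are stack frames
      current.stack.push(line.trim())
    }
  }
  if (current) events.push(current)
  return events
}

const sample = [
  'node 1234 100.0: cycles:',
  '    doWork',
  '    main',
  '',
  'node 1234 100.1: cycles:',
  '    main',
  '',
]
console.log(parseEvents(sample).length) // 2 events
```

Because no line is ever concatenated with its neighbors, memory per string is bounded by the longest single line rather than the largest event block.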
(force-pushed 19186d1 to e1dacfe)
Thank you!
Ironically, I just realized that it's not a linux perf tool file but a Brendan Gregg-format file. Oh, silly me. The code just accidentally hits the error path and crashes. Anyway, it took ~23 seconds to import the file and about ~1 second to render. Of those 23 seconds, ~2 were spent reading the file into the buffer; the rest was parsing. I have the Chrome profiler dump, but it's too large to attach to GitHub. Do you want to have a look? System info:
Ah jeez, I see -- for plaintext files with no known file extension, we always try … If you can think of a way of avoiding that by doing some cheap heuristic detection of one format or the other, I'd take a PR to do that too. To see whether that's worthwhile for performance reasons, you could delete the lines that run …
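One possible cheap heuristic, sketched below under assumptions (this is not speedscope's detection logic, and looksLikeCollapsedStacks is a hypothetical name): Brendan Gregg's collapsed-stack format is one record per line, with semicolon-joined frames and a trailing sample count, whereas perf script output has header lines ending in a colon followed by indented frame lines. Sampling a handful of leading lines makes the check O(1) regardless of file size.

```typescript
// Cheap format sniff: does the start of the file look like collapsed stacks?
//   collapsed:   "main;doWork;hotLoop 1234"   (frames joined by ';', trailing count)
//   perf script: "node 1234 100.0: cycles:"   then indented frame lines

function looksLikeCollapsedStacks(lines: string[], sampleSize = 10): boolean {
  // Unindented line ending in a whitespace-separated number
  const collapsedLine = /^\S.*\s\d+(\.\d+)?$/
  let checked = 0
  let matched = 0
  for (const line of lines) {
    if (line.trim().length === 0) continue
    if (checked >= sampleSize) break
    checked++
    if (collapsedLine.test(line) && line.includes(';')) matched++
  }
  return checked > 0 && matched === checked
}

console.log(looksLikeCollapsedStacks(['main;doWork 12', 'main;idle 3'])) // true
console.log(looksLikeCollapsedStacks(['node 1234 100.0: cycles:', '    main'])) // false
```

A sniff like this only has to be good enough to pick which importer to try first; the other importer can still run as a fallback if the first one rejects the file.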
Follow-up on PR #435. Currently, it takes roughly 22 seconds to load my 1.3GB file. After inspecting the profile, I found a large chunk of time spent in Frame.getOrInsert. I figured we could reduce the number of invocations by half, which cuts the load time to roughly 18 seconds. I also tested with a smaller file (~350MB), and it showed similar gains, about 15-20%.