improve web benchmark error reporting #51490
Conversation
It looks like this pull request may not have tests. Please make sure to add tests before merging. If you need an exemption to this rule, contact Hixie. Reviewers: Read the Tree Hygiene page and make sure this patch meets those guidelines before LGTMing.
    // Don't keep on truckin' if there's an error.
    if (_hasErrored) {
      return;
    }
Instead of checking for _hasErrored in multiple places, what if we check for it in _shouldContinue() and rely on existing logic to stop pumping frames?
I can't think of a clean way to use it. Currently _shouldContinue depends on everything being healthy. For example, it extracts the profile from rendered frames. It only communicates two signals: "continue running" and "benchmark finished successfully". We'd have to add a third value communicating "benchmark errored and must halt". When an error happens, the system could be in a corrupted state, in which case our job is to halt immediately and report the error before it's obscured by something else. IOW, I think the two have sufficiently different roles to warrant separate codepaths and separate signals.
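The split described above can be sketched roughly as follows. This is a hypothetical, simplified sketch, not the actual recorder implementation; the class and field names (Recorder, _framesRecorded, _targetFrames, onError, pumpFrames) are illustrative except for _hasErrored and _shouldContinue, which come from the discussion.

```dart
/// Sketch of keeping the two signals separate: an error flag that halts the
/// pump loop immediately, and _shouldContinue(), which only distinguishes
/// "continue running" from "benchmark finished successfully".
class Recorder {
  bool _hasErrored = false;
  int _framesRecorded = 0;
  static const int _targetFrames = 3;

  /// Healthy-path signal only: true means "pump another frame",
  /// false means "benchmark finished successfully".
  bool _shouldContinue() => _framesRecorded < _targetFrames;

  /// Error-path signal: halt immediately and report, because the system
  /// may be in a corrupted state that would obscure the original error.
  void onError(Object error) {
    _hasErrored = true;
    print('benchmark error: $error');
  }

  void pumpFrames() {
    while (_shouldContinue()) {
      // Don't keep on truckin' if there's an error.
      if (_hasErrored) {
        return;
      }
      _framesRecorded++;
    }
  }
}
```

Keeping the error check out of _shouldContinue means the healthy path never needs a third return value, and an error can short-circuit the loop mid-iteration instead of waiting for the next continuation check.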
This pull request is not suitable for automatic merging in its current state.
This reverts commit cd0fbd3.
LGTM
Description
Material and Directionality. Otherwise it fails assertions in debug mode (probably for good reason). This was not noticed in benchmarks because we don't run them in debug mode.