PerformanceResourceTiming: responseEnd #21263
servo/components/net/http_loader.rs (lines 1284 to 1302 at 481662d)
@jdm Which file(s) should I be looking at for this issue? I assume the files linked above have to be changed, but docs.servo.org/servo is crashing on me and I can't find the `ResourceFetchTiming` structure. Is there any other way to access the docs?
You could try running [...]
Hi @jdm,
Sorry for the delay; I was on vacation. Yes, that sounds right to me. |
@highfive assign me |
Hey @tdelacour! Thanks for your interest in working on this issue. It's now assigned to you! |
Hi @jdm (or whoever can help!) I've added a couple of statements inside the [...]

But this causes a compiler error:

I think I understand, broadly, what the issue is: I'm referencing [...]. As you can probably tell, I'm very new to Rust. I'm reading up on closures, moves, concurrency, and Send/Sync in the Rust book, but I could definitely use some more targeted help.
If you make a clone of [...]
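The details of the compiler error and the suggested fix are cut off above, but a common cause in this situation is moving an owned value (such as a channel sender) into a `move` closure and then trying to use the original afterwards. Below is a minimal sketch of the clone-before-the-closure fix being suggested; the names are illustrative, not Servo's actual code:

```rust
use std::sync::mpsc;
use std::thread;

// Returns the values received from both the original sender and its clone.
fn send_from_two_owners() -> Vec<u32> {
    let (tx, rx) = mpsc::channel::<u32>();

    // Clone the sender so the spawned `move` closure takes ownership of the
    // clone, leaving the original `tx` usable on this thread.
    let tx_clone = tx.clone();
    let handle = thread::spawn(move || {
        tx_clone.send(1).unwrap();
    });

    tx.send(2).unwrap();
    handle.join().unwrap();
    drop(tx); // drop the last sender so the receiver's iterator terminates

    let mut got: Vec<u32> = rx.iter().collect();
    got.sort();
    got
}

fn main() {
    assert_eq!(send_from_two_owners(), vec![1, 2]);
    println!("ok");
}
```

Without the clone, `tx` would be moved into the closure and the later `tx.send(2)` would fail to compile with a "use of moved value" error.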
Amazing, thank you!
Add PerformanceResourceTiming: ResponseEnd

1. Added `ResponseEnd` to the `ResourceAttribute` enum in `net_traits` and handled it in the `set_attribute` function on `ResourceFetchTiming`.
2. Added a `response_end` field to `performanceresourcetiming.rs`.
3. In `http_loader.rs`, set `ResponseEnd` after the response body read is complete, or before returning due to a network error.

I could use a little guidance on testing. After building and running the `wpt` tests, I noticed that some tests now "Pass" when they were expected to "Fail". As per the wiki instructions, I've removed those expectations from the `metadata`. I noticed that there are a handful of other "failing" test expectations associated with `responseEnd`, but those still do not pass. I looked through some similar PRs (`connectEnd`, `redirectStart`, etc.) and saw that they also still have a few failing test expectations here and there. Does that mean this is OK for now? How can I better understand which tests we expect to resolve for a given issue? Thanks!

---

- [X] `./mach build -d` does not report any errors
- [X] `./mach test-tidy` does not report any errors
- [X] These changes fix #21263 (GitHub issue number if applicable)
- [X] There are tests for these changes
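The enum-plus-setter pattern the PR describes can be sketched roughly as follows. This is a simplified model, not Servo's actual code: the real `ResourceAttribute` and `ResourceFetchTiming` in `net_traits` carry many more variants and fields, and the `ResponseStart` variant here is included purely for illustration:

```rust
// Hypothetical, simplified model of the timing-attribute pattern.
#[derive(Debug, PartialEq)]
enum ResourceAttribute {
    ResponseStart(u64), // illustrative extra variant
    ResponseEnd(u64),
}

#[derive(Default, Debug)]
struct ResourceFetchTiming {
    response_start: u64,
    response_end: u64,
}

impl ResourceFetchTiming {
    // Record a single timing attribute on the shared timing struct.
    fn set_attribute(&mut self, attr: ResourceAttribute) {
        match attr {
            ResourceAttribute::ResponseStart(t) => self.response_start = t,
            ResourceAttribute::ResponseEnd(t) => self.response_end = t,
        }
    }
}

fn main() {
    let mut timing = ResourceFetchTiming::default();
    // Per the PR, this is set after the response body read completes,
    // or before returning due to a network error.
    timing.set_attribute(ResourceAttribute::ResponseEnd(1234));
    assert_eq!(timing.response_end, 1234);
    println!("response_end = {}", timing.response_end);
}
```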
On getting, the responseEnd attribute MUST return as follows:
Spec: https://w3c.github.io/resource-timing/#dom-performanceresourcetiming-responseend
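The linked spec defines responseEnd as the time immediately after the user agent receives the last byte of the response, or immediately before the transport connection is closed, whichever comes first (and zero before the value is available). A hedged sketch of that "whichever comes first" selection, with hypothetical names and millisecond timestamps:

```rust
// Pick the earlier of "last byte received" and "connection closed",
// falling back to 0 when neither is available yet (per the spec's default).
fn response_end(last_byte: Option<u64>, conn_closed: Option<u64>) -> u64 {
    match (last_byte, conn_closed) {
        (Some(a), Some(b)) => a.min(b),
        (Some(a), None) => a,
        (None, Some(b)) => b,
        (None, None) => 0,
    }
}

fn main() {
    assert_eq!(response_end(Some(120), Some(150)), 120); // last byte first
    assert_eq!(response_end(None, Some(90)), 90);        // connection closed first
    assert_eq!(response_end(None, None), 0);             // not yet available
    println!("ok");
}
```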