
Restructure content process shutdown ack with threaded sender #23972

Merged

Conversation

@gterzian (Member) commented Aug 15, 2019

Recycling some work from #23909


  • ./mach build -d does not report any errors
  • ./mach test-tidy does not report any errors
  • These changes fix #___ (GitHub issue number if applicable)
  • There are tests for these changes OR
  • These changes do not require tests because ___


@highfive commented Aug 15, 2019

Heads up! This PR modifies the following files:

  • @asajeffrey: components/script/dom/htmliframeelement.rs, components/constellation/pipeline.rs, components/script/dom/windowproxy.rs, components/script/script_thread.rs
  • @cbrewster: components/constellation/pipeline.rs
  • @KiChjang: components/script/dom/htmliframeelement.rs, components/script/dom/windowproxy.rs, components/script/script_thread.rs, components/script_traits/lib.rs
@highfive commented Aug 15, 2019

⚠️ Warning

  • These commits modify script code, but no tests are modified. Please consider adding a test!
gterzian requested a review from asajeffrey, Aug 15, 2019
@gterzian (Member, Author) commented Aug 15, 2019

This is another recycled commit from the big IPC experiment over at #23909.

In this case, it restructures the "wait for content to shut down" pattern as follows:

  1. Since it's only used in multiprocess mode, and the waiting is done from the main thread of the content process, a threaded channel created at the point of use should suffice.
  2. I removed the "wait on layout" part, since the script-thread itself waits for layout to shut down each time it shuts down a layout thread, so it seems to me the main thread only needs to wait on script?
  3. Since dropping the only sender in existence for a channel already wakes up the receiver, sending an explicit message seems unnecessary. Instead, we store the sender on the script-thread; it is dropped when the thread exits, which notifies the main thread (of the content process).
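The drop-as-wakeup pattern described in point 3 can be sketched with the standard library's mpsc channel (a stand-in here for the crossbeam channel the PR actually uses; the drop semantics are the same, and the names below are illustrative, not Servo's):

```rust
use std::sync::mpsc::{channel, RecvError};
use std::thread;

// Minimal sketch: the "script thread" stores the sender and never sends
// on it. When the thread exits, the sender is dropped, which wakes the
// blocked receiver with Err(RecvError) -- the shutdown acknowledgement.
fn wait_for_script_shutdown() -> Result<(), RecvError> {
    let (content_process_shutdown_chan, content_process_shutdown_port) = channel::<()>();

    let script_thread = thread::spawn(move || {
        // The thread holds the sender for its lifetime...
        let _shutdown_ack = content_process_shutdown_chan;
        // ...and drops it on return, with no explicit send.
    });

    // recv() blocks until a message arrives or every sender is dropped.
    let signal = content_process_shutdown_port.recv();
    script_thread.join().unwrap();
    signal
}
```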

@asajeffrey r?

@jdm (Member) commented Aug 15, 2019

highfive assigned asajeffrey and unassigned nox, Aug 15, 2019
@asajeffrey (Member) left a comment

I'm a bit confused by these changes. Replacing an IPC channel with a crossbeam channel I can understand, since the channel never leaves the process. But AFAICT the channel is being used as a latch; we're never actually sending a message on it?

let signal = content_process_shutdown_port.recv();
// When the script-thread drops the sender,
// we expect to receive an Err(_).
assert!(signal.is_err());

@asajeffrey (Member), Aug 15, 2019

Perhaps an error! log rather than a panic?

@gterzian (Author, Member), Aug 15, 2019

Oh yeah, forgot this is in the constellation thread...

@gterzian (Author, Member), Aug 15, 2019

It's not in the constellation thread actually, but I guess an error is better.

let signal = content_process_shutdown_port.recv();
// When the script-thread drops the sender,
// we expect to receive an Err(_).
assert!(signal.is_err());

@asajeffrey (Member), Aug 15, 2019

I'm confused about who sends on this channel, since the only uses of send(()) appear to have been deleted. Are we using a channel just for its Drop implementation?

@gterzian (Author, Member), Aug 15, 2019

Yes, previously an explicit message was sent when the script-thread exits, which I can bring back if you think that's clearer. Or we can simply wait until the channel drops, and assert that we receive the Err that results in that case.

I think we can count on the channel not sending any spurious errors.

@gterzian (Author, Member), Aug 15, 2019

Actually, I think you're right that it's better to send an explicit message, since the drop could also be the result of an unexpected shutdown of the script-thread, while a message will almost always mean there was a graceful shutdown...
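The distinction drawn here can be sketched as follows, again with std::sync::mpsc standing in for crossbeam and with hypothetical names: an explicit send(()) marks a graceful exit, while a bare drop of the sender (e.g. from a thread that died unexpectedly) surfaces as Err(_) on the receiver.

```rust
use std::sync::mpsc::{channel, Receiver};
use std::thread;

// Ok(()) means the thread acknowledged shutdown explicitly;
// Err(RecvError) means the sender was dropped without a send
// (e.g. an unexpected exit).
fn shutdown_was_graceful(port: Receiver<()>) -> bool {
    port.recv().is_ok()
}

// Illustrative driver: one thread acks explicitly, one just drops.
fn graceful_vs_unexpected() -> (bool, bool) {
    let (graceful_chan, graceful_port) = channel();
    thread::spawn(move || {
        let _ = graceful_chan.send(()); // graceful shutdown: explicit ack
    });

    let (crashed_chan, crashed_port) = channel::<()>();
    thread::spawn(move || {
        drop(crashed_chan); // unexpected exit: sender dropped, no ack
    });

    (
        shutdown_was_graceful(graceful_port),
        shutdown_was_graceful(crashed_port),
    )
}
```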

@@ -744,6 +744,7 @@ impl ScriptThreadFactory for ScriptThread {
thread_state::initialize(ThreadState::SCRIPT);
PipelineNamespace::install(state.pipeline_namespace_id);
TopLevelBrowsingContextId::install(state.top_level_browsing_context_id);

@asajeffrey (Member), Aug 15, 2019

Don't think we need a new blank line :)

gterzian force-pushed the gterzian:cleanup_content_process_shutdown_ack branch 2 times, most recently from afc6480 to 4cd2730, Aug 15, 2019
@gterzian (Member, Author) commented Aug 15, 2019

Ok, comments addressed @asajeffrey

gterzian requested a review from asajeffrey, Aug 16, 2019
@asajeffrey (Member) left a comment

OK, a question, but otherwise LGTM.

@@ -545,6 +530,7 @@ impl UnprivilegedPipelineContent {
self.script_chan.clone(),
self.load_data.url.clone(),
);
let (content_process_shutdown_chan, content_process_shutdown_port) = unbounded();

@asajeffrey (Member), Aug 16, 2019

Should this be unbounded or a one-shot channel?

@gterzian (Author, Member), Aug 16, 2019

I'm not aware of crossbeam having a dedicated one-shot variant.

@asajeffrey (Member), Aug 16, 2019

I meant bounded(1).

@gterzian (Author, Member), Aug 16, 2019

For what reason?

@asajeffrey (Member), Aug 17, 2019

Possible space efficiency, but it probably doesn't matter that much.

@gterzian (Author, Member), Aug 17, 2019

I see what you mean; counterintuitively, the unbounded variant appears to require less bookkeeping than the bounded one.
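For comparison, the bounded(1) alternative raised above can be sketched with std::sync::mpsc::sync_channel(1), the standard-library analogue of crossbeam's bounded(1); the function name is hypothetical. For a one-shot ack, a capacity of one means the single send completes without blocking, so the two flavors are behaviorally equivalent here and the choice comes down to internal bookkeeping.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// One-shot ack over a bounded channel of capacity 1: the single send
// fits in the buffer and never blocks the worker thread.
fn one_shot_ack() -> bool {
    let (ack_chan, ack_port) = sync_channel::<()>(1);
    let worker = thread::spawn(move || {
        // Never blocks: the buffer holds exactly this one message.
        ack_chan.send(()).expect("receiver still alive");
    });
    let got_ack = ack_port.recv().is_ok();
    worker.join().unwrap();
    got_ack
}
```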

@asajeffrey (Member) commented Aug 17, 2019

@bors-servo (Contributor) commented Aug 17, 2019

📌 Commit 4cd2730 has been approved by asajeffrey

@bors-servo (Contributor) commented Aug 17, 2019

Testing commit 4cd2730 with merge e0267e8...

bors-servo added a commit that referenced this pull request Aug 17, 2019
… r=asajeffrey

Restructure content process shutdown ack with threaded sender

Recycling some work from #23909

@bors-servo (Contributor) commented Aug 17, 2019

💔 Test failed - status-taskcluster

gterzian force-pushed the gterzian:cleanup_content_process_shutdown_ack branch from 4cd2730 to 8482d2e, Aug 17, 2019
@gterzian (Member, Author) commented Aug 17, 2019

@asajeffrey OK, it was layout_2020 that required updating; done...

@asajeffrey (Member) commented Aug 17, 2019

@bors-servo (Contributor) commented Aug 17, 2019

📌 Commit 8482d2e has been approved by asajeffrey

@bors-servo (Contributor) commented Aug 17, 2019

Testing commit 8482d2e with merge 0ff5dc5...

bors-servo added a commit that referenced this pull request Aug 17, 2019
… r=asajeffrey

Restructure content process shutdown ack with threaded sender

Recycling some work from #23909

@bors-servo (Contributor) commented Aug 17, 2019

💔 Test failed - status-taskcluster

@gterzian (Member, Author) commented Aug 17, 2019

@bors-servo retry

seems like a taskcluster error: TASK FAILURE during artifact upload: file-missing-on-worker: Could not read file '/Users/worker/tasks/task_1566064170/repo/intermittents.log'

@bors-servo (Contributor) commented Aug 17, 2019

💣 Failed to start rebuilding: Unknown error

@bors-servo (Contributor) commented Aug 17, 2019

Testing commit 8482d2e with merge a8ae0e5...

bors-servo added a commit that referenced this pull request Aug 17, 2019
… r=asajeffrey

Restructure content process shutdown ack with threaded sender

Recycling some work from #23909

@bors-servo (Contributor) commented Aug 17, 2019

☀️ Test successful - linux-rel-css, linux-rel-wpt, status-taskcluster
Approved by: asajeffrey
Pushing a8ae0e5 to master...

bors-servo merged commit 8482d2e into servo:master, Aug 17, 2019
3 checks passed:

  • Taskcluster (pull_request): TaskGroup success
  • continuous-integration/appveyor/pr: AppVeyor build succeeded
  • homu: Test successful
bors-servo mentioned this pull request, Aug 17, 2019
gterzian deleted the gterzian:cleanup_content_process_shutdown_ack branch, Aug 18, 2019