
Scheduler progress sense #7889 (Closed)

bblum opened this issue Jul 18, 2013 · 13 comments
Labels
A-concurrency Area: Concurrency related issues. A-runtime Area: std's runtime and "pre-main" init for handling backtraces, unwinds, stack overflows E-easy Call for participation: Easy difficulty. Experience needed to fix: Not much. Good first issue. E-hard Call for participation: Hard difficulty. Experience needed to fix: A lot.

Comments

bblum (Contributor) commented Jul 18, 2013

The new runtime should be able to figure out (a) when all tasks are blocked, in which case it should report a deadlock, and (b) when a task is stuck in a "tight" infinite loop (i.e., one that never hits the scheduler). The former can be done precisely; the latter will probably have to be done heuristically with some sort of watchdog thread. This should perhaps be filed as two separate issues.
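The watchdog idea for case (b) could look something like the following. This is only a hedged sketch using ordinary `std` threads and an atomic progress counter, not the runtime's actual machinery; every name here (`progress`, the sleep intervals, the sampling scheme) is invented for illustration:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Hypothetical watchdog sketch: the scheduler bumps a progress counter
// every time any task yields to it; a watchdog thread samples the
// counter twice and flags a tight infinite loop if it never moved.
fn main() {
    let progress = Arc::new(AtomicU64::new(0));
    let p = progress.clone();
    let watchdog = thread::spawn(move || {
        let before = p.load(Ordering::Acquire);
        thread::sleep(Duration::from_millis(50));
        let after = p.load(Ordering::Acquire);
        // If the counter never moved, no task hit the scheduler in the
        // whole interval -- heuristically, a tight infinite loop.
        after == before
    });
    // Here the "scheduler" does make progress, so the watchdog stays quiet.
    for _ in 0..10 {
        progress.fetch_add(1, Ordering::AcqRel);
        thread::sleep(Duration::from_millis(10));
    }
    assert!(!watchdog.join().unwrap());
}
```

Being purely heuristic, this can only report "no scheduler activity for N milliseconds", which is exactly why the comment calls case (b) the harder, imprecise half of the problem.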

The former will work with two global reference counts: one tracking the number of non-sleeping schedulers, and one tracking the number of tasks blocked on I/O. When the last scheduler goes to sleep, if the I/O-blocking refcount is zero, all tasks have either exited or are blocked on pipes. In the latter case, the runtime should emit a "your tasks deadlocked and now the process is hanging" message and exit. This will build on the refcounts we'll probably use for #7702.
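The two-refcount scheme above can be sketched roughly as follows. The names (`AWAKE_SCHEDULERS`, `IO_BLOCKED_TASKS`, `scheduler_going_to_sleep`) are invented for this example, and modern `std::sync::atomic` stands in for whatever counters the old runtime would actually use:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical global counters; these are illustrative names, not the
// runtime's real identifiers.
static AWAKE_SCHEDULERS: AtomicUsize = AtomicUsize::new(0);
static IO_BLOCKED_TASKS: AtomicUsize = AtomicUsize::new(0);

/// Called by a scheduler just before it goes to sleep. Returns true if
/// this was the last awake scheduler, no task is blocked on I/O, and
/// live tasks remain -- i.e. every remaining task is blocked on a pipe,
/// which is the deadlock case the runtime should report.
fn scheduler_going_to_sleep(live_tasks: usize) -> bool {
    let remaining = AWAKE_SCHEDULERS.fetch_sub(1, Ordering::AcqRel) - 1;
    remaining == 0
        && IO_BLOCKED_TASKS.load(Ordering::Acquire) == 0
        && live_tasks > 0
}

fn main() {
    // Two schedulers start awake; one task is still alive but blocked.
    AWAKE_SCHEDULERS.store(2, Ordering::Release);
    assert!(!scheduler_going_to_sleep(1)); // one scheduler still awake
    assert!(scheduler_going_to_sleep(1));  // last one asleep: deadlock
}
```

The point of the precise half of the proposal is visible here: the check is just two counter reads at a single well-defined event (the last scheduler sleeping), with no graph construction or polling required.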

glaebhoerl (Contributor)

Haskell throws an exception to the main thread in this case.

Edit: Oh, but Rust doesn't have async exceptions I think. Never mind me.

Thiez (Contributor) commented Sep 17, 2013

If you keep track of the tasks that are blocked acquiring a resource, you can detect deadlocks even when they don't block the entire program. This would require the scheduler to somehow keep track of resources, but that could be cheap (it only needs enough information to construct a wait-for graph). A low-priority task could periodically construct the graph and run cycle detection.

Even if one is unwilling to pay the cost of such a task, the wait-for graph could still be constructed by the scheduler when the "all tasks are deadlocked" scenario occurs. An error message that describes the cycle could be a real help in debugging.
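A toy version of the wait-for graph idea: if each blocked task records the single task it is waiting on, cycle detection reduces to pointer-chasing with a visited list. The `TaskId` type, the map representation, and `find_deadlock` are all illustrative inventions, not anything in the runtime:

```rust
use std::collections::HashMap;

type TaskId = u32;

/// Wait-for graph sketch: task -> the task it is currently waiting on.
/// A cycle in this graph is a deadlock among the tasks on the cycle.
fn find_deadlock(wait_for: &HashMap<TaskId, TaskId>) -> Option<Vec<TaskId>> {
    for &start in wait_for.keys() {
        let mut seen = vec![start];
        let mut cur = start;
        while let Some(&next) = wait_for.get(&cur) {
            if let Some(pos) = seen.iter().position(|&t| t == next) {
                // Return only the tasks on the cycle itself, dropping
                // any "tail" that merely leads into it.
                return Some(seen[pos..].to_vec());
            }
            seen.push(next);
            cur = next;
        }
    }
    None
}

fn main() {
    let mut g = HashMap::new();
    g.insert(1, 2); // task 1 waits on task 2
    g.insert(2, 1); // task 2 waits on task 1: a deadlock
    g.insert(3, 1); // task 3 waits into the cycle but is not on it
    let cycle = find_deadlock(&g).expect("deadlock expected");
    assert_eq!(cycle.len(), 2);
    // No cycle: task 1 waits on task 2, which is not blocked at all.
    assert!(find_deadlock(&HashMap::from([(1u32, 2u32)])).is_none());
}
```

This also shows why the error message could name the specific tasks involved, which is the debugging payoff the comment describes.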

bblum (Contributor, Author) commented Sep 17, 2013

How would you track wait-for information in the case of pipes? I think this would require tasks to record whenever they give a pipe endpoint away to another task.

eholk (Contributor) commented Oct 18, 2013

One pipe feature that is hopefully still there: if a task is blocked on a receive and the task holding the other end of the pipe fails, the blocked task gets woken up. Assuming this behavior survives, instead of crashing when there's a pipe-related deadlock, the scheduler could just pick one of the deadlocked tasks at random and fail it. Rust programs were originally meant to follow the crash-only software philosophy, where they would be designed to restart a failed task and recover.
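The wake-on-failure behavior being relied on here can be demonstrated with today's `std::sync::mpsc` channels, which stand in for the old pipes API in this sketch: when the thread holding the send endpoint dies (dropping the endpoint), a blocked receiver is woken with an error rather than hanging forever.

```rust
use std::sync::mpsc::channel;
use std::thread;

fn main() {
    let (tx, rx) = channel::<i32>();
    let peer = thread::spawn(move || {
        // Simulate the peer task failing while holding the endpoint:
        // dropping `tx` without sending closes the channel.
        drop(tx);
    });
    peer.join().unwrap();
    // The receive does not deadlock; it returns an error the program
    // can recover from -- which is what would let a supervisor restart
    // the failed task, in the crash-only style described above.
    assert!(rx.recv().is_err());
}
```

Failing one task in a deadlocked cycle would propagate an error like this to its peers, unblocking the rest of the cycle.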

bblum (Contributor, Author) commented Oct 18, 2013

The feature is still there.

thestinger (Contributor)

Closing in favour of #1930 (Thread Sanitizer), since #17325 means Rust will no longer need any special tooling for this kind of debugging.

bblum (Contributor, Author) commented Sep 19, 2014

I disagree. Detecting data races and detecting deadlocks or infinite loops are totally different challenges.

thestinger (Contributor)

Thread sanitizer does detect deadlocks. Detecting an infinite loop would definitely require ptrace hacks, and it isn't a challenge specific to Rust: the detector would need to operate at the machine-code / system-call level, since it has to rule out side effects to identify a no-op infinite loop.

bblum (Contributor, Author) commented Sep 19, 2014

Does #17325 mean the green thread scheduler is gone completely, and Rust tasks will be 1:1 with pthreads instead?

thestinger (Contributor)

Yes, it's going to be removed from the standard libraries and likely moved out to a third-party repository. That repository would need to provide a new native I/O / concurrency library in addition to a green one if it wants to keep supporting dynamic runtime selection.

bblum (Contributor, Author) commented Sep 19, 2014

Is there a plan to transfer these scheduler-related issues to the other repository's issue tracker, or is the plan to just forget about them?

thestinger (Contributor)

@bblum: Well, I'm linking every one to #17325 so that GitHub makes a list of the relevant issues and they can then be transferred over. I think many are not going to be relevant to a new implementation without tight integration into the standard libraries.

bblum (Contributor, Author) commented Sep 19, 2014

I see. Thanks for clarifying.

flip1995 pushed a commit to flip1995/rust that referenced this issue Nov 23, 2021
1. Fix the problem of manual_split_once changing the original behavior.
2. Add a new lint needless_splitn.

changelog: Fix the problem of manual_split_once changing the original behavior and add a new lint needless_splitn.
flip1995 pushed a commit to flip1995/rust that referenced this issue Nov 23, 2021

Fix for rust-lang#7889 and add new lint needless_splitn

fixes: rust-lang#7889
1. Fix the problem of manual_split_once changing the original behavior.
2. Add a new lint needless_splitn.

changelog: Fix the problem of manual_split_once changing the original behavior and add a new lint needless_splitn.