Fix bug in Dispatch executor preventing long sleeps #64304

Open
rauhul wants to merge 1 commit into main

Conversation

rauhul (Contributor) commented Mar 11, 2023

The Dispatch executor's swift_task_enqueueGlobalWithDelayImpl adds the requested delay to the current time and enqueues the job to run at that time. This has a bug for very large delays, on the order of 2^63 to 2^64 nanoseconds: dispatch_time(), which is used to perform the now + delay calculation, returns DISPATCH_TIME_FOREVER, but dispatch_after(), which is used to actually enqueue the job, has undefined behavior for that time; in practice, the job runs immediately.

This change resolves the bug by leaking the job if the result of dispatch_time is DISPATCH_TIME_FOREVER. This should effectively be the same as the job getting delayed forever.

Radar-Id: rdar://problem/89455918
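
For illustration, here is a minimal sketch of the guard described above. It is written against the Swift Dispatch overlay rather than the C++ runtime code the actual change touches, and the function and parameter names are made up for the example:

import Dispatch

// Sketch of the idea from the description: compute the deadline via
// dispatch_time() (wrapped here by the + operator on DispatchTime) and, if it
// saturated to DISPATCH_TIME_FOREVER (.distantFuture), never enqueue the work
// instead of handing dispatch_after() a deadline it defines no behavior for.
func enqueueAfterDelaySketch(_ delay: DispatchTimeInterval,
                             on queue: DispatchQueue,
                             work: @escaping () -> Void) {
    let deadline = DispatchTime.now() + delay
    guard deadline != .distantFuture else {
        // Deadline overflowed: drop ("leak") the work, treating it as delayed forever.
        return
    }
    queue.asyncAfter(deadline: deadline, execute: work)
}

As the comments below point out, dropping the job outright also drops its ability to observe cancellation, which is the main concern raised in review.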

rauhul (Contributor, Author) commented Mar 11, 2023

@DougGregor I don't know the details of Task cancellation so I can't tell if leaking the job to handle this exceptional case is actually valid. Do you have any ideas?

rauhul added the bug (A deviation from expected or documented behavior. Also: expected but undesirable behavior.) and concurrency (Feature: umbrella label for concurrency language features) labels Mar 11, 2023
ktoso (Member) commented Mar 12, 2023

Hm... pretty weird.

Could we perhaps test that the task at least does not run immediately, like the bug suggested, by sleeping "forever", causing a cancellation after e.g. 2 seconds or so, and checking that it properly gets cancelled? Just dropping a task on the floor is not right; it still must be able to get cancelled like that, after all.

ktoso self-requested a review March 12, 2023 01:26
ktoso (Member) commented Feb 29, 2024

Seems this might supersede my WIP over here, which I didn't finish back then: #69118

Resolves #59174

ktoso (Member) commented Feb 29, 2024

Just leaking is too naive; we need to support cancellation on such a task:

import Darwin  // for exit(); on Linux this would come from Glibc

let t = Task {
  // A cancelled sleep throws, so exit(0) is only reached if the sleep
  // actually completes.
  try? await Task.sleep(nanoseconds: .max)
  exit(0)
}
t.cancel()

The task should be cancelled and return properly -- that's a test-case worth adding.
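
For reference, a rough sketch of that test-case under the scenario described in these comments (sleep "forever", cancel after roughly two seconds, and expect the sleep to throw CancellationError); the type name is illustrative and not from the actual test suite:

@main
struct ForeverSleepCancellationSketch {
    static func main() async throws {
        // Start a task that tries to sleep for the maximum representable delay.
        let sleeper = Task {
            do {
                try await Task.sleep(nanoseconds: .max)  // effectively "sleep forever"
                print("unexpected: the sleep returned normally")
            } catch {
                // A cancelled sleep throws CancellationError.
                print("sleep threw \(error); cancelled: \(error is CancellationError)")
            }
        }
        try await Task.sleep(nanoseconds: 2_000_000_000)  // wait ~2 seconds
        sleeper.cancel()                                   // must wake the sleeping task
        _ = await sleeper.value
    }
}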
