
syntax for guaranteed coroutine frame allocation elision #1260

Closed
andrewrk opened this issue Jul 18, 2018 · 2 comments
Labels
proposal This issue suggests modifications. If it also has the "accepted" label then it is planned.

Comments

@andrewrk
Member

When you await a promise in the same stack frame or coroutine frame in which you created it with async, we can guarantee that no memory allocation occurs. With status quo Zig, I've been using async foo() catch unreachable (sketched below), but this is problematic for a few reasons:

  • It doesn't have the correct semantics. It means the allocation may still occur, but we know the allocator has enough memory available that the next allocation will not fail.
  • catch unreachable should stick out as a noticeable code smell. Guaranteed memory allocation elision for async calls is common and should be used often. False positives like this erode people's respect for avoiding catch unreachable.
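
A minimal sketch of that status quo pattern, assuming a hypothetical coroutine bar:

async fn foo() void {
    // the async call is fallible, so the error has to be discarded even though
    // the await happens in the same frame and elision is expected
    const p = async bar() catch unreachable;
    await p;
}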

Proposal: use async<> with empty angle brackets to require frame allocation elision.

async fn foo() void {
    await async<> bar();
}

Using async<> asserts that the corresponding await or cancel will occur in the same stack frame or coroutine frame.

Implementation details:

It's certainly possible to make a runtime safety check for this. We can adopt the convention that passing a null pointer as the allocator means the allocation must be elided; then, when asked to do an allocation, if the allocator is null, call panic.
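
A minimal sketch of that runtime check, with hypothetical helper and message names (only the null-allocator-means-elide convention comes from this proposal):

fn allocCoroFrame(allocator: ?*Allocator, frame_size: usize) []u8 {
    // a null allocator means async<> promised elision, so reaching the
    // allocation path at runtime is a safety panic
    const a = allocator orelse @panic("async<>: coroutine frame allocation was not elided");
    return a.alloc(u8, frame_size) catch @panic("coroutine frame allocation failed");
}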

Theoretically we should be able to emit a compile error when the elision cannot be done. However, this depends on LLVM IR optimization passes, so it would require inspecting the LLVM IR module post-optimization. This is inherently tricky and would require research.

@andrewrk andrewrk added the proposal This issue suggests modifications. If it also has the "accepted" label then it is planned. label Jul 18, 2018
@andrewrk andrewrk added this to the 0.3.0 milestone Jul 18, 2018
@andrewrk andrewrk mentioned this issue Aug 10, 2018
@andrewrk andrewrk modified the milestones: 0.3.0, 0.4.0 Sep 11, 2018
@andrewrk
Member Author

andrewrk commented Oct 2, 2018

New proposal:

  • Scrap zig's coroutine implementation. LLVM's coroutine support is buggy, slow, and has poor semantics. The new implementation is based on functions that accept a secret struct pointer as the first argument. Splitting is done by zig, not llvm, which means we can make it fast.
  • Coroutines are no longer generic across an allocator pointer. Coroutines no longer allocate memory; the memory must be passed to async.
  • New type: @Frame(func), where func is a coroutine. This is an opaque struct that holds the coroutine frame of the function. It can live on the stack, in a struct field, or in a global variable, or be allocated on the heap, just like any other type.
  • No more promise type. The instance of @Frame(func) is used in its place.
  • async can no longer fail.

Examples:

var frame1 = async readFile("foo.txt");
var frame2 = async readFile("bar.txt");
const foo_data = await frame1;
const bar_data = await frame2;

@typeOf(frame1) == @Frame(readFile)

With error handling:

var frame1 = async readFile("foo.txt");
defer cancel frame1; 
var frame2 = async readFile("bar.txt");
defer cancel frame2;
const foo_data = try await frame1;
const bar_data = try await frame2;

cancel is now sufficient; there is no need for cancelasync and cancelsuspend. This issue supersedes #1363. cancel is idempotent, which is why the defer cancel lines above are still safe after each frame has been awaited.

Here's the above example but using the event.Group API:

var group = event.Group.init(loop);
defer group.deinit();
try group.call(readFile, "foo.txt"); // allocates a Frame(readFile) on the heap along with results[0]
try group.call(readFile, "bar.txt"); // allocates a Frame(readFile) on the heap along with results[1]
await async group.wait();
const foo_data = try group.results[0];
const bar_data = try group.results[1];

Chaining await and async works well. The memory goes into the stack/frame of the current function.

const foo_data = try await async readFile("foo.txt");
const bar_data = try await async readFile("bar.txt");

When we solve #157, this recursive function would be a compile error:

fn fib(x: i32) i32 {
    if (x <= 1) return x;
    return fib(x - 1) + fib(x - 2); // error: unbounded stack growth
}

This is solved with stackless functions (aka coroutines aka async functions):

async fn fib(allocator: *Allocator, x: i32) !i32 {
    if (x <= 1) return x;
    const frame = try allocator.create(@Frame(fib));
    defer allocator.destroy(frame);
    frame.* = async fib(allocator, x - 1);
    const value1 = try await frame.*;
    frame.* = async fib(allocator, x - 2);
    const value2 = try await frame.*;
    return value1 + value2;
}

Calls to this function grow heap memory, not stack memory. The function can return error.OutOfMemory. This is a leaf node in the call graph analysis; it does not store any information on the stack.
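
A usage sketch, assuming an allocator in scope and following the await-async chaining form above: the outermost fib frame lives in the caller's stack/coroutine frame, while the recursive frames go through the allocator.

const result = try await async fib(allocator, 10);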

@andrewrk
Member Author

Fixed by the merge of #3033
