v2 API Proposal Document #339
Conversation
Thank you @aryan-25! I'll have a look this morning.

This is not related to this doc, but should we move the Current struct : Proposed struct
Great steps forward!
Sources/AWSLambdaRuntimeCore/Documentation.docc/Proposals/0001-v2-api.md (outdated; resolved)
> `LambdaContext` will be largely unchanged, but the `eventLoop` property as well as the `allocator` property (of type `ByteBufferAllocator`) will be removed.
>
> A new function `backgroundTask()` will also be added. This will allow tasks to be run in the background while and after the response is/has been sent. Please note that `LambdaContext` will not be Sendable anymore.
Suggested change (rename `backgroundTask()` to `addBackgroundTask(_:)`):
> A new function `addBackgroundTask(_:)` will also be added. This will allow tasks to be run in the background while and after the response is/has been sent. Please note that `LambdaContext` will not be Sendable anymore.
What's the reasoning behind making the context non-Sendable? Is it for the background task stuff? It should be fine given that the scope of sharing will be pretty narrow, but it could cause issues with strict concurrency for those who aren't aware.
In order to allow adding background tasks in a structured way, we will need to back the `LambdaContext` with a `TaskGroup`. Since `TaskGroup` is not Sendable, we can't mark `LambdaContext` as Sendable. However, if users need any property out of the `LambdaContext` (all of them are Sendable), they can just get that property out of it and then pass it around or close over it.
Why does the context need to hold the TaskGroup? I smell structured concurrency violations :)
> ### StreamingLambdaHandler
>
> The new `StreamingLambdaHandler` protocol is the base protocol to implement a Lambda function. Most users will not use this protocol and instead use the `LambdaHandler` protocol defined in the `Codable` Support section.
Just for clarity it might be nice to have a section talking about the LambdaHandler - I spent a bit of time trying to find the detailed explanation of it before realising it's in the top section
> ### LambdaRuntime
>
> `LambdaRuntime` is the class that communicates with the Lambda control plane as defined in [Building a custom runtime for AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html) and forwards the invocations to the provided `StreamingLambdaHandler`. It will conform to `ServiceLifecycle.Service` to provide support for `swift-service-lifecycle`.
Does this work with LambdaHandler as well, or does that conform to StreamingLambdaHandler to ferry calls through?
In order to use a `LambdaHandler`, users will wrap it in a `LambdaCodableAdapter`. `LambdaCodableAdapter` conforms to `StreamingLambdaHandler`. So the `LambdaRuntime` type that you will use when using `LambdaHandler` will be:

```swift
let runtime: LambdaRuntime<LambdaCodableAdapter<MyLambdaHandler, Event, Output, JSONDecoder, JSONEncoder>>
```

but users basically never have to use that type explicitly. So to answer your question: yes, it works with `LambdaHandler` as well, but via the `LambdaCodableAdapter`.
> We decided to implement the approach in which a `LambdaResponseWriter` is passed to the function, since the approach in which a `LambdaResponse` is returned can trivially be built on top of it. This is not true vice versa.
>
> We welcome the discussion on this topic and are open to change our minds and API here.
It would be interesting to see if anyone has any use cases that integrate something like Swift Middleware
I think this is a fair point and I would love to see how middleware would work with this writer approach. IMO middleware do make some sense for Lambda
If using middleware with a request and context, do you think the response writer would be placed in the context or be a separate parameter, thus meaning a departure from the current swift-middleware setup?
Regarding whether you provide a response writer versus returning a response that includes a writer, I'm not sure. I have gone back and forth on this when building HB. In most cases I have not seen any need for a response writer, but there are still situations where they make things clearer, e.g. tracing (you want to finish your request span once the response has been written, not when you have returned a response which includes the closure that'll write the response).
In the end you can implement returning a response on top of an API that uses a response writer, but not the reverse, so for flexibility it should probably be a response writer.
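The layering argument above can be made concrete with a small hypothetical sketch. None of this is the proposal's actual API: `CollectingWriter` stands in for `LambdaResponseWriter`, and `String` stands in for `ByteBuffer`, purely to keep the example self-contained.

```swift
// Hypothetical sketch: a "return a response" convenience layered on top of
// a "writer" primitive. The reverse layering is not possible, because a
// response-returning handler has already returned by the time any writer
// could be driven.
final class CollectingWriter {
    private(set) var chunks: [String] = []
    private(set) var isFinished = false

    func write(_ chunk: String) {
        chunks.append(chunk)
    }

    func finish() {
        isFinished = true
    }
}

// The lower-level shape: the handler drives the writer itself.
typealias WriterHandler = (String, CollectingWriter) -> Void

// The higher-level shape is built trivially on top: call the user's
// response-returning function, then write and finish on their behalf.
func adaptResponseReturning(_ userHandler: @escaping (String) -> String) -> WriterHandler {
    return { event, writer in
        writer.write(userHandler(event))
        writer.finish()
    }
}

// Usage: a response-returning handler running through the writer API.
let handler = adaptResponseReturning { event in "echo: \(event)" }
let writer = CollectingWriter()
handler("hello", writer)
```

The adapter direction shown here is exactly why the writer-based protocol can serve as the single base primitive.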
I made this point above, middleware definitely does make sense for lambda (for example tracing should just be a middleware that can be used anywhere) and should just be integrated IMO. My only quasi-concern with the passed-in-writer-approach is that for it to work with middleware, the middleware protocol has to suppress Copyable conformance on one of its associated types - its writer type, something that isn't going to be supported in Swift 6.0.
That said I do think the argument that this is a lower-level and more flexible API makes sense.
I don't think we want to integrate the swift-middleware library directly. I think if users have those needs they should consider using hummingbird/smoke/vapor in lambda mode, which then brings the option to use middleware.
This shall not mean that writing a middleware is impossible. We should allow users to write middleware that they can stack.
Also, I think swift-middleware is currently not targeted to land anytime soon. So in order to make progress we should not depend on it. Integrating swift-middleware (as is) on top of this proposed API is absolutely possible and therefore it should not be considered a blocker.
We can debate whether we think the swift-middleware library should be integrated directly but I do agree it shouldn't be considered a blocker for this work/release.
```swift
/// Write a response part into the stream and end the response stream
consuming func writeAndFinish(_ buffer: ByteBuffer) async throws
/// Report an error in the response stream
consuming func reportError(_ error: any Error) async throws
```
This seems like it is finishing as well? Do we just want to spell this `finish(error: (any Error)?)` instead?
Not sure I agree here. I think `reportError` is more explicit about what it does than a `finish` with an error.
I tend to agree with @FranzBusch. Underneath this call it will write the error into some trailing headers and end the response. So it is always going to finish the stream. `reportError` sounds like something akin to emitting a log or metric.
I would propose spelling the API as `finishWithMidstreamError(_: any Error)` or some variation thereof, to mirror the terminology Lambda itself uses and to be explicit about what the API will do.
This seems to dovetail with the comments about making `handle` support throwing of errors. Do we want users to report errors via `reportError`, or via throwing an error? Given that this will end the stream, I think I agree with Franz.
After thinking about this more, I think we should just make the handler method throwing. In this case we will do the error reporting, which means users don't have to deal with it, which in turn means we should remove this method here.
@FranzBusch, @tachyonics, @adam-fowler Can you get behind this?
The only issue with that is the user can't customise their response to the error; they might want to report the error differently if they haven't started streaming response data versus being in the middle of streaming a response.
They can still do this by doing a custom do/catch. It just makes the default case easier
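The default-vs-custom split being discussed can be sketched as follows. This is hypothetical code with assumed names, not the proposal's API; plain `String` results stand in for response payloads.

```swift
// Hypothetical sketch: with a throwing `handle`, the runtime reports any
// uncaught error for you; a do/catch is only needed when the handler wants
// to shape its own error response.
struct BackendError: Error {}

func riskyWork(shouldFail: Bool) throws -> String {
    if shouldFail { throw BackendError() }
    return "ok"
}

// Default case: just `try` and let the runtime report the thrown error.
func handleSimple(shouldFail: Bool) throws -> String {
    try riskyWork(shouldFail: shouldFail)
}

// Custom case: catch and substitute a tailored error response, e.g. one
// that differs depending on whether streaming had already started.
func handleCustom(shouldFail: Bool) -> String {
    do {
        return try riskyWork(shouldFail: shouldFail)
    } catch {
        return "custom error payload"
    }
}

let simple = try? handleSimple(shouldFail: true) // nil: the runtime would report the error
let custom = handleCustom(shouldFail: true)      // tailored response
```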
```swift
/// the runtime can ask for further events. Note that AWS will continue to
/// charge you after the response has been returned, but work is still
/// processing in the background.
public func addBackgroundTask(_ body: sending @escaping () async -> ())
```
This doesn't feel necessary to me with the current API proposal, since we pass in an independent writer. A user can just write some bytes to the writer, then call finish, and then do any background work before returning from the `handle` method. They can even set that up in child tasks in the `handle` method to start the background work while handling the actual request.
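A hypothetical sketch of that pattern, with stand-in types (an actor `Recorder` replaces the real writer, and the `DispatchSemaphore` only drives the async demo from script context):

```swift
import Dispatch

// Hypothetical sketch: the handler finishes the response, then keeps doing
// background work in child tasks before `handle` returns, so no extra
// addBackgroundTask API is strictly required.
actor Recorder {
    var events: [String] = []
    func add(_ e: String) { events.append(e) }
}

let recorder = Recorder()

func handle(event: String) async {
    await withTaskGroup(of: Void.self) { group in
        // Background work started while the request is still being handled.
        group.addTask {
            try? await Task.sleep(nanoseconds: 20_000_000) // pretend work
            await recorder.add("background done")
        }
        // Respond: stands in for writer.writeAndFinish(...)
        await recorder.add("response finished")
    } // handle only returns once all child tasks complete
}

var finalEvents: [String] = []
let done = DispatchSemaphore(value: 0)
Task {
    await handle(event: "e")
    finalEvents = await recorder.events + ["handle returned"]
    done.signal()
}
done.wait()
```

Because the child task lives inside the task group, `handle` cannot return before the background work completes, which keeps the invocation scope structured.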
That's a great call. Yes if we keep the API that passes in a LambdaResponseWriter we can put the user in charge!
Agree. This is unstructured concurrency and given that the user controls finish, they know when the response is done.
@weissi this would not be unstructured, as the underlying runtime would inject a task group into the context, and that task group would be used to schedule the child task here. The underlying runtime would then ensure that all tasks have completed before it asks for more work. This approach would be necessary if we used an approach in which the user returns a `LambdaResponse`:

```swift
protocol LambdaStreamingResponse {
    func handle(_ event: ByteBuffer, context: LambdaContext) async throws -> LambdaResponse
}
```

However, this is needed for the `LambdaHandler` API:

```swift
public protocol CodableLambdaHandler: ~Copyable {
    associatedtype Event
    associatedtype Output

    // The only way to schedule background work that can continue after
    // returning the Output here is by having addBackgroundTask on `LambdaContext`.
    func handle(_ event: Event, context: LambdaContext) async throws -> Output
}
```

@FranzBusch given this, do we want to have two different `LambdaContext`s? Or are we fine with keeping `LambdaContext.addBackgroundTask` even in the situation where it isn't really needed? I think I would opt for keeping it (so that we have it for the 99% use-case).
@fabianfett I'm pretty sure that it is unstructured, it just manages to not use a `Task {` to achieve it. It passes the background work into a reference which will then be owned by something further up the stack.
The test is easy. If you write this code:

```swift
do { // creating a useless piece of structure (so we can observe Structured Concurrency)
    thing.addBackgroundWork { print("AAAA") }
} // the structure ends here
print("BBBB")
```

Is there any way in which AAAA could print after BBBB? If so: unstructured.
> The problem with splitting it across two methods is if the post work needs context from the initial work.

but we can pass the context, no?
@weissi what if you want to create a task that starts immediately but is allowed to run longer... I totally get the benefits. I think we need something that is better than `Task {}` here. And I think `addBackgroundTask {}` is significantly better here, as it will guarantee that you get runtime until your background task finishes. We could not guarantee this for `Task {}`. Also, two methods on the interface won't work with the closure API.
@fabianfett Not sure I understand what "a task that starts immediately but is allowed to run longer" is. Could you provide some example code?
An API call that
- does a short computation which is returned to you immediately
- spawns a longer computation, whose result is sent to you via email once it finishes (can take up to 15 min)

Both should start as soon as the API is invoked. We can potentially question the API design here. But this is possible with Lambda and we should allow users to do it, if they opt to.
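One hypothetical way the runtime could support this use case: own a task group per invocation, let the handler produce its short result immediately, and only ask for the next event once the group drains. All names here are assumed sketches, not the proposal's API.

```swift
import Dispatch

// Hypothetical sketch of a runtime-owned task group per invocation.
actor Trace {
    var events: [String] = []
    func add(_ e: String) { events.append(e) }
}

let trace = Trace()

func runtimeInvoke() async -> String {
    var shortResult = ""
    await withTaskGroup(of: Void.self) { group in
        // context.addBackgroundTask would forward into this runtime-owned group.
        group.addTask {
            try? await Task.sleep(nanoseconds: 30_000_000) // stands in for the long computation
            await trace.add("long computation finished (email sent)")
        }
        shortResult = "short result"
        await trace.add("short result produced")
    } // the runtime waits here before fetching the next invocation
    return shortResult
}

var result = ""
var observed: [String] = []
let done = DispatchSemaphore(value: 0) // only drives the async demo from script context
Task {
    result = await runtimeInvoke()
    observed = await trace.events
    done.signal()
}
done.wait()
```

The long task outlives the handler's result being produced, but not the invocation slot, which is what keeps the billing/runtime guarantee discussed above.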
Probably the most consistent approach for @fabianfett's use case is to provide a wrapping CodableResponseWriter (potentially with just a writeAndFinish API) while allowing work to continue past the result being returned to the control plane. It is not as simple as returning the result directly. We could provide both forms but that does add its own complexity to the library overall.
```swift
/// `Logger` to log with.
///
/// - note: The `LogLevel` can be configured using the `LOG_LEVEL` environment variable.
public var logger: Logger { get }
```
Wondering how we feel about this in relation to apple/swift-log#315. Also why do we call out the env variable here? Shouldn't we just pass in the logger to the runtime and that logger determines the log level?
I think your benchmark shows clearly that using TaskLocals is more expensive time-wise than passing around a logger.
Given that Lambdas are paid by the millisecond, I certainly don't want to make using TaskLocals the only approach.
Also, I want this API to land sooner rather than later and not block on the above discussion, which can be seen as an addition.
+1 Around logging and TaskLocals, it shouldn't be the only approach.
Explicitly passing a correctly-configured logger into the Runtime should be the recommended approach IMO rather than being able to partially configure logging through environment variables.
Yep, there'll be a lot of discussion around using a TaskLocal logger and it shouldn't hold this work up.
```swift
public var invokedFunctionARN: String { get }

/// The timestamp that the function times out.
public var deadline: DispatchWallTime { get }
```
Can we use ContinuousClock.Instant here?
Nope this is epoch time.
As per the Lambda runtime API:
> `Lambda-Runtime-Deadline-Ms` – The date that the function times out in Unix time milliseconds. For example, `1542409706888`.
Got you, so we would need a `UTCClock.Instant` here.
Yes. Sadly we don't have that yet. Only way out would be if Lambda built its own Clock. But I don't think we want to do that.
Maybe you should create a `LambdaUTCTimeClock` type that conforms to all the new hotness but implements the clock, with the hope that one day you can deprecate it via `typealias LambdaUTCTimeClock = UTCClock`, with a bit of luck.
@weissi would you build the sleeping on top of dispatch?
> @weissi would you build the sleeping on top of dispatch?

No. I'd just create a custom `Clock` type (because the language lacks `UTCClock` for some reason), which you could for example back with `clock_gettime` and `struct timespec`, hidden as an implementation detail.
How would you implement the required sleeping methods though?
https://developer.apple.com/documentation/swift/clock/sleep(for:tolerance:)
You can just use the normal `Task.sleep` on the non-suspending clock for that. UTC time and the non-suspending clock run at the same speed.
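A hypothetical sketch of that idea: convert the Lambda deadline (Unix epoch milliseconds, per `Lambda-Runtime-Deadline-Ms`) into a remaining duration and sleep on the regular clock. The `sleepUntil` name is assumed; the `DispatchSemaphore` only drives the async demo from script context.

```swift
import Dispatch
import Foundation

// Hypothetical sketch: sleep until an epoch-milliseconds deadline using the
// ordinary Task.sleep, since UTC time and the non-suspending clock advance
// at the same rate while the process runs.
func sleepUntil(deadlineUnixMs deadline: Int64) async throws {
    let nowMs = Int64(Date().timeIntervalSince1970 * 1000)
    let remainingMs = deadline - nowMs
    guard remainingMs > 0 else { return } // deadline already passed
    try await Task.sleep(nanoseconds: UInt64(remainingMs) * 1_000_000)
}

var elapsedMs = 0.0
let done = DispatchSemaphore(value: 0)
Task {
    let deadline = Int64(Date().timeIntervalSince1970 * 1000) + 100
    let start = Date()
    try? await sleepUntil(deadlineUnixMs: deadline)
    elapsedMs = Date().timeIntervalSince(start) * 1000
    done.signal()
}
done.wait()
```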
```swift
enum Lambda {
    /// This returns the default EventLoop that a LambdaRuntime is scheduled on.
    /// It uses `NIOSingletons.posixEventLoopGroup.next()` under the hood.
    public static var defaultEventLoop: any EventLoop { get }
```
As noted, I don't see why we would need to create a separate static var for this?
answered above.
> We also add an enum to store a static function and a property on. We put this on the static `Lambda` because `LambdaRuntime` is generic and thus has bad ergonomics for static properties and functions.

```swift
enum Lambda {
```
Is this namespace really worth it if we remove the EL default, and can we then just make `reportStartupError` a global method?
See above.
```swift
public static var defaultEventLoop: any EventLoop { get }

/// Report a startup error to the Lambda Control Plane API
public static func reportStartupError(_ error: any Error) async
```
What if we make this a `with`-style method so that users can do any setup and we report any error that happens there, instead of them having to do a do/catch?
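A hypothetical sketch of that `with`-style shape, with the control-plane call mocked out (both function names here are assumptions, not the proposal's API):

```swift
// Hypothetical sketch: run the user's setup and report any thrown error
// automatically, instead of requiring a manual do/catch around startup.
var reportedErrors: [String] = []

func reportStartupError(_ error: any Error) {
    // Stands in for the call to the Lambda control plane.
    reportedErrors.append(String(describing: error))
}

func withStartupErrorReporting<T>(_ setup: () throws -> T) throws -> T {
    do {
        return try setup()
    } catch {
        reportStartupError(error)
        throw error
    }
}

struct ConfigError: Error {}

// Failing setup: the error is reported, then rethrown.
let failed = try? withStartupErrorReporting { () -> String in
    throw ConfigError()
}
// Successful setup passes its value straight through.
let runtimeName = try? withStartupErrorReporting { "my-runtime" }
```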
I think we will opt for catching the error directly.
> #### LambdaHandler

```swift
public protocol LambdaHandler {
```
Just out of curiosity, we can't solve this with protocol inheritance, right? We do need the various different protocols and adapters?
We do have protocol inheritance today and I don't like it. The problem here is that there is never a case where the protocol inheritance makes sense from a user point of view. You either want to use pure streaming or you want to use Codable. But there is no point at which you opt to override the streaming API when you implement `LambdaHandler`. Worst of all, injecting custom Encoders and Decoders only works through globals.
> for _each_ different handler protocol and for both `String` and `JSON` formats. This has resulted in a lot of boilerplate code which can very easily be made generic and simplified in v2.
>
> ### New features
We touched on this briefly in a previous discussion but do we want to directly integrate middleware into the library - or at the very least ensure that there is a clear path to allow users to do so themselves?
Let's have the discussion here.
> ### LambdaContext
>
> `LambdaContext` will be largely unchanged, but the `eventLoop` property as well as the `allocator` property (of type `ByteBufferAllocator`) will be removed.
As we discussed earlier, it would be useful to add context here as to why `allocator` is being removed: it is related to an optimisation that was never actually used.
We generally want to reduce the number of NIO types that we expose in APIs. NIO shall eventually become an implementation detail.
If you look at `ByteBufferAllocator`, you can see that it does not optimize allocations today. Instead, the pattern that is used over and over is reusing existing `ByteBuffer`s as much as possible. The new CodableAdapter helps significantly in that regard. Is there a use-case for which you need a `ByteBufferAllocator`?
cc @weissi please correct me if this is wrong.
I think it's fine to remove this. The original idea was that you could use more efficient allocators for ByteBuffers but we never actually used that. Swift itself is still not able to use alternative allocators (except if you globally set malloc/free to a different one) so I don't think this was as useful as we thought it would be (outside of ByteBuffer tests).
```swift
/// If no response or error is written to the `responseWriter` it will
/// report an error to the invoker.
/// - context: The LambdaContext containing the invocation's metadata
mutating func handle(_ event: ByteBuffer, responseWriter: consuming some LambdaResponseWriter, context: LambdaContext) async
```
So a question around a throwing function and non-copyable types.
In the happy case, a user will have to consume `responseWriter` within the body of this function, so some kind of explicit `finish*` call. I am OK with this; although a little more verbose, it is explicit in what is happening.
In the throwing case, I am assuming the writer will still need to be consumed prior to throwing. Is this correct, and if so, what would the runtime do with a thrown error but a now-consumed writer?
```swift
public init(
    handler: consuming sending Handler,
    eventLoop: EventLoop = Lambda.defaultEventLoop,
    logger: Logger = Logger(label: "Lambda")
```
+1 to @0xTim's point. There should be some basic logging by default for discoverability, and I would think it is the expectation of people coming from the rest of the ecosystem. @FranzBusch why don't you want to default the logger?
That said, the v1 implementation was way too noisy IMO, particularly at higher log levels. We should make sure the runtime is a good citizen and conforms to https://www.swift.org/documentation/server/guides/libraries/log-levels.html (i.e. mostly logging at trace, but enough logs at debug to show first-time users that the library is working).
```swift
handler: Handler,
encoder: Encoder,
decoder: Decoder
```
Would it make more sense for handler to be the final argument so it could be a trailing closure if the user wants?
Is there an end-to-end example somewhere of how this looks from main() for the case where custom coders need to be used?
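As a sketch of that idea (hypothetical parameter order; the proposal as quoted puts `handler` first):

```swift
// If `handler` were the final parameter, users could use trailing-closure syntax:
let runtime = LambdaRuntime(
    encoder: JSONEncoder(),
    decoder: JSONDecoder()
) { (event: Input, context: LambdaContext) in
    Greeting(echoedMessage: event.message)
}
```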
```swift
    decoder: Decoder
) where Output: Encodable

/// For handler with a void output -- the user doesn't specify an encoder.
```
Why wouldn't we have the inverse for a void input?
Interesting point, we didn't support this before. Are there Lambdas without input?
I think the answer is "no" (or at least an empty JSON `{}`).
Source: the runtime API and the invoke API.
I asked the team to confirm.
I confirm a payload is required. The minimum payload is `{}`.

> We decided to implement the approach in which a `LambdaResponseWriter` is passed to the function, since the approach in which a `LambdaResponse` is returned can trivially be built on top of it. This is not true vice versa.
>
> We welcome the discussion on this topic and are open to change our minds and API here.
I made this point above: middleware definitely does make sense for Lambda (for example, tracing should just be a middleware that can be used anywhere) and should just be integrated IMO. My only quasi-concern with the passed-in-writer approach is that for it to work with middleware, the middleware protocol has to suppress `Copyable` conformance on one of its associated types (its writer type), something that isn't going to be supported in Swift 6.0.

That said, I do think the argument that this is a lower-level and more flexible API makes sense.
```swift
let runtime = LambdaRuntime { (event: Input, context: LambdaContext) in
    Greeting(echoedMessage: event.message)
}

try await runtime.run()
```
For this, the simplest use case, would it make sense to provide a static main() on LambdaRuntime that just calls its run method so users can use @main if this is all they need to do?
Not sure I can follow here. What would that code look like?
Wouldn't it look like:

```swift
@main
LambdaRuntime { (event: Input, context: LambdaContext) in
    Greeting(echoedMessage: event.message)
}
```
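Note that `@main` attaches to a type declaration, not to an expression, so the spelling above would not compile. A sketch of what a hand-written entry point delegating to the proposed `run()` method could look like (`MyLambda` is a hypothetical name):

```swift
@main
struct MyLambda {
    static func main() async throws {
        // Construct the runtime with the closure handler from the proposal,
        // then hand control to its run loop.
        let runtime = LambdaRuntime { (event: Input, context: LambdaContext) in
            Greeting(echoedMessage: event.message)
        }
        try await runtime.run()
    }
}
```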
|
||
| The current API extensively uses the `EventLoop` family of interfaces from SwiftNIO in many areas. To use these | ||
| interfaces correctly though, it requires developers to exercise great care and be aware of certain details such as never | ||
| running blocking code on the same `EventLoop` the library uses. Developers also need to understand the various transform |
The same applies to async/await; I'd scratch this.
|
|
||
| ```swift | ||
| /// A writer object to write the Lambda response stream into | ||
| public protocol LambdaResponseWriter: ~Copyable { |
I think making this ~Copyable will lead to a load of issues. Can we have a motivation on why we allow non-copyable types to implement this? If we allow non-copyable types here, then anything accepting LambdaResponseWriters needs to support this which sounds like a tall ask.
`LambdaResponseWriter` is not a protocol that users would need to implement. It is marked `~Copyable` as we want to enforce through the compiler that users can't `write()` after they have called `finish()`.

> If we allow non-copyable types here, then anything accepting LambdaResponseWriters needs to support this which sounds like a tall ask.

Would you mind elaborating on this a bit?
Sure, but don't you provide a ~Copyable LambdaResponseWriter to the user?
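To illustrate the compile-time guarantee being discussed, a sketch assuming `finish()` is a `consuming` method on the non-copyable writer (the method signatures here are assumptions, not the proposal's final API):

```swift
func respond(writer: consuming some LambdaResponseWriter) async {
    writer.write(ByteBuffer(string: "hello"))
    writer.finish()        // consumes the writer
    // writer.write(...)   // would not compile: 'writer' used after consume
}
```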
```swift
/// the runtime can ask for further events. Note that AWS will continue to
/// charge you after the response has been returned, but work is still
/// processing in the background.
public func addBackgroundTask(_ body: sending @escaping () async -> ())
```
Agree. This is unstructured concurrency and given that the user controls finish, they know when the response is done.
```swift
/// The base StreamingLambdaHandler protocol
public protocol StreamingLambdaHandler: ~Copyable {
```
I think making this ~Copyable will lead to a load of issues. Can we have a motivation on why we allow non-copyable types to implement this? If we allow non-copyable types here, then anything accepting StreamingLambdaHandler s needs to support this which sounds like a tall ask.
> Can we have a motivation on why we allow non-copyable types to implement this?

It is not necessary to have a non-copyable handler; we just allow it. Tbh, the motivation here is mostly: because we can.

> If we allow non-copyable types here, then anything accepting StreamingLambdaHandlers needs to support this which sounds like a tall ask.

I think the only thing that should accept this protocol is `LambdaRuntime`, and we have support for non-copyable types already implemented in our prototype. What else would consume a `StreamingLambdaHandler`?
> What else would consume a StreamingLambdaHandler?

The user's tests?

> we have support for non copyable already implemented in our prototype.

Sure, but it could cause problems further down the line, no? Like if you figure out that you will need to copy it.
> The user's tests?

I think we would provide a `TestRuntime` that does that for the user.

> Sure, but it could cause problems further down the line, no? Like if you figure out that you will need to copy it

Makes sense. But in this case requirements would have changed so much that we should investigate a v3.
Really not sure about that. I don't think it gives you any benefit but it restricts your code's evolution. Don't see the point :).
```swift
/// The base LambdaHandler protocol
public protocol StreamingLambdaHandler: ~Copyable {
```
same question around allowing ~Copyable types here.
Let's discuss here: #339 (comment). Once we have a solution, we will fix or resolve it here.
```swift
/// - Parameter logger: A logger
public init(
    handler: consuming sending Handler,
    eventLoop: EventLoop = Lambda.defaultEventLoop,
```
Will we still have the ability to run everything on just one thread (the main thread)? I think for many Lambdas you'll never want to fork off a second thread at all. The NIO thread and async/await should just run on the main thread (but as the global default executor, not as `@MainActor`).
```swift
    _ event: ByteBuffer,
    responseWriter: consuming some LambdaResponseWriter,
    context: LambdaContext
) async {
```
Out of curiosity: at which point is the HTTP response started/the head sent? Before the user handler is invoked? Lazily right before the first responseWriter.write(...) call?
This would be lazily, before the first `write()`. If the user calls `writeAndFinish()` just once, we would send the response back in non-streaming mode (if it fits into the 6 MB limit).
What about

```swift
responseWriter.withResponseStream { stream in
    try await stream.write(...)
    try await stream.write(...)
    try await stream.write(...)
    try await stream.write(...)
    try await stream.write(...)
}
```

where the head is sent on `{` and the finish is sent on `}`? That way things can't go wrong. If you throw, that'd mean `reportError`.
That'd kinda answer both of @czechboy0's questions pretty clearly. No implicit, lazy head anymore and no way to forget finish.
@weissi, would you mind elaborating on what the issue is with the `~Copyable` approach? I feel like we are now reinventing closure hell in structured-concurrency land.

Also, did you deliberately skip the `try await` before `responseWriter.withResponseStream`, or is that an oversight?
`~Copyable` doesn't help you if you don't call `finish()`. You can't mark `deinit` as `@available(*, unavailable) deinit {}`. It does prevent double-calling, but it also comes with massive strings attached, namely that the user can't copy it if the type's actually `~Copyable`.
The head has to be sent on the first write because the control plane has different endpoints for a complete failure versus a successful response or a mid-stream error. So we need to wait for at least a partial response to determine which endpoint to send the response to.
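A self-contained sketch of the endpoint choice described here (the case names are illustrative; the Lambda Runtime API's actual `/invocation/{id}/response` and `/invocation/{id}/error` endpoints are internal to the runtime):

```swift
enum ControlPlaneEndpoint {
    case invocationError      // complete failure: nothing was streamed yet
    case invocationResponse   // success, or an error after streaming began
}

// The head is deferred until the first write() because only then does the
// runtime know whether this invocation is a complete failure or a
// (possibly mid-stream-failing) response.
func chooseEndpoint(wroteAnyBytes: Bool, handlerFailed: Bool) -> ControlPlaneEndpoint {
    if handlerFailed && !wroteAnyBytes {
        return .invocationError
    }
    return .invocationResponse
}
```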
```swift
    responseWriter.write(ByteBuffer(string: #"\#(i)\n\r"#))
    try? await Task.sleep(for: .milliseconds(500))
}
responseWriter.finish()
```
What happens if the user doesn't call finish explicitly?
We would call it for you. We should decide whether we send it in `deinit` or once the call completes.
Hello 👋
I am Aryan, a Computer Science student and currently an intern at Apple on the Swift on Server team. As part of my internship, I have been working with @fabianfett to propose a new v2 API for the `swift-aws-lambda-runtime` library.

`swift-aws-lambda-runtime` is an important library for the Swift on Server ecosystem. The initial API was written before async/await was introduced to Swift. When async/await arrived, shims were added to bridge between the underlying SwiftNIO `EventLoop` interfaces and async/await.

However, just like `gRPC-swift` and `postgres-nio`, we now want to shift to solely using async/await constructs instead of `EventLoop` interfaces. For this, large parts of the current API have to be reconsidered. This also provides a good opportunity to add support for new AWS Lambda features such as response streaming.

We have written a document that explains the current limitations of the library and proposes a detailed design for the v2 API.
Please read the proposal and voice your opinions either here or on the forum post. We are looking forward to your feedback!
cc: @sebsto @tachyonics @FranzBusch @Lukasa @tomerd @weissi @adam-fowler @ktoso @0xTim