
Make pulumi.runtime.invoke synchronous. #3019

Merged 2 commits into master on Aug 2, 2019
Conversation

@pgavlin (Member) commented Aug 2, 2019

These changes make the `pulumi.runtime.invoke` function invokable in a
synchronous manner. Because this function still needs to perform
asynchronous work under the covers--namely, awaiting a provider URN and
ID if a provider instance is present in the `InvokeOptions`--this
requires some creativity. This creativity comes in the form of a helper
function, `_sync_await`, that performs a logical yield from the
currently running task, manually runs the event loop until the given
future completes, performs a logical resume back to the currently
executing task, and returns the result of the future.

The code in `_sync_await` is a bit scary, as it relies upon knowledge of
(and functions in) the internals of the `asyncio` package. The necessary
work performed in this function was derived from the implementations of
`task_step` (which pointed out the need to call `_{enter,leave}_task`)
and `BaseEventLoop.run_forever` (which illustrated how the event loop is
pumped). In addition to potential breaking changes to these internals,
the code may not work if a user has provided an alternative implementation
of `EventLoop`. That said, the code is a close enough copy of
`BaseEventLoop.run_forever` that it should be a reasonable solution.

@pgavlin (Member, Author) commented Aug 2, 2019

CI failure was in the engine events perf test. Restarting.

@ellismg (Contributor) left a comment

LGTM.

This is some pretty cool stuff, Pat.

@AstraLuma commented

Because inner event loops are fun! (GTK's loop does the same thing.)

Yeah, the only reason this is ok is because pulumi controls the calling context and therefore what loop is used. If you wanted to support, say, uvloop, I would want to go on a spike and see which loops would be ok with this.

For the record, the way ASGI deals with sync/async context crossing is with multi-threading.
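The thread-based crossing mentioned above can be sketched like this: instead of pumping the caller's loop, synchronous code delegates the coroutine to a dedicated event loop on a helper thread and blocks until it finishes. The names here are illustrative, not any particular framework's API.

```python
import asyncio
import threading


def run_sync(coro):
    """Run `coro` to completion from synchronous code.

    Sketch of the multi-threaded approach: a helper thread owns its own
    event loop, so the calling thread's loop (if any) is never touched.
    """
    result = {}

    def runner():
        loop = asyncio.new_event_loop()
        try:
            result["value"] = loop.run_until_complete(coro)
        except BaseException as exc:  # propagate to the caller
            result["error"] = exc
        finally:
            loop.close()

    t = threading.Thread(target=runner)
    t.start()
    t.join()  # block the synchronous caller until the coroutine is done
    if "error" in result:
        raise result["error"]
    return result["value"]
```

The trade-off versus `_sync_await` is isolation for overhead: nothing depends on loop internals, but every crossing pays for a thread and a second loop.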

@pgavlin pgavlin merged commit 63eb7ab into master Aug 2, 2019
@pulumi-bot pulumi-bot deleted the pgavlin/pythonInvoke branch August 2, 2019 21:20
@AstraLuma AstraLuma mentioned this pull request Aug 6, 2019
3 participants