refactor: Replace async- and job abstractions #356
Merged
Conversation
- Implement retries and fail conditions for jobs.
- Remove all job handling from 'utils.lua'.
- Vastly simplify the implementation for the job function provided from 'utils.lua'.
- Remove 'ensure_output()'.
Simplifies a lot of code that previously relied on events.
This was a mess of booleans acting as mutexes and callbacks passed along through state vars.
Enables easier handling of complex fail conditions for groups of jobs that rely on each other.
benelan added a commit to benelan/dotfiles that referenced this pull request on Jul 9, 2023:
There is an intermittent issue where the shell freezes after staging a file in diffview and I have to kill the tmux window. I think it started after this PR: sindrets/diffview.nvim#356
A few different things have bugged me about the async abstraction in Plenary and
I've decided that it's sufficiently beneficial for us to roll our own.
The new async lib deviates from the behavior / design of Plenary's
implementation in a few different ways:
Plenary's async lib conditionally uses implicit awaits, meaning that, in certain circumstances, calling an async function from an async context will cause it to run synchronously. More specifically, `async.wrap`-ed functions are blocking (within the parent coroutine) when called without a callback, while `async.void` functions are always non-blocking but can't be awaited. In our implementation all async functions are non-blocking by default, and can be made synchronous within the current coroutine with `async.await()`.
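To illustrate the model, here's a conceptual sketch of the semantics (not the plugin's actual code, and assuming `await` is called from within a coroutine, as async code would be): an async call returns a handle immediately, and the caller decides if and when to suspend on it.

```lua
-- Conceptual sketch only, not the plugin's actual implementation:
-- non-blocking by default, with an explicit await that suspends the
-- *current* coroutine.
local function async_call(fn, ...)
  local handle = { done = false, waiting = {} }
  local args = { ... }
  coroutine.wrap(function()
    handle.result = fn(unpack(args)) -- the caller is never blocked here
    handle.done = true
    for _, co in ipairs(handle.waiting) do
      coroutine.resume(co, handle.result)
    end
  end)()
  return handle -- returned right away; nothing has been awaited yet
end

local function await(handle)
  if handle.done then
    return handle.result
  end
  -- Suspend only the coroutine that asked to wait.
  table.insert(handle.waiting, coroutine.running())
  return coroutine.yield()
end
```

The point is that blocking is always an explicit choice at the call site, rather than depending on whether a callback was passed.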
When an error is raised in Plenary's async code, the stack trace is missing a lot of context: important context from parent coroutines + the main thread is lost. Our implementation propagates errors up through the yielding parent coroutines as well as the main thread, such that we get a far more detailed stack trace.
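Mechanically, it boils down to something like this (simplified sketch, not the actual code): when a child coroutine fails, capture its traceback and re-raise the error in the parent, so every level adds its own frames on the way up.

```lua
-- Simplified sketch: resume a child coroutine and, on failure, re-raise
-- with the child's traceback attached so the parent's context isn't lost.
local function resume_with_context(co, ...)
  local ok, res = coroutine.resume(co, ...)
  if not ok then
    -- `debug.traceback(co, res)` prefixes the error message with the
    -- child's stack; raising it here lets each parent (and eventually
    -- the main thread) contribute its own frames to the final trace.
    error(debug.traceback(co, res), 0)
  end
  return res -- on success: the first yielded / returned value
end
```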
`async.scheduler()` is frequently used in async code in order to yield until the Neovim API is available. Plenary's scheduler always calls `vim.schedule()` and yields until the callback is invoked, even when it's unnecessary. Ours only does this when the API is actually restricted, e.g. by locks.
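As a rough sketch of that idea (assuming `vim.in_fast_event()` is the relevant check; the real implementation may look at more conditions):

```lua
-- Rough sketch, not the plugin's actual scheduler.
local function scheduler()
  if not vim.in_fast_event() then
    -- The API is already usable from this context: no need to defer a
    -- whole main-loop iteration the way Plenary's scheduler does.
    return
  end
  -- Otherwise, suspend the current coroutine and resume it on the main
  -- loop, where the full API is available again.
  local co = coroutine.running()
  vim.schedule(function()
    coroutine.resume(co)
  end)
  coroutine.yield()
end
```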
Now the only things left that we were using from Plenary were the jobs and the
logger. These are both things that I have long considered reimplementing for a
few different reasons. Let's start with the job abstraction.
Standard output in Plenary's jobs is streamed and processed line-by-line. This
certainly can be useful. In this plugin, however, 99% of the time we just want to consume all the output from a job, and we don't benefit from streaming.
The new job implementation has streaming disabled by default (we still use it
for file-history). Writing the job class alongside a logger also made it easy to
integrate better logging, which is incredibly useful for debugging.
There's now a way to configure custom fail conditions for different jobs, as
well as a mechanism for automatically retrying failed jobs. A multi-job
abstraction was included that enables simpler handling of complex fail
conditions for groups of jobs that rely on each other.
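For illustration, a job configured this way might look roughly like the following; the field names here are made up for the example and are not the plugin's actual API:

```lua
-- Made-up field names, for illustration only.
local spec = {
  command = "git",
  args = { "rev-parse", "--show-toplevel" },
  -- Streaming is off by default, so stdout is collected and handed to
  -- the caller as a whole once the job exits.
  retry = 2, -- re-run up to 2 more times before reporting failure
  -- The job only counts as failed when this returns true, so a caller
  -- can e.g. tolerate a non-zero exit code as long as there's usable
  -- output to work with.
  fail_cond = function(job)
    return job.code ~= 0 and #job.stdout == 0
  end,
}
-- A multi-job wrapper can then fail or retry the whole group based on
-- the combined conditions, instead of juggling booleans and callbacks.
```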
On to the logger. Plenary's file logger (which was what we were using) performs
a lot of I/O operations: every time one of the log functions is called, the logger (1) opens the file, (2) appends the data, and (3) closes it again. This makes the log functions expensive operations that cause a lot of I/O overhead. This is easily solved by buffering messages and only writing to the file every few seconds, when necessary. So that's what we do now.
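Roughly the shape of it (simplified sketch with an arbitrary example path and flush delay, not the actual logger code):

```lua
-- Simplified sketch of the buffered approach: queue messages in memory
-- and write them to disk in batches instead of once per message.
local uv = vim.loop
local log_path = vim.fn.stdpath("cache") .. "/example.log" -- example path
local buffer = {}
local flush_scheduled = false

local function flush()
  if #buffer == 0 then return end
  -- One open / write / close per batch instead of per log call.
  local fd = assert(uv.fs_open(log_path, "a", 438)) -- 438 == 0666
  uv.fs_write(fd, table.concat(buffer, "\n") .. "\n", -1)
  uv.fs_close(fd)
  buffer = {}
end

local function log(msg)
  buffer[#buffer + 1] = msg
  if not flush_scheduled then
    flush_scheduled = true
    vim.defer_fn(function()
      flush_scheduled = false
      flush()
    end, 3000) -- flush a few seconds later, with anything queued meanwhile
  end
end
```

The obvious trade-off is that the most recent messages only hit the disk on the next flush, so an explicit flush on exit is a natural complement.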
TL;DR plenary.nvim is no longer a dependency.