[RFC] Add support for client-side RPC #872

Merged
merged 13 commits into neovim:master Jun 24, 2014

Conversation

10 participants
@tarruda
Member

tarruda commented Jun 20, 2014

This PR finishes implementing the necessary infrastructure (the channel_send_call function) that will support the ex commands responsible for running code in other scripting engines (python, pyfile, ruby, rubyfile, etc.). These commands will be compatible with their Vim counterparts, with the following differences:

  • Code will run in another process
  • The channel_send_call function blocks until the client responds
  • There's a hardcoded 3 second timeout before nvim assumes the client is stuck and an error is returned (python << EOF while True: pass EOF would only block for at most 3 seconds, for example).
  • The call stack depth is limited to 20, which should be more than enough for any use case. If a plugin ever needs this level of nesting for RPC calls, it should probably be rewritten. (See the sketch after this list.)
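
A minimal sketch (not the PR's actual code; every name except channel_send_call is an assumption) of how the blocking call, the 3-second timeout and the depth limit fit together:

#include <stdbool.h>
#include <stddef.h>

#define MAX_CALL_DEPTH 20     /* limit on nested nvim <-> client calls */
#define CALL_TIMEOUT_MS 3000  /* hardcoded 3-second timeout */

typedef struct {
  size_t call_depth;          /* calls to this client currently in flight */
} Channel;

/* Hypothetical helpers standing in for the real serialization/event loop. */
static void send_request(Channel *chan) { (void)chan; }
static bool wait_for_response(Channel *chan, int timeout_ms) {
  (void)chan; (void)timeout_ms;
  return true;                /* pretend the client answered in time */
}

bool channel_send_call_sketch(Channel *chan)
{
  if (chan->call_depth >= MAX_CALL_DEPTH) {
    return false;             /* plugin nests RPC calls too deeply */
  }
  chan->call_depth++;
  send_request(chan);
  /* Blocks here; nested requests from the same client are still serviced. */
  bool ok = wait_for_response(chan, CALL_TIMEOUT_MS);
  chan->call_depth--;
  return ok;                  /* false: client assumed stuck, error reported */
}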
@aktau

Member

aktau commented Jun 20, 2014

Very cool!

The call stack depth is limited to 20, which should be more than enough for any use case. If a plugin ever needs this level of nesting for RPC calls, it should probably be rewritten.

Without looking at the implementation (I don't have a lot of time apart from my other neovim activity), is this for cases where the scripting host calls back into vim, vim calls back into the scripting host, et cetera?

Like: vim -> python -> vim -> python -> vim

Is it always for the same interpreter or can this happen too?

vim -> python -> vim -> ruby -> vim -> ...

@tarruda

Member

tarruda commented Jun 20, 2014

Without looking at the implementation (I don't have a lot of time apart from my other neovim activity), is this for cases where the scripting host calls back into vim, vim calls back into the scripting host, et cetera?

Yes, because this is currently possible in vim (both the scripting engine and vim share the call stack). I added this more as a compatibility feature, because plugins should never require this. In fact, events should be the main method of communication because they never block, though sometimes it might be useful to wait for a plugin response before proceeding (ref #159/#622 by @SirVer).

@tarruda

Member

tarruda commented Jun 20, 2014

Is it always for the same interpreter or can this happen too?

vim -> python -> vim -> ruby -> vim -> ...

This could happen, yes.

@@ -12535,6 +12539,47 @@ do_searchpair (
}

// "send_event()" function

@philix

philix Jun 20, 2014

Member

Comment is wrong

This comment has been minimized.

@tarruda
@equalsraf

Contributor

equalsraf commented Jun 20, 2014

Isn't closing the channel too aggressive? Assuming we run a single python interpreter process for all python plugins, one plugin could interfere with all the others by calling while True:?

@tarruda

Member

tarruda commented Jun 20, 2014

If I understand this right, this is for the sync API, and thus mostly seen as a vanilla vim compatibility thing. But then why is it registered as a VimL function? To be able to generalize over jobs, and not just :python/:ruby/...?

It might be useful to expose the ability to perform blocking calls on arbitrary plugin processes. For example, a plugin might register a command that can be called directly by the user (such a command would probably need to wait until the plugin finished processing).

Isn't closing the channel too aggressive? Assuming we run a single python interpreter process for all python plugins, one plugin could interfere with all the others by calling while True:?

Yes, it's just the simplest way I found to deal with the issue at the time. A more robust approach would be:

  • Pass -1 as the timeout to event_poll
  • Check if the user interrupted (ctrl+c)
  • Forward the interrupt request to the plugin host, which would try to deal with it in some way, possibly killing a thread running the blocking code.
  • On the next event_poll call, pass a timeout
  • If the plugin host is still not responding, kill the channel (see the sketch below).
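
A rough sketch of that flow; event_poll and the channel helpers below are placeholders with assumed signatures, not the real nvim functions:

#include <stdbool.h>

typedef struct Channel Channel;

/* Placeholder stand-ins for the real event loop and channel functions. */
static bool event_poll(int timeout_ms) { (void)timeout_ms; return true; }
static bool got_interrupt(void) { return false; }            /* did the user hit ctrl+c? */
static void forward_interrupt(Channel *chan) { (void)chan; }
static bool response_received(Channel *chan) { (void)chan; return true; }
static void close_channel(Channel *chan) { (void)chan; }

static bool wait_for_response_robust(Channel *chan)
{
  for (;;) {
    event_poll(-1);                    /* wait with no timeout */
    if (response_received(chan)) {
      return true;
    }
    if (got_interrupt()) {             /* user pressed ctrl+c */
      forward_interrupt(chan);         /* let the host try to abort the work */
      /* next poll gets a timeout; if the host is still silent, kill the channel */
      if (!event_poll(3000) || !response_received(chan)) {
        close_channel(chan);
        return false;
      }
      return true;
    }
  }
}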
@equalsraf

Contributor

equalsraf commented Jun 20, 2014

I was thinking more along the lines of keeping it simpler:

  • Enforce a timeout and return an error to the caller (like you are doing right now)
  • Basically never kill the channel because a send_call() fails
  • Expose an event/function through the API so the process can know Neovim's timeout policy
  • If the process knows Neovim's timeout it can clean up after itself (without Neovim having to do anything)
  • If the process answers too late, I assume the message would just be dropped

Basically I don't see the need to go around killing connections when the caller or the running job should be able to handle it themselves.

@@ -92,7 +98,7 @@ bool wstream_write(WStream *wstream, WBuffer *buffer)
assert(!wstream->freed);

if (wstream->curmem > wstream->maxmem) {
return false;
goto err;

@oni-link

oni-link Jun 20, 2014

Contributor

Just saw broadcast_event (again). It reuses the same buffer to send events for different channels. When one write fails, the remaining writes do not have a valid buffer. Would it work to set the refcount (= number of uses) before the first call to wstream_write and only delete the buffer if refcount == 1 (so no refcount++ in wstream_write)?

@aktau

aktau Jun 20, 2014

Member

That seems to be the most clear-cut solution at first glance indeed.

Another alternative is to increment the refcount immediately but schedule the actual write request for the next event loop and then decrease the refcount as normal on error. But I find this rather ugly, roundabout, and inefficient.

@tarruda

tarruda Jun 24, 2014

Member

Just saw broadcast_event (again). It reuses the same buffer to send events for different channels. When one write fails, the remaining writes do not have a valid buffer. Would it work to set the refcount (= number of uses) before the first call to wstream_write and only delete the buffer if refcount == 1 (so no refcount++ in wstream_write)?

👍

@tarruda

Member

tarruda commented Jun 20, 2014

Basically I don't see the need to go around killing connections, when the caller, or the running job should be able to handle it by themselves.

👍 I will implement it after some recursion problems are fixed.

snprintf(buf,
sizeof(buf),
"Channel %" PRIu64 " was closed due to lack of activity while "
"processing a RPC request",

@justinmk

justinmk Jun 21, 2014

Member

wording suggestion: "lack of activity" => "inactivity" (also line 210)

This comment has been minimized.

@tarruda
} else {
broadcast_event(type, data);
broadcast_event(name, arg);

@justinmk

justinmk Jun 21, 2014

Member

If id is negative, this turns it into 0. It might be useful to have a sanity check if id < 0.
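
A minimal sketch of the suggested sanity check (the name and surrounding error handling are illustrative only); a caller would report an error instead of broadcasting when the check fails:

#include <stdbool.h>
#include <stdint.h>

/* Reject a negative id up front instead of letting it silently become 0. */
static bool check_channel_id(int64_t id)
{
  return id >= 0;
}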

@justinmk

Member

justinmk commented Jun 21, 2014

Will :pydo, :rubydo, :luado also be supported? We could wait and see if anyone even misses those commands. I've always thought they were neat, but never actually used them.

@justinmk

Member

justinmk commented Jun 21, 2014

Looks great, 👍 to the name changes and refactoring.

Added differences to the wiki: https://github.com/neovim/neovim/wiki/Differences-from-Vim

frame->errored = true;
(void)kv_pop(channel->call_stack);
frame->result = STRING_OBJ(msg);
}

@oni-link

oni-link Jun 21, 2014

Contributor

Here we start modifying elements from the beginning of the vector, but at the same time removing elements from the end. The elements removed from the end are never modified, because the loop condition re-reads the shrinking vector size.

It would be nice to have a kv_clear that sets the size of the vector to zero, so no kv_pop would be needed here.

@aktau

aktau Jun 21, 2014

Member

Indeed @oni-link, this is a dubious practice. There are several ways to rewrite it; a kv_clear could work and would perhaps be more efficient. My first mental rewrite was:

while (kv_size(channel->call_stack)) {
  ChannelCallFrame *frame = kv_pop(channel->call_stack);
  frame->errored = true;
  frame->result = STRING_OBJ(msg);
}

I wasn't sure why the kv_pop was ordered between the frame-> assignments.
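
For comparison, a rough sketch of the kv_clear variant @oni-link suggested; kv_clear is hypothetical here (the name of the size field depends on the kvec flavour in use), and the loop mirrors the fragment above:

/* Hypothetical macro: drop all elements without freeing the backing array. */
#define kv_clear(v) ((v).size = 0)

for (size_t i = 0; i < kv_size(channel->call_stack); i++) {
  ChannelCallFrame *frame = kv_A(channel->call_stack, i);
  frame->errored = true;
  frame->result = STRING_OBJ(msg);
}
kv_clear(channel->call_stack);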

This comment has been minimized.

@tarruda
@Shougo

Contributor

Shougo commented Jun 22, 2014

This PR finishes implementing the necessary infrastructure (the channel_send_call function) that will support the ex commands responsible for running code in other scripting engines (python, pyfile, ruby, rubyfile, etc.). These commands will be compatible with their Vim counterparts, with the following differences:

Will :pydo, :rubydo, :luado also be supported? We could wait and see if anyone even misses those commands. I've always thought they were neat, but never actually used them.

It is cool!

I want to use neocomplete in neovim.

@tarruda

Member

tarruda commented Jun 23, 2014

It is cool!

I want to use neocomplete in neovim.

According to @ZyX-I, a vim_bindeval API function must be implemented before we can use lua plugins with Neovim (lua's eval == python's bindeval). Three new API types must be added (with proper methods for working with the remote objects): VimDictionary, VimList and VimFunction. These would be implemented similarly to Buffer, Window and Tabpage.

My goal is to focus on getting python plugins (developed for vim 7.3 and lower) to work on Neovim before looking into implementing vim_bindeval, but anyone is free to give it a shot. @justinmk, perhaps we should put this as a 'ready' task? Though I'm not sure if it should be considered 'entry-level'.
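
A rough sketch of what those additions could look like, assuming the new types are plain integer handles like Buffer, Window and Tabpage (the accessor signature is a guess, not a real API function):

#include <stdint.h>

/* VimL dictionaries, lists and functions living inside nvim, referenced
 * by handle from the plugin process. */
typedef uint64_t VimDictionary;
typedef uint64_t VimList;
typedef uint64_t VimFunction;

/* Example of the kind of method such a type would need (hypothetical): */
/* Object vim_dictionary_get(VimDictionary dict, String key, Error *err); */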

@justinmk added the api label Jun 23, 2014

@Shougo

Contributor

Shougo commented Jun 23, 2014

Thank you. I get it.

I wait.

@tarruda

Member

tarruda commented Jun 24, 2014

Almost ready; I will probably merge this tomorrow after applying more fixes proposed by reviewers and adding more tests.

@ZyX-I

Contributor

ZyX-I commented Jun 24, 2014

There's a hardcoded 3 second timeout before nvim assumes the client is stuck and an error is returned (python << EOF while True: pass EOF would only block for at most 3 seconds, for example).

Any hardcoded timeout is not a good thing. Note that

  1. This channel infrastructure is going to be used by VimL.
  2. I have at least one VimL plugin that may require more than 3 seconds to complete because (depending on what the user requested) it may want to receive the full subversion log from a remote server accessed over the Internet (it may also require more than 3 seconds to format the full log with any DVCS because VimL is too slow, but this problem may be fixed by using luajit).

I see that the following was not implemented. Is it true?

Yes, it's just the simplest way I found to deal with the issue at the time. A more robust approach would be:

@mikaelj


mikaelj commented Jun 24, 2014

Could it be solved by nvim asking the process if it's still running? A watchdog of sorts, that queries internal state of the language interface to heuristically deduce whether the plug-in is still running.

(also, I too have plug-ins that regularly take >3s to return a result)

@aktau

Member

aktau commented Jun 24, 2014

I have at least one VimL plugin that may require more than 3 seconds to complete because (depending on what the user requested) ...

It sounds like a configurable timeout (with a good default value) is a good idea. However, for tasks taking so long, they would lock up the entire process. If you know something can take that long, perhaps it's best to nudge people towards using jobs for that?

@tarruda

Member

tarruda commented Jun 24, 2014

I have at least one VimL plugin that may require more than 3 seconds to complete because (depending on what the user requested) it may want to receive the full subversion log from a remote server accessed over the Internet (it may also require more than 3 seconds to format the full log with any DVCS because VimL is too slow, but this problem may be fixed by using luajit).

That will only be a problem if you wait for the download to finish before returning from the RPC call. Probably what you want is to implement a :StartDownload command that would return immediately and periodically report progress to the user (maybe through a temporary buffer). You don't even need to use send_call (which requires a return value from the plugin) for this; send_event should be enough to kick off the job.

If, for some reason, you absolutely must perform the work in a blocking command, you can always use threads to send notifications even if the language API is fully blocking (provided that the client library supports calls from multiple threads, like the python client).

I see that the following was not implemented. Is it true?

No, I haven't implemented it yet; I'm planning to follow @equalsraf's suggestion:

I was thinking more along the lines of keeping it simpler:

  • Enforce a timeout and return an error to the caller (like you are doing right now)
  • Basically never kill the channel because a send_call() fails
  • Expose an event/function through the API so the process can know Neovim's timeout policy
  • If the process knows Neovim's timeout it can clean up after itself (without Neovim having to do anything)
  • If the process answers too late, I assume the message would just be dropped

Basically I don't see the need to go around killing connections when the caller or the running job should be able to handle it themselves.

There's no need to kill connections; the call can simply fail if it takes too long to return (though the channel will eventually be killed if the process is stuck, causing the OS buffer to fill up with pending writes from nvim).

Could it be solved by nvim asking the process if it's still running? A watchdog of sorts, that queries internal state of the language interface to heuristically deduce whether the plug-in is still running.

That's the reason the timeout is there, but it is the process that should send the notifications (as opposed to waiting for nvim to ask for status).

It sounds like a configurable timeout (with a good default value) is a good idea. However, for tasks taking so long, they would lock up the entire process. If you know something can take that long, perhaps it's best to nudge people towards using jobs for that?

I'm still in doubt whether this timeout should be configurable, because it might encourage the bad practice of performing long-running tasks while blocking the user interface. If the configuration option is there, plugin authors might simply take the "shortest path", which is to increase the timeout during their call, instead of properly keeping the UI responsive.

@equalsraf

Contributor

equalsraf commented Jun 24, 2014

Any hardcoded timeout is not a good thing.

True. But since this call is synchronous, there is probably no way around it. Of course we can always keep this timeout disabled by default - at least until we check further how this impacts plugins.

Could it be solved by nvim asking the process if it's still running? A watchdog of sorts, that queries internal state of the language interface to heuristically deduce whether the plug-in is still running.

Or the process could notify Neovim to wait a bit more if it is getting close to the timeout (provided that it knows the timeout). But before we go down that road I think we can hold off on this a bit.

@tarruda just for clarification, is the goal of this timeout feature to police the behaviour of the external process as a whole, or of that one specific send_call()?

If the goal is to police the behaviour of the external process as a whole, then I would drop this timeout feature and consider it on a wider PR (e.g. process keep alive API).

If the goal is to enforce good behaviour for a specific send_call() (i.e. a specific plugin) then we can treat this as a feature to keep legacy plugins in check (and might even disable it by default). Plugins that are aware of Neovim can always disable/change the timeout:

    set plugin_timeout=0
    python << EOF while True: pass EOF
@equalsraf

Contributor

equalsraf commented Jun 24, 2014

This is a tricky subject because of compatibility with older plugin code.

I'm still in doubt if this timeout should be configurable because it might encourage the bad practice of performing long-running tasks while blocking the user interface. Since the configuration option is there, plugin authors might simply take the "shortest path" which is to increase the timeout during their call, instead of properly keeping the UI responsive

For older plugins I'm afraid this will happen anyway.

Assuming the feature is not configurable, it all comes down to how you want the user experience to go, i.e.

  1. you can keep it off by default and treat it as a plugin debugging feature (it would be an awesome option for plugin developers, but no more than that).
  2. it is an early warning system for poorly written code - and then it should warn users that a certain plugin is misbehaving and that they should report it upstream for fixing.

Finally, we can always keep this off for now and turn it into (2.) at a later time, when we have a more established plugin userbase.

@aktau

Member

aktau commented Jun 24, 2014

it is an early warning system for poorly written code - and then it should warn users that a certain plugin is misbehaving and that they should report it upstream for fixing

Hadn't thought of that yet. It's basically what Sublime Text does. It tells the user if a plugin is taking too long.

@tarruda

Member

tarruda commented Jun 24, 2014

@tarruda just for clarification, is the goal of this timeout feature to police the behaviour of the external process as a whole, or of that one specific send_call()?

If the goal is to police the behaviour of the external process as a whole, then I would drop this timeout feature and consider it on a wider PR (e.g. process keep alive API).

If the goal is to enforce good behaviour for a specific send_call() (i.e. a specific plugin) then we can treat this as a feature to keep legacy plugins in check (and might even disable it by default). Plugins that are aware of Neovim can always disable/change the timeout.

This is just for send_call, since it's a feature created mainly to support the legacy external language interface (though it may be useful to have anyway, since it can be used as a synchronization mechanism across plugins running in different processes).

The goal is mainly to encourage plugins to become aware of neovim through has('neovim'), and to provide non-blocking implementations for commands that would normally block the UI for too long. It's not a security/behavior check mechanism; the python << EOF while True: pass EOF example was just to explain what happens when the plugin takes too long to respond.

it is an early warning system for poorly written code - and then it should warn users that a certain plugin is misbehaving and that they should report it upstream for fixing

Finally we can always keep this off for now and turn into (2.) at a later time when we have a more established plugin userbase going around.

You are right, I will disable the timeout for now and handle it in a later PR, once we have had more time to discuss what the best approach is.

If anyone is interested in testing this feature before I merge it, they can use this branch of the python client. Here are some usage examples.

@mikaelj


mikaelj commented Jun 24, 2014

Indeed, it's ctrlp blocking for >3s while reading a huge list of files. Still, this breaks backwards compatibility with Vim plugins that might not want to adapt to Neovim, since adapting would break Vim compatibility. I think such unwillingness should be taken into account.

-- Mikael

@tarruda

Member

tarruda commented Jun 24, 2014

Will :pydo, :rubydo, :luado also be supported? We could wait and see if anyone even misses those commands. I've always thought they were neat, but never actually used them.

On the nvim side, there will be very little cost to implementing these commands, mainly because the dispatching code will be shared by all language hosts.

@tarruda

Member

tarruda commented Jun 24, 2014

Still, this breaks backwards compatibility with Vim plugins that might not want to adapt to Neovim, since adapting would break Vim compatibility. I think such unwillingness should be taken into account.

👍

tarruda added some commits Jun 20, 2014

event: Bail out of event_poll when any event is processed
The loop condition was set to only exit when user input is processed, but we
must exit on any event to properly notify `event_poll` callers
event: Decouple user input checks from `event_poll`
This was done to generalize the usage of `event_poll`, which will now return
`true` only if an event has been processed/deferred before the timeout (if not -1).

To do that, the `input_ready` calls have been extracted to the input.c
module (the `event_poll` call has been surrounded by `input_ready` calls,
resulting in the same behavior).

The `input_start`/`input_stop` calls still present in `event_poll` are
temporary: When the API becomes the only way to read user input, it will no
longer be necessary to start/stop the input stream.
wstream: Refactor buffer memory management
- Extract code to release WBuffer instances into `release_wbuffer`
- Fix memory leak when wstream_write returns false
job: Add a `maxmem` parameter to job_start
The value is forwarded to the job's own WStream instance
channel/msgpack_rpc: Refactor msgpack_rpc_notification/serialize_event
- Generalize some argument names (event type -> event name, event data -> event arg)
- Rename serialize_event to serialize_message
- Rename msgpack_rpc_notification to msgpack_rpc_message
- Extract the message type out of msgpack_rpc_message
- Add 'id' parameter to msgpack_rpc_message/serialize_message to create messages
  that are not notifications
channel: Implement the 'channel_send_call' function
This function is used to send RPC calls to clients. In contrast to
`channel_send_event`, this function will block until the client sends a
response (but it will continue processing requests from that client).

The RPC call stack has a maximum depth of 20.
channel/msgpack_rpc: Refactor to better split functions across modules
Move validation/conversion functions to msgpack_rpc_helpers, to separate those
from the functions that are used from the channel module
channel/msgpack_rpc: Refactor API dispatching
This is how API dispatching worked before this commit:

- The generated `msgpack_rpc_dispatch` function receives the `msgpack_packer`
  argument.
- The response is incrementally built while validating/calling the API.
- Return values/errors are also packed into the `msgpack_packer` while the
  final response is being calculated.

Now the `msgpack_packer` argument is no longer provided, and the
`msgpack_rpc_dispatch` function returns `Object`/`Error` values to
`msgpack_rpc_call`, which will use those values to build the response in a
single pass.

This was done because the new `channel_send_call` function created the
possibility of having recursive API invocations, and this wasn't possible when
sharing a single `msgpack_sbuffer` across call frames (it was shared implicitly
through the `msgpack_packer` instance).

Since we only start to build the response when the necessary information has
been computed, it's now safe to share a single `msgpack_sbuffer` instance
across all channels and API invocations.

Some other changes also had to be performed:

- Handling of the metadata discovery was moved to `msgpack_rpc_call`
- Expose more types as subtypes of `Object`; this was required to forward the
  return value from `msgpack_rpc_dispatch` to `msgpack_rpc_call`
- Added more helper macros for casting API types to `Object`
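
A condensed sketch of the dispatching change this commit message describes; the signatures are simplified guesses based on the description, not the generated code:

/* Before: msgpack_rpc_dispatch packed results straight into a shared
 * msgpack_packer. After: it returns plain values and the caller builds the
 * response in a single pass. */
Object msgpack_rpc_dispatch(uint64_t channel_id, msgpack_object *req, Error *error);

void msgpack_rpc_call(uint64_t channel_id, msgpack_object *req, msgpack_packer *res)
{
  Error error = ERROR_INIT;  /* assumed initializer */
  Object result = msgpack_rpc_dispatch(channel_id, req, &error);
  /* Only now is the response serialized, so recursive API invocations never
   * touch a partially built msgpack_sbuffer. */
  if (error.set) {
    /* pack an error response into res */
  } else {
    /* pack `result` into res */
  }
}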

@tarruda merged commit 296da85 into neovim:master Jun 24, 2014

1 check passed

continuous-integration/travis-ci The Travis CI build passed

tarruda added a commit that referenced this pull request Jun 24, 2014

return false;
}

if (kv_size(channel->call_stack) > 20) {

@oni-link

oni-link Jun 24, 2014

Contributor

This block only causes a memory leak, because *result is later overwritten.

@mikaelj

mikaelj Jun 25, 2014

Maybe introduce a constant describing what the number "20" means so it's easier to change it later? Magic numbers are bad.
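
For example (the constant name is just a suggestion, not what was eventually committed):

#define RPC_CALL_STACK_MAX_DEPTH 20  /* max nesting of nvim <-> client calls */

if (kv_size(channel->call_stack) > RPC_CALL_STACK_MAX_DEPTH) {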

@oni-link


oni-link commented on src/nvim/os/wstream.c in 11916b6 Jun 25, 2014

That does not fix the "broadcast" problem: while broadcasting, if writing to one channel fails, the buffer is deleted (if there can be pending write_cb calls, assume the first write fails) and the remaining broadcast writes have to use the freed buffer.

Why not initialize refcount to 1 in wstream_new_buffer and remove the refcount++ here? In broadcast_event, set the refcount to the number of channels in subscribed. Or would that change conflict with the way WBuffer is used at the moment?

Owner

tarruda replied Jun 26, 2014

👍 I did not catch that, thanks. As you suggested, I will modify the wstream_new_buffer constructor to receive the number of references (kv_size(subscribed)).
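
A rough sketch of that change; only wstream_new_buffer and kv_size(subscribed) come from the comment above, while the callback typedef, variable names and channel field are assumptions:

/* The constructor now takes the number of expected writers up front. */
WBuffer *wstream_new_buffer(char *data, size_t size, size_t refcount, buffer_free_fn free_cb);

/* broadcast_event: one buffer shared by every subscribed channel. */
WBuffer *buffer = wstream_new_buffer(msg.data, msg.size, kv_size(subscribed), free);
for (size_t i = 0; i < kv_size(subscribed); i++) {
  Channel *channel = kv_A(subscribed, i);
  wstream_write(channel->wstream, buffer);  /* field name is a guess */
}
/* Each completed (or failed) write drops one reference; the buffer is freed
 * only when the count reaches zero, so one failed write can no longer free it
 * while other channels still need it. */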

@oni-link


oni-link commented on src/nvim/os/wstream.c in 30fc6a4 Jun 25, 2014

Can this behaviour be documented in the function description?

Owner

tarruda replied Jun 26, 2014

👍

@oni-link


oni-link commented on src/nvim/os/wstream.c in c722e22 Jun 25, 2014

Just checking: if uv_write returns non-zero, was write_cb called (release_wbuffer could be called twice)?

Owner

tarruda replied Jun 26, 2014

Not sure about this. Perhaps @saghul knows the answer.


saghul replied Jun 26, 2014

Hi there! If uv_write returns != 0 then it's guaranteed that the callback will not be called.

Owner

tarruda replied Jun 26, 2014

thanks saghul
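
Based on that guarantee, a minimal sketch of the error path (WBuffer and release_wbuffer here are simplified stand-ins for the nvim versions, not the actual code):

#include <stdlib.h>
#include <uv.h>

typedef struct {
  char *data;
  size_t size;
  size_t refcount;
} WBuffer;                                     /* simplified */

static void release_wbuffer(WBuffer *buffer)
{
  if (--buffer->refcount == 0) {
    free(buffer->data);
    free(buffer);
  }
}

static void write_cb(uv_write_t *req, int status)
{
  (void)status;
  release_wbuffer(req->data);                  /* normal asynchronous release */
  free(req);
}

static void write_sketch(uv_stream_t *stream, WBuffer *buffer)
{
  uv_write_t *req = malloc(sizeof(*req));
  req->data = buffer;
  uv_buf_t buf = uv_buf_init(buffer->data, (unsigned int)buffer->size);
  if (uv_write(req, stream, &buf, 1, write_cb) != 0) {
    /* Per @saghul: on a non-zero return write_cb will NOT be invoked, so
     * releasing here cannot lead to a double release. */
    release_wbuffer(buffer);
    free(req);
  }
}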

@oni-link


oni-link commented on src/nvim/os/channel.c in ea7a389 Jun 26, 2014

Why not use kv_pop here instead of a few lines later?

Owner

tarruda replied Jun 26, 2014

👍

I have no idea why I did this :)

@oni-link


oni-link commented on src/nvim/os/channel.c in ea7a389 Jun 26, 2014

Coverity found this: if channel is closed here, functions further up in the call chain are not aware of this and still use the freed channel pointer.

Owner

tarruda replied Jun 26, 2014

👍 Already fixed in my next branch
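
One common way to address this kind of finding, sketched with hypothetical names (the actual fix in the follow-up branch may differ): have the helper report whether it closed the channel, so callers stop using the pointer.

#include <stdbool.h>

typedef struct Channel Channel;
static bool too_many_errors(Channel *chan) { (void)chan; return false; }
static void close_channel(Channel *chan) { (void)chan; /* frees chan */ }

/* Returns false when the channel was closed (and freed); the caller must
 * not touch chan afterwards. */
static bool handle_write_error(Channel *chan)
{
  if (too_many_errors(chan)) {
    close_channel(chan);
    return false;
  }
  return true;
}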
