
[2.0 RFC] "pull based" event loop #6

Closed
saghul opened this issue Nov 25, 2014 · 13 comments

@saghul
Member

saghul commented Nov 25, 2014

EDIT: the proposal is now here: libuv/leps#3

Please read the README for information on the LEP process: https://github.com/libuv/leps/blob/master/README.md

@creationix
Contributor

I love it. Could we have the option to go one step further and, instead of having uv_backend_dispatch call the C callbacks, have a function that returns the next item in the queue until nothing is left, and leave dispatching up to the user?

@saghul
Member Author

saghul commented Nov 26, 2014

I love it. Could we have the option to go one step further and, instead of having uv_backend_dispatch call the C callbacks, have a function that returns the next item in the queue until nothing is left, and leave dispatching up to the user?

I'm not sure that would be possible, because not all callbacks are the same, so libuv will actually need to inspect the queued request type and call the callback accordingly. What use case do you have in mind?

@creationix
Contributor

My main use case is efficient FFI from LuaJIT. The FFI may now support C callbacks, but they are a terribly slow and inefficient path in LuaJIT.

If I was able to instead do something like:

for event in uv.next_event do
  -- Route to a Lua callback based on the fields of the event struct
end

Then I think it would be faster. At the C level, I understand how types can get in the way, but I'd be fine with a void* or leaving it out entirely for things like C callbacks since I plan on passing in NULL anyway for this use case.
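
To make the shape of that concrete, here is a minimal sketch of such a pull-style surface in C (uv_event_t and uv_next_event are hypothetical names for illustration, not proposed libuv API):

#include <uv.h>

/* Hypothetical pull-style surface: the binding dequeues raw events and
   routes them itself. The callback slot is just a void*, so a pure-FFI
   LuaJIT binding can pass NULL and never cross the slow C->Lua
   callback boundary. */
typedef struct {
    int   type;    /* what kind of event this is */
    void* handle;  /* the handle or request the event belongs to */
    void* data;    /* user data; NULL in the pure-FFI case */
} uv_event_t;

/* Returns the next pending event, or NULL once the queue is drained. */
uv_event_t* uv_next_event(uv_loop_t* loop);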

@txdv
Contributor

txdv commented Nov 26, 2014

C calling into Lua is slow?

@creationix
Contributor

@txdv, C calling into Lua is a little slow, but not too bad. That's what I do currently with my luv bindings.

What I want is the option of pure FFI libuv bindings, without writing any C and without using LuaJIT's FFI callback feature, since it is terribly slow.

http://luajit.org/ext_ffi_semantics.html#callback

@creationix
Contributor

From Mike Pall's FFI docs:

For new designs avoid push-style APIs: a C function repeatedly calling a callback for each result. Instead use pull-style APIs: call a C function repeatedly to get a new result. Calls from Lua to C via the FFI are much faster than the other way round. Most well-designed libraries already use pull-style APIs (read/write, get/put).
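
To make the contrast concrete, here is a minimal runnable sketch in C (the names are illustrative, not libuv API). Push-style would be the inverse: the library invokes a callback into the host language once per result, which is the slow C-to-Lua path Pall warns about.

#include <stddef.h>
#include <stdio.h>

/* Pull-style: the caller drives, making one cheap Lua->C FFI call
   (here, a plain C call) per result. */
static int results[] = { 10, 20, 30 };
static size_t cursor = 0;

static int lib_next_result(void) {  /* -1 signals "no more results" */
    if (cursor < sizeof(results) / sizeof(results[0]))
        return results[cursor++];
    return -1;
}

int main(void) {
    int r;
    while ((r = lib_next_result()) != -1)
        printf("got result %d\n", r);
    return 0;
}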

@txdv
Contributor

txdv commented Nov 26, 2014

I actually would like this too, since I have an idea which requires me to be in total control of the stack.

@saghul
Member Author

saghul commented Nov 27, 2014

@creationix, @txdv I don't think that is going to fly. Requests are opaque structures for the most part, and a few of them will be "internal": you cannot create them yourself, but the loop uses them internally. Thus, the dispatch function will need to know how to call the callback associated with each request type. This would result in even more FFI calls.

Let's analyze an example (pseudocode):

uv_req_t* uv_backend_dequeue(uv_loop_t* loop);
void uv_backend_dispatch(uv_req_t* req);

/* dispatch all requests */
uv_req_t* req;
while ((req = uv_backend_dequeue(loop)) != NULL) {
    /* this would trigger the callback associated with the request */
    uv_backend_dispatch(req);
}

The above is C pseudocode for how you'd do it manually. Now, if you use FFI to call into C you'd need to call uv_backend_dispatch for each queued request so that its associated callback is called. If we follow my initial proposal, you'd just need to call a single function, which would fire all callbacks.

I'm not convinced I want this, or maybe it's too early to decide. Nevertheless, what you propose (if I understood correctly) is orthogonal to this proposal, so a new one could be made once this one has progressed enough to give a clearer picture.
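
To spell out why the dispatch step has to live in C: each request type carries a differently-typed callback, so something must perform the per-type cast and call. Here is a hedged sketch of what the hypothetical uv_backend_dispatch above would have to do internally (the cb fields shown are the real public ones on uv_write_t and uv_connect_t):

#include <uv.h>

/* Sketch of uv_backend_dispatch internals: the callback signature
   differs per request type, so the cast and the call happen inside
   libuv rather than in the binding. */
void uv_backend_dispatch(uv_req_t* req) {
    switch (req->type) {
    case UV_WRITE: {
        uv_write_t* wr = (uv_write_t*) req;
        wr->cb(wr, /* status */ 0);
        break;
    }
    case UV_CONNECT: {
        uv_connect_t* conn = (uv_connect_t*) req;
        conn->cb(conn, /* status */ 0);
        break;
    }
    /* ... one case per request type ... */
    default:
        break;
    }
}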

@txdv
Contributor

txdv commented Nov 27, 2014

    /* this would trigger the callback associated with the request */

Should be

We call some function which returns the data, and then we do the callback in our own programming language.

@saghul
Member Author

saghul commented Nov 27, 2014

We call some function which returns the data, and then we do the callback in our own programming language.

Different requests have different data; I'm not sure this can be achieved in a clean way. I'll keep it in mind, but it won't be part of this particular LEP.

@txdv
Contributor

txdv commented Nov 27, 2014

We could make a union and then, according to the type (https://github.com/txdv/libuv/blob/master/include/uv.h#L161-L169), let the user use whatever they want in it?

@bnoordhuis
Member

That's already how it works unless I'm misunderstanding you. Requests have a common base, uv_req_t. You can look at the type field to figure out what kind of request you are dealing with. Do you (generic you) need anything more?
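
For example, a minimal routing sketch (route_request itself is hypothetical; the type field and the casts are standard libuv):

#include <stdio.h>
#include <uv.h>

/* Every request struct embeds the uv_req_t base, so the public `type`
   field identifies the concrete request before casting. */
static void route_request(uv_req_t* req) {
    switch (req->type) {
    case UV_WRITE: {
        uv_write_t* wr = (uv_write_t*) req;
        printf("write request on stream %p\n", (void*) wr->handle);
        break;
    }
    case UV_FS: {
        uv_fs_t* fs = (uv_fs_t*) req;
        printf("fs request finished with result %zd\n", (ssize_t) fs->result);
        break;
    }
    default:
        printf("request of type %d\n", (int) req->type);
        break;
    }
}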

@piscisaureus
Contributor

Continues in libuv/leps#3

Crunkle pushed a commit to Crunkle/libuv that referenced this issue Aug 8, 2019
If this occurs, the system has no ALSA support in the kernel,
and it is appropriate for the backend to fail.

closes libuv#6
santigimeno pushed a commit that referenced this issue Apr 4, 2021
ERROR: LeakSanitizer: detected memory leaks

```
Direct leak of 432 byte(s) in 9 object(s) allocated from:
    #0 0x1062eedc2 in __sanitizer_mz_calloc+0x92 (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x46dc2)
    #1 0x7fff20171eb6 in _malloc_zone_calloc+0x3a (libsystem_malloc.dylib:x86_64+0x1beb6)
    #2 0x7fff203ac180 in _CFRuntimeCreateInstance+0x124 (CoreFoundation:x86_64h+0x4180)
    #3 0x7fff203ab906 in __CFStringCreateImmutableFunnel3+0x84d (CoreFoundation:x86_64h+0x3906)
    #4 0x7fff203ab0a1 in CFStringCreateWithCString+0x48 (CoreFoundation:x86_64h+0x30a1)
    #5 0x1056f63e1 in uv__get_cpu_speed darwin.c:267
    #6 0x1056f491e in uv_cpu_info darwin.c:338
```

PR-URL: #3098
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl>
Reviewed-By: Santiago Gimeno <santiago.gimeno@gmail.com>
trevnorris added a commit to trevnorris/libuv that referenced this issue Mar 31, 2023
Proof-of-concept to demonstrate the API of being able to queue work from
any thread, without needing to be attached to a specific event loop.

There's a problem when running the test threadpool_task. It shows
uv_library_shutdown() running even though it's never called, which is
causing it to abort. Here's the stack:

    * thread #1, name = 'uv_run_tests_a', stop reason = signal SIGABRT
        frame #6: 0x00000000004ce768 uv_run_tests_a`uv__threadpool_cleanup at threadpool.c:192:3
        frame #7: 0x00000000004d5d56 uv_run_tests_a`uv_library_shutdown at uv-common.c:956:3
        frame #8: 0x00007ffff7fc924e
        frame #9: 0x00007ffff7c45495 libc.so.6`__run_exit_handlers(status=0, listp=<unavailable>, run_list_atexit=true, run_dtors=true) at exit.c:113:8

Can't figure out why this is happening.

Another problem is that when you remove the uv_sleep() from task_cb() it'll
cause another abort when trying to lock the mutex. Here's the stack:

        frame #8: 0x00000000004ed375 uv_run_tests_a`uv_mutex_lock(mutex=0x0000000001460c50) at thread.c:344:7
        frame #9: 0x00000000004cebdf uv_run_tests_a`uv__queue_work(w=0x0000000001462378) at threadpool.c:342:5
    (lldb) f 9
    frame #9: 0x00000000004cebdf uv_run_tests_a`uv__queue_work(w=0x0000000001462378) at threadpool.c:342:5
       341    if (w->done == NULL) {
    -> 342      uv_mutex_lock(&mutex);
       343      w->work = NULL;

Still not sure if this is a race condition.