[2.0 RFC] "pull based" event loop #6

Please read the README for information on the LEP process: https://github.com/libuv/leps/blob/master/README.md

EDIT: the proposal is now here: libuv/leps#3

Comments
I love it. Could we have the option to go one step further, and instead of having …
I'm not sure that would be possible, because not all callbacks are the same, so libuv will actually need to inspect the queued request type and call the callback accordingly. What use case do you have in mind?
My main use case is for efficient FFI from LuaJIT. It does now support C callbacks, but that's a terribly slow and inefficient path in LuaJIT. If I were able to instead do something like:

```lua
for event in uv.next_event() do
  -- Route to a Lua callback based on properties of the event struct
end
```

then I think it would be faster. At the C level, I understand how types can get in the way, but I'd be fine with a …
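(For context, the C-level shape of such a pull API might look roughly like the sketch below. None of these names — `uv_event_t`, `uv_next_event()`, the `UV_EV_*` tags — exist in libuv; this is only an illustration of the interface being asked for.)

```c
#include <uv.h>

/* Hypothetical sketch only: uv_event_t, uv_next_event() and the UV_EV_*
 * tags are NOT part of libuv. The idea is that instead of libuv invoking
 * C callbacks, the embedder pulls tagged events off the loop and
 * dispatches them itself. */
typedef enum { UV_EV_NONE = 0, UV_EV_READ, UV_EV_WRITE_DONE, UV_EV_TIMER } uv_event_type;

typedef struct {
  uv_event_type type;  /* what happened */
  void* handle;        /* handle or request the event belongs to */
  int status;          /* result code, 0 on success */
} uv_event_t;

/* Hypothetical: fills in *ev and returns 1 while the loop is alive,
 * returns 0 once there is nothing left to do. */
int uv_next_event(uv_loop_t* loop, uv_event_t* ev);

void run(uv_loop_t* loop) {
  uv_event_t ev;
  while (uv_next_event(loop, &ev)) {
    switch (ev.type) {
      case UV_EV_READ:       /* route to embedder's read handler  */ break;
      case UV_EV_WRITE_DONE: /* route to embedder's write handler */ break;
      case UV_EV_TIMER:      /* route to embedder's timer handler */ break;
      default:               break;
    }
  }
}
```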
C calling into Lua is slow?
@txdv, C calling into Lua is a little slow, but not too bad. That's what I do currently with my luv bindings. What I want is the option of pure FFI libuv bindings, without writing any C and without using LuaJIT's callback feature in the FFI, since it is terribly slow.
From Mike Pall's docs on the FFI, he says:

> […]
I actually would like this too, since I have an idea which requires me to be in total control of the stack.
@creationix, @txdv I don't think that is going to fly. Requests are opaque structures for the most part, and a few of them will be "internal", as in: you cannot create them, but the loop uses them internally. Thus, the dispatch function will need to know how to call the callback associated with that request. This would result in even more FFI calls. Let's analyze an example (pseudocode):
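(The pseudocode referenced in the next sentence was lost from this copy of the thread; what follows is a plausible reconstruction of the kind of manual dispatch being described. `uv_loop_next_request()` is hypothetical, and the per-request result handling is elided.)

```c
#include <uv.h>

/* Hypothetical: returns the next completed request, NULL when drained. */
uv_req_t* uv_loop_next_request(uv_loop_t* loop);

void dispatch_all(uv_loop_t* loop) {
  uv_req_t* req;
  int status = 0;  /* per-request result; how it travels is elided here */

  /* Each request type stores a differently-typed callback, so the
   * dispatcher must switch on req->type and cast accordingly. */
  while ((req = uv_loop_next_request(loop)) != NULL) {
    switch (req->type) {
      case UV_WRITE:   /* uv_write_cb(uv_write_t*, int) */
        ((uv_write_t*) req)->cb((uv_write_t*) req, status);
        break;
      case UV_CONNECT: /* uv_connect_cb(uv_connect_t*, int) */
        ((uv_connect_t*) req)->cb((uv_connect_t*) req, status);
        break;
      case UV_FS:      /* uv_fs_cb(uv_fs_t*) */
        ((uv_fs_t*) req)->cb((uv_fs_t*) req);
        break;
      default:
        break;
    }
  }
}
```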
The above is C pseudocode for how you'd do it manually. Now, if you use FFI to call into C, you'd need to call … I'm not convinced I want this, or maybe it's too early to decide. Nevertheless, what you propose (if I understood correctly) is not incompatible with what I'm describing, so a new proposal could be made once this one has progressed enough to have a clearer picture.
Should be: … We call some function which returns the data, and then we do the callback in our own programming language.
Different requests have different data; I'm not sure this can be achieved in a clean way. I'll keep it in mind, but it won't be part of this particular LEP.
We could make a union and then, according to the type (https://github.com/txdv/libuv/blob/master/include/uv.h#L161-L169), let the user use whatever they want in it?
That's already how it works, unless I'm misunderstanding you. Requests have a common base, uv_req_t. You can look at the type field to figure out what kind of request you are dealing with. Do you (generic you) need anything more?
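(For illustration — a minimal sketch of what this looks like in practice. The `describe()` helper is just for this example, but `uv_req_t`, its `type` field, and the casts are how libuv's request hierarchy actually works.)

```c
#include <stdio.h>
#include <uv.h>

/* Every request can be viewed through the common uv_req_t base; once
 * the type tag has been checked, it is safe to cast back to the
 * concrete request type. */
static void describe(uv_req_t* req) {
  switch (req->type) {
    case UV_WRITE: {
      uv_write_t* w = (uv_write_t*) req;
      printf("write request on stream %p\n", (void*) w->handle);
      break;
    }
    case UV_CONNECT: {
      uv_connect_t* c = (uv_connect_t*) req;
      printf("connect request on stream %p\n", (void*) c->handle);
      break;
    }
    default:
      printf("request of type %d\n", (int) req->type);
      break;
  }
}
```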
Continues in libuv/leps#3 |