
feat: use the tokio task instead of the native thread to poll each isolate #244

Merged — 39 commits into supabase:main on Jan 21, 2024

Conversation

@nyannyacha (Collaborator) commented Jan 13, 2024

What kind of change does this PR introduce?

Refactor

Description

This is a continuation of #241.

The PR refactors the code that spawned a native thread per isolate into Tokio-based lightweight tasks [1].

In my testing, the PR yields significant improvements in memory consumption and request throughput compared to v1.30.0.

However, the memory leak issue will persist as long as #240 is not merged, so OOMs remain unavoidable over time.

Minor changes

  • The PR improves the stability of the cpu_timer crate, which makes test runs more stable.
  • Added tests for various request failure scenarios.
  • Request cancellation by the supervisor is now clearly distinguishable as a WorkerRequestCancelled error instead of an InvalidWorkerResponse error. It is raised only when script execution reaches the CPU, wall-clock, or memory limit [2].
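
The distinction between the two errors can be sketched as below. This is an illustrative shape only, not the actual edge-runtime error type — the variant names come from the PR description, everything else is assumed:

```rust
// Hypothetical error enum for illustration: callers can now match on a
// dedicated cancellation variant instead of a catch-all response error.
#[derive(Debug)]
enum WorkerError {
    WorkerRequestCancelled,
    InvalidWorkerResponse,
}

fn describe(err: &WorkerError) -> &'static str {
    match err {
        // Raised only when a CPU, wall-clock, or memory limit is hit.
        WorkerError::WorkerRequestCancelled => "cancelled by supervisor (resource limit reached)",
        // Any other malformed or missing worker response.
        WorkerError::InvalidWorkerResponse => "worker returned an invalid response",
    }
}

fn main() {
    assert_eq!(
        describe(&WorkerError::WorkerRequestCancelled),
        "cancelled by supervisor (resource limit reached)"
    );
    println!("ok");
}
```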

Footnotes

  1. The thread pool size for the isolates can be adjusted through the EDGE_RUNTIME_WORKER_POOL_SIZE environment variable. The default pool size follows the return value of std::thread::available_parallelism.

  2. If the limit is set too small to keep the isolate stable, various errors can be raised, including WorkerRequestCancelled.
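
The resolution order described in footnote 1 can be sketched as follows. `resolve_worker_pool_size` is an illustrative name, not the actual implementation:

```rust
use std::num::NonZeroUsize;
use std::thread;

// Sketch of the described behavior: prefer EDGE_RUNTIME_WORKER_POOL_SIZE,
// fall back to std::thread::available_parallelism, and never return zero.
fn resolve_worker_pool_size() -> usize {
    std::env::var("EDGE_RUNTIME_WORKER_POOL_SIZE")
        .ok()
        .and_then(|v| v.parse::<usize>().ok())
        .filter(|&n| n > 0)
        .unwrap_or_else(|| {
            thread::available_parallelism()
                .map(NonZeroUsize::get)
                .unwrap_or(1) // conservative fallback if detection fails
        })
}

fn main() {
    let size = resolve_worker_pool_size();
    assert!(size >= 1);
    println!("worker pool size: {size}");
}
```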

@nyannyacha nyannyacha marked this pull request as draft January 13, 2024 06:33
@nyannyacha

As I said in the conversation of the previous PR, I've modified integration tests to be compatible with the new policies.

In particular, I've changed the tests to run under the oneshot policy, which is the same as the per_request policy but guarantees that the isolate exits after the request.

@nyannyacha

I don't have permission to restart the GitHub test action, but on my local machine the flakes that occurred when running tests under Valgrind (the same flakes seen on the GitHub runner) are gone.

@nyannyacha

nyannyacha commented Jan 15, 2024

NOTE: the message "Conditional jump or move depends on uninitialised value(s)" at available_parallelism is a false positive. link

vscode ➜ /workspaces/edge-runtime/crates/base (perf-use-green-thread) $ valgrind -- ../../target/debug/deps/main_worker_tests-7be358cee9f87db6

==1538680== Memcheck, a memory error detector
==1538680== Copyright (C) 2002-2022, and GNU GPL'd, by Julian Seward et al.
==1538680== Using Valgrind-3.19.0 and LibVEX; rerun with -h for copyright info
==1538680== Command: ../../target/debug/deps/main_worker_tests-7be358cee9f87db6
==1538680== 
==1538680== Conditional jump or move depends on uninitialised value(s) 
==1538680==    at 0x51FA02C: drop_in_place<core::option::Option<(alloc::vec::Vec<u8, alloc::alloc::Global>, std::sys::unix::thread::cgroups::Cgroup)>> (mod.rs:498)
==1538680==    by 0x51FA02C: {closure#1} (thread.rs:548)
==1538680==    by 0x51FA02C: fold<core::slice::iter::Split<u8, std::sys::unix::thread::cgroups::quota::{closure_env#0}>, core::option::Option<(alloc::vec::Vec<u8, alloc::alloc::Global>, std::sys::unix::thread::cgroups::Cgroup)>, std::sys::unix::thread::cgroups::quota::{closure_env#1}> (iterator.rs:2640)
==1538680==    by 0x51FA02C: quota (thread.rs:526)
==1538680==    by 0x51FA02C: available_parallelism (thread.rs:331)
==1538680==    by 0x51FA02C: std::thread::available_parallelism (mod.rs:1783)
==1538680==    by 0x520A43: test::helpers::concurrency::get_concurrency (concurrency.rs:12)
==1538680==    by 0x510AAF: call_once<fn() -> usize, ()> (function.rs:250)
==1538680==    by 0x510AAF: unwrap_or_else<usize, fn() -> usize> (option.rs:976)
==1538680==    by 0x510AAF: test::console::run_tests_console (console.rs:305)
==1538680==    by 0x529647: test::test_main (lib.rs:143)
==1538680==    by 0x52A2FF: test::test_main_static (lib.rs:162)
==1538680==    by 0x4E5D0F: main_worker_tests::main (main_worker_tests.rs:1)
==1538680==    by 0x4E7E27: core::ops::function::FnOnce::call_once (function.rs:250)
==1538680==    by 0x4E6AAB: std::sys_common::backtrace::__rust_begin_short_backtrace (backtrace.rs:154)
==1538680==    by 0x4FD87F: std::rt::lang_start::{{closure}} (rt.rs:167)
==1538680==    by 0x51F9027: call_once<(), (dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> (function.rs:284)
==1538680==    by 0x51F9027: do_call<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> (panicking.rs:552)
==1538680==    by 0x51F9027: try<i32, &(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> (panicking.rs:516)
==1538680==    by 0x51F9027: catch_unwind<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> (panic.rs:142)
==1538680==    by 0x51F9027: {closure#2} (rt.rs:148)
==1538680==    by 0x51F9027: do_call<std::rt::lang_start_internal::{closure_env#2}, isize> (panicking.rs:552)
==1538680==    by 0x51F9027: try<isize, std::rt::lang_start_internal::{closure_env#2}> (panicking.rs:516)
==1538680==    by 0x51F9027: catch_unwind<std::rt::lang_start_internal::{closure_env#2}, isize> (panic.rs:142)
==1538680==    by 0x51F9027: std::rt::lang_start_internal (rt.rs:148)
==1538680==    by 0x4FD84F: std::rt::lang_start (rt.rs:166)
==1538680==    by 0x4E5D43: main (in /workspaces/edge-runtime/target/debug/deps/main_worker_tests-7be358cee9f87db6)
==1538680== 

running 5 tests
==1538680== Conditional jump or move depends on uninitialised value(s)
==1538680==    at 0x51FA02C: drop_in_place<core::option::Option<(alloc::vec::Vec<u8, alloc::alloc::Global>, std::sys::unix::thread::cgroups::Cgroup)>> (mod.rs:498)
==1538680==    by 0x51FA02C: {closure#1} (thread.rs:548)
==1538680==    by 0x51FA02C: fold<core::slice::iter::Split<u8, std::sys::unix::thread::cgroups::quota::{closure_env#0}>, core::option::Option<(alloc::vec::Vec<u8, alloc::alloc::Global>, std::sys::unix::thread::cgroups::Cgroup)>, std::sys::unix::thread::cgroups::quota::{closure_env#1}> (iterator.rs:2640)
==1538680==    by 0x51FA02C: quota (thread.rs:526)
==1538680==    by 0x51FA02C: available_parallelism (thread.rs:331)
==1538680==    by 0x51FA02C: std::thread::available_parallelism (mod.rs:1783)
==1538680==    by 0x520A43: test::helpers::concurrency::get_concurrency (concurrency.rs:12)
==1538680==    by 0x511307: call_once<fn() -> usize, ()> (function.rs:250)
==1538680==    by 0x511307: unwrap_or_else<usize, fn() -> usize> (option.rs:976)
==1538680==    by 0x511307: run_tests<test::console::run_tests_console::{closure_env#2}> (lib.rs:336)
==1538680==    by 0x511307: test::console::run_tests_console (console.rs:329)
==1538680==    by 0x529647: test::test_main (lib.rs:143)
==1538680==    by 0x52A2FF: test::test_main_static (lib.rs:162)
==1538680==    by 0x4E5D0F: main_worker_tests::main (main_worker_tests.rs:1)
==1538680==    by 0x4E7E27: core::ops::function::FnOnce::call_once (function.rs:250)
==1538680==    by 0x4E6AAB: std::sys_common::backtrace::__rust_begin_short_backtrace (backtrace.rs:154)
==1538680==    by 0x4FD87F: std::rt::lang_start::{{closure}} (rt.rs:167)
==1538680==    by 0x51F9027: call_once<(), (dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> (function.rs:284)
==1538680==    by 0x51F9027: do_call<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> (panicking.rs:552)
==1538680==    by 0x51F9027: try<i32, &(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> (panicking.rs:516)
==1538680==    by 0x51F9027: catch_unwind<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> (panic.rs:142)
==1538680==    by 0x51F9027: {closure#2} (rt.rs:148)
==1538680==    by 0x51F9027: do_call<std::rt::lang_start_internal::{closure_env#2}, isize> (panicking.rs:552)
==1538680==    by 0x51F9027: try<isize, std::rt::lang_start_internal::{closure_env#2}> (panicking.rs:516)
==1538680==    by 0x51F9027: catch_unwind<std::rt::lang_start_internal::{closure_env#2}, isize> (panic.rs:142)
==1538680==    by 0x51F9027: std::rt::lang_start_internal (rt.rs:148)
==1538680==    by 0x4FD84F: std::rt::lang_start (rt.rs:166)
==1538680==    by 0x4E5D43: main (in /workspaces/edge-runtime/target/debug/deps/main_worker_tests-7be358cee9f87db6)
==1538680== 
==1538680== Thread 5 test_main_worke:
==1538680== Conditional jump or move depends on uninitialised value(s)
==1538680==    at 0x51FA02C: drop_in_place<core::option::Option<(alloc::vec::Vec<u8, alloc::alloc::Global>, std::sys::unix::thread::cgroups::Cgroup)>> (mod.rs:498)
==1538680==    by 0x51FA02C: {closure#1} (thread.rs:548)
==1538680==    by 0x51FA02C: fold<core::slice::iter::Split<u8, std::sys::unix::thread::cgroups::quota::{closure_env#0}>, core::option::Option<(alloc::vec::Vec<u8, alloc::alloc::Global>, std::sys::unix::thread::cgroups::Cgroup)>, std::sys::unix::thread::cgroups::quota::{closure_env#1}> (iterator.rs:2640)
==1538680==    by 0x51FA02C: quota (thread.rs:526)
==1538680==    by 0x51FA02C: available_parallelism (thread.rs:331)
==1538680==    by 0x51FA02C: std::thread::available_parallelism (mod.rs:1783)
==1538680==    by 0x54B28F: dashmap::default_shard_amount::{{closure}} (lib.rs:69)
==1538680==    by 0x54A837: once_cell::sync::OnceCell<T>::get_or_init::{{closure}} (lib.rs:1122)
==1538680==    by 0x54A693: once_cell::imp::OnceCell<T>::initialize::{{closure}} (imp_std.rs:72)
==1538680==    by 0x51C2FFB: core::ops::function::impls::<impl core::ops::function::FnMut<A> for &mut F>::call_mut (function.rs:294)
==1538680==    by 0x51C489B: once_cell::imp::initialize_or_wait (imp_std.rs:196)
==1538680==    by 0x54A64B: once_cell::imp::OnceCell<T>::initialize (imp_std.rs:68)
==1538680==    by 0x54A8FF: once_cell::sync::OnceCell<T>::get_or_try_init (lib.rs:1163)
==1538680==    by 0x54A803: once_cell::sync::OnceCell<T>::get_or_init (lib.rs:1122)
==1538680==    by 0x54B26F: dashmap::default_shard_amount (lib.rs:68)
==1538680==    by 0x537767: dashmap::DashMap<K,V,S>::with_capacity_and_hasher (lib.rs:229)
==1538680==    by 0x5376B7: dashmap::DashMap<K,V,S>::with_hasher (lib.rs:212)
==1538680== 
==1538680== Conditional jump or move depends on uninitialised value(s)
==1538680==    at 0x51FA02C: drop_in_place<core::option::Option<(alloc::vec::Vec<u8, alloc::alloc::Global>, std::sys::unix::thread::cgroups::Cgroup)>> (mod.rs:498)
==1538680==    by 0x51FA02C: {closure#1} (thread.rs:548)
==1538680==    by 0x51FA02C: fold<core::slice::iter::Split<u8, std::sys::unix::thread::cgroups::quota::{closure_env#0}>, core::option::Option<(alloc::vec::Vec<u8, alloc::alloc::Global>, std::sys::unix::thread::cgroups::Cgroup)>, std::sys::unix::thread::cgroups::quota::{closure_env#1}> (iterator.rs:2640)
==1538680==    by 0x51FA02C: quota (thread.rs:526)
==1538680==    by 0x51FA02C: available_parallelism (thread.rs:331)
==1538680==    by 0x51FA02C: std::thread::available_parallelism (mod.rs:1783)
==1538680==    by 0x6985EF: <base::rt_worker::worker_pool::WorkerPoolPolicy as core::default::Default>::default (worker_pool.rs:71)
==1538680==    by 0x4FB4A7: base::rt_worker::worker_pool::WorkerPoolPolicy::new (worker_pool.rs:90)
==1538680==    by 0x4FFECB: main_worker_tests::integration_test_helper::test_user_worker_pool_policy (integration_test_helper.rs:159)
==1538680==    by 0x4D64B7: main_worker_tests::test_main_worker_options_request::{{closure}}::{{closure}} (main_worker_tests.rs:29)
==1538680==    by 0x4FB31B: <core::pin::Pin<P> as core::future::future::Future>::poll (future.rs:125)
==1538680==    by 0x4FB3EF: <core::pin::Pin<P> as core::future::future::Future>::poll (future.rs:125)
==1538680==    by 0x4CBB83: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}::{{closure}} (mod.rs:665)
==1538680==    by 0x4CBADF: with_budget<core::task::poll::Poll<()>, tokio::runtime::scheduler::current_thread::{impl#8}::block_on::{closure#0}::{closure#0}::{closure_env#0}<core::pin::Pin<&mut core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>>> (coop.rs:107)
==1538680==    by 0x4CBADF: budget<core::task::poll::Poll<()>, tokio::runtime::scheduler::current_thread::{impl#8}::block_on::{closure#0}::{closure#0}::{closure_env#0}<core::pin::Pin<&mut core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>>> (coop.rs:73)
==1538680==    by 0x4CBADF: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}} (mod.rs:665)
==1538680==    by 0x4CAA0B: tokio::runtime::scheduler::current_thread::Context::enter (mod.rs:410)
==1538680==    by 0x4CB503: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}} (mod.rs:664)
==1538680==    by 0x4CB30F: tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}} (mod.rs:743)
==1538680== 
==1538680== Warning: set address range perms: large range [0xe4d1000, 0x1e510000) (noaccess)
main function started
serving the request with ./test_cases/std_user_worker
test test_main_worker_options_request ... ok
==1538680== Warning: set address range perms: large range [0x29f80000, 0x39fbf000) (noaccess)
main function started
serving the request with ./test_cases/std_user_worker
DOMException: The signal has been aborted
test test_main_worker_abort_request ... ok
test test_main_worker_boot_error ... ok
==1538680== Warning: set address range perms: large range [0x29f80000, 0x39fbf000) (noaccess)
main function started
serving the request with ./test_cases/std_user_worker
test test_main_worker_post_request_with_transfer_encoding ... ok
==1538680== Warning: set address range perms: large range [0x29f80000, 0x39fbf000) (noaccess)
main function started
serving the request with ./test_cases/std_user_worker
test test_main_worker_post_request ... ok

test result: ok. 5 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 48.26s

==1538680== 
==1538680== HEAP SUMMARY:
==1538680==     in use at exit: 498,255 bytes in 4,196 blocks
==1538680==   total heap usage: 256,153 allocs, 251,957 frees, 137,212,210 bytes allocated
==1538680== 
==1538680== LEAK SUMMARY:
==1538680==    definitely lost: 110,408 bytes in 1,008 blocks
==1538680==    indirectly lost: 96,928 bytes in 1,976 blocks
==1538680==      possibly lost: 9,212 bytes in 33 blocks
==1538680==    still reachable: 281,707 bytes in 1,179 blocks
==1538680==         suppressed: 0 bytes in 0 blocks
==1538680== Rerun with --leak-check=full to see details of leaked memory
==1538680== 
==1538680== Use --track-origins=yes to see where uninitialised values come from
==1538680== For lists of detected and suppressed errors, rerun with: -s
==1538680== ERROR SUMMARY: 8 errors from 4 contexts (suppressed: 0 from 0)

@nyannyacha

Caused by:
  process didn't exit successfully: `/home/runner/work/edge-runtime/edge-runtime/target/debug/deps/oak_user_worker_tests-6b8d17e15f6475c6` (signal: 11, SIGSEGV: invalid memory reference)
Error: Process completed with exit code 101.

🙄😅

@nyannyacha

It's weird 🧐
I tried running oak_user_worker_tests with Valgrind tens of times, but I can't reproduce the SIGSEGV.

@nyannyacha

Okay, I think I got it. This might be a PKU problem, like the one in the PKU-related PR I submitted before.
V8 platform initialization was never performed in the integration tests.

@nyannyacha

This PKU-related SIGSEGV only happens when JIT compilation is enabled, so tests whose code never got hot enough for V8 to JIT-compile may have been lucky enough to pass.

@nyannyacha

The invocation timing must be adjusted so that the V8 platform initialization function does not depend on the CLI.
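
One common way to make initialization safe to call from both the CLI and the tests is a `Once` guard, so it runs exactly once regardless of the caller. This is only an illustrative sketch — a counter stands in for the real V8 platform calls:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Once;

static INIT: Once = Once::new();
static INIT_CALLS: AtomicUsize = AtomicUsize::new(0);

// Guard platform initialization so the CLI and the integration tests can
// both call it, but the body runs exactly once (from the first caller).
fn ensure_platform_init() {
    INIT.call_once(|| {
        // The real code would initialize the V8 platform here.
        INIT_CALLS.fetch_add(1, Ordering::SeqCst);
    });
}

fn main() {
    ensure_platform_init();
    ensure_platform_init(); // second call is a no-op
    assert_eq!(INIT_CALLS.load(Ordering::SeqCst), 1);
    println!("initialized once");
}
```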

@nyannyacha

Caused by:
  process didn't exit successfully: `/home/runner/work/edge-runtime/edge-runtime/target/debug/deps/oak_user_worker_tests-6b8d17e15f6475c6` (signal: 11, SIGSEGV: invalid memory reference)

🙄

@nyannyacha

nyannyacha commented Jan 15, 2024

Okay, I found another point that might have triggered the SIGSEGV. (It is reproducible on my x86_64 machine, but not on my aarch64 machine.)

@nyannyacha

🎄

@nyannyacha nyannyacha force-pushed the perf-use-green-thread branch 2 times, most recently from c239377 to 0788117 Compare January 15, 2024 08:00
@nyannyacha nyannyacha marked this pull request as ready for review January 15, 2024 08:24
@andreespirela (Collaborator) left a comment:

So far so good. Looking at the integration tests, it seems like they only test successful scenarios? Correct me if I'm wrong. If that's the case, it would be good to also test the failure scenarios, like when the timeout fires, for example.

@nyannyacha

@andreespirela That's a good point! I'll write the tests for various failure scenarios.

@nyannyacha

I haven't written any new integration tests; as I mentioned in the previous PR, I've just added flow-control routines to the existing integration tests 😋

@andreespirela

I haven't written any new integration tests; as I mentioned in the previous PR, I've just added flow-control routines to the existing integration tests 😋

Yep, that's all good. It doesn't need to be a completely new test; you can start from existing ones. As long as we're testing both the successful and failure scenarios, it's all good.

@nyannyacha

@andreespirela
Can you think of any special failure scenarios other than timeout scenarios? If so, I'd love to hear about them.

@nyannyacha

BTW, I have an idea for the failure scenarios because I experienced various failures while stress testing 😋

@nyannyacha

nyannyacha commented Jan 15, 2024

Note: after this PR, all tests that create a worker or worker pool inside the test body must run serially. This PR makes the tokio runtime global for spawning the tasks, and isolates are very sensitive because they use thread-local storage, so the tests must not run in parallel.

@nyannyacha

nyannyacha commented Jan 15, 2024

In the morning my local time (KST, South Korea), I'll write tests for timeouts, an intentional connection reset by the peer, and terminating the request due to CPU time exhaustion (also including wall-clock timeout).

@nyannyacha

I just finished writing the validation tests for various request failure scenarios.

@nyannyacha

nyannyacha commented Jan 16, 2024

Characteristics: per_worker (v1.30.0 vs PR-244), 3 min [1]

Running environment

Hardware: Macbook Air M1 2020, 16GB
Container: Docker/Colima(0.6.7, vz, no rosetta 2)
Guest OS: Debian/Bookworm
ARCH: aarch64/arm64
VCPU: 8
RAM: 8GiB (16GiB Swap)

Code

Deno.serve(async (req: Request) => {
	let start = performance.now();
	let resp = mySlowFunction(2);
	let end = performance.now();

	const data = {
		test: 'foo',
		time: end - start,
		resp,
	};

	return Response.json(data);
});

function mySlowFunction(baseNumber) {
	let result = 0;
	for (let i = Math.pow(baseNumber, 7); i >= 0; i--) {
		result += Math.atan(i) * Math.tan(i);
	}
	return result;
}

Limits

const workerTimeoutMs = 20 * 1000;
const cpuTimeSoftLimitMs = 500;
const cpuTimeHardLimitMs = 600;

Command (v1.30.0)

start --main-service /home/deno/functions/main -p 8888

Command (PR-244)

start --main-service /home/deno/functions/main --max-parallelism=20 --request-wait-timeout=1000000 -p 8888

Command

vegeta attack -rate=0 -duration=3m -max-workers=12

Max-RSS / Request Throughput / Latency

v1.30.0

Before / Middle (Max) / After: (Max-RSS screenshots)

Report

Requests      [total, rate, throughput]         335246, 1862.32, 1857.90
Duration      [total, attack, wait]             3m0s, 3m0s, 178.789ms
Latencies     [min, mean, 50, 90, 95, 99, max]  415.111µs, 5.345ms, 1.484ms, 4.533ms, 5.965ms, 134.48ms, 1.734s
Bytes In      [total, mean]                     16435999, 49.03
Bytes Out     [total, mean]                     7375412, 22.00
Success       [ratio]                           99.86%
Status Codes  [code:count]                      200:334783  500:463
Error Set:
500 Internal Server Error

Plot: (latency plot screenshot)

PR-244

Before / Middle (Max) / After: (Max-RSS screenshots)

Report

Requests      [total, rate, throughput]         638162, 3545.35, 3545.33
Duration      [total, attack, wait]             3m0s, 3m0s, 978.813µs
Latencies     [min, mean, 50, 90, 95, 99, max]  500.949µs, 2.143ms, 1.627ms, 4.188ms, 5.046ms, 7.096ms, 125.404ms
Bytes In      [total, mean]                     31269938, 49.00
Bytes Out     [total, mean]                     14039564, 22.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:638162
Error Set:

Plot: (latency plot screenshot)

Footnotes

  1. The benchmark was run on the same host, so requests per second may not be accurate.

@nyannyacha

334,783 (1,857 rps) -> 638,162 (3,545 rps)

🧐

Request throughput nearly doubled 😋

@andreespirela

@nyannyacha Insane PR. I'm very happy with this. I will approve it, but let's wait for @laktek to approve it too. I've tested it locally and everything makes sense, but @laktek is our CPU timer rockstar.

@andreespirela (Collaborator) left a comment:
LGTM

@nyannyacha

rebased
cc @laktek

By the time `DenoRuntime::new()` is invoked, execution has already detached
from the main thread.

The function that initializes the V8 platform should be invoked only from the
main thread, so this was not right.

(cherry picked from commit 989867a)
@laktek (Contributor) left a comment:
Thanks again for this contribution and for taking the time to leave detailed comments on the changes. Sorry for the delay in getting it merged 😓

@laktek laktek changed the title perf: use the tokio task instead of the native thread to poll each isolate feat: use the tokio task instead of the native thread to poll each isolate Jan 21, 2024
@laktek

laktek commented Jan 21, 2024

Renamed the prefix to feat, so we can cut a new minor release.

@laktek laktek merged commit 3d218b5 into supabase:main Jan 21, 2024
3 checks passed

🎉 This PR is included in version 1.32.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

3 participants