test(node): add regression test for EADDRINUSE handling #183
Conversation
EADDRINUSE not handled
⚠️ Caution: Review failed. Pull request was closed or merged during review.

📝 Walkthrough

A new test case was added to verify that when a server encounters an EADDRINUSE error (port already in use), the …

Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
Actionable comments posted: 1
🧹 Nitpick comments (2)
test/unit.test.ts (1)
86-91: Exit handler doesn't clear timeout on successful exit (code 0). When the worker exits successfully with code 0 after sending a message, the timeout remains scheduled. While it will likely be cleared by the message handler first, if the exit event fires before the message event (edge case), the timeout continues running unnecessarily.
This is a minor edge case that won't affect test correctness since the message handler should fire first.
🔧 Optional: Clear timeout on any exit
```diff
 worker.once("exit", (code) => {
+  clearTimeout(timeout);
   if (code !== 0) {
-    clearTimeout(timeout);
     reject(new Error(`worker exited with code ${code}`));
   }
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit.test.ts` around lines 86 - 91, The exit handler in worker.once("exit", (code) => { ... }) only clears the timeout on non-zero exit; update this handler to always clearTimeout(timeout) immediately when the exit event fires (before any conditional), then keep the existing logic that rejects for non-zero codes and does nothing else for code === 0 so the test still relies on the message handler to resolve. This ensures the timeout is cancelled even in the edge case where "exit" arrives before "message".

test/fixture/node-port-conflict-worker-helper.ts (1)
8-18: Unhandled promise from `jiti.import()` may silently swallow errors.

`jiti.import()` returns a Promise, but it's not awaited in the inline script. If the imported module throws during initialization, the error becomes an unhandled rejection rather than propagating to the worker's error handling.

🔧 Proposed fix to await the import
```diff
 const worker = /* JavaScript */ `
 import { createRequire } from "node:module";
 import { workerData } from "node:worker_threads";
 const filename = "${import.meta.url}";
 const require = createRequire(filename);
 const { createJiti } = require("jiti");
 const jiti = createJiti(workerData.__ts_worker_filename);
-jiti.import(workerData.__ts_worker_filename);
+await jiti.import(workerData.__ts_worker_filename);
 `;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/fixture/node-port-conflict-worker-helper.ts` around lines 8 - 18, The inline worker script calls jiti.import(workerData.__ts_worker_filename) but doesn't await it, causing any initialization errors to become unhandled rejections; change the script to await the promise (e.g., wrap the import in an async IIFE or use top-level await) so that jiti.import(...) is awaited and any thrown errors propagate to the worker's error handling; update the string that defines worker (references: createJiti, jiti.import, workerData.__ts_worker_filename) to perform await jiti.import(workerData.__ts_worker_filename).
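The failure mode behind this nitpick is easy to demonstrate in isolation. Below is a minimal sketch with a hypothetical `importModule()` standing in for `jiti.import()` (this is not the fixture's code): a fire-and-forget call turns an initialization error into an `unhandledRejection`, while awaiting the same call lets it reach ordinary `try/catch`.

```ts
// Sketch: why an unawaited async import swallows init errors.
// `importModule` is a hypothetical stand-in for jiti.import(...).
let sawUnhandled = false;
let sawCaught = false;

process.on("unhandledRejection", (reason) => {
  sawUnhandled = true;
  console.log("unhandled rejection:", (reason as Error).message);
});

async function importModule(): Promise<void> {
  // Simulates a module that throws during initialization.
  throw new Error("boom during module init");
}

// Fire-and-forget: nothing observes the rejection, so it escapes
// to the process-level unhandledRejection event.
importModule();

// Awaited: the same failure propagates to normal error handling.
(async () => {
  try {
    await importModule();
  } catch (error) {
    sawCaught = true;
    console.log("caught:", (error as Error).message);
  }
})();
```

In a worker thread this matters more: the unhandled rejection never reaches the worker's `error` handler as a normal failure path, which is exactly what the proposed `await` fixes.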
📒 Files selected for processing (4)
- src/server/node.ts
- test/fixture/node-port-conflict-worker-helper.ts
- test/fixture/node-port-conflict-worker.ts
- test/unit.test.ts
```ts
if (!startupPromise) {
  const nodeServer = server.node?.server;
  startupPromise = new Promise<Server>((resolve, reject) => {
    if (!nodeServer) {
      resolve(server);
      return;
    }

    if (!upgradeAttached) {
      upgradeAttached = true;
      nodeServer.on("upgrade", (req, socket, head) => {
        ws.handleUpgrade(
          req,
          socket,
          head,
          // @ts-expect-error (upgrade is not typed)
          new NodeRequest({ req, upgrade: { socket, head } }),
        );
      });
    }

    const cleanup = () => {
      nodeServer.off("error", onError);
      nodeServer.off("listening", onListening);
    };
    const onError = (error: Error) => {
      cleanup();
      reject(error);
    };
    const onListening = () => {
      cleanup();
      resolve(server);
    };

    nodeServer.once("error", onError);
    nodeServer.once("listening", onListening);

    try {
      Promise.resolve(originalServe()).catch(onError);
    } catch (error) {
      onError(error as Error);
    }
  });
```
Potential hang if nodeServer is already listening when serve() is called.
If server.node?.server is already in the listening state (e.g., the underlying HTTP server was started by another mechanism before this plugin's serve() is invoked), the 'listening' event will never fire, and startupPromise will remain pending indefinitely.
Consider checking nodeServer.listening and resolving immediately in that case:
🛡️ Proposed fix to handle already-listening server

```diff
 if (!upgradeAttached) {
   upgradeAttached = true;
   nodeServer.on("upgrade", (req, socket, head) => {
     ws.handleUpgrade(
       req,
       socket,
       head,
       // @ts-expect-error (upgrade is not typed)
       new NodeRequest({ req, upgrade: { socket, head } }),
     );
   });
 }

+// If server is already listening, resolve immediately
+if (nodeServer.listening) {
+  resolve(server);
+  return;
+}
+
 const cleanup = () => {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (!startupPromise) {
  const nodeServer = server.node?.server;
  startupPromise = new Promise<Server>((resolve, reject) => {
    if (!nodeServer) {
      resolve(server);
      return;
    }
    if (!upgradeAttached) {
      upgradeAttached = true;
      nodeServer.on("upgrade", (req, socket, head) => {
        ws.handleUpgrade(
          req,
          socket,
          head,
          // @ts-expect-error (upgrade is not typed)
          new NodeRequest({ req, upgrade: { socket, head } }),
        );
      });
    }
    // If server is already listening, resolve immediately
    if (nodeServer.listening) {
      resolve(server);
      return;
    }
    const cleanup = () => {
      nodeServer.off("error", onError);
      nodeServer.off("listening", onListening);
    };
    const onError = (error: Error) => {
      cleanup();
      reject(error);
    };
    const onListening = () => {
      cleanup();
      resolve(server);
    };
    nodeServer.once("error", onError);
    nodeServer.once("listening", onListening);
    try {
      Promise.resolve(originalServe()).catch(onError);
    } catch (error) {
      onError(error as Error);
    }
  });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/server/node.ts` around lines 20 - 62, The startupPromise can hang if
nodeServer is already listening; modify the serve() startup logic around
startupPromise/nodeServer to check nodeServer.listening immediately after
obtaining nodeServer and before attaching listeners or waiting for events, and
if true call resolve(server) (and skip attaching onListening/onError) so
startupPromise resolves synchronously; keep the existing error handling path
(onError/onListening, cleanup, originalServe()) for the non-listening case and
avoid attaching the "upgrade" logic or event handlers unnecessarily when you
detect the already-listening state.
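The hang described in this comment is straightforward to reproduce outside the plugin. A small sketch using plain `node:http` (assumed here for illustration; this is not the crossws/srvx wiring): a `listening` listener attached after the server is already listening never fires, while the synchronous `server.listening` flag remains checkable at any time.

```ts
import { createServer } from "node:http";

const server = createServer();
let lateListenerFired = false;
let listeningFlag = false;

server.listen(0, "127.0.0.1", () => {
  // The server is already listening inside this callback, so a freshly
  // attached "listening" listener will never be invoked.
  server.once("listening", () => {
    lateListenerFired = true;
  });

  // The synchronous flag, unlike the one-shot event, stays observable.
  listeningFlag = server.listening;

  setTimeout(() => {
    console.log("server.listening:", listeningFlag); // true
    console.log("late listener fired:", lateListenerFired); // false
    server.close();
  }, 25);
});
```

This is why the proposed fix checks `nodeServer.listening` before waiting on the event: a promise that only resolves via the `listening` event stays pending forever in this state.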
(ai generated) Thanks for the detailed PR! After digging in, I believe the root cause has already been fixed upstream in srvx.

Upstream fix in srvx 0.11.14. Through srvx 0.11.13, crossws' current pin lets users hit the old behavior.

Empirical check. I reverted …

Suggested path: bump the minimum srvx instead:

```diff
- "srvx": "^0.11.8",
+ "srvx": "^0.11.14",
```

That's a one-line fix and avoids layering a second error-handling wrapper on top of srvx's own (which can race on the shared …).

If you'd still like regression coverage in crossws, a ~10-line in-process test is enough (no worker_threads, no jiti-in-worker shim):

```ts
test("ready() rejects on EADDRINUSE", async () => {
  const port = await getRandomPort("localhost");
  const blocker = createServer().listen(port, "127.0.0.1");
  await once(blocker, "listening");
  const server = serve({
    port,
    hostname: "127.0.0.1",
    fetch: () => new Response("ok"),
    websocket: {},
  });
  await expect(server.ready()).rejects.toMatchObject({ code: "EADDRINUSE" });
  blocker.close();
});
```
Force-pushed from 243e3e1 to 7ced408
Title changed: "EADDRINUSE not handled" → "EADDRINUSE handling"
pi0 left a comment:
Thanks! Since srvx is fixing it, I have updated your PR to add a regression test.
Summary
Close #182
I tried my best to include reproducible worker code; if you would like a dedicated Node.js process for reproducing and checking this, please tell me.
This error has long existed for `srvx` too; we had to patch the `error` handler all the way down (https://github.com/moeru-ai/airi/blob/main/patches/srvx.patch). I found that recently `crossws` somehow throws this error too (probably because we just bumped the version).

I am not sure how to design and handle the `Promise` return for `server.serve`; please review it and suggest changes if needed.