Description
Hello friends of the community, how are you?
First of all, I always like to make it clear that I simply love the Express project; I have been using it in my projects for over 8 years, so this post is meant to help improve it even further. I recently opened an inquiry about Express's performance compared to other HTTP server options, and the focus on releasing version 5 was mentioned first. I fully agree that there are several higher-priority issues in the project. Even so, on my own initiative I began studying the code more deeply to understand what causes Express to perform worse than Fastify or Koa, for example, and started a reinterpretation by implementing the same Express features in a new project. In my specific case, the focus is on integrating Vite functionality into my HTTP server and creating other layers of abstraction such as decorators, while using Express as the base. During the development of this project I realized that the biggest performance problem Express faces is related to its use of Object.defineProperty, and I'll explain why.
Koa and Fastify both take the approach of creating new Request and Response objects: the objects generated by Node's http and http2 modules are assigned as a simple 'raw' property, and getters defined on the new objects retrieve the necessary data from it, such as headers, body, params and query. Express, on the other hand, assigns getters dynamically to both the request and the response using the 'defineGetter' helper. I understand that this was done to take advantage of the original objects' full set of properties and functions, but the processing cost of dynamically adding these getters is very high; even using Reflect.defineProperty as an alternative, there is still a delay that considerably reduces the number of requests per second that Express can process.
To make it clearer, I'll leave a simple test comparing the two approaches:
For the test I am using my personal computer: a Core i9 10980XE with 256 GB of DDR4 and a Samsung NVMe SSD, on Windows 10 using WSL with Ubuntu 22.04, Node v20.17.0, and Autocannon v7.15.0.
autocannon -w 8 -d 10 -c 1024 http://localhost:3000
Object.defineProperty

const http = require('http');

const server = http.createServer((req, res) => {
  // Define the getter on every request (mirrors Express's defineGetter approach)
  Object.defineProperty(req, 'xhr', {
    configurable: true,
    enumerable: true,
    get: function () {
      // Node lowercases incoming header names, so look the header up in lowercase
      var val = this.headers['x-requested-with'] || '';
      return val.toLowerCase() === 'xmlhttprequest';
    }
  });

  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ xhr: req.xhr }));
});

server.listen(3000, () => {
  console.log('Server with Object.defineProperty is running on http://localhost:3000');
});
Result:
Stat | 1% | 2.5% | 50% | 97.5% | Avg | Stdev | Min |
---|---|---|---|---|---|---|---|
Req/Sec | 18,447 | 18,447 | 21,551 | 23,135 | 21,518.4 | 1,202.72 | 18,441 |
Bytes/Sec | 3.43 MB | 3.43 MB | 4.01 MB | 4.3 MB | 4 MB | 224 kB | 3.43 MB |
Object.create

const http = require('http');

// Shared prototype that holds the getter; the raw Node request is attached as a
// plain property and read lazily (the approach Koa/Fastify use)
const request = {
  get xhr() {
    // Node lowercases incoming header names, so look the header up in lowercase
    var val = this.req.headers['x-requested-with'] || '';
    return val.toLowerCase() === 'xmlhttprequest';
  }
};

const server = http.createServer((req, res) => {
  // Cheap per-request wrapper: its prototype already carries the getters
  let obj = Object.create(request);
  obj.req = req;

  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ xhr: obj.xhr }));
});

server.listen(3001, () => {
  console.log('Server with Object.create is running on http://localhost:3001');
});
Result:
Stat | 1% | 2.5% | 50% | 97.5% | Avg | Stdev | Min |
---|---|---|---|---|---|---|---|
Req/Sec | 20,447 | 20,447 | 25,359 | 26,383 | 24,836.8 | 1,658.27 | 20,440 |
Bytes/Sec | 3.8 MB | 3.8 MB | 4.72 MB | 4.91 MB | 4.62 MB | 309 kB | 3.8 MB |
Note that in the examples above we define only one getter on the request, but in Express this happens many times on both the request and the response, greatly reducing the number of requests per second that Express can serve. With this change, Express would reach essentially the same performance as Koa and Fastify. I know this because I have already tested it by rewriting the request/response with the same functions that are currently present. I did not send a PR for the change because I was waiting for version 5 to be officially available, to check whether this point had been changed; however, looking at the version 5 code, it is apparently the same as version 4.
I hope this helps improve the project. If you want my help making the change, I am available. See you later =)
Activity
cesco69 commented on Sep 30, 2024
@andrehrferreira v5 is available https://www.npmjs.com/package/express/v/5.0.0
defineGetter is still used in version 5:
https://github.com/expressjs/express/blob/5.0/lib/request.js#L509
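For readers who want the shape of that helper without opening the link: roughly paraphrased, defineGetter is a thin wrapper around Object.defineProperty used to attach lazy getters such as req.xhr (see the linked source for the exact code).

// Rough paraphrase of the defineGetter pattern (see the link above for the real source)
function defineGetter(obj, name, getter) {
  Object.defineProperty(obj, name, {
    configurable: true,
    enumerable: true,
    get: getter
  });
}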
andrehrferreira commented on Sep 30, 2024
@cesco69 Yes, I checked that earlier, before posting. I will wait for the project maintainers to respond to the post to see whether I should implement the fix and send it to them, or whether they will make the change themselves.
cesco69 commented on Sep 30, 2024
And also... while Object.defineProperty() provides flexibility, it adds overhead compared to directly setting properties on an object. Consider directly adding values to the request object instead of defining getters for every property, particularly if these values won't change during a request's lifecycle.
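As a minimal sketch of that suggestion (not code from Express; the precomputed xhr property and port 3002 are purely illustrative), the value is computed once per request and stored as a plain property instead of being exposed through a getter:

const http = require('http');

const server = http.createServer((req, res) => {
  // Plain property, computed once per request: no property-descriptor machinery involved
  req.xhr = (req.headers['x-requested-with'] || '').toLowerCase() === 'xmlhttprequest';

  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ xhr: req.xhr }));
});

server.listen(3002, () => {
  console.log('Server with direct assignment is running on http://localhost:3002');
});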
tjdav commented on Sep 30, 2024
Would it be beneficial to extend the res properties using the Object.create second argument?
https://github.com/expressjs/express/blob/344b022fc7ed95cf07b46e097935e61151fd585f/lib/request.js#L30
For example:
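The original example isn't shown above, so the following is only a hedged reconstruction of the idea: pass the getters to Object.create as property descriptors in its second argument, so the prototype and its getters are built in a single call. The xhr getter is used purely for illustration.

const http = require('http');

// Request prototype with its getters supplied up front via the descriptor map
const req = Object.create(http.IncomingMessage.prototype, {
  xhr: {
    configurable: true,
    enumerable: true,
    get() {
      // Node lowercases incoming header names
      var val = this.headers['x-requested-with'] || '';
      return val.toLowerCase() === 'xmlhttprequest';
    }
  }
  // ...further getters would be listed here in the same way
});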
nigrosimone commented on Sep 30, 2024
Benchmark: defineGetter vs. Object.create.
Result:
defineGetter: 950,260 ops/s
Object.create: 305,490 ops/s
defineGetter wins (3x FASTER)!
andrehrferreira commented on Sep 30, 2024
@tjdav From what I've seen, the problem is precisely that the extension is built on the request object, which already has many properties and functions. Object.create and Object.defineProperty probably perform some kind of validation that ends up weighing down the process. That's why the alternative used by Koa and Fastify is to keep req and res as a direct property and define getters that read from that property.
nigrosimone commented on Sep 30, 2024
@tjdav & @andrehrferreira
With Object.defineProperty, the engine (Node/V8) has to handle the possibility of custom getters, setters, and property descriptors, which can prevent certain optimizations that would otherwise make property access faster. For example, consider the difference between these two methods:
Direct assignment: 4,937,680 ops/s
Object.defineProperty: 1,191,010 ops/s
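The snippet behind those numbers isn't reproduced above, so here is only a rough sketch of that kind of microbenchmark; the loop count and timing approach are assumptions, and absolute numbers will vary by machine and Node version.

const { performance } = require('node:perf_hooks');

const N = 1_000_000;

// Direct assignment
let t0 = performance.now();
for (let i = 0; i < N; i++) {
  const obj = {};
  obj.foo = i;
}
let t1 = performance.now();

// Object.defineProperty with a full descriptor
for (let i = 0; i < N; i++) {
  const obj = {};
  Object.defineProperty(obj, 'foo', {
    configurable: true,
    enumerable: true,
    writable: true,
    value: i
  });
}
let t2 = performance.now();

console.log('direct assignment:    ', Math.round(N / (t1 - t0) * 1000), 'ops/s');
console.log('Object.defineProperty:', Math.round(N / (t2 - t1) * 1000), 'ops/s');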
What I don't understand is why some functions are assigned to the request with direct assignment (e.g. https://github.com/expressjs/express/blob/5.0/lib/request.js#L257) while others use Object.defineProperty.
See https://humanwhocodes.com/blog/2015/11/performance-implication-object-defineproperty/ and https://v8.dev/blog/fast-properties
Using direct assignment makes everything a bit faster.
cesco69 commented on Oct 1, 2024
Hi, I have tried (in PR expressjs/express#6004) to extend the request and set the getters without Object.defineProperty. All 1227 tests are passing.
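The diff itself isn't shown here, so the sketch below is only a generic illustration of declaring getters with literal get syntax instead of Object.defineProperty calls; it is not necessarily what PR #6004 does.

const http = require('http');

// Generic illustration: getters declared with literal syntax on a prototype object
// that inherits from http.IncomingMessage
const req = {
  __proto__: http.IncomingMessage.prototype,
  get xhr() {
    var val = this.headers['x-requested-with'] || '';
    return val.toLowerCase() === 'xmlhttprequest';
  }
};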
I'm on Windows and the Express benchmark only works on Linux. Can someone run this for me and show me whether my PR improves performance?
faulpeltz commented on Oct 1, 2024
Run on Linux Mint 22, on a Ryzen 3900X, with Node 22.9.0
The default benchmark only runs for 3 seconds per run, which gave very inconsistent results; this is with 30 seconds per run.
Compared versions:
left 3 columns: vanilla express from expressjs/express:master
right 2 columns: your patch commit 8e3c005
Unfortunately there were no significant changes:

Flame chart using flame with default settings and one 30sec run with 10 middleware, 100 conn

andrehrferreira commented on Oct 1, 2024
I'm testing my project using the same parameters and functions that exist in Express and the result is well balanced. I'll leave it here for reference. Obviously, it needs to be adapted to the reality of Express, which uses pure JavaScript.
https://github.com/andrehrferreira/cmmv-server/blob/main/packages/server/lib/request.ts
https://github.com/andrehrferreira/cmmv-server/blob/main/packages/server/lib/response.ts
It is not yet 100% implemented but preliminary results have been:
https://github.com/andrehrferreira/cmmv-server/blob/main/tools/benchmarks/benchmarks-allservers.js
cesco69 commented on Oct 1, 2024
@andrehrferreira Wow! cmmv is really close to fastify!
@faulpeltz thanks!
andrehrferreira commented on Oct 1, 2024
@cesco69 I honestly want Express to solve the performance problem so that I don't have to use my own project; it's a lot of work. But since my focus is SEO and keeping the application's TTFB low, I need request latency to be as low as possible.
IamLizu commented on Oct 1, 2024
Interesting. I was looking at the router module the other day, and I think using something smarter than a linear-search approach for route matching might make it a little bit faster.
Maybe it could improve the benchmarks for requests handled per second, but I am skeptical.
cc: @wesleytodd
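To illustrate the idea of avoiding a purely linear scan, here is a toy sketch; the names and structure are made up for illustration and have nothing to do with Express's actual router code.

// Exact-match static routes go in a Map for O(1) lookup; only parameterized
// routes fall back to the linear scan.
const staticRoutes = new Map();   // e.g. 'GET /users' -> handler
const dynamicRoutes = [];         // e.g. [{ method, regexp, handler }]

function match(method, path) {
  const exact = staticRoutes.get(method + ' ' + path);
  if (exact) return exact;

  for (const route of dynamicRoutes) {
    if (route.method === method && route.regexp.test(path)) return route.handler;
  }
  return null;
}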
40 remaining items
eddyw commented on Feb 15, 2025
The biggest performance problem in Express isn't defineProperty, it's Object.setPrototypeOf. Changing the prototype of an object is by its nature one of the fastest ways to de-optimize a program, as MDN's documentation warns prominently.
You can increase Express throughput and performance by roughly ~300% just by not doing Object.setPrototypeOf in the handler. Instead, you could require the user to create the http server providing the Express request & response constructors, kinda like the sketch below; that way there is no need to mutate the prototype, because the http(s) server itself creates instances of the Express requests/responses.
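A hedged sketch of that shape follows (the actual PoC is in the PR linked below; the class and getter names here are illustrative). Node's http.createServer accepts custom IncomingMessage/ServerResponse classes, so the server constructs the framework's request/response objects directly instead of the framework mutating prototypes per request.

const http = require('http');

class ExpressRequest extends http.IncomingMessage {
  get xhr() {
    const val = this.headers['x-requested-with'] || '';
    return val.toLowerCase() === 'xmlhttprequest';
  }
  // ...the rest of the request getters/helpers would live here
}

class ExpressResponse extends http.ServerResponse {
  // ...the response helpers would live here
}

const handler = (req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ xhr: req.xhr }));
};

http
  .createServer({ IncomingMessage: ExpressRequest, ServerResponse: ExpressResponse }, handler)
  .listen(3000);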
PoC (>280% performance boost in Node.js):
https://github.com/eddyw/express/pull/1/files
Benchmarks
Run on MacBook Pro M4 Max, Node.js v22.10
imho v5 shouldn't port this mistake of mutating the prototype from v4.
andrehrferreira commented on Feb 16, 2025
@eddyw
Hi friend, how are you?
Can I make a suggestion? Make a PoC for testing and use the Fastify benchmarks: https://github.com/fastify/benchmarks. I have been doing this to be more certain of the effective change in performance compared to other frameworks, since the baseline for acceptance is an HTTP server based on plain Node HTTP. I believe this will help support acceptance of a future PR for the project. There is a group being formed to deal with this topic, and this kind of consideration is very relevant to that discussion.
eddyw commented on Feb 16, 2025
Hi @andrehrferreira
Sure, I ran the fastify benchmarks on Node v23.8.0.
andrehrferreira commented on Feb 16, 2025
@eddyw
I was taking a look at your implementation. Even though it fixes the problem in a different way than the one I implemented, a good part of the tests break with this change, which makes it unfeasible to submit a PR with this suggestion. To check, run npm run test on your change and you will see that dozens of the application's expected behaviors are lost. Anyway, thank you very much for the tests =)
cc: @UlisesGascon @wesleytodd
eddyw commented on Feb 16, 2025
@andrehrferreira
Yes, my PR isn't ready. It's a PoC I put together so this discussion can happen.
I'm not sure how to proceed because it's not possible to make it 100% backwards compatible. For instance, there are these cases where an app can be mounted and it again calls 'setPrototypeOf' to mutate and restore.
In Koa, you have this "context" instance. My proposal is to do the same, but by extending the http IncomingMessage/ServerResponse classes and passing them down to other mounted apps rather than mutating the prototype, so it can't be 100% backwards compatible.
wesleytodd commented on Feb 17, 2025
Both of these things are bottlenecks and both need to go, so I don't think there is a need to be too critical about which is worse. The main things we should be concerned with are the semver implications of the changes and how easy it will be to land them. That is why I have been waiting to engage too deeply on this work: I think we are unlikely to land it before we start v6 preparation.
slagiewka commented on May 8, 2025
It looks to me like Express got 3x faster. Just by using node v24.0.0 (or V8 13.6 in general?). I still don't believe it, despite rerunning benchmarks multiple times. You're welcome to run these yourself 😅
This is running on M1 Pro.
slagiewka commented on May 8, 2025
I'm including a comparison to the changes made by @eddyw. It seems like the benefits of avoiding setPrototypeOf are gone; the engine change might be what improved all of this, since it doesn't get any faster when setPrototypeOf is avoided.
Snafuh commented on Jun 3, 2025
I can confirm your results @slagiewka
About a 3x improvement in the benchmark running on WSL, Node 22.13.0 vs 24.1.0.
wesleytodd commented on Jun 4, 2025
I transferred this over to our Perf Working Group repo. If anyone in this thread would like to help get this going, please take a look at the other issues here and start to pitch in. The next WG meeting is on Wednesday next week. We will talk about two topics:
cesco69 commented on Jun 5, 2025
There are some well-known public benchmarks comparing Express with other Node.js web servers. However, some implementations of Express used in these benchmarks are outdated — they rely on old versions of Node.js and Express itself, which results in unfairly poor performance outcomes.
I believe the Perf Working Group repo could help maintain and update these benchmarks to better reflect Express's actual performance and promote more accurate results.
Here are some of the benchmarks I’m referring to:
wesleytodd commented on Jun 5, 2025
We discussed this on the initial call and documented here that we do not intend to benchmark against other projects. Not only do other projects already do that (as you have pointed out), but we also do not think there is added value in it for Express.js users or maintainers. Instead we will strictly be benchmarking (and load testing) across versions of our own libraries.
All that above said, I think that this would be great to include in the WG plans. We can use our own benchmarks to show how those are incorrectly configured. Then we can work toward getting them fixed.
Would you be willing to make your comment over on this issue instead? That way we can include it in the WG charter.