
Fail to reproduce benchmark results with a bigger payload #178

Closed
abenhamdine opened this issue Sep 7, 2017 · 6 comments
Labels
performances (Everything related to performances) · question (General question about the project)

Comments

@abenhamdine
Contributor

abenhamdine commented Sep 7, 2017

Hi, and congrats on this new project!

However, I fail to see any significant performance gain from this module over Express:

express 4.15.4

npm run test-perf

> frameworks-benchmark@0.0.0 test-perf C:\node-projects\frameworks-benchmark
> autocannon -c 100 -d 5 -p 10 localhost:3000/big && autocannon -c 100 -d 5 -p 10 localhost:3000/small

Running 5s test @ http://localhost:3000/big
100 connections with 10 pipelining factor

Stat         Avg    Stdev   Max
Latency (ms) 171.68 540.17  3275
Req/Sec      473.2  17.48   497
Bytes/Sec    81 MB  3.14 MB 88.1 MB

2k requests in 5s, 410 MB read
Running 5s test @ http://localhost:3000/small
100 connections with 10 pipelining factor

Stat         Avg     Stdev   Max
Latency (ms) 9.17    33.28   757
Req/Sec      10700.4 2661.33 12487
Bytes/Sec    10 MB   2.49 MB 12.1 MB

53k requests in 5s, 50.1 MB read

fastify 0.26.2

$ npm run test-perf

> frameworks-benchmark@0.0.0 test-perf C:\node-projects\frameworks-benchmark
> autocannon -c 100 -d 5 -p 10 localhost:3000/big && autocannon -c 100 -d 5 -p 10 localhost:3000/small

Running 5s test @ http://localhost:3000/big
100 connections with 10 pipelining factor

Stat         Avg     Stdev   Max
Latency (ms) 243.24  418.34  2218
Req/Sec      358     257.2   645
Bytes/Sec    62.3 MB 44.7 MB 113 MB

2k requests in 5s, 310 MB read
Running 5s test @ http://localhost:3000/small
100 connections with 10 pipelining factor

Stat         Avg     Stdev   Max
Latency (ms) 7.54    38.47   935
Req/Sec      12670.4 5699.75 16319
Bytes/Sec    10.8 MB 4.87 MB 14.2 MB

63k requests in 5s, 54.2 MB read

PC: Intel(R) Core i7-6500U CPU @ 2.50 GHz (2.59 GHz), 8.00 GB RAM
OS : Windows 10

My sample code is minimal and exactly the same as https://github.com/fastify/fastify/blob/master/benchmarks/express.js
and differs only in the amount of data sent back:

  • the first test ('big') sends a fairly large JSON payload (354 KB)
  • the second test ('small') sends only 2 KB of JSON data.

The two JSON data files are loaded into memory with require() at the top of the module.
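For context, a payload in the spirit of the 'big' test can be sketched as follows. This is a hypothetical stand-in (field names are borrowed from the schema shown later in this thread; the values and row count are made up, chosen only to land in the same few-hundred-KB ballpark):

```javascript
// Hypothetical stand-in for data/dataBig.json: an array of row objects
// whose serialized size is a few hundred KB, comparable to the 354 KB payload.
const row = {
  sal_id: 1, sal_prenom: 'Jean', sal_nom_usage: 'Dupont',
  sal_nom_famille: 'Dupont', sal_nom_matricule: 'M0001',
  pag_id: 7, nb_contracts: 3, last_cnt_desc: 'CDI'
}
const dataBig = Array.from({ length: 2500 }, () => ({ ...row }))

// At this size, JSON serialization dominates the cost of each request.
console.log(Buffer.byteLength(JSON.stringify(dataBig)) > 300 * 1024) // true: well over 300 KB
```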

What do you think of these figures?

@allevo
Member

allevo commented Sep 7, 2017

Hi!
Thank you for trying fastify!

I tried to run the suggested benchmarks, but I'm having some trouble reproducing them.

ExpressJs code: see here

Fastify code:

'use strict'

const fastify = require('../')()

fastify.get('/', function (req, reply) {
  reply.send({ hello: 'world' })
})

fastify.listen(3000, function (err) {
  if (err) throw err
  console.log('Listen at localhost:3000')
})

ExpressJs result

$ autocannon -c 100 -d 5 -p 10 localhost:3000/ 
Running 5s test @ http://localhost:3000/
100 connections with 10 pipelining factor

Stat         Avg     Stdev   Max     
Latency (ms) 16.03   48.43   335     
Req/Sec      6120.4  159.24  6319    
Bytes/Sec    1.41 MB 41.4 kB 1.51 MB 

31k requests in 5s, 7.01 MB read

Fastify result

$ autocannon -c 100 -d 5 -p 10 localhost:3000/ 
Running 5s test @ http://localhost:3000/
100 connections with 10 pipelining factor

Stat         Avg     Stdev   Max     
Latency (ms) 8.98    16.11   197     
Req/Sec      10711.2 569.49  11543   
Bytes/Sec    1.59 MB 86.9 kB 1.77 MB 

54k requests in 5s, 7.98 MB read

Node.js 6.9.4
npm 3.10.10
Fastify commit ae092f9

Can you share the exact code you're benchmarking?
In particular the code with the big response body.
Which Node.js version are you using?

We care about performance so any reports are welcome!

@abenhamdine
Contributor Author

Thanks for your quick response!
I've created a repo with the code and more details about the versions used: https://github.com/abenhamdine/frameworks-benchmark

@mcollina
Member

mcollina commented Sep 8, 2017

Confirmed: the bottleneck in the big case is JSON.stringify, and how we treat the content before sending it. The problem stems from the fact that fastify sanitizes the payload to avoid circular references before sending, using https://github.com/davidmarkclements/fast-safe-stringify.

IMHO this should not happen, and we should also disable it in fast-json-stringify when used with this library. We should throw an exception if there are circular dependencies.

I'll prepare a PR.
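To illustrate the trade-off being discussed (a minimal sketch; fast-safe-stringify's actual implementation differs): plain JSON.stringify throws on circular references, while a "safe" variant pays for a replacer call on every value in order to substitute a placeholder, even when the payload is acyclic.

```javascript
// Naive safe stringifier: tracks visited objects and replaces cycles.
// Note: this toy also flags repeated non-circular references; it is only
// an illustration of the per-value overhead, not fast-safe-stringify itself.
function safeStringify(obj) {
  const seen = new WeakSet()
  return JSON.stringify(obj, function (key, value) {
    if (typeof value === 'object' && value !== null) {
      if (seen.has(value)) return '[Circular]'
      seen.add(value)
    }
    return value
  })
}

const a = { hello: 'world' }
a.self = a

// Plain JSON.stringify throws on cycles:
let threw = false
try { JSON.stringify(a) } catch (e) { threw = true }
console.log(threw) // true

// The safe variant replaces the cycle instead, at the cost of running the
// replacer for every value even on payloads with no cycles at all:
console.log(safeStringify(a)) // {"hello":"world","self":"[Circular]"}
```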

Given that the overall performance of the "big" case is limited by JSON.stringify, I would recommend trying to specify a schema, so that fast-json-stringify can be used. You should get better results with this patch:

diff --git a/fastify.js b/fastify.js
index 8071910..e92b8ef 100644
--- a/fastify.js
+++ b/fastify.js
@@ -1,8 +1,73 @@
 const dataBig = require('./data/dataBig');
 const dataSmall = require('./data/dataSmall');
 const fastify = require('fastify')();
+
+const optsBig = {
+  schema: {
+    response: {
+      200: {
+        type: 'array',
+        items: {
+          type: 'object',
+          properties: {
+            type: {
+              type: 'string'
+            },
+            tid: {
+              type: 'number'
+            },
+            action: {
+              type: 'string'
+            },
+            method: {
+              type: 'string'
+            },
+            result: {
+              type: 'object',
+              properties: {
+                data: {
+                  type: 'array',
+                  items: {
+                    type: 'object',
+                    properties: {
+                      sal_id: {
+                        type: 'number'
+                      },
+                      sal_prenom: {
+                        type: 'string'
+                      },
+                      sal_nom_usage: {
+                        type: 'string'
+                      },
+                      sal_nom_famille: {
+                        type: 'string'
+                      },
+                      sal_nom_matricule: {
+                        type: 'string'
+                      },
+                      pag_id: {
+                        type: 'number'
+                      },
+                      nb_contracts: {
+                        type: 'number'
+                      },
+                      last_cnt_desc: {
+                        type: 'string'
+                      }
+                    }
+                  }
+                }
+              }
+            }
+          }
+        }
+      }
+    }
+  }
+}
+
 // Declare a route
-fastify.get('/big', function(_req, res) {
+fastify.get('/big', optsBig, function(_req, res) {
     res.send(dataBig);
 });
 fastify.get('/small', function(_req, res) {
@@ -15,4 +80,4 @@ fastify.get('/hello', function(_req, res) {
 fastify.listen(3000, function(err) {
     if (err) { throw err; }
     console.log(`server listening on ${fastify.server.address().port}`);
-});
\ No newline at end of file
+});
diff --git a/package.json b/package.json
index 286155d..7eeac84 100644
--- a/package.json
+++ b/package.json
@@ -14,4 +14,4 @@
     "fastify": "^0.26.2",
     "uws": "^8.14.1"
   }
-}
\ No newline at end of file
+}
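To see why a schema helps, here is a toy version of the idea behind fast-json-stringify (an illustration only, not its actual code generation): when the response shape is known ahead of time, a serializer can be generated once that concatenates only the declared properties, skipping the generic traversal JSON.stringify must perform on every request.

```javascript
// Toy schema-driven serializer generator. Supports only a flat object of
// string/number properties, for illustration.
function compile(schema) {
  const parts = Object.entries(schema.properties).map(([key, def]) =>
    def.type === 'string'
      ? `'"${key}":' + JSON.stringify(obj.${key})`
      : `'"${key}":' + obj.${key}`
  )
  // Generate a function that builds the JSON string directly.
  return new Function('obj', `return '{' + ${parts.join(" + ',' + ")} + '}'`)
}

const serialize = compile({
  type: 'object',
  properties: { hello: { type: 'string' }, tid: { type: 'number' } }
})

// Undeclared properties are simply not serialized:
console.log(serialize({ hello: 'world', tid: 42, extra: 'dropped' }))
// {"hello":"world","tid":42}
```

fast-json-stringify likewise serializes only the properties declared in the schema, as the toy does here, which is worth keeping in mind when adding schemas to existing routes.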

@allevo
Member

allevo commented Sep 8, 2017

See reference here

@abenhamdine
Contributor Author

abenhamdine commented Sep 8, 2017

Thanks for investigating!

Indeed, specifying a schema for the 'big' test, I see a 25% performance increase over the no-schema version, and fastify then matches Express performance:

fastify

====== NO SCHEMA ======
Running 5s test @ http://localhost:3000/big
100 connections

Stat         Avg     Stdev   Max
Latency (ms) 216.26  16.48   246
Req/Sec      453.2   11.91   465
Bytes/Sec    76.8 MB 1.68 MB 79.7 MB

2k requests in 5s, 387 MB read

====== WITH SCHEMA ======
Running 5s test @ http://localhost:3000/bigschema
100 connections

Stat         Avg     Stdev   Max
Latency (ms) 169.94  11.74   196
Req/Sec      580     13.85   600
Bytes/Sec    69.4 MB 2.34 MB 75.5 MB

3k requests in 5s, 345 MB read

express

Running 5s test @ http://localhost:3000/big
100 connections

Stat         Avg     Stdev   Max
Latency (ms) 174.82  30.09   239
Req/Sec      561.21  5.6     567
Bytes/Sec    96.9 MB 2.05 MB 101 MB

3k requests in 5s, 479 MB read

@abenhamdine
Contributor Author

abenhamdine commented Sep 8, 2017

IMHO this should not happen, and we should also disable it in fast-json-stringify when used with this library. We should throw an exception if there are circular dependencies.

Good idea; circular references are not a common case.

Another caveat is having to define the schema and pass it to the route.
In our application, the responses are generated dynamically, based on a data model passed by the client (it's a custom API that mixes REST and GraphQL-like queries). So it's not possible to know the schema before the route is called (the same route could use different schemas).
It would be possible for us to generate the schema dynamically, once the route is hit, but then I don't see how to set the options inside the route.

Is there something like this?

```js
fastify.get('/big', function(req, res) {
  // build options dynamically
  const options = myFunctionWhichGenerateOptionsAccordingToRequestParams(req)
  // use options on the response
  res.setFastifyOptionsToBeAbleToUseSchema(options)
  res.send(dataBig);
});
```

question moved to #190

@delvedor added the "question" label Sep 8, 2017
@abenhamdine changed the title from "Fail to reproduce benchmark results" to "Fail to reproduce benchmark results with a bigger payload" Sep 10, 2017
@delvedor added the "performances" label Sep 10, 2017