
Investigate faf results on Plaintext #7402

Closed · sebastienros opened this issue Jun 7, 2022 · 48 comments

@sebastienros (Contributor)

It's currently over 8M RPS, which is above the theoretical limit of the network.

@sebastienros (Contributor, Author)

I think the date is empty as the variable is never initialized: https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/frameworks/Rust/faf/src/main.rs#L43

The initialization seems to have been removed in this PR: https://github.com/TechEmpower/FrameworkBenchmarks/pull/6523/files#diff-e828d6a53c980b3107862cfa2530ba397b76f56d2a0bb3d555465a6f6217f4feL52-L53

That would explain the network bandwidth difference.

@sebastienros (Contributor, Author) commented Jun 7, 2022

/cc @errantmind

@sumeetchhetri (Contributor)

Please look at the source for the framework here: the date value is passed along from the main event loop, and the date header validation does exist in the tfb toolset.

@sebastienros (Contributor, Author)

Thanks for confirming. Do you know how it's getting over the "physical" limits then? If not, I will check the full payload it's sending so I can understand.

@fafhrd91 (Contributor) commented Jun 9, 2022

could process priority affect throughput?

@sebastienros (Contributor, Author)

@fafhrd91 historically, all plaintext results have been capped at 7M RPS because, when you look at the minimum request size and the network packets over a 10Gb/s network, it can't be faster than that.

This was confirmed by:

  • stripping the /plaintext url to /p (which triggered a new rule in the specs)
  • using smaller Server: ... headers (which also triggered a new rule in the specs)
  • using a faster network card (in our Microsoft lab), which improved all plaintext results for the top frameworks (up to 14M RPS) without changing anything else.

I am not excluding any magic trick though, even at the network level, which could be a totally fine explanation.

@billywhizz (Contributor) commented Jun 15, 2022

@sebastienros @fafhrd91 @sumeetchhetri @errantmind

i've done a little investigation here and it seems each faf request from wrk is 87 bytes and each response back is 126 bytes.

so, if we assume for argument's sake an average ethernet packet size of 8192 bytes, that would allow 64 responses per packet plus 66 bytes of protocol overhead for TCP/IP/Ethernet, giving us an overhead of ~1 byte per response, for an overall average (including protocol overhead) of 127 bytes per response.

if we divide 10,000,000,000 (10Gb) bits by (127 * 8 = 1016) bits we get a theoretical max throughput of 9,842,519 responses per second on a 10Gb ethernet interface.
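
as a quick sketch, here's that arithmetic replayed in code (nothing here is measured - the 8192 byte packet, 126 byte response and 66 bytes of framing are the assumptions stated above):

// sketch of the arithmetic above: amortise the per-packet framing across the
// responses that fit in one packet, then divide the link rate by the result.
fn main() {
    let link_bps = 10_000_000_000_f64; // 10Gb ethernet
    let response = 126_f64;            // bytes per faf response
    let packet = 8192_f64;             // assumed average packet size
    let overhead = 66_f64;             // Eth/IP/TCP framing per packet

    let per_packet = ((packet - overhead) / response).floor(); // ~64 responses per packet
    let effective = response + overhead / per_packet;          // ~127 bytes per response
    let max_rps = link_bps / (effective * 8.0);
    println!("{per_packet} responses/packet, {effective:.2} B each, max ~{max_rps:.0}/sec");
    // prints roughly 9.84m responses/sec, in line with the figure above
}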

so, i think it is possible that the faf solution, due to its extremely low-level design and focus on shaving every possible microsecond off the server-side processing overhead, could be significantly faster than all the other frameworks, no? it does seem to skew the results on the TE charts though, so maybe it should be tagged as stripped instead of realistic?
[screenshot]

i can also confirm that the responses adhere to the rules and the date field does change at least every second.

@billywhizz (Contributor) commented Jun 16, 2022

@sebastienros are my calculations above correct? as far as i understand, they assume full duplex communication between the web server and the benchmark server, so that would mean a theoretical throughput of 10Gb up and 10Gb down simultaneously, yes? if not, then i can't understand how we could even reach 7m RPS. 🤷‍♂️

@errantmind (Contributor)

I didn't see this right away, so sorry for the delay.

There is no single secret that accounts for the difference in performance between faf and other projects. Each component in faf has been hand optimized to some degree.

If anyone is extra curious they can test each component individually against what they have. The source code is a little dense but has decent separation-of-concerns.

@billywhizz (Contributor)

@errantmind i will take a look. i find rust syntax so horrific to look at though. i'll see if i can put together something in plain old C that would achieve the same throughput.

@sebastienros @nbrady-techempower can anyone confirm re. my questions/calculations above? i had thought, as sebastien asserted above, that 7m RPS was some kind of hard limit based on the hardware but it seems not. 🤷‍♂️

@sebastienros (Contributor, Author)

Sorry, haven't had time to look into it, but I had done the same calculation 3 years ago (we recorded it I think) and we confirmed the 7M. I had looked at wireshark traces too because we didn't understand why all results capped at 7M. Then we ordered new NICs (40gb/s) and magically the benchmarks became much faster. All these benchmarks that are currently at 7M reach 10M+ when using a faster card, with the same hardware (we own the same physical machines as TE), so I don't think faf is just faster because it's "more" optimized than other fast frameworks. But until proven otherwise this is what it is, and I totally understand that there are some tests to validate the results. Hope to find some time to investigate more soon.

@sebastienros (Contributor, Author)

@billywhizz

> if we assume for argument's sake an average ethernet packet size of 8192 bytes

MTU is 1500 on these NICs, including the ones we use.

@billywhizz (Contributor)

@sebastienros ok, so if my quick calculation is correct that would make it a theoretical max RPS of ~9.4m.

each packet = (11 * 126) + 66 = 1452 bytes = 11616 bits
10bn bits / 11616 = 860881 packets
860881 packets * 11 = 9,469,696 RPS

assuming full duplex and that the downstream link can use the full 10Gb. does that sound correct?

@sebastienros (Contributor, Author)

I took some tcp dumps and all looks fine. I also compared with ULib (C++), which used to be the fastest on our faster network.

The important things I noted are:

  • The header Server: F vs Server: ULib, which saves 3 bytes.
  • The packets are smaller for ULib, so it's more common to see its responses split across two packets than with FAF.
  • All pipelined requests fit in one packet of 2720B for both, even though the MTU is 1500. I read there are some network optimizations that make this possible.

But something you should re-calculate is that the requests are 170B, so these should be the network bottleneck (if there is one):

From the client each packet is 2720B (16 requests of 170B) plus the preamble (66B) which makes 2786B to send 16 requests.
The NICs are full-duplex so 10Gb up AND down.
10^9/2786 = 358937 groups of 16 requests per second
358937 * 16 = 5,742,992 RPS

And this assumes there are no lost/empty packets (and there are some).

@billywhizz (Contributor)

@sebastienros

thanks for getting back. that's odd - when i run locally and do a single http request, the http payload is only 126 bytes. not sure where 170 bytes is coming from. thanks for confirming re. full duplex. i think my calculation is correct then, if we can resolve the 126 bytes versus 170 bytes you are seeing for the http payload?
[screenshot]

@sebastienros (Contributor, Author)

I am talking about the request payload, not the response.

@billywhizz (Contributor)

@sebastienros
ah sorry - i missed that. yes - the request payload when running with the 'Connection' and 'Accept' headers in tfb is indeed bigger - i had assumed it would be smaller than the response. sorry for the confusion. 🙏

the numbers above are a little off. it works out as a max of ~7m requests per second, as we had initially assumed.

[screenshot]

so, we're no closer to having an answer then? 🤔 it must be 🪄 😄

@billywhizz (Contributor)

if you wanted to, you could improve the baseline significantly by removing the redundant 'Connection: keep-alive' header in the request - in HTTP/1.1, keep-alive is assumed in the absence of this header.

you could also drop the 'Accept' header altogether. removing these two headers would knock 120 bytes off each request - they are most likely ignored anyway by most if not all the frameworks in the test. 🤷‍♂️

@errantmind (Contributor) commented Jun 17, 2022

Did we ever get an up-to-date hard confirmation that each (pipelined) request received is 2720 bytes? It has been a while since I've done any testing, but I have a recollection the requests were a lot smaller than that. Either way, can you provide the exact bytes you are seeing in each pipelined request?

@billywhizz (Contributor) commented Jun 18, 2022

@errantmind

i just ran the pipelined test locally and you can see each pipelined http request is 165 bytes. on my machine the packets all seem to be 2706 bytes - 2640 payload and 66 bytes for Eth/IP/TCP preamble - so 16 http requests per packet as expected for pipeline level of 16.
[screenshot]

so, that gives us a theoretical max requests per second of (10,000,000,000 / (2706 * 8)) * 16 = 7,390,983, excluding other protocol overhead for TCP (e.g. 40 bytes for the ACK on each packet), so likely ~7.2/7.3m requests per second once that overhead is included.
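
as a sketch of that sum (same numbers as this comment - 165 byte requests, pipeline depth 16, 66 bytes of framing, a 10Gb link; ACK traffic isn't modelled):

// request-side bound: how many pipelined requests fit down a 10Gb link when each
// packet carries 16 requests of 165 bytes plus 66 bytes of Eth/IP/TCP framing.
fn main() {
    let link_bps = 10_000_000_000_f64;
    let depth = 16_f64;     // pipeline level
    let request = 165_f64;  // bytes per request
    let overhead = 66_f64;  // framing per packet

    let packet_bits = (depth * request + overhead) * 8.0; // 2706 B -> 21648 bits
    let max_rps = link_bps / packet_bits * depth;
    println!("max ~{max_rps:.0} requests/sec"); // ~7.39m before ACK overhead
}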

@sebastienros (Contributor, Author)

> so likely ~7.2/7.3m requests per second

That's the same initial conclusion we came to a few years ago (sorry, I forgot about bits vs. bytes above).
Which is why I am skeptical about the numbers; the limit is also validated by the results of more than 10 other frameworks.

But from what I have seen so far there is nothing abnormal with FAF; this is getting very interesting.

@errantmind (Contributor)

@billywhizz Interesting, thank you for posting the info.

I'm going to have to think about this for a while. I've gone through the code multiple times today and tested it in various ways, and I don't see any issues there, so I'm at a bit of a loss. Although not totally relevant, I'm also using FaF in a couple of production environments that serve a decent chunk of traffic, and I have not noticed any issues over the months it has been in use.

@billywhizz (Contributor)

@errantmind

yes - i think both @sebastienros and i are in agreement that nothing seems to be wrong with the code or with what we are seeing on the wire. i'm wondering if it could be some kind of issue/bug with wrk and how it's counting pipelined responses? i'm kinda stumped tbh. 😕

@sebastienros (Contributor, Author)

There is one thing that is different for faf. I assume most frameworks use port 8080, but faf is using 8089. @errantmind would you like to submit a PR that makes it use 8080 instead? I believe it's only in 3 locations (main.rs, the Dockerfile EXPOSE, and benchmark_config.json). There is no reason it would change anything, right? But wrk ... who knows.

@billywhizz (Contributor)

@errantmind @sebastienros

one thing i am thinking on here.

  • we have established that max rps from client to server is ~7.2m
  • we have established that max rps from server to client is ~9.4m

so, i think it might be possible that faf, due to a bug, is writing more responses than the requests it receives, and that wrk terminates early when it receives the expected number of responses to the requests it has sent (some of which could actually still be in flight, or possibly not even sent yet, depending on how wrk "works" internally). this seems to me the only thing that could explain the numbers we are seeing. there's no system level "trick" afaict that would allow sending more than ~7.2m requests per second if the numbers we discussed above are correct.

does that make sense?

i'll see if i can reproduce this and/or have a dig into the source code of wrk to see if it's possible/likely.
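
for the record, this is roughly the kind of counting proxy i have in mind - a sketch only (std-only rust; the ports and the match strings are assumptions, and a "GET " split across two reads would be missed), sitting between wrk and the server and tallying requests vs responses:

// rough sketch (not the actual test setup): a pass-through TCP proxy that counts
// "GET " going upstream and "HTTP/1.1 200" coming back so the totals can be compared.
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::{thread, time::Duration};

fn pump(mut from: TcpStream, mut to: TcpStream, needle: &'static [u8], hits: Arc<AtomicU64>) {
    let mut buf = [0u8; 65536];
    loop {
        match from.read(&mut buf) {
            Ok(0) | Err(_) => return,
            Ok(n) => {
                // naive count: a needle split across two reads is not counted
                let count = buf[..n].windows(needle.len()).filter(|w| *w == needle).count();
                hits.fetch_add(count as u64, Ordering::Relaxed);
                if to.write_all(&buf[..n]).is_err() {
                    return;
                }
            }
        }
    }
}

fn main() -> std::io::Result<()> {
    let reqs = Arc::new(AtomicU64::new(0));
    let resps = Arc::new(AtomicU64::new(0));
    {
        // print both counters once a second
        let (reqs, resps) = (reqs.clone(), resps.clone());
        thread::spawn(move || loop {
            thread::sleep(Duration::from_secs(1));
            println!(
                "requests={} responses={}",
                reqs.load(Ordering::Relaxed),
                resps.load(Ordering::Relaxed)
            );
        });
    }
    let listener = TcpListener::bind("0.0.0.0:9000")?; // point wrk at this port (assumption)
    for client in listener.incoming() {
        let client = client?;
        let upstream = TcpStream::connect("127.0.0.1:8089")?; // faf's port, per this thread
        let (c2, u2) = (client.try_clone()?, upstream.try_clone()?);
        let (r, s) = (reqs.clone(), resps.clone());
        thread::spawn(move || pump(client, upstream, b"GET ", r)); // client -> server
        thread::spawn(move || pump(u2, c2, b"HTTP/1.1 200", s));   // server -> client
    }
    Ok(())
}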

@errantmind (Contributor)

@billywhizz let me know what you find. If there is a bug, then I'll submit a PR to fix it. If not, I'll change the port.

@billywhizz (Contributor)

@errantmind @sebastienros

i did some testing here with a proxy between wrk and faf and i always see the same number of responses sent out of faf as requests received, so it doesn't look like my theory is correct. 🤷‍♂️

@errantmind (Contributor) commented Jun 21, 2022

I'm happy there doesn't appear to be a bug from your tests, but I'd still like an explanation of why it exceeds the calculated limits described earlier.

It seems like it is either an even harder-to-replicate bug, or something is off with the calculation, e.g. the official tfb benchmark isn't sending as many bytes in each request as appears in the repo... or something along those lines.

Of course, it could be something else entirely as well. I don't have time at the moment to mount a full-scale investigation of my own, but if anyone eventually provides a full explanation, it will be appreciated, whatever the reason is.

kant2002 added a commit to kant2002/FrameworkBenchmarks that referenced this issue on Jul 5, 2022: "I decide to implement this after this fascinating read. TechEmpower#7402"
@kant2002 (Contributor) commented Jul 5, 2022

I did look at wrk, and it seems the counting of requests per thread could be a problem.
Reading data from a request could have issues if record_rate were called with some interleaving:
https://github.com/wg/wrk/blob/a211dd5a7050b1f9e8a9870b95513060e72ac4a0/src/wrk.c#L273-L289
as well as request counting
https://github.com/wg/wrk/blob/a211dd5a7050b1f9e8a9870b95513060e72ac4a0/src/wrk.c#L330-L331

For example, stats_record itself is protected from race conditions:
https://github.com/wg/wrk/blob/a211dd5a7050b1f9e8a9870b95513060e72ac4a0/src/stats.c#L22-L31

From what I understand, if that were the case then RPS would increase (which is strange). I can prepare a fix for wrk if that's appropriate, but I have no means to run the TFB suite to validate the changes.

@bhauer (Contributor) commented Jul 5, 2022

As I posted on Twitter in reply to Egor Bogatov, my first thought is that there may be an error either in the composition of responses or in the way wrk counts responses, as @kant2002 discusses above.

I'd recommend reviewing the total received bytes reported by wrk versus the expected total received size of N responses multiplied by M bytes per response. My hunch is that these two values won't align.

@kant2002 (Contributor) commented Jul 6, 2022

I'm looking at the numbers which wrk displays for local testing on my very old laptop. Admittedly not the best target for testing.
What I observe in the output:

Running 15s test @ http://tfb-server:8089/plaintext
  4 threads and 4096 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.20ms   12.84ms 127.56ms   56.39%
    Req/Sec   177.73k    64.61k  309.83k    60.81%
  Latency Distribution
     50%   36.82ms
     75%   53.86ms
     90%    0.00us
     99%    0.00us
  10375504 requests in 15.09s, 1.22GB read
  Socket errors: connect 0, read 0, write 5024, timeout 0
Requests/sec: 687757.44
Transfer/sec:     82.64MB
STARTTIME 1657083529
ENDTIME 1657083544

I cannot connect these numbers in any meaningful way: the request count, the total execution time, the overall requests per second, and the per-thread requests per second. All the calculations seem to be off.

10375504 requests in 15.09s means that Requests/sec is 10_375_504 / 15.09 = 687_574.818, which differs significantly from the reported 687_757.44 (183 RPS missing somehow).
10375504 requests in 15.09s also means that the average per-thread RPS is 10_375_504 / (15.09 * 4) = 171_893.704, which is significantly less than the 177.73k reported by wrk.
If I naively connect the reported Requests/sec of 687757.44 to the average per-thread RPS: 687_757.44 / 4 = 171_938.61.

I might attribute the number differences to https://github.com/wg/wrk/blob/a211dd5a7050b1f9e8a9870b95513060e72ac4a0/src/wrk.c#L277-L278

but I would like somebody to find a flaw in my reasoning, or point out an error in my interpretation of the wrk results.

@franz1981 (Contributor) commented Jul 6, 2022

@kant2002 does the Requests/sec metric include the time required to receive the responses as well, or does it just consider the "test duration time"?

@joanhey (Contributor) commented Jul 6, 2022

In reality no framework reaches 7M req/s, only faf with 8.6M.

If we check these numbers with such precision, we must know where they come from.

TechEmpower toolset numbers

Historically, the numbers in the benchmark graphs are a bit higher than in reality, but this affects all frameworks similarly.
wrk returns the Requests/sec, but it is different from the numbers shown in the charts.

Why?

wrk's req/s also includes the non-2xx or 3xx responses and socket errors (in plaintext with pipelining).
We want only the correct req/s, but wrk doesn't have that option.
An nginx can return 500s or 502s very fast with a bad app behind it.

So they take the total number of requests, subtract the errors, and divide by 15 seconds to get the chart numbers:
(total requests - errors) / 15 seconds
If you check the results.json, only totalRequests and 5xx (if there are errors) exist, not the req/s from wrk.

It seems correct, but wrk never runs for exactly 15 seconds (roughly 15.00-15.14s).

[screenshot]
The STARTTIME and ENDTIME are only cosmetic and not used; they have one-second precision only.

Diff

When there are no errors, it is easy to see the difference.

| Framework | TechEmpower req/s | Real wrk req/s | Total req / 15 s | Link |
| --- | --- | --- | --- | --- |
| Faf | 8,672,182 | 8,614,646.29 | 130,082,744/15 = 8,672,182.93 | link |
| Pico.v | 7,028,146 | 6,981,694.59 | 105,422,192/15 = 7,028,146.13 | link |
| may-minihttp | 7,023,484 | 6,977,067.30 | 105,352,272/15 = 7,023,484.8 | link |
| asp.net core [platform] | 7,023,107 | 6,976,690.42 | 105,346,616/15 = 7,023,107.73 | link |

Wrk

@kant2002 About the numbers returned by wrk: you need to calculate them in another way.
(Total requests / Requests/sec) = seconds (rounded)

10375504 / 687757.44 = 15.08599310826794, and rounded is 15.09s.

@kant2002 (Contributor) commented Jul 7, 2022

@joanhey I did find yesterday that the bottom numbers are really only slightly different because of display precision, but I was too tired to get back. Still, the difference between the total RPS and the per-thread RPS is not clear to me. The total test duration is the time between starting the first thread and the last thread processing its last request.

Also, if you follow the links above, wrk reports for all frameworks that it received ~12GB in ~15 sec, while FAF received ~15GB of data, and I would like to examine that. wrk seems to report only the amount of received data and not the amount of sent data, even though it collects that. I become suspicious when the numbers don't add up under direct calculation.

Okay, let's take a look at the reply from @billywhizz: #7402 (comment)

In the screenshot, the size of the response is 149 bytes and not 165, so that leads to 16*149+66 = 2450 for a pipeline level of 16, and thus 10_000_000_000/(2450*8)*16 = 8_163_265.30612245, which seems closer to the numbers we are all seeing. And if network cards somehow used 1024 as the base instead of 1000, we would get 10*1024*1024*1024/(2450*8)*16 = 8_765_239.37959184. The last calculation is pure speculation, but it might explain things.

@kant2002 (Contributor) commented Jul 7, 2022

It seems I should not do anything too early in the morning. Ignore my last message, except for the part about per-thread RPS. I do not understand why that number is bigger than total RPS / thread count.

@joanhey (Contributor) commented Jul 7, 2022

@kant2002
You will get more information directly from the wrk repo.
wg/wrk#259 (comment)

@billywhizz (Contributor)

one thing i have found when testing with wrk is that the number of requests it reports is literally never the actual number of requests the server received. from the tests i have done it is undercounting by ~3%. i'll see if i can spend some time over the next week or two investigating this issue further - it's definitely a weird one!
[screenshot]

this script is useful for seeing RPS across threads, but it seems to undercount requests in the same way, so something unusual seems to be happening inside wrk itself.

local counter = 1
local threads = {}

--wrk.method = "PUT"
--wrk.headers["Content-Type"] = "application/json"

-- give each wrk thread an id and keep a handle to it for the summary
function setup(thread)
  thread:set("id", counter)
  table.insert(threads, thread)
  counter = counter + 1
end

-- tally responses per status code in this thread
response = function(status, headers, body)
  if statuses[status] == nil then
    statuses[status] = 1
  else
    statuses[status] = statuses[status] + 1
  end
end

-- at the end of the run, print per-thread status counts and latency percentiles
done = function(summary, latency, requests)
  for index, thread in ipairs(threads) do
    local statuses = thread:get("statuses")
    local id = thread:get("id")
    for key, value in pairs(statuses) do
      io.write(string.format("Thread: %d, %s: %d\n", id, key, value))
    end
  end
  for _, p in pairs({ 50, 75, 90, 95, 99, 99.999 }) do
    n = latency:percentile(p)
    io.write(string.format("%g%%,%d\n", p, n))
  end
end

-- build the pipelined request body: the first script argument is the pipeline depth
init = function(args)
  statuses = {}
  local r = {}
  local depth = tonumber(args[1]) or 1
  for i=1,depth do
    r[i] = wrk.format()
  end
  req = table.concat(r)
end

request = function()
  return req
end

@errantmind (Contributor) commented Jul 17, 2022

General FYI: I am actively looking into this today as it is annoying not to know what is going on here. I have a few suspicions and will post an update here today/tomorrow with whatever I find.

In the meantime, I ask that FaF be excluded from this round of the official results until we have an answer here. @nbrady-techempower

NateBrady23 pushed a commit that referenced this issue on Jul 25, 2022: "I decide to implement this after this fascinating read. #7402"
@errantmind (Contributor)

Sorry, I had some other work come up that I had to prioritize.

It appears there was a fairly niche bug after all. I patched the code and will write an explanation after the next continuous run completes.

@errantmind (Contributor) commented Nov 6, 2022

I broke my build in the process of upgrading between Rust versions so I'll have to wait before seeing the updated results, but I'll go ahead and explain the bug.

The bug affected HTTP 1.1 pipelined requests outside of local networks, in situations where the pipelined requests exceeded the network interface's MTU. In short, FaF was sending more responses than there were requests.

On Linux, by default, loopback interfaces often use a very high MTU. Because of this, the bug did not manifest over loopback, which made it harder to pinpoint initially; only after I set my loopback MTU to a more standard 1500 could I reproduce it.

As the benchmark was sending nearly double the MTU in bytes as pipelined requests, the socket reads usually contained a partial request at the end of the segment (totalling 1500 bytes). FaF was not updating a pointer into its read buffer where it needed to, and was responding to the 9 complete pipelined requests, then responding again to the full 16 requests after reading the remaining data from the socket in a subsequent loop. wrk was counting all of these responses as part of the results. I wrote some simple tests to ensure FaF is now behaving as expected, so I'm pretty confident the issue is resolved.
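
To illustrate, here is a simplified sketch (not the actual FaF code): respond only to the complete requests currently in the buffer and carry the partial tail over into the next read; losing that offset is what produced the extra responses.

// simplified sketch of the failure mode described above (not the actual FaF code):
// respond once per *complete* pipelined request and keep the partial tail for the
// next read. dropping or resetting `consumed` is what answers requests twice.
fn on_read(buf: &mut Vec<u8>, incoming: &[u8], mut respond: impl FnMut()) {
    buf.extend_from_slice(incoming);
    let mut consumed = 0;
    // a "request" here is anything terminated by an empty line (good enough for GET)
    while let Some(end) = find(&buf[consumed..], b"\r\n\r\n") {
        consumed += end + 4;
        respond();
    }
    buf.drain(..consumed); // keep only the partial tail
}

fn find(haystack: &[u8], needle: &[u8]) -> Option<usize> {
    haystack.windows(needle.len()).position(|w| w == needle)
}

fn main() {
    // 16 pipelined 165-byte requests, delivered as a 1500-byte read plus the
    // remainder, as in the MTU-1500 scenario above.
    let mut request = vec![b'x'; 161];
    request.extend_from_slice(b"\r\n\r\n"); // 165 bytes, ending in an empty line
    let pipeline: Vec<u8> = request.iter().copied().cycle().take(165 * 16).collect();
    let (first, rest) = pipeline.split_at(1500);

    let mut buf = Vec::new();
    let mut responses = 0;
    on_read(&mut buf, first, || responses += 1);
    assert_eq!(responses, 9);  // only 9 complete requests fit in the first 1500 bytes
    on_read(&mut buf, rest, || responses += 1);
    assert_eq!(responses, 16); // the remaining 7 complete once the tail arrives
    println!("responses sent: {responses}");
}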

@joanhey (Contributor) commented Nov 15, 2022

I said it before, in issue #6967, but it's still an issue and not a rule.

And I say it again with this PR: #7701

For me it is NOT realistic, and also not fair play.

@joanhey (Contributor) commented Nov 15, 2022

We need to add more tests.

@joanhey (Contributor) commented Nov 15, 2022

All the frameworks could check only the first char of the route, and not the exact route.
Also, all of them could use nice -20.

@errantmind (Contributor) commented Nov 15, 2022

I am checking the full route, and have been for a very long time now. You appear to be looking at an older commit.

It is pretty strange to me that you are fixated on setting process priority, as this keeps popping up. It is neither unusual nor unrealistic, as scheduling has an effect on performance, and this is a benchmark. Even if it weren't a benchmark, it wouldn't be unusual for any latency-sensitive application (e.g. audio, games, etc.).

Please refer to the following for the current commit and diff:

https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Rust/faf

https://github.com/TechEmpower/FrameworkBenchmarks/pull/7701/files

@volyrique (Contributor)

@sebastienros We now have 3 runs in which faf has behaved as expected (i.e. in line with the other top performers), and a detailed explanation of the issue by @errantmind, so shall we consider the matter settled?

@sebastienros (Contributor, Author) commented Dec 10, 2022

Absolutely, we should close the issue and put faf back on the board. Another great TechEmpower story.

@remittor (Contributor) commented Mar 24, 2023

@sebastienros

> It's currently over 8M RPS, which is above the theoretical limit of the network.

Server: HP Z6 G4, dual Xeon Gold 5120 (Turbo Boost off), 10Gbps network link, Debian 11, Python 3.9
Framework: libreactor (built from the manual: link)

Client: Intel 12700K (20 threads), Net link 10Gbps, Debian 11

$ wrk -t20 -c512 -d15 -H 'Connection: keep-alive' http://10.12.0.2:8080/plaintext -s pipeline.lua -- 16
Running 15s test @ http://10.12.0.2:8080/plaintext
  20 threads and 512 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   540.07us  326.54us  11.18ms   66.87%
    Req/Sec   429.86k    25.02k  674.46k    80.56%
  129135108 requests in 15.10s, 16.24GB read
Requests/sec: 8552362.62
Transfer/sec:      1.08GB

On the server, 28 cores were loaded at ~25-30%.
Result: 8552362 req/sec

$ curl -w '\n%{size_header} %{size_download}\n' http://10.12.0.2:8080/plaintext
Hello, World!
122 13

122 + 13 = 135 bytes
135 * 129135108 = 17433239580 bytes => (17433239580 * 8) / (1000 * 1000 * 1000) = 139.46591664 Gbit
139.46591664 / 15.1 = 9.236 Gbps


Bonus: Net 25Gbps

$ wrk -t20 -c512 -d15 -H 'Connection: keep-alive' http://10.13.0.2:8080/plaintext -s pipeline.lua -- 16
Running 15s test @ http://10.13.0.2:8080/plaintext
  20 threads and 512 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   208.87us  167.10us  12.36ms   83.52%
    Req/Sec     1.07M    50.41k    1.53M    70.88%
  322932770 requests in 15.10s, 40.60GB read
Requests/sec: 21387110.40
Transfer/sec:      2.69GB

@remittor (Contributor)

@joanhey

> Faf | 8,672,182 | 8,614,646.29 | 130,082,744/15 = 8,672,182.93

And why do the TFB scripts themselves calculate the speed by dividing by a hardcoded constant of 15?

For example, what's the difference if we take the number of requests and time from my test:
129135108 / 15.0 = 8609007 req/sec (FAKE result)
129135108 / 15.1 = 8551993 req/sec
