Added go-fasthttp benchmark #14

Merged
merged 1 commit into squeaky-pl:master on Feb 2, 2017

Conversation

@heppu (Contributor) commented Feb 1, 2017

A Go benchmark using fasthttp, which was created for the same purpose as japronto.
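
For context, a minimal fasthttp handler looks roughly like the sketch below. This is an illustration, not necessarily the exact code added in this PR; it assumes github.com/valyala/fasthttp and mirrors the small JSON payload used elsewhere in this thread.

package main

import (
	"log"

	"github.com/valyala/fasthttp"
)

// handler writes a small JSON body, similar to the payload used in the
// japronto example later in this thread.
func handler(ctx *fasthttp.RequestCtx) {
	ctx.SetContentType("application/json")
	ctx.SetBodyString(`{"Hello": 1}`)
}

func main() {
	// Like any Go program, this uses all logical CPUs unless GOMAXPROCS is limited.
	if err := fasthttp.ListenAndServe(":8080", handler); err != nil {
		log.Fatalf("error in ListenAndServe: %v", err)
	}
}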

@squeaky-pl (Owner) commented Feb 1, 2017

Cool, fasthttp has been mentioned several times on Reddit and it is supposedly much faster than the stdlib. I wish I had known about it before.

@heppu (Contributor) commented Feb 1, 2017

OK, this is interesting. On my machine fasthttp was over 5x faster than Go's own HTTP library. Japronto was 1.17x faster than fasthttp, but I'm not sure the Python setup was correct, so I would like you to confirm that test. =)

@heppu (Contributor) commented Feb 1, 2017

If I don't limit GOMAXPROCS, which no one would do in a real-life use case, the fasthttp version is actually a little faster than japronto on my laptop.
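
For reference, limiting the Go side works roughly like the sketch below. This is an illustration, not the PR's actual benchmark setup; GOMAXPROCS can equally be set through the environment variable of the same name without touching the code.

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Pin the Go scheduler to a single core, mirroring a single japronto worker.
	prev := runtime.GOMAXPROCS(1) // returns the previous setting
	fmt.Printf("GOMAXPROCS set to 1 (was %d, NumCPU=%d)\n", prev, runtime.NumCPU())

	// ... start the fasthttp server here ...
}

Since Go 1.5 the default equals the number of logical CPUs, so leaving it unset is the "full power" case mentioned above.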

@frnkvieira commented Feb 1, 2017

@heppu are you using multiple workers with japronto as well? I would like to see a "real world" battle with unlimited cores between Go and Japronto...

@heppu (Contributor) commented Feb 1, 2017

@jeanfrancisroy I'm not familiar enough with Python to know the best way to set up multiple workers with japronto. I come from the Go world, so we get the full power by default 😉
If you can tell me how to do that, I can test it.

@heppu (Contributor) commented Feb 1, 2017

I would also like to mention that the Transfer/sec rate was 2x greater with fasthttp than with japronto.

@frnkvieira commented Feb 1, 2017

@heppu Not sure if this is indeed the best way, but it gets a lot faster on my machine (full CPU core usage):

import multiprocessing
import japronto
import ujson


# Handler: respond with a small JSON body serialized by ujson.
def home(request):
    return request.Response(ujson.dumps({'Hello': 1}))


if __name__ == '__main__':
    app = japronto.Application()
    app.router.add_route('/', home)
    # Run one japronto worker per logical CPU core for full core usage.
    app.run(worker_num=multiprocessing.cpu_count())

@heppu (Contributor) commented Feb 1, 2017

There seems to be quite a lot of variance in the results, so these are the median figures out of ten runs:

Setup                     Requests/sec   Transfer/sec
Japronto, 1 worker           536297.90        47.05MB
Japronto, 4 workers          614251.03        53.89MB
fasthttp, GOMAXPROCS=1       444446.32        62.31MB
fasthttp, GOMAXPROCS=4       522654.64        73.27MB

@squeaky-pl (Owner) commented Feb 1, 2017

@heppu you can have some luck minimizing variance by switching your CPU governor to a fixed frequency and sending SIGSTOP to all the noisy processes (usually browsers, Spotify, etc.). You can also completely isolate CPUs by tweaking your kernel boot parameters so the OS doesn't schedule on them, and then manually placing the workers on those CPUs. You might also want to disable Intel Turbo Boost - there is a bug in the kernel that makes it jitter CPUs in a weird way.
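
For the "manually placing workers on those CPUs" part, a minimal sketch is below, assuming a Linux host and the golang.org/x/sys/unix package; the CPU number is hypothetical and should match a core actually isolated via the isolcpus boot parameter.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const isolatedCPU = 3 // hypothetical: a core reserved with isolcpus=3 on the kernel command line

	var set unix.CPUSet
	set.Zero()
	set.Set(isolatedCPU)

	// Pid 0 means "the calling process".
	if err := unix.SchedSetaffinity(0, &set); err != nil {
		log.Fatalf("could not set CPU affinity: %v", err)
	}
	fmt.Printf("pid %d pinned to CPU %d\n", os.Getpid(), isolatedCPU)

	// ... start the benchmark worker here ...
}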

@heppu (Contributor) commented Feb 1, 2017

@squeaky-pl I think I'll leave that to the next person; I just wanted to see roughly how these compare, and it looks like fasthttp comes pretty close to japronto but is still not quite there =) Great work!

@squeaky-pl (Owner) commented Feb 1, 2017

@heppu thanks for your time as well. I am going to include fasthttp in the next benchmarking round. I know many areas in Japronto that could be faster; it's just that refactoring C is not fun at all and it's very, very fragile. I might actually investigate rewriting parts of Japronto in Rust.

@squeaky-pl squeaky-pl merged commit f3311fa into squeaky-pl:master Feb 2, 2017

@squeaky-pl squeaky-pl referenced this pull request Feb 2, 2017

Closed

Equalent go benchmark #11

@dom3k commented Mar 8, 2017

It should be noted that the response headers from the Go fasthttp/stdlib servers are about twice as large, because they include the current date (the Date header). By the way, good job :)

@rebootcode commented Sep 4, 2018

@squeaky-pl - can you update the benchmark chart on the homepage with the fasthttp benchmark too? It is very misleading at the moment.
