Add RAM Usage #381
@ohenepee Thanks for your idea; there are two things here:
They mean that the table that contains the header
Ah ok, a display problem, I see.
Is there any ETA for "running on cloud"? @waghanza What cloud specs are we looking at?
Excellent question @ohenepee.
I'm very enthusiastic about this 😃 . I would suggest that you add/use the lowest droplet (1 GB RAM, 1 vCPU, $5/mo), since most developers use that and don't go for full-blown 4-core / 8-core / 16-core droplets. This way we can all see how the frameworks perform on minimal resources.
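On the "add RAM usage" idea itself: one minimal, stdlib-only way to report memory is to sample the peak resident set size (RSS) after a workload. This is just a sketch of the approach, not this project's implementation; `peak_rss_bytes` is an illustrative helper name.

```python
# Sketch: report peak resident memory (RSS) using only the standard
# library. ru_maxrss is in kilobytes on Linux and bytes on macOS, so we
# normalize to bytes. Not this repo's actual measurement code.
import resource
import sys

def peak_rss_bytes() -> int:
    """Peak RSS of the current process, normalized to bytes."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # Linux reports kilobytes; macOS (darwin) reports bytes.
    return peak if sys.platform == "darwin" else peak * 1024

# Allocate and touch ~50 MB so the reported peak is visibly non-trivial.
payload = bytearray(50 * 1024 * 1024)
print(f"peak RSS: {peak_rss_bytes() / (1024 * 1024):.1f} MB")
```

For a framework benchmark, the same sampling would be done against the server process (e.g. via its PID) rather than the benchmarking script itself.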
Sure. The main idea is to create a benchmarking
Like omg, there is no parallelism on that whatsoever?! o.O
@OvermindDL1 I disagree... you are mixing up concurrency with parallelism; concurrency happens on single cores all the time. Also, the fact that most frameworks shine under high load is exactly why there's a need to benchmark on a low-spec server. In this context, performance has little to do with server specs; it has much more to do with how performant the framework is.
Parallelism* Which is what I meant* ^.^; Most frameworks shine under high load because of parallelism. Just pushing a simple "" back is not much of a test; endpoints that actually do work will make or break a site depending on how well they run in parallel.

Why do you think every single-core language, like say Ruby or Python, is put behind front-loaders that spawn multiple child processes to run the code? Doing that breaks their ability to share data properly. That's not much of an issue for sites programmed so they don't need it, but it means you lose shared atomic caching, or you have to offload to something like Redis or Memcached for state that doesn't need to be persisted, etc. That's why languages like Rust, C++, Go, Nim, and even the interpreted Elixir can absolutely blow away just about any other framework in actual real-world comparisons (under load Elixir can even beat C++/Go builds because of how it functions). Just pushing data over a socket is not testing a framework.
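The "endpoints that actually do work" point is easy to demonstrate: a handler doing real per-request work (serialize a result set, hash it) costs far more than an empty echo, so rankings measured on trivial endpoints can flip once endpoints do actual work. The handler names below are illustrative, not from any framework in this repo.

```python
# Sketch: why "push an empty string back" is a weak benchmark. Compare a
# trivial echo handler against one doing plausible per-request work.
import hashlib
import json
import timeit

ROW = {"id": 42, "name": "widget", "tags": ["a", "b", "c"]}

def echo_handler() -> str:
    return ""  # the "hello world" style endpoint many benchmarks measure

def work_handler() -> str:
    body = json.dumps([ROW] * 50)  # serialize a small result set
    # Hash the body, e.g. to produce an ETag-style header value.
    return hashlib.sha256(body.encode()).hexdigest()

N = 2000
echo_time = timeit.timeit(echo_handler, number=N)
work_time = timeit.timeit(work_handler, number=N)
print(f"echo: {echo_time:.4f}s  work: {work_time:.4f}s for {N} calls each")
```

The gap between the two numbers is exactly what a "hello world" benchmark hides.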
I get your point now... but while we're on the same topic, my point is about raw performance in a constrained environment (basically, not every dev has extra bucks to spend). Do you remember there was once a POPULAR tag on the 1 GB RAM / 1 vCPU droplet on the DO pricing page? I think Vultr even still has theirs. Did it ever occur to you why that one was popular? Not everyone is a startup; not everyone has extra budget for something they don't really need. Go fasthttp does fine serving 20K concurrent requests on a single-core server; that's over 100 million hits per day on a single core. WHAT IF you could have something that achieved 50K on that same server and still not max out CPU and RAM? Btw the said Go app is a production app doing real-world work.
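As a back-of-the-envelope check on those figures (assuming the 100M/day and 50K numbers are read as sustained request rates, which is my interpretation, not something stated precisely above):

```python
# Pure arithmetic: convert between hits/day and average requests/second.
SECONDS_PER_DAY = 24 * 60 * 60  # 86_400

hits_per_day = 100_000_000
avg_rps = hits_per_day / SECONDS_PER_DAY
print(f"100M hits/day ≈ {avg_rps:,.0f} requests/second on average")

# Conversely, sustaining 50,000 requests/second all day would mean:
daily_at_50k = 50_000 * SECONDS_PER_DAY
print(f"50K req/s sustained ≈ {daily_at_50k:,} hits/day")
```

Real traffic is bursty, of course, so peak capacity has to sit well above the daily average.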
Please add memory usage. There's also one more issue with the README.md: the Latency section has a title, but the Responses Per Second section doesn't.