
Add RAM Usage #381

Closed
ohenepee opened this issue Sep 18, 2018 · 12 comments

@ohenepee

Please add memory usage... and there's one more issue with the README.md... there's a title for the section Latency but no title for Responses Per Second

@waghanza
Copy link
Collaborator

waghanza commented Sep 18, 2018

@ohenepee Thanks for your idea. There are two things here:

  • memory => It could be done, but I want to work on running in the cloud first (though if you wish to contribute, I'd ❤️ that)
  • Responses per second: what do you mean?

@OvermindDL1
Collaborator

Responses per second: what do you mean?

They mean that the table containing the Requests / s and Throughput columns does not have a markdown ## section heading the way the Latency section above it does. :-)
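For illustration, the README fix would just be a heading above that table, something like the following (the exact heading text is an assumption, not taken from the actual README):

```markdown
## Latency
<!-- latency table -->

## Requests per second
<!-- table with the Requests / s and Throughput columns -->
```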

@waghanza
Collaborator

Ah OK, a display problem, I see.
@ohenepee thanks for the catch.

@ohenepee
Author

ohenepee commented Sep 27, 2018

Is there any ETA for the "running on cloud" feature? @waghanza

And what cloud specs are we looking at?

@waghanza
Collaborator

Excellent question @ohenepee.

  • I will target DigitalOcean as the cloud provider (since its primary target is developers)
  • I do not have a precise ETA for this, as I prefer taking the time to learn and build solid foundations. However, I plan to have this feature by the end of the year.

@ohenepee
Author

ohenepee commented Sep 27, 2018

I'm very enthusiastic about this 😃 . I would suggest that you add/use the lowest droplet (1 GB RAM, 1 vCPU, $5/mo), since most developers use that and don't go for full-blown 4-core / 8-core / 16-core droplets. This way we can all know how the frameworks perform on minimal resources.

@waghanza
Collaborator

Sure.

The main idea is to create a benchmarking tool with options: by default it would use the smallest droplets (to economize), while the result set displayed here would be produced automatically on an 8-core droplet (the cost would be covered by DigitalOcean themselves).

@OvermindDL1
Collaborator

OvermindDL1 commented Sep 27, 2018

1 GB RAM, 1 vCPU, $5/mo

Like omg, there is no parallelism on that whatsoever?! o.O
And if you can even run on such a thing at all, why do you care about performance? You would apparently have little load as it is... Most frameworks shine under high load, not low load...

@ohenepee
Author

@OvermindDL1 I disagree... you are mixing up concurrency with parallelism... concurrency happens on single cores all the time. Also, the fact that most frameworks shine under high load is exactly why there's a need to benchmark on a low-spec server. In this context, performance has little to do with server specs; it has more to do with how performant the framework itself is.

@OvermindDL1
Collaborator

Parallelism* Which is what I meant* ^.^;

Most frameworks shine under high load through parallelism.

Just pushing a simple empty response back is not much of a test; endpoints that actually do work will make or break a site based on how well they run in parallel. Why do you think every single-core language, like say Ruby or Python, is put behind front-loaders that spawn multiple child processes to run the code? That breaks their ability to share data properly. It's not as much of an issue for sites programmed in such a way as not to need that, but in doing so you can lose shared atomic caching, or you have to offload things that don't need to be persisted to something else such as Redis or memcached, etc... etc... Which is why languages like Rust, C++, Go, Nim, and even the interpreted Elixir can absolutely blow away just about any other framework in actual real-world comparisons (even under load, Elixir can blow away even C++/Go builds due to how it functions).

Just pushing data over a socket is not testing a framework.

@ohenepee
Author

ohenepee commented Sep 27, 2018

I get your point now... but while we're literally on the same topic, my point is about raw performance in a constrained environment (basically, not every dev has extra bucks to spend). Do you remember there was once a POPULAR tag on the 1 GB RAM / 1 vCPU droplet on the DO pricing page? I think Vultr even still has theirs. Did it ever occur to you why that one was popular? Not everyone is a startup; not everyone has extra bucks to budget for something they don't really need. Go's fasthttp does fine serving 20K concurrent requests on a single-core server; that's over 100 million hits per day on a single core... WHAT IF you could have something that achieved 50K on that same server and still not max out CPU and RAM? Btw the said Go app is a production app doing real-world work.

@kokizzu

kokizzu commented Jul 24, 2019

hi @waghanza, do you need a VPS? I have one container free for < 3 months.
the hypervisor specs are: 8x E5-2630 / 30 GB / 859.0 GB
contact me through Telegram @kokizzu if you need one.

Development

No branches or pull requests

4 participants