Re-implementation of the go scheduler #2

Closed
noonien opened this issue Jul 6, 2017 · 3 comments

noonien commented Jul 6, 2017

The Go goroutine scheduler already uses epoll/kqueue to decide when goroutines need to be scheduled.

Why is there a need to re-implement this? Am I missing something?

tidwall (Owner) commented Jul 6, 2017

It's not so much a reimplementation as a different approach.

While the standard Go net package also uses kqueue/epoll under the hood, it requires that you handle incoming connections by firing up one goroutine per connection. This concurrency is desirable in most cases, but it requires that shared data be protected from race conditions with a mutex, atomics, or perhaps channels.
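As an illustration of that model (a sketch only, not code from this project): the standard net package with one goroutine per accepted connection, and a sync.Mutex guarding a made-up shared counter. The port, the counter, and the echo behavior are all assumptions for the example.

```go
package main

import (
	"bufio"
	"log"
	"net"
	"sync"
)

// Shared state is touched by every connection goroutine, so it needs a lock.
var (
	mu    sync.Mutex
	conns int
)

func main() {
	ln, err := net.Listen("tcp", ":5000") // arbitrary port for the example
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go handle(conn) // one goroutine per connection: concurrent by default
	}
}

func handle(conn net.Conn) {
	defer conn.Close()

	mu.Lock()
	conns++
	mu.Unlock()

	// Echo lines back to the client.
	s := bufio.NewScanner(conn)
	for s.Scan() {
		conn.Write(append(s.Bytes(), '\n'))
	}

	mu.Lock()
	conns--
	mu.Unlock()
}
```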

The shiny package uses the Reactor pattern, where all events fire from the same goroutine that the server started on. There's no concurrency at all; it's totally single-threaded.

This single-threaded event model eliminates per-connection context switching and can often speed up execution. It makes Go a practical language for very lightweight network services such as proxies, load balancers, and caching engines; think HAProxy, Nginx, Redis, Memcached, etc.
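Purely as a sketch of that reactor idea, and not shiny's actual implementation: a rough, Linux-only event loop over the raw epoll syscalls, where accepting, reading, and writing all happen on the single goroutine that owns the loop, so shared state (the connection counter here) needs no locking. The port, buffer size, and echo behavior are assumptions for illustration.

```go
//go:build linux

package main

import (
	"log"
	"net"
	"syscall"
)

func main() {
	// Listen, then grab the raw file descriptor so it can be registered with epoll.
	ln, err := net.Listen("tcp", ":5000")
	if err != nil {
		log.Fatal(err)
	}
	f, err := ln.(*net.TCPListener).File()
	if err != nil {
		log.Fatal(err)
	}
	lfd := int(f.Fd())

	epfd, err := syscall.EpollCreate1(0)
	if err != nil {
		log.Fatal(err)
	}
	add := func(fd int) {
		ev := syscall.EpollEvent{Events: syscall.EPOLLIN, Fd: int32(fd)}
		if err := syscall.EpollCtl(epfd, syscall.EPOLL_CTL_ADD, fd, &ev); err != nil {
			log.Fatal(err)
		}
	}
	add(lfd)

	conns := 0 // shared state, but only this goroutine ever touches it: no mutex

	events := make([]syscall.EpollEvent, 64)
	buf := make([]byte, 4096)
	for {
		n, err := syscall.EpollWait(epfd, events, -1)
		if err != nil {
			continue // e.g. interrupted by a signal
		}
		for i := 0; i < n; i++ {
			fd := int(events[i].Fd)
			if fd == lfd {
				// Readiness on the listener means a connection is waiting to be accepted.
				nfd, _, err := syscall.Accept(lfd)
				if err != nil {
					continue
				}
				syscall.SetNonblock(nfd, true)
				add(nfd)
				conns++
				log.Printf("connections: %d", conns)
				continue
			}
			// Readiness on a client socket: read and echo back, all on this one goroutine.
			rn, err := syscall.Read(fd, buf)
			if err != nil || rn == 0 {
				syscall.Close(fd)
				conns--
				continue
			}
			syscall.Write(fd, buf[:rn])
		}
	}
}
```

The point of the sketch is only the shape of the loop: one epoll wait, one dispatch, no per-connection goroutines and no locks.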

My desire is to use Go for ultra-lightweight, appliance-like services that run well on hardware with limited resources.

noonien (Author) commented Jul 6, 2017

You're confusing concurrency with parallelism. The Go runtime does not schedule one goroutine per thread; a heavily concurrent Go program can run on a single thread with no issue.

Check out runtime.GOMAXPROCS.

Goroutines are made to be really lightweight; in fact, you can implement the reactor pattern using them and you would probably have the same performance. Of course, there's no way of knowing for sure without proper benchmarks for comparison.

tidwall (Owner) commented Jul 6, 2017

> You're confusing concurrency with parallelism. The Go runtime does not schedule one goroutine per thread; a heavily concurrent Go program can run on a single thread with no issue.

I'm not confusing the two; I understand the difference. I never mentioned that using goroutines equates to using more than one thread. I did say that using Shiny equates to using one thread.

> Check out runtime.GOMAXPROCS.

Setting GOMAXPROCS to 1 makes the entire Go application run on a single thread at a time. I'm not interested in making my services single-threaded; I'm interested in a single-threaded networking event model.
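As a tiny aside to make that distinction concrete (an illustrative snippet, not from this project): GOMAXPROCS caps how many OS threads execute Go code at once, but a goroutine-per-connection server still spawns one goroutine per client; they simply all multiplex onto that single thread.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Cap the runtime at one OS thread executing Go code at a time.
	// This does not change the net package's concurrency model:
	// each connection still gets its own goroutine, multiplexed onto that thread.
	prev := runtime.GOMAXPROCS(1)
	fmt.Println("previous GOMAXPROCS:", prev)
}
```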

> Goroutines are made to be really lightweight

True, they are made to be really lightweight.

> you can implement the reactor pattern using them

True, you could implement the pattern.

> and you would probably have the same performance

False. In the lab, Shiny is faster and more memory-efficient.

> Of course, there's no way of knowing for sure without proper benchmarks for comparison.

Of course there's a way to know. I know; I've been goofing with this stuff for quite a while, and if you want to know, you can goof with it too.

I have plenty of proper benchmarks for this (experimental, work-in-progress) project. If and when I consider it stable, I may add them to the repo.

tidwall closed this as completed Jul 6, 2017