
net: add mechanism to wait for readability on a TCPConn #15735

Open
bradfitz opened this issue May 18, 2016 · 119 comments
Labels
NeedsDecision: Feedback is required from experts, contributors, and/or the community before a change can be made. Thinking. v2: A language change or incompatible library change.
Milestone

Comments

@bradfitz
Contributor

bradfitz commented May 18, 2016

EDIT: this proposal has shifted. See #15735 (comment) below.

Old:

The net/http package needs a way to wait for readability on a TCPConn without actually reading from it. (See #15224)

http://golang.org/cl/22031 added such a mechanism, making a Read of 0 bytes wait for readability and then return (0, nil). But maybe that is strange. Windows already works like that, though. (See the new tests in that CL.)

Reconsider this for Go 1.8.

Maybe we could add a new method to TCPConn instead, like WaitRead.

@bradfitz bradfitz added this to the Go1.8 milestone May 18, 2016
@bradfitz bradfitz self-assigned this May 18, 2016
@bradfitz
Contributor Author

/cc @ianlancetaylor @rsc

@gopherbot

CL https://golang.org/cl/23227 mentions this issue.

gopherbot pushed a commit that referenced this issue May 19, 2016
Updates #15735

Change-Id: I42ab2345443bbaeaf935d683460fc2c941b7679c
Reviewed-on: https://go-review.googlesource.com/23227
Reviewed-by: Ian Lance Taylor <iant@golang.org>
gopherbot pushed a commit that referenced this issue May 19, 2016
Updates #15735.
Fixes #15741.

Change-Id: Ic4ad7e948e8c3ab5feffef89d7a37417f82722a1
Reviewed-on: https://go-review.googlesource.com/23199
Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
@RalphCorderoy

read(2) with a count of zero may be used to detect errors; the Linux man page confirms this, as does POSIX's read(3p). Mentioning it in case it bears on having a zero-byte Read that never calls syscall.Read.

@quentinmit quentinmit added the NeedsDecision Feedback is required from experts, contributors, and/or the community before a change can be made. label Oct 7, 2016
@bradfitz
Contributor Author

I found a way to do without this in net/http, so punting to Go 1.9.

@bradfitz bradfitz modified the milestones: Go1.9, Go1.8 Oct 21, 2016
@bradfitz
Contributor Author

Actually, the more I think about this, the less I even want my idle HTTP/RPC goroutines to stick around blocked in a read call. In addition to the backing array of the slice given to Read, the goroutine itself is ~4KB of wasted memory.

What I'd really like is a way to register a func() to run when my *net.TCPConn is readable (when a Read call wouldn't block). By analogy, I want the time.AfterFunc efficiency of running a func in a goroutine later, rather than running a goroutine just to block in a time.Sleep.

My new proposal is more like:

package net

// OnReadable runs f in a new goroutine when c is readable;
// that is, when a call to c.Read will not block.
func (c *TCPConn) OnReadable(f func()) {
   // ...
}

Yes, maybe this is getting dangerously into event-based programming land.

Or maybe just the name ("OnWhatever") is offensive. Maybe there's something better.

I would use this in http, http2, and grpc.

/cc @ianlancetaylor @rsc

@ianlancetaylor
Contributor

Sounds like you are getting close to #15021.

I'm worried that the existence of such a method will encourage people to start writing their code as callbacks rather than as straightforward goroutines.

@bradfitz
Contributor Author

Yeah. I'm conflicted. I see the benefits and the opportunity for overuse.

@dvyukov
Member

dvyukov commented Jan 6, 2017

If we do OnReadable(f func()), won't we need to fork half of the standard library for async style? The compress, io, tls, etc. readers all assume blocking style and require a blocked goroutine.
I don't see any way to push data asynchronously into e.g. gzip.Reader. Does this mean I have to choose between no blocked goroutine + my own gzip implementation, and a blocked goroutine + the standard library?

@dvyukov
Member

dvyukov commented Jan 6, 2017

Re 0-sized reads:
It should work with level-triggered notifications, but netpoll uses epoll in edge-triggered mode (and kqueue, iirc). I am concerned whether cl/22031 works in more complex cases: waiting on already-ready I/O, a double wait, waiting without completely draining the read buffer first, etc.

@bradfitz
Contributor Author

bradfitz commented Jan 6, 2017

@dvyukov, no, we would only use OnReadable in very high-level places, like the http1 and http2 servers where we know the conn is expected to be idle for long periods of time. The rest of the code underneath would remain in the blocking style.

@dvyukov
Member

dvyukov commented Jan 6, 2017

This looks like a half-measure. An http connection can halt in the middle of a request...

@bradfitz
Contributor Author

bradfitz commented Jan 6, 2017

@dvyukov, but not commonly. This would be an optimization for the common case.

@dvyukov
Member

dvyukov commented Jan 7, 2017

An alternative interface would be to register a channel that receives readiness notifications. The other camp wants this for packet-processing servers, where starting a goroutine for every packet would be too expensive. However, if at the end you want a goroutine anyway, the channel introduces unnecessary overhead.
A channel also has an overflow-handling problem: netpoll can't block on the send, but it is not OK to lose notifications either.
For completeness, this API should also handle writes.

@DemiMarie

We need to make sure that this works with Windows IOCP as well.

@rsc
Contributor

rsc commented Jan 10, 2017

Not obvious to me why the API has to handle writes. The thing about reads is that until the data is ready for reading, you can use the memory for other work. If you're waiting to write data, that memory is not reusable (otherwise you'd lose the data you are waiting to write).

@dvyukov
Member

dvyukov commented Jan 11, 2017

@rsc If we do just 0-sized reads, then write support is not necessary. However, if we do Brad's "My new proposal is more like": func (c *TCPConn) OnReadable(f func()), then this equally applies to writes as well -- to avoid 2 blocked goroutines per connection.

@noblehng

noblehng commented Feb 21, 2017

If memory usage is the concern, would it be possible to make a long-parked G use less memory instead of changing the programming style? One main selling point of Go, to me, is highly efficient network servers without resorting to callbacks.

Something like shrinking the stack, or moving it to the heap, done by the GC using some heuristics, would be little different memory-wise from spinning up a new goroutine per callback; and scheduling-wise, a callback is not much different from goready(). I assume the liveness changes in Go 1.8 could help here too.

As for the backing array: if it is a preallocated buffer, a callback doesn't make much difference compared to Read(); it may make some difference if the buffer is allocated per callback and drawn from a pool.

Edit:
Actually, we could keep a GC deadline or gopark time in runtime.pollDesc, so the GC could get a list of long-parked Gs from the poller and kick in; but more care is still needed to avoid races and keep it fast.

@noblehng

noblehng commented Feb 22, 2017

How about an epoll-like interface for net.Listener:

type PollableListener interface {
   net.Listener
   // Poll blocks until at least one connection is ready for read or write.
   // reads and writes are special net.Conns that will not block on EAGAIN.
   Poll() (reads []net.Conn, writes []net.Conn)
}

Then the caller of Poll() can have a small number of goroutines that poll for readiness and handle the reads and writes. This should also work well for packet-processing servers.

Note that this only needs to be implemented in the runtime for listeners that multiplex in the kernel, like net.TCPListener. Other protocols that multiplex in userspace and aren't attached to the runtime poller directly, like a UDP listener or streams multiplexed over a TCP connection, can be implemented outside the runtime. For example, for multiplexing within a TCP connection, the epoll-like behavior can be implemented by reading from/writing to buffers and then polling them, or by registering callbacks on buffer-size changes.

Edit:
To implement this, users of the runtime poller, such as socket and os.File, could provide a callback function pointer when opening the poller for an fd, to be notified of I/O readiness. The callback would look like:

type IOReadyNotify func(mode int32)

We would store this in runtime.pollDesc, and runtime.netpollready() would then also invoke this callback, if non-nil, besides handing out the pending goroutine(s).

@aajtodd

aajtodd commented Feb 27, 2017

I'm fairly new to Go, but the callback interface is a little grating given the blocking API exposed everywhere else. Why not expose a public API to the netpoll interfaces?

Go provides no standard public-facing event loop (correct me if I'm wrong). I need to wait for readability on external FFI sockets (obtained through cgo). It would be nice to reuse the existing netpoll abstraction for FFI sockets rather than having to wrap epoll/IOCP/select. Also, I'm guessing that wrapping (e.g.) epoll from the sys package does not integrate with the scheduler, which would also be a bummer.

@mjgarton
Contributor

For a number of my use cases, something like this :

package net

// Readable returns a channel which can be read from whenever a call to c.Read
// would not block.
func (c *TCPConn) Readable() <-chan struct{} {
        // ...
}

.. would be nice because I can select on it. I have no idea whether it's practical to implement this though.

Another alternative (for some of my cases at least) might be somehow enabling reads to be canceled by using a context.

@ianlancetaylor
Contributor

We know that it is non-blocking I/O, but the currently provided interfaces such as conn.Read/Write are blocking interfaces at the application layer.

As you probably know, they block the goroutine that calls them. They don't block any threads. Goroutines are cheap. Programs that need to avoid blocking goroutines are a special case. And as noted above, it is already possible to use non-blocking I/O in Go.

@lesismal

lesismal commented Sep 6, 2022

As you probably know, they block the goroutine that calls them. They don't block any threads. Goroutines are cheap. Programs that need to avoid blocking goroutines are a special case. And as noted above, it is already possible to use non-blocking I/O in Go.

Yes, I know that they don't block any threads.
But when using the standard library with millions of connections, the huge number of goroutines is not as cheap as when there are only thousands of connections. That's why there are many third-party libs doing this, as @ivanjaros listed:
#15735 (comment)

@lesismal

lesismal commented Sep 6, 2022

In a 1M-connections test:
Go needs at least 8GB of memory (8KB × 1M), and the CPU cost of both scheduling and GC is also huge, so we can't deploy such a service on a VM or hardware with low-to-middle specifications without hitting both OOM and STW problems.
Compared to other languages in this test, Go performs even worse than Java (Netty) and Node.js, with several times their memory cost, which is quite unexpected.

As a cloud-native language, Go will handle more and more connections in basic frameworks. The more we can reduce the number of goroutines, the more hardware, energy, and money we save.

Not joking, but sincerely 😂:
For environmental protection!
For the earth!

@lesismal

lesismal commented Sep 6, 2022

My opinion is that if Readable is provided, non-blocking Read/Write interfaces should be provided together with it; otherwise Readable will not be useful.

@ivanjaros

When PHP 7 came out a couple of years ago, Rasmus (PHP's author) gave a conference talk showing how much electricity the new version's better performance saved, due to lower demand on hardware and hence lower economic costs in general. So this is not as outlandish as it might seem. As pointed out above, the cost savings are already impressive (#15735 (comment)), and with a stdlib implementation we might get even better CPU performance.

@d2gr

d2gr commented Sep 12, 2022

All of the implementations listed above mostly use a reactor pattern, not a proactor pattern. See proactor pattern. (boost::asio is a proactor lib.)

If I am not mistaken, the proposal here is to add readability notification implemented with callbacks, which would give Go a kind of semi-proactor interface. In my opinion this is fine, and many people have been looking forward to it, but I think the main problem is going to be the callbacks.
In a high-frequency environment where we call OnReadable more than 1M times per second, we create a function object every time (because closures are structs), and it will be allocated on the heap because it escapes the stack.

I have implemented a library that uses a true proactor pattern (not open-sourced yet), and it performs well, but not better than standard Go, mainly because the heap allocations rise and fall every second (the callback problem above).
Libraries that implement a proactor pattern might improve TLS & HTTP/2 performance, given that the tls package has locks everywhere and HTTP/2 in Go is full of channels and mutexes (not criticism; both libraries are very well done).

Summary: if this issue goes forward and a PR is presented, are we going to see closures allocated on the stack? In C++, lambdas are just objects (like in Go), but they are moved from stack to stack; afaik they are never heap-allocated (unlike in Go).
I know such callbacks might not always hit the allocator, since mallocgc reuses available memory for objects under 32KiB, but that still reduces performance.

@ivanjaros

ivanjaros commented Sep 13, 2022

I don't think we need callbacks. I think something like the following would do:

type HServer struct {
	addr       string
	conns      map[net.Conn]struct{}
	ctx        context.Context
	addQueue   chan net.Conn
	closeQueue chan net.Conn
}

func (s *HServer) Run() {
	lis, _ := net.Listen("tcp", s.addr)

	go s.loop()

	for {
		if conn, err := lis.Accept(); err == nil {
			s.addQueue <- conn
		}
	}
}

func (s *HServer) loop() {
	for {
		select {
		case <-s.ctx.Done():
			for conn := range s.conns {
				conn.Close()
			}
			s.conns = nil
			return
		case conn := <-s.addQueue:
			s.conns[conn] = struct{}{}
		case conn := <-s.closeQueue:
			conn.Close()
			delete(s.conns, conn)
		default:
			for conn := range s.conns {
				if conn.CanRead() { // <-- this is what it's all about, right?
					go s.handle(conn)
				}
			}
		}
	}
}

func (s *HServer) handle(conn net.Conn) {
	buf := bytes.NewBuffer(nil)
	if _, err := buf.ReadFrom(conn); err != nil {
		println(err.Error())
		s.closeQueue <- conn
		return
	}
	// ... do something with the data
}

This is the same pattern I used in https://github.com/ivanjaros/ijlibs/blob/master/notif/notif.go that I have used for online chat and notifications system(ie. no mutexes but a single loop that handles all state).

@d2gr

d2gr commented Sep 13, 2022

I don't think we need callbacks. I think the following would do:

...

Why would you do that, even in an example? You don't want to gate your handler like that. In your code:

for conn := range s.conns {
	if conn.CanRead() { // <-- this is what it's all about, right?
		go s.handle(conn)
	}
}

conn.CanRead gates the handling, so you don't handle the conn immediately; you handle it only after some number of loop iterations, wasting CPU polling the channels and then falling through the default case.

But that's where Go gets slow. One channel is fine; if you are polling multiple channels and expect the software to be latency-sensitive, you are not going to achieve your goals. That's why libraries like fasthttp or evio were created: fasthttp uses standard Go but replaces the http package, and evio is a replacement for net (with a reactor pattern). The problem people want to solve is handling as many connections as possible with the lowest possible latency, and in standard Go that's not possible, mainly because you have a goroutine per client (maybe two: one for reading and another for writing).

The idea is to have a proactor pattern, so:

func broadcast(msg []byte, conns []net.Conn) {
  for _, c := range conns {
    c.OnWritable(func() {
      c.Write(msg)
    })
  }
}

But the problem with the above is that c.Write might block. Say msg is 65536 bytes and the OS buffer can only hold 1024 bytes right now: we write 1024 bytes and then... what? Some Go libraries fail with an ErrShortWrite error (see bufio), so c.Write(msg) must block, unless you can ask it not to block with something like c.WriteNonblock(msg), but that would require a lot of changes.

Given the above, and my previous comment, I think implementing this in standard Go would be difficult. Not difficult exactly, but it would require many changes to the interfaces and existing data structures. It'd be nice to have a proactor-pattern library supported by the Go standard library that uses Go's data structures, because they can make use of compiler directives to avoid heap storage (see this). So suggesting a package like net/proactor wouldn't be that crazy, I think. I'd happily work on it.

@ivanjaros

ivanjaros commented Sep 13, 2022

You can simply put continue into the default case to check the channels again once you have handled all connections. But that was just an example. As for handling: the main point is to use a goroutine only when there is data on the connection to be handled, rather than dedicating a goroutine to each connection. So if you have 100k connections but only 100 are active at any one moment, that saves 99,900 goroutines × 2KB (minimum) = 199,800 KB, roughly 200 MB of memory, plus the context switching. Not to mention that you can pool the handler goroutines so you don't constantly create and destroy them. If you have 4 CPU threads, one thread can run the loop and the remaining three can process the connections, which would give the best performance since the main loop would have a "dedicated" thread.

@lesismal

@ivanjaros

If conn.CanRead() is a blocking interface, one connection's blocking makes all the other connections wait in the loop, even those that are already readable.
If conn.CanRead() is a non-blocking interface, that for loop will spin at 100% CPU.

@ivanjaros

true

@ivanjaros

ivanjaros commented Sep 13, 2022

...although you could have a connection-collection primitive that handles this internally and works in a channel-like manner. Handling a single connection could keep working as a blocking Read() plus a non-blocking CanRead(), so neither would break the existing API, and the collection would be a new primitive extending the net package: it would let you add and remove connections and block, with a timeout, until one of the connections is readable, in which case it returns that connection for reading.
...just thinking out loud here.

@d2gr

d2gr commented Sep 13, 2022

..although you could have have a connection collection primitive that would handle this internally and would work in a channel-like manner. so handling a single connection could work as blocking Read() and non-blocking CanRead(), neither would break existing api, and this new collection would be a new primitive extending the net package and which would allow to add and remove connection and block, with timeout, until one of the connections is readable, in which case it would return it for reading. ...just thinking out loud here.

Too complex. You don't want some things explicitly blocking and others implicitly blocking, even if the docs make the distinction. IMO people want two different approaches: a more proactor-style one that lets a single goroutine handle multiple connections, and the one that already exists (a connection per goroutine, plain net). But the problems presented above might be a blocker.

@ivanjaros

Either way, I just wanted to keep the conversation going, since I'd like to write a chat server in the near future and this type of functionality will be a must, so I will have to use one of the libraries I listed above, though I would prefer this to be in the stdlib.

Also, I bet a proxy like Traefik would get a nice performance gain from this as well. I mean, gnet matches the performance of the fastest code out there in C/C++/Rust/..., so the performance is there in Go; it just needs to be unlocked.

@lesismal

lesismal commented Sep 14, 2022

I mean, gnet equals in performance the fastest code out there in c/cpp/rust/... so the performance is there, in Go

@ivanjaros
That's not right.
For HTTP, gnet uses a simple parser that does not implement the full HTTP protocol, so its test code costs much less CPU than a full-featured HTTP server. Here is the testing code: parser, response encoder.
It seems gnet is not the only one that tests like this; many frameworks enter github.com/TechEmpower/FrameworkBenchmarks the same way. Those are not real performance reports, and they mislead many people about different frameworks' performance.

As we know, Go can't match C/C++/Rust performance in most scenarios, but it can get close in some, such as I/O; and most importantly, goroutines and channels make code easier to write.

@d2gr

d2gr commented Sep 14, 2022

Also I bet proxy like Traefik would get a nice performance gain from this as well. I mean, gnet equals in performance the fastest code out there in c/cpp/rust/... so the performance is there, in Go. It just needs to be unlocked.

@ivanjaros
Yeah, but the problem with frameworks like gnet or evio is that they are hard to use in production. There may be cases where you can use them, even with codecs, but they are too complicated for serious environments. A proactor-pattern library would be more suitable (like boost::asio).

Standard Go is quite performant; just look at fasthttp, which is built on standard Go. The only problem with Go is the amount of synchronization it requires. That's why I said that a proactor pattern might be faster for TLS & HTTP/2, or for any protocol that uses streams instead of a single stateless connection.

@d2gr

d2gr commented Sep 14, 2022

As we know, golang can't get the same performance as c/cpp/rust in most scenarios, but can get near to c/cpp/rust in some scenarios such as IO, and the most important is: goroutine and chan make us write code easier.

@lesismal
I mean... Go is quite fast. I benchmarked fasthttp vs boost::beast quite a lot (on AWS, two c5n.4xlarge, client and server in a placement group): Go handled 200K QPS below 5ms (100th percentile), while boost::beast did 200K sometimes below 4.8ms and other times below 5.5ms. So there's a lot of variance in boost::beast, whereas Go gave the same result in every run.
Now, the one problem with Go is that the more connections there are, the less performant the I/O becomes, and that's a limitation you should be aware of while building a system.

So it depends on how your Go program is structured: whether it has locks, whether it uses channels, and so on. In my benchmarks I had no locks, just plain HTTP (no TLS) with some caching in both fasthttp & boost::beast.

I wouldn't say Go is less performant than C++ or Rust. It also depends on the library you use. boost::beast is OK and scales well, but it is not as latency-sensitive as you might think (though you can easily plug Solarflare's Onload into boost::asio and get a performance improvement), and Rust is not that different from Go: it also uses coroutines and also needs locks in some scenarios, but it has the "advantage" of no GC.

Benchmarks are mostly a lie. People tune their programs for the benchmark in question (as in gnet's case). Production-ready projects don't need to lie in their benchmarks (fasthttp).

@lesismal

@d2gr
I think proactor is not the key point; the key points are the number of goroutines, and blocking vs. non-blocking.
I've experimented a lot with non-blocking HTTP servers, and I don't think we can get both high performance and high connection counts together in Go right now:

  1. For std-based frameworks that use net.Conn (a blocking I/O interface), including fasthttp, the hardware cost grows quickly as the number of connections increases, because they all use at least one goroutine per connection. It's hard to reduce the cost of GC, memory, and scheduling.
  2. For non-blocking frameworks, whether reactor or proactor, we need to handle I/O and logic in separate goroutine pools. That means more heap escapes and a more complex async parser, and we cannot optimize the buffers the way fasthttp does.

@d2gr

d2gr commented Sep 14, 2022

@lesismal
I mention proactor because it's the easiest way to handle async (unlike async/await, or reactor). I agree that we need less heap and more stack-based structures. I don't know what you mean by an async parser.

@ivanjaros

ivanjaros commented Sep 14, 2022

That's not right. For HTTP, gnet uses a simple parser that does not implement full features of the HTTP protocol, so its testing code cost much less of cpu than a full-featured HTTP server.

You are mixing apples and oranges here. Nobody is talking about HTTP servers here; gnet is merely a networking framework (like all the projects mentioned before). What you build on top of it is up to you.

@lesismal

@d2gr
We need to buffer half-packet bytes because we can't use ReadFull. The parser and buffer-management logic are much more complex than in net.Conn-based frameworks.

@ivanjaros Please see the reasons I've mentioned here and in previous comments.

Here are some benchmark reports; you can run the tests in your own environment:
lesismal/go-net-benchmark#1
I get the same level of performance from nbio as from gnet, but nbio supports a lot more than gnet does. I tried hard to optimize performance and reduce cost, but I could only strike a balance between the two; I couldn't get both together.
And that's not only about HTTP!
For simple I/O logic we do get good performance, but in a production environment, with complex logic, the reasons I mentioned drag performance down.
You can try it; I'd be glad to see you find ways to improve it further.

@bcmills
Member

bcmills commented Sep 14, 2022

@lesismal, @ivanjaros, @d2gr: it isn't clear to me how the above discussion relates to the feature proposed in this issue. For off-topic performance discussions, please start a thread on the golang-dev mailing list or a similar venue outside of the issue tracker.

@d2gr

d2gr commented Sep 14, 2022

@bcmills Sorry for spamming a bit, but my comments were related to the issue, at least this one: #15735 (comment)

@lesismal

@bcmills
Sorry about that.
But I think CanRead is related to non-blocking interfaces and performance. If the new feature doesn't take these points into account, it will not be useful and should not be added.
If the new feature provides only a CanRead while Read/Write remain blocking, there will be problems like @ivanjaros's for-loop, or like gobwas/ws.

My previous related comments:

My opinion is that if Readable is provided, non-blocking Read/Write interfaces should be provided together with it; otherwise Readable will not be useful.

If conn.CanRead() is a blocking interface, one connection's blocking makes all the other connections wait in the loop, even those that are already readable. If conn.CanRead() is a non-blocking interface, that for loop will spin at 100% CPU.

@lesismal

lesismal commented Sep 15, 2022

@bcmills
Actually, a CanRead interface is just what 1m-go-websockets and gobwas/ws did. It is the smallest change needed for event-driven use of the std TCPConn, but as we've discussed, it leads to this problem:
gobwas/ws#143
That's the same problem as the for-loop blocking we discussed in previous comments.
To solve it in gobwas/ws, we still need to serve each connection with at least one goroutine, which brings us back to the current std solution, only more complex. There is no benefit, and it seems to perform even worse.
So, if the Read/Write interfaces remain blocking, I would prefer that the maintainers keep things unchanged rather than add a new CanRead interface.

That is also why I hope that if you add CanRead, you add non-blocking Read/Write interfaces together with it. One more thing: non-blocking interfaces plus a separate goroutine pool perform worse than the current std when there are not many connections, and they would require many changes to the current TCPConn; I think that should also be considered before this proposal is accepted.
But all right, I will stop discussing performance and just focus on CanRead, blocking or non-blocking.
