Huge performance gap compared to tcp #60
This library doesn't use the golang runtime polling system. I think the performance gap is caused by that. |
Thank you. Thank you in advance |
I think we can make similar changes to this to use the runtime polling system. Unfortunately, I don't have enough time to work on this. PR is welcome! |
I'm not an expert but I'll give it a shot. By any chance, if you can help me with where to look/change, I'll make the changes and run the tests, then I'll make the PR if it's ok |
hi @alzrck, how is it going? Did you manage to make some progress? thanks |
hi, I didn't modify your code; I took another sctp implementation, made some minor changes, and added a new diameter implementation on top. |
where can I find it? |
https://pkg.go.dev/github.com/pion/sctp
this is the sctp implementation that I used.
The diameter on top of this sctp is an implementation that I wrote from scratch; it's under test/development right now and not public yet.
|
Does it fit the golang standard "net/http" architecture? I cannot see e.g. dial or listen functions |
No, that's why I'm testing it: they build it on top of udp, or we can say the net architecture is udp, and I don't like that.
It's causing me a lot of delays on my projects. I'm still using jDiameter with their java sctp implementation, and I would like to move everything to golang, but my time is consumed by other projects.
go-diameter is a good starting point, but carriers don't use tcp or udp for diameter; I need strong and fast sctp support and have no time to implement it for now. |
do you work in telco too? :-) |
Yes I do. I started 31 years ago in minor dial-up ISPs and ended up in tier 1 carriers; I work by myself, but all my customers are telcos (MSOs)
|
I've tried to reimplement the underlying methods using the approach from https://github.com/mdlayher/socket/. Looks like accept and/or connect doesn't get the control call (the write poller doesn't re-call the lambda). PS: does anyone have a working benchmark to compare the current and via-poll implementations? |
I think tcp does not use the poll implementation. |
@drv1234 Could you share the code for throughput benchmark? |
I installed this package: there is an example folder in it, but I slightly modified the files:
|
hi, I studied the solution of "mdlayher/netlink" a bit. As I understand it, the problem is that the current SCTP implementation uses "syscall.Recvmsg()", which blocks not only the current goroutine but the whole OS thread. https://pkg.go.dev/internal/poll Can somebody correct me if I am wrong? |
@drv1234 |
@drv1234 Can you try this branch? https://github.com/ishidawataru/sctp/tree/feat-non-blocking Still WIP. But at least the test is passing in my environment (Ubuntu 20.04). |
What I did (/home/alz/sctp is where I cloned) and alz@alzdell:~/sctp$ git branch
with tcp with sctp am I doing something wrong? @drv1234 did you have the chance to try? |
to double check I'm using the correct code and branch, in sctp_linux.go I commented out
and ran the test again and I get 2000 messages in 1.847410964s: 1082/s |
@alzrck Thanks for testing. Could you run the test with the master branch and compare the results? |
hi, with the new one: |
@ishidawataru with the master branch 2000 messages in 1.244879549s: 1606/s |
Thanks for testing. It seems my expectation was wrong: the use of the runtime polling system does not improve the performance. I found this old issue. Maybe the performance gap is caused at the OS stack level? |
this issue is really old :-)
But according to the numbers, only the "min" value was double compared to TCP, the "mean" is almost the same, and the "99.99%" was better than TCP.
Currently I use a C++ implementation of diameter/SCTP and there is no such issue. |
Neither in Java; I mean, we use a java combination of jdiameter + the restcomm sctp stack, and yes, there's a difference, but it's really small.
Ishida, question: how did you test? My question is whether taking the diameter app off the top of your sctp stack would show different results.
Meaning, use only your stack, a completely plain test sending a chunk of bytes end to end, and compare sctp vs tcp. |
I've written some (stupid and low quality) client/server bench implementations with SCTP only (no DIAMETER involved).

client:

package main
import (
"errors"
"flag"
"io"
"log"
"math/rand"
"net"
"time"
"github.com/ishidawataru/sctp"
)
func init() {
rand.Seed(time.Now().UnixNano())
}
func main() {
// go run client.go --addr localhost:3868 --clients 4 --count 10000 --network tcp
addr := flag.String("addr", "localhost:3868", "address in form of ip:port to connect to")
benchCli := flag.Int("clients", 1, "number of client connections")
benchMsgs := flag.Int("count", 1000, "number of ACR messages to send")
networkType := flag.String("network", "tcp", "protocol type tcp/sctp")
drainMode := flag.Int("drain-mode", 0, "Drain incoming messages mode (0 - disable, 1 - sync, 2 - async)")
flag.Parse()
if len(*addr) == 0 {
flag.Usage()
}
connect := func() (net.Conn, error) {
return dial(*networkType, *addr)
}
done := make(chan int, 16)
benchmark(connect, *benchCli, *benchMsgs, *drainMode, done)
}
func dial(network, addr string) (net.Conn, error) {
switch network {
case "sctp", "sctp4", "sctp6":
sctpAddr, err := sctp.ResolveSCTPAddr(network, addr)
if err != nil {
return nil, err
}
return sctp.DialSCTP(network, nil, sctpAddr)
case "tcp", "tcp4", "tcp6":
tcpAddr, err := net.ResolveTCPAddr(network, addr)
if err != nil {
return nil, err
}
return net.DialTCP(network, nil, tcpAddr)
}
return nil, net.UnknownNetworkError(network)
}
type dialFunc func() (net.Conn, error)
func sender(conn net.Conn, msgs int, drainMode bool, done chan int) {
rdbuf := make([]byte, 4096)
total := 0
payload := make([]byte, 1024)
_, _ = rand.Read(payload)
for i := 0; i < msgs; i += 1 {
n, err := conn.Write(payload)
if err != nil {
log.Fatal(err)
} else if n != len(payload) {
log.Fatal("not all bytes written")
}
if !drainMode {
if done != nil {
done <- 1
}
continue
}
// drain
received := 0
drain:
for {
n, err := conn.Read(rdbuf[total:])
if err != nil {
if errors.Is(err, net.ErrClosed) || errors.Is(err, io.EOF) {
log.Printf("connection closed")
return
}
log.Fatalf("read error: %v", err)
}
total += n
if total < 1024 {
continue
}
for total >= 1024 {
total = copy(rdbuf, rdbuf[1024:total])
received++
}
break drain
}
if done != nil && received > 0 {
done <- received
}
}
}
func receiver(conn net.Conn, done chan int) {
rdbuf := make([]byte, 16384)
total := 0
for {
n, err := conn.Read(rdbuf[total:])
if err != nil {
if errors.Is(err, net.ErrClosed) || errors.Is(err, io.EOF) {
log.Printf("connection closed")
return
}
log.Fatalf("read error: %v", err)
}
total += n
received := 0
for total >= 1024 {
total = copy(rdbuf, rdbuf[1024:total])
received++
}
if received > 0 {
done <- received
}
}
}
func benchmark(df dialFunc, ncli, msgs int, drainMode int, done chan int) {
var err error
c := make([]net.Conn, ncli)
log.Println("Connecting", ncli, "clients...")
for i := 0; i < ncli; i++ {
c[i], err = df() // Dial only; no application-level handshake here.
if err != nil {
log.Fatal(err)
}
defer c[i].Close()
}
log.Println("Done. Sending messages...")
start := time.Now()
for _, cli := range c {
switch drainMode {
case 0:
go sender(cli, msgs, false, done)
case 1:
go sender(cli, msgs, true, done)
case 2:
go sender(cli, msgs, false, nil)
go receiver(cli, done)
}
}
count := 0
total := ncli * msgs
wait:
for {
select {
case n := <-done:
count += n
if count == total {
break wait
}
case <-time.After(100 * time.Second):
log.Fatal("Timeout waiting for messages.")
}
}
elapsed := time.Since(start)
log.Printf("%d messages in %s: %d/s", count, elapsed,
int(float64(count)/elapsed.Seconds()))
}

server:

package main
import (
"errors"
"flag"
"io"
"log"
"net"
"syscall"
_ "net/http/pprof"
"github.com/ishidawataru/sctp"
)
func main() {
// go run server.go --addr=localhost:3867 --network=tcp
addr := flag.String("addr", ":3868", "address in the form of ip:port to listen on")
networkType := flag.String("network", "tcp", "protocol type tcp/sctp")
echoMode := flag.Bool("echo", false, "Send back incoming messages")
flag.Parse()
err := listen(*networkType, *addr, *echoMode)
if err != nil {
log.Fatal(err)
}
}
func listen(network, addr string, echoMode bool) error {
log.Println("Starting server on", addr)
var listener net.Listener
switch network {
case "sctp", "sctp4", "sctp6":
sctpAddr, err := sctp.ResolveSCTPAddr(network, addr)
if err != nil {
return err
}
sctpListener, err := sctp.ListenSCTP(network, sctpAddr)
if err != nil {
return err
}
listener = sctpListener
case "tcp", "tcp4", "tcp6":
tcpAddr, err := net.ResolveTCPAddr(network, addr)
if err != nil {
return err
}
tcpListener, err := net.ListenTCP(network, tcpAddr)
if err != nil {
return err
}
listener = tcpListener
default:
return net.UnknownNetworkError(network)
}
log.Printf("start listening on %s://%s", network, addr)
for {
conn, err := listener.Accept()
if err != nil {
log.Fatalf("dead listener: %v", err)
}
log.Printf("accepted incoming connection")
go reader(conn, echoMode)
}
}
func reader(conn net.Conn, echoMode bool) {
buf := make([]byte, 4096)
total := 0
totalPackets := 0
for {
n, err := conn.Read(buf[total:])
if err != nil {
log.Printf("read error: %s (processed %d packets)", err, totalPackets)
if errors.Is(err, net.ErrClosed) || errors.Is(err, syscall.ECONNRESET) || errors.Is(err, io.EOF) {
log.Printf("connection closed")
return
}
return
}
if echoMode {
total += n
for total >= 1024 {
payload := buf[:1024]
wn, err := conn.Write(payload)
if err != nil {
log.Printf("write error: %s (processed %d packets)", err, totalPackets)
if errors.Is(err, net.ErrClosed) || errors.Is(err, syscall.ECONNRESET) {
return
}
return
}
if wn != len(payload) {
log.Fatal("not all bytes written")
}
total = copy(buf, buf[1024:total])
totalPackets++
}
}
}
}

Try running them as follows:

Setup 1:
$ ./server --addr 127.0.0.1:3868 --network sctp --echo
$ ./client --addr 127.0.0.1:3868 --network sctp --clients 1 --count 100000 --drain-mode 1

Setup 2:
$ ./server --addr 127.0.0.1:3868 --network sctp --echo
$ ./client --addr 127.0.0.1:3868 --network sctp --clients 1 --count 100000 --drain-mode 2

Setup 3:
$ ./server --addr 127.0.0.1:3868 --network sctp
$ ./client --addr 127.0.0.1:3868 --network sctp --clients 1 --count 100000 --drain-mode 0

I've messed up my SCTP sysctl settings, so I don't have results to share, but take a look at setups 1 and 2. The results for TCP vs SCTP in those cases are confusing, to say the least. |
Hi all, |
Hi,
I am planning to use https://github.com/fiorix/go-diameter. In this project there is an example diameter client/server with benchmark.
With tcp, 259058 diameter messages/s can be reached (on localhost).
With sctp, only 6938 messages/s.
cpuprof_tcp_vs_sctp.zip