Performance much slower than Redis #10

Closed
simongui opened this issue Aug 16, 2016 · 3 comments

@simongui

Even with empty handlers that are essentially no-ops, the performance of this library still lags behind Redis itself, which is doing more work. I wonder if the bottleneck could be removed? I haven't looked into it yet, but I thought I would report some benchmarks.

Redis

./redis-benchmark -t set -n 100000 -q -P 100
SET: 806451.62 requests per second

Redeo

./redis-benchmark -t set -n 100000 -q -P 100
SET: 216919.73 requests per second

Server code is as follows.

package main

import (
    "log"

    "github.com/bsm/redeo"
)

func main() {
    srv := redeo.NewServer(&redeo.Config{Addr: "localhost:6379"})
    srv.HandleFunc("ping", func(out *redeo.Responder, _ *redeo.Request) error {
        out.WriteInlineString("PONG")
        return nil
    })

    srv.HandleFunc("set", func(out *redeo.Responder, _ *redeo.Request) error {
        out.WriteInlineString("OK")
        return nil
    })

    log.Fatal(srv.ListenAndServe())
}

Profiling shows the following.

(pprof) top10
12.40s of 13.13s total (94.44%)
Dropped 158 nodes (cum <= 0.07s)
Showing top 10 nodes out of 113 (cum >= 0.22s)
      flat  flat%   sum%        cum   cum%
     8.13s 61.92% 61.92%      8.15s 62.07%  syscall.Syscall
     1.54s 11.73% 73.65%      1.54s 11.73%  runtime.mach_semaphore_signal
     0.76s  5.79% 79.44%      0.76s  5.79%  runtime.kevent
     0.71s  5.41% 84.84%      0.71s  5.41%  runtime.usleep
     0.64s  4.87% 89.72%      0.64s  4.87%  runtime.mach_semaphore_wait
     0.29s  2.21% 91.93%      0.29s  2.21%  runtime.(*mcentral).grow
     0.11s  0.84% 92.76%      0.11s  0.84%  runtime.mach_semaphore_timedwait
     0.09s  0.69% 93.45%      0.09s  0.69%  nanotime
     0.07s  0.53% 93.98%      0.12s  0.91%  runtime.scanobject
     0.06s  0.46% 94.44%      0.22s  1.68%  runtime.scanstack
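
For reference, the thread doesn't show how this profile was captured; one common way to get output like the above is to expose net/http/pprof next to the server and sample it with go tool pprof. A minimal sketch (an assumption, not necessarily the setup used here):

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
    // Serve the profiling endpoints on a side port, then capture a CPU profile with:
    //   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=10
    log.Fatal(http.ListenAndServe("localhost:6060", nil))
}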
@dim
Member

dim commented Aug 16, 2016

Redeo will naturally be slower than Redis, since Go is simply a few orders of magnitude slower than C. Looking at your profile, it doesn't look as if there is much to improve. Did you try it with Go 1.7?
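
For what it's worth, the syscall.Syscall time in that profile is dominated by per-reply socket writes; with -P 100 pipelining, a server that flushes once per batch of buffered input issues far fewer write syscalls than one that writes every reply individually. A generic sketch of that idea (not redeo's API, with the protocol reduced to one reply per input line):

package main

import (
    "bufio"
    "log"
    "net"
)

func main() {
    l, err := net.Listen("tcp", ":9737")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := l.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go serve(conn)
    }
}

// serve answers every input line with +PONG, but only flushes the buffered
// writer once no further pipelined input is already buffered, so a pipeline
// of 100 commands costs roughly one write syscall instead of one hundred.
func serve(conn net.Conn) {
    defer conn.Close()
    r := bufio.NewReader(conn)
    w := bufio.NewWriter(conn)
    for {
        if _, err := r.ReadString('\n'); err != nil {
            w.Flush()
            return
        }
        w.WriteString("+PONG\r\n")
        if r.Buffered() == 0 {
            if err := w.Flush(); err != nil {
                return
            }
        }
    }
}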

@simongui
Author

simongui commented Aug 16, 2016

Yeah, that was run with Go 1.7. I'm not sure I agree that this performance is due to Go, or that improvements can't be made. If you look at the TechEmpower HTTP benchmarks, you'll see fasthttp, which is implemented in Go and parses the much more complicated HTTP protocol, sitting in 3rd and 4th position among the fastest HTTP servers, competing against C and C++.

https://www.techempower.com/benchmarks/#section=data-r12&hw=peak&test=plaintext

@dim
Member

dim commented Aug 16, 2016

Here's a reasonably fast Go implementation of a redis ping server:

package main

import (
    "bufio"
    "bytes"
    "fmt"
    "io"
    "log"
    "net"
    "strconv"
)

var (
    ping = []byte("PING")
    pong = []byte("+PONG\r\n")
)

func main() {
    if err := run(); err != nil {
        log.Fatal(err.Error())
    }
}

func run() error {
    l, err := net.Listen("tcp", ":9736")
    if err != nil {
        return err
    }
    defer l.Close()

    for {
        conn, err := l.Accept()
        if err != nil {
            return err
        }
        go handle(conn)
    }
}

func handle(conn net.Conn) {
    defer conn.Close()

    buf := bufio.NewReader(conn)
    void := make([]byte, 0, 1024) // reusable scratch buffer for discarding bulk-string payloads
    for {
        if err := process(buf, void); err != nil {
            if err != io.EOF {
                log.Println(err.Error())
            }
            return
        }
        conn.Write(pong)
    }
}

func process(buf *bufio.Reader, void []byte) error {
    line, prefix, err := buf.ReadLine()
    if err != nil {
        return err
    } else if prefix || len(line) == 0 {
        return fmt.Errorf("bad request")
    }

    switch line[0] {
    case '*': // array: recurse into each element
        n, err := strconv.Atoi(string(line[1:]))
        if err != nil {
            return err
        }
        for i := 0; i < n; i++ {
            if err := process(buf, void); err != nil {
                return err
            }
        }
    case '$': // bulk string: discard the payload plus the trailing CRLF
        n, err := strconv.Atoi(string(line[1:]))
        if err != nil {
            return err
        }
        if n+2 > cap(void) {
            void = make([]byte, n+2)
        } else {
            void = void[:n+2]
        }
        if _, err := io.ReadFull(buf, void); err != nil {
            return err
        }
    default: // inline command: only PING is accepted
        if !bytes.Equal(line, ping) {
            return fmt.Errorf("bad request: %s", strconv.Quote(string(line)))
        }
    }
    return nil
}

And here are my benchmarks, compared against native redis:

$ redis-benchmark -t ping -p 9736 -q -P 100
PING_INLINE: 787401.56 requests per second
PING_BULK: 675675.69 requests per second

$ redis-benchmark -t ping -p 6379 -q -P 100
PING_INLINE: 1408450.62 requests per second
PING_BULK: 2325581.25 requests per second

Redis is just really, really fast :)

@dim dim closed this as completed Aug 16, 2016