Description
Expected Behavior
The docs describe PoolSize and MaxActiveConns as follows:
// PoolSize is the base number of socket connections.
// Default is 10 connections per every available CPU as reported by runtime.GOMAXPROCS.
// If there is not enough connections in the pool, new connections will be allocated in excess of PoolSize,
// you can limit it through MaxActiveConns
...
// MaxActiveConns is the maximum number of connections allocated by the pool at a given time.
// When zero, there is no limit on the number of connections in the pool.
// If the pool is full, the next call to Get() will block until a connection is released.
There are two parts of this that are wrong, afaict:
Issue 1:
When zero, there is no limit on the number of connections in the pool.
Based on that, I'd expect PoolSize=2, MaxActiveConns=5 to produce a Redis client which can have up to 5 active connections.
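For example, under that reading, a configuration like this (a minimal sketch; the address is just a local default) should allow up to 5 concurrent connections:
client := redis.NewClient(&redis.Options{
	Addr:           "localhost:6379",
	PoolSize:       2, // base pool size
	MaxActiveConns: 5, // per the docs, the cap on overflow connections
})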
Issue 2:
If the pool is full, the next call to Get() will block until a connection is released.
Based on that, if I have PoolSize=5, MaxActiveConns=2, I'd expect... well, I don't actually know what I'd expect here. It's not obvious to me whether the docs intend "if the pool is full" to mean "the number of connections exceeds PoolSize" or "the number of connections exceeds MaxActiveConns". Either way, I'd expect requests to block rather than fail.
Current Behavior
Issue 1:
The PoolSize=2, MaxActiveConns=5 configuration does not allow the pool to grow beyond 2 active connections.
Issue 2:
The PoolSize=5, MaxActiveConns=2 configuration causes requests to fail with ErrPoolExhausted once there are more than 2 connections in the pool.
Possible Solution
Short term, I think changing the documentation is sufficient; my understanding of the current behavior is this:
PoolSize controls the absolute maximum size of the pool.
If MaxActiveConns is greater than zero and less than PoolSize, then once the number of connections in the pool exceeds MaxActiveConns, the client returns an error immediately instead of blocking. If MaxActiveConns is 0, negative, or greater than or equal to PoolSize, it has no effect.
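In pseudocode, my reading of the current checkout logic is roughly this (a sketch, not the actual go-redis internals; all names here are invented):
// Pseudocode sketch of the observed semantics; not the real pool code.
func getConn(p *pool) (*conn, error) {
	if p.numConns >= p.poolSize {
		// PoolSize is the absolute cap: block until a conn is released.
		return p.waitForFreeConn()
	}
	if p.maxActiveConns > 0 && p.numConns >= p.maxActiveConns {
		// Past MaxActiveConns, fail immediately instead of blocking.
		return nil, errPoolExhausted
	}
	return p.dialNewConn()
}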
Longer term, I think an easier-to-understand API would be to remove MaxActiveConns and instead add an option that controls whether the client blocks or fails immediately when PoolSize is exceeded.
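For instance (a purely hypothetical sketch; the option name and constants below do not exist in go-redis):
// Hypothetical sketch of the proposed API: PoolSize stays the single cap,
// and the invented knob below (shown as a comment, since it does not exist)
// would choose between blocking and failing fast when the pool is full.
client := redis.NewClient(&redis.Options{
	Addr:     "localhost:6379",
	PoolSize: 10, // the only size knob: an absolute cap
	// OnPoolExhausted: redis.Block, // or redis.FailFast (hypothetical)
})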
Steps to Reproduce
Issue 1 repro:
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	client := redis.NewClient(&redis.Options{
		Addr:           "localhost:6379",
		PoolSize:       2,
		MaxActiveConns: 5,
	})
	defer client.Close()

	// Create a stream for blocking reads
	ctx := context.Background()
	client.XAdd(ctx, &redis.XAddArgs{
		Stream: "s", ID: "*", Values: []interface{}{"x", "y"},
	}).Result()

	// Fill the pool with 2 blocked XREAD requests
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			client.XRead(context.Background(), &redis.XReadArgs{
				Streams: []string{"s", "$"},
				Block:   5 * time.Second,
			}).Result()
		}()
	}
	time.Sleep(100 * time.Millisecond)

	// Try GET with a timeout - should work since MaxActiveConns=5 > PoolSize=2
	getCtx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()
	_, err := client.Get(getCtx, "k").Result()
	// errors.Is instead of == in case the deadline error is wrapped
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("BUG: GET timed out. Pool didn't grow beyond PoolSize despite MaxActiveConns=5")
	} else {
		fmt.Println("OK: GET succeeded")
	}
	wg.Wait()
}
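Against a local Redis at localhost:6379, this prints the BUG line: the GET times out waiting for a connection even though MaxActiveConns=5 suggests the pool should grow.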
Issue 2 repro:
package main

import (
	"context"
	"fmt"
	"sync"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	client := redis.NewClient(&redis.Options{
		Addr:           "localhost:6379",
		PoolSize:       5,
		MaxActiveConns: 2,
	})
	defer client.Close()

	// Create a stream for blocking reads
	ctx := context.Background()
	client.XAdd(ctx, &redis.XAddArgs{
		Stream: "s", ID: "*", Values: []interface{}{"x", "y"},
	}).Result()

	// Try to fill the pool with 5 blocked XREAD requests
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			_, err := client.XRead(context.Background(), &redis.XReadArgs{
				Streams: []string{"s", "$"},
				Block:   5 * time.Second,
			}).Result()
			if err != nil {
				fmt.Printf("XREAD #%d: %v\n", id, err)
			}
		}(i)
		time.Sleep(100 * time.Millisecond)
	}
	time.Sleep(500 * time.Millisecond)
	wg.Wait()
}
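With this configuration, the XREAD goroutines beyond the first two fail immediately with ErrPoolExhausted rather than blocking.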
Context (Environment)
This can be worked around by tuning MaxActiveConns / PoolSize - it's just that the options behave differently than the docs say they do, so it's confusing.
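Concretely, the workaround is to treat PoolSize as the real cap and neutralize MaxActiveConns (a sketch based on the behavior described above):
// Workaround sketch: under the current behavior PoolSize is the actual cap,
// so size it for peak concurrency and leave MaxActiveConns at 0 (or set it
// >= PoolSize) so that it has no effect.
client := redis.NewClient(&redis.Options{
	Addr:     "localhost:6379",
	PoolSize: 5, // actual maximum number of connections
	// MaxActiveConns intentionally left at its zero value
})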