docs: add client connection mode #59

Merged · 1 commit · Oct 7, 2023
187 changes: 187 additions & 0 deletions docs/user_guide/client/connection_mode.md
@@ -0,0 +1,187 @@
# tRPC-Go client connection mode

English | [中文](./connection_mode_zh_CN.md)

## Introduction

The tRPC-Go client, i.e. the side that initiates requests, currently supports several connection modes: short connections, connection pools, and connection multiplexing. The client uses the connection pool mode by default; users can choose a different mode as needed.

<font color="red">Note: The connection pool here refers to the connection pool implemented in tRPC-Go's transport layer. The database and HTTP plugins replace the transport with open-source libraries using a plugin mode, and do not use this connection pool.</font>

## Principle and Implementation

### Short connection

The client creates a new connection for every request and destroys it once the request completes. Under heavy request volume this significantly hurts service throughput and incurs a large performance cost.

Use cases: one-off requests, or calls to a legacy service that cannot accept multiple requests on a single connection.

### Connection pool

The client maintains a connection pool for each downstream IP. Each request first resolves an IP from the naming service, looks up the corresponding connection pool, and takes a connection from it; when the request completes, the connection is returned to the pool. While the request is in flight the connection is exclusive and cannot be reused. Connections in the pool are destroyed and recreated according to a configured strategy. Because each invocation is bound to one connection, when both upstream and downstream are large in scale the number of network connections grows as M×N, which creates enormous scheduling pressure and computational overhead.

Use cases: this mode can be used in almost all scenarios.
Note: because the connection pool queue uses a last-in-first-out (LIFO) strategy, VIP addressing on the backend may lead to an uneven distribution of connections across instances. In that case, prefer addressing via the naming service whenever possible.
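
If the LIFO reuse noted above is a concern (for example behind VIP addressing), the put-back behavior can be changed when constructing a custom pool, much like the custom connection pool example later in this document does for other parameters. Below is a minimal sketch; `connpool.WithPushIdleConnToTail` is an assumed option setter mirroring the `PushIdleConnToTail` field of `connpool.Options` documented later, so verify it exists in your trpc-go version.

```go
import "trpc.group/trpc-go/trpc-go/pool/connpool"

// Hypothetical FIFO configuration: WithPushIdleConnToTail is an assumed setter
// mirroring the PushIdleConnToTail field of connpool.Options. Setting it to true
// appends idle connections to the tail of the queue so they are reused FIFO,
// which spreads load more evenly across backend instances.
var fifoPool = connpool.NewConnectionPool(connpool.WithPushIdleConnToTail(true))

opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
	client.WithPool(fifoPool), // select the custom pool (see the examples below)
}
```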

### Connection multiplexing

The client sends multiple requests concurrently on the same connection, distinguishing each request by a sequence ID. The client establishes one long-lived connection to each downstream node, and by default all requests are sent to the server over that connection; the server must support the multiplexing mode. Connection multiplexing can greatly reduce the number of connections between services, but because of TCP head-of-line blocking, too many concurrent requests on the same connection can add some latency (a few milliseconds). This can be mitigated to some extent by increasing the number of multiplexed connections (by default two connections are established per IP).

Use cases: this mode is suitable for scenarios with extreme requirements on stability and throughput. The server must support asynchronous concurrent processing on a single connection and be able to distinguish requests by sequence ID, which places requirements on both server capability and protocol fields.

Warning:

- Because connection multiplexing establishes only one connection to each backend node, it cannot be used when the backend uses VIP addressing (only one instance from the client's perspective); the connection pool mode must be used instead.
- The called server (note: not your current service, but the service you call) must support connection multiplexing, i.e. it must process each request on a connection asynchronously, with multiple sends and receives on one connection; otherwise the client will see a large number of timeout failures.

## Example

### Short connection

```go
opts := []client.Option{
client.WithNamespace("Development"),
client.WithServiceName("trpc.app.server.service"),
// If the default connection pool is disabled, the short connection mode will be used
client.WithDisableConnectionPool(),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
log.Error(err.Error())
return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```

### Connection pool

```go
// The connection pool mode is used by default, no configuration is required
opts := []client.Option{
client.WithNamespace("Development"),
client.WithServiceName("trpc.app.server.service"),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
log.Error(err.Error())
return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```

Custom connection pool

```go
import "trpc.group/trpc-go/trpc-go/pool/connpool"

/*
Connection pool parameters:
type Options struct {
MinIdle int // minimum number of idle connections, replenished periodically by the pool's background routine; 0 means no replenishment
MaxIdle int // maximum number of idle connections; 0 means no limit, framework default 65535
MaxActive int // maximum number of connections concurrently available to users; 0 means no limit
Wait bool // whether to wait when MaxActive connections are already in use; default false (do not wait)
IdleTimeout time.Duration // idle connection timeout; 0 means no limit, framework default 50s
MaxConnLifetime time.Duration // maximum lifetime of a connection; 0 means no limit
DialTimeout time.Duration // dial timeout; framework default 200ms
ForceClose bool // whether to close the connection after use instead of returning it to the pool; default false (return to the pool)
PushIdleConnToTail bool // how a connection is put back into the pool; default false, idle connections are retrieved LIFO
}
*/

// Connection pool parameters can be set via options; see the trpc-go documentation for details. The connection pool must be declared as a global variable.
var pool = connpool.NewConnectionPool(connpool.WithMaxIdle(65535))
// The connection pool mode is used by default, no configuration is required
opts := []client.Option{
client.WithNamespace("Development"),
client.WithServiceName("trpc.app.server.service"),
// Set up a custom connection pool
client.WithPool(pool),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
log.Error(err.Error())
return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```
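
The same custom pool can also apply backpressure when downstream capacity is limited. This is a minimal sketch under the assumption that option setters analogous to `connpool.WithMaxIdle` exist for the `MaxActive` and `Wait` fields listed above (`WithMaxActive` and `WithWait` here are assumptions; verify against your trpc-go version).

```go
// Hypothetical backpressure configuration: cap the number of connections in use
// per downstream IP and make callers wait for a free connection instead of
// failing immediately. WithMaxActive and WithWait are assumed setters mirroring
// the MaxActive and Wait fields of connpool.Options above.
var limitedPool = connpool.NewConnectionPool(
	connpool.WithMaxActive(128), // at most 128 connections in use concurrently per IP
	connpool.WithWait(true),     // block until a connection is returned when the cap is reached
)

opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
	client.WithPool(limitedPool),
}
```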

### Connection multiplexing

```go
opts := []client.Option{
client.WithNamespace("Development"),
client.WithServiceName("trpc.app.server.service"),
// Enable connection multiplexing
client.WithMultiplexed(true),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
log.Error(err.Error())
return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```

Custom connection multiplexing

```go
import "trpc.group/trpc-go/trpc-go/pool/multiplexed" // import path assumed by analogy with pool/connpool; verify against your trpc-go version

/*
type PoolOptions struct {
connectNumber int // number of connections per address
queueSize int // request queue length for each connection
dropFull bool // whether to drop requests when the queue is full
}
*/
// Connection multiplexing parameters can be set via options; see the trpc-go documentation for details.
// The multiplexed pool must be declared as a global variable.
var m = multiplexed.New(multiplexed.WithConnectNumber(16))

opts := []client.Option{
client.WithNamespace("Development"),
client.WithServiceName("trpc.app.server.service"),
// Enable connection multiplexing
client.WithMultiplexed(true),
client.WithMultiplexedPool(m),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
log.Error(err.Error())
return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```
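
The remaining PoolOptions fields can be tuned the same way. The sketch below assumes option setters analogous to `multiplexed.WithConnectNumber` exist for the queue settings (`WithQueueSize` and `WithDropFull` are assumptions; verify against your trpc-go version).

```go
// Hypothetical queue tuning: WithQueueSize and WithDropFull are assumed setters
// mirroring the queueSize and dropFull fields of PoolOptions above.
var muxPool = multiplexed.New(
	multiplexed.WithConnectNumber(8), // connections established per backend address
	multiplexed.WithQueueSize(1024),  // pending requests buffered per connection
	multiplexed.WithDropFull(true),   // drop new requests instead of blocking when the queue is full
)
```
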
185 changes: 185 additions & 0 deletions docs/user_guide/client/connection_mode_zh_CN.md
@@ -0,0 +1,185 @@
# tRPC-Go client connection mode

[English](./connection_mode.md) | 中文

## Introduction

The tRPC-Go client, i.e. the side that initiates requests, currently supports several connection modes: short connections, connection pools, and connection multiplexing. The client uses the connection pool mode by default; users can choose a different mode as needed.
`Note: the connection pool here refers to the connection pool implemented in tRPC-Go's own transport. The database and HTTP plugins replace the transport with open-source libraries via the plugin mechanism and do not use this connection pool.`

## Principle and Implementation

### Short connection

The client creates a new connection for every request and destroys it once the request completes. Under heavy request volume this significantly hurts service throughput and incurs a large performance cost.

Use cases: one-off requests, or calls to a legacy service that cannot accept multiple requests on a single connection.

### Connection pool

The client maintains a connection pool for each downstream IP. Each request first resolves an IP from the naming service, looks up the corresponding connection pool, and takes a connection from it; when the request completes, the connection is returned to the pool. While the request is in flight the connection is exclusive and cannot be reused. Connections in the pool are destroyed and recreated according to a configured strategy. Because each invocation is bound to one connection, when both upstream and downstream are large in scale the number of network connections grows as M×N, which creates enormous scheduling pressure and computational overhead.

Use cases: almost all scenarios.
Note: because the connection pool queue uses a last-in-first-out (LIFO) strategy, VIP addressing on the backend may lead to an uneven distribution of connections across instances. In that case, prefer addressing via the naming service whenever possible.

### Connection multiplexing

The client sends multiple requests concurrently on the same connection, distinguishing each request by a sequence ID. The client establishes one long-lived connection to each downstream node, and by default all requests are sent to the server over that connection; the server must support the multiplexing mode. Multiplexing greatly reduces the number of connections between services, but because of TCP head-of-line blocking, too many concurrent requests on one connection can add some latency (a few milliseconds). This can be mitigated to some extent by increasing the number of multiplexed connections (by default two connections are established per IP).

Use cases: scenarios with extreme requirements on stability and throughput. The server must support asynchronous concurrent processing on a single connection and be able to distinguish requests by sequence ID, which places requirements on both server capability and protocol fields.
Note:

- Because connection multiplexing establishes only one connection to each backend node, it cannot be used when the backend uses VIP addressing (only one instance from the client's perspective); the connection pool mode must be used instead.
- The called server (note: not your current service, but the service you call) must support connection multiplexing, i.e. it must process each request on a connection asynchronously, with multiple sends and receives on one connection; otherwise the client will see a large number of timeout failures.

## Example

### Short connection

```go
opts := []client.Option{
client.WithNamespace("Development"),
client.WithServiceName("trpc.app.server.service"),
// Disable the default connection pool; the short connection mode is used instead
client.WithDisableConnectionPool(),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
log.Error(err.Error())
return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```

### Connection pool

```go
// The connection pool mode is used by default; no configuration is required
opts := []client.Option{
client.WithNamespace("Development"),
client.WithServiceName("trpc.app.server.service"),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
log.Error(err.Error())
return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```

Custom connection pool

```go
import "trpc.group/trpc-go/trpc-go/pool/connpool"

/*
Connection pool parameters:
type Options struct {
MinIdle int // minimum number of idle connections, replenished periodically by the pool's background routine; 0 means no replenishment
MaxIdle int // maximum number of idle connections; 0 means no limit, framework default 65535
MaxActive int // maximum number of connections concurrently available to users; 0 means no limit
Wait bool // whether to wait when MaxActive connections are already in use; default false (do not wait)
IdleTimeout time.Duration // idle connection timeout; 0 means no limit, framework default 50s
MaxConnLifetime time.Duration // maximum lifetime of a connection; 0 means no limit
DialTimeout time.Duration // dial timeout; framework default 200ms
ForceClose bool // whether to close the connection after use instead of returning it to the pool; default false (return to the pool)
PushIdleConnToTail bool // how a connection is put back into the pool; default false, idle connections are retrieved LIFO
}
*/

// Connection pool parameters can be set via options; see the trpc-go documentation for details. The connection pool must be declared as a global variable.
var pool = connpool.NewConnectionPool(connpool.WithMaxIdle(65535))
// The connection pool mode is used by default; no configuration is required
opts := []client.Option{
client.WithNamespace("Development"),
client.WithServiceName("trpc.app.server.service"),
// Set a custom connection pool
client.WithPool(pool),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
log.Error(err.Error())
return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```

### Connection multiplexing

```go
opts := []client.Option{
client.WithNamespace("Development"),
client.WithServiceName("trpc.app.server.service"),
// Enable connection multiplexing
client.WithMultiplexed(true),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
log.Error(err.Error())
return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```

Custom connection multiplexing

```go
import "trpc.group/trpc-go/trpc-go/pool/multiplexed" // import path assumed by analogy with pool/connpool; verify against your trpc-go version

/*
type PoolOptions struct {
connectNumber int // number of connections per address
queueSize int // request queue length for each connection
dropFull bool // whether to drop requests when the queue is full
}
*/
// Connection multiplexing parameters can be set via options; see the trpc-go documentation for details.
// The multiplexed pool must be declared as a global variable.
var m = multiplexed.New(multiplexed.WithConnectNumber(16))

opts := []client.Option{
client.WithNamespace("Development"),
client.WithServiceName("trpc.app.server.service"),
// Enable connection multiplexing
client.WithMultiplexed(true),
client.WithMultiplexedPool(m),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
log.Error(err.Error())
return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```