
feature: support raft cluster #5226

Merged
merged 372 commits into apache:2.x on Oct 25, 2023

Conversation

funky-eyes
Contributor

@funky-eyes funky-eyes commented Jan 3, 2023

Ⅰ. Describe what this PR did

Full source-code walkthrough: https://blog.funkye.icu/2023/01/05/seata-raft-2023/
Raft learning material:
https://raft.github.io/
http://thesecretlivesofdata.com/raft/

This PR supports long polling of metadata over HTTP (so the scope of the private protocol is not extended further), with a maximum delay of 1s. When the cluster changes, pending requests are answered immediately and the client then pulls the new cluster information itself (similar to ZooKeeper's listener notifications, which only tell the watcher that a value changed and leave the fetch to it). The multi-raft logic is also optimized.
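The long-polling flow above can be sketched in plain Java. This is an illustrative model under the description in this PR, not Seata's actual API: a metadata request parks for at most 1s, a cluster change completes it early, and the client compares versions and re-pulls.

```java
import java.util.concurrent.*;

// Illustrative model of the long-polling scheme described above (not Seata's real code):
// the server parks a metadata request for up to 1s and wakes it early on a cluster
// change; the client then pulls the fresh cluster metadata itself.
public class LongPollSketch {
    static final long MAX_DELAY_MS = 1000;
    private volatile CompletableFuture<Void> change = new CompletableFuture<>();
    private volatile long metadataVersion = 1;

    // client side: returns the current version, waiting up to MAX_DELAY_MS for a change
    public long poll(long knownVersion) {
        CompletableFuture<Void> parked = change; // capture before the version check
        if (metadataVersion == knownVersion) {
            try {
                parked.get(MAX_DELAY_MS, TimeUnit.MILLISECONDS); // request is suspended here
            } catch (TimeoutException timedOut) {
                // no change within 1s: the caller keeps the version it already has
            } catch (InterruptedException | ExecutionException ignored) {
                // not expected in this sketch
            }
        }
        return metadataVersion; // caller re-pulls metadata if this is newer
    }

    // server side: a cluster change bumps the version and wakes every parked request
    public void clusterChanged() {
        metadataVersion++;
        CompletableFuture<Void> old = change;
        change = new CompletableFuture<>();
        old.complete(null);
    }

    public static void main(String[] args) {
        LongPollSketch server = new LongPollSketch();
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.schedule(server::clusterChanged, 100, TimeUnit.MILLISECONDS);
        long v = server.poll(1); // parked, then woken after ~100ms instead of the full 1s
        System.out.println("woken with version " + v);
        timer.shutdown();
    }
}
```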
The following parameter configures a node's raft election and communication port; for local debugging, the raft election port is derived automatically as the netty port + 1000. A complete raft address list looks like:

-Dseata.server.raft.server-addr=192.168.31.181:9091::100,192.168.31.181:9092::10,192.168.31.181:9093::10

For a node such as 192.168.31.181:9093::10, 9093 is the raft port and 10 is the election weight. The weight may be omitted; the empty segment between the two pairs of colons is a reserved value.
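The host:port::weight format can be parsed roughly as below. This is a sketch of the format as described, not Seata's actual parser, and the default weight of 1 for omitted values is an illustrative assumption:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of parsing the "host:port::weight" server-addr format described above.
// Not Seata's actual parser; the default weight of 1 is an illustrative assumption.
public class RaftAddrParser {
    public static final class Node {
        public final String host;
        public final int port;
        public final int weight;
        Node(String host, int port, int weight) {
            this.host = host; this.port = port; this.weight = weight;
        }
    }

    public static List<Node> parse(String serverAddr) {
        List<Node> nodes = new ArrayList<>();
        for (String entry : serverAddr.split(",")) {
            // "host:port::weight" splits into [host, port, "", weight];
            // "host:port" splits into [host, port] and gets the default weight
            String[] parts = entry.split(":");
            int weight = parts.length >= 4 && !parts[3].isEmpty() ? Integer.parseInt(parts[3]) : 1;
            nodes.add(new Node(parts[0], Integer.parseInt(parts[1]), weight));
        }
        return nodes;
    }

    public static void main(String[] args) {
        for (Node n : parse("192.168.31.181:9091::100,192.168.31.181:9092::10,192.168.31.181:9093::10")) {
            System.out.println(n.host + ":" + n.port + " weight=" + n.weight);
        }
    }
}
```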
The cluster can also be specified through the configuration file:

seata:
  server:
    raft:
      group: default # the group of this raft cluster; the value the client's transaction group maps to must match it
      server-addr: 192.168.0.111:9091,192.168.0.112:9091,192.168.0.113:9091 # IPs and ports of the 3 nodes; each port is that node's netty port + 1000 (default netty port is 8091)
      snapshot-interval: 600 # take a snapshot every 600s so the raft log can roll over quickly; if many transactions are held in memory, each snapshot causes some business RT jitter every 600s, but it makes failure recovery friendlier and node restarts faster. It can be raised to 30 minutes or 1 hour depending on the business; load-test it yourself and pick a balance between RT jitter and recovery time
      apply-batch: 32 # commit the raft log in batches of at most 32 actions
      max-append-bufferSize: 262144 # maximum size of the log storage buffer, default 256K
      max-replicator-inflight-msgs: 256 # maximum number of in-flight requests when pipelining is enabled, default 256
      disruptor-buffer-size: 16384 # size of the internal disruptor buffer; raise it for high write throughput, default 16384
      election-timeout-ms: 1000 # how long without a leader heartbeat before a re-election starts
      reporter-enabled: false # whether raft's own metrics reporting is enabled
      reporter-initial-delay: 60 # metrics reporting interval
      serialization: jackson # serialization format; do not change
      compressor: none # compression for the raft log, e.g. gzip, zstd
      sync: true # raft log flush mode; synchronous flushing by default
  config:
    # support: nacos, consul, apollo, zk, etcd3
    type: file # any supported configuration center may be chosen here
  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    type: file # in raft mode, registries other than file are not allowed
  store:
    # support: file, db, redis, raft
    mode: raft # use the raft store mode
    file:
      dir: sessionStore # where the raft log and transaction logs are stored; relative by default, so it is best to set a fixed path

For example:
192.168.31.182:9091,192.168.31.181:9092,192.168.31.183:9093
Steps to upgrade from 1.x to 2.x using the raft cluster mode (not the raft store mode):

  1. First set the group that the transaction group's cluster maps to to default.

  2. Then roll out the 2.x upgrade, configuring the seata cluster addresses as above while keeping the third-party registry configuration (registry.type=xxx). Since the raft store mode is not in use, a node will log election-related exceptions but can still serve traffic; once the rollout completes and the cluster elects a leader, the exception logs disappear on their own.

  3. Then roll out the clients: change the client-side registry to the raft type, and set the grouplist of the corresponding transaction group to a layer-4 LB or VIP in front of the cluster, or directly to the seata cluster addresses.

    For spring-boot:

    seata:
       enabled: true
       application-id: product-service
       tx-service-group: default_tx_group
       service:
          vgroup-mapping:
             default_tx_group: default
       registry:
          type: raft
          raft:
             server-addr: 192.168.31.182:9091,192.168.31.181:9092,192.168.31.183:9093
    

    In the configuration center:

    service.vgroupMapping.default_tx_group=default
    service.default.grouplist=192.168.31.181:8091
    
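The two configuration-center keys above drive a two-step lookup on the client: the tx-service-group resolves to a cluster group via vgroupMapping, then the group resolves to a server list via grouplist. A minimal sketch of that resolution (illustrative, not Seata's actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the client's two-step lookup over the configuration-center
// keys shown above (not Seata's actual implementation).
public class GroupLookupSketch {
    public static String resolveServers(Map<String, String> config, String txServiceGroup) {
        // step 1: transaction service group -> cluster group
        String cluster = config.get("service.vgroupMapping." + txServiceGroup);
        // step 2: cluster group -> server address list
        return config.get("service." + cluster + ".grouplist");
    }

    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        config.put("service.vgroupMapping.default_tx_group", "default");
        config.put("service.default.grouplist", "192.168.31.181:8091");
        System.out.println(resolveServers(config, "default_tx_group")); // prints 192.168.31.181:8091
    }
}
```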

Follow-up work:

  1. Support multi-raft with storage/compute separation: sessions in raft mode coexisting with locks in redis/db mode
  2. Support incremental and full metadata synchronization, reducing server-side pressure as the cluster grows
  3. Once grpc support is merged, consider pushing metadata updates over grpc
  4. Support migration between store modes: db to raft/redis, raft to db/redis, redis to db/raft
  5. Develop open-api endpoints for scaling in/out and for cluster status
  6. Let the server proxy the registry and configuration center, updating cluster and configuration information via long polling plus active pulls, so that clients and servers no longer need identical registry and configuration-center settings

When debugging the server side, it is recommended to set election-timeout-ms to a larger value; otherwise pausing on a breakpoint blocks the process and the other nodes start a re-election. The default is 1s, so for debugging set it to something like 60s, since any gap of more than 1s without heartbeats makes the remaining nodes begin an election. In production, it is best to keep the default, and note this parameter when tuning: too small and network jitter causes re-elections that disturb business requests; too large and a real crash or network partition takes too long to trigger a re-election, which also hurts the business.
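Following that advice, a debug-only override might look like this (the 60s value is just an example):

```yaml
seata:
  server:
    raft:
      election-timeout-ms: 60000 # debugging only: long enough that a breakpoint does not trigger re-election; keep the 1s default in production
```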

A single-machine load test targeting a 99.9% transaction success rate, randomly deducting stock from one of 200 products, for five minutes or 50,000 requests.
The Failed requests value is the number of transactions rolled back during the test.

    @GlobalTransactional
    @RequestMapping("/reduceStock")
    public Boolean reduceStock() {
        // deduct 1 unit of stock from a random product id in [1, 200)
        productService.reduceStock(ThreadLocalRandom.current().nextInt(1, 200), 1);
        // force roughly 0.1% of the global transactions to roll back
        if (ThreadLocalRandom.current().nextInt(1000) == 5) {
            int i = 1 / 0;
        }
        return true;
    }

    @Override
    @Transactional(isolation = Isolation.READ_COMMITTED)
    public Boolean reduceStock(Integer id, Integer sum) {
        // conditional update: only succeeds while stock >= sum, preventing overselling
        return update(Wrappers.<Product>lambdaUpdate().eq(Product::getId, id).ge(Product::getStock, sum)
            .setSql("stock=stock-" + sum));
    }

db mode (non-realtime flushing) + raft cluster mode
The tuning parameters follow the approach of some cloud databases, effectively flushing to disk about once per second to reduce the database pressure from transaction bookkeeping and client DML: with innodb_flush_log_at_trx_commit=0 the redo log is written and flushed roughly once per second instead of at every commit, and sync_binlog=1000 fsyncs the binlog only every 1000 commit groups.
The changes are:

innodb_flush_log_at_trx_commit=0
sync_binlog=1000
Benchmarking 127.0.0.1 (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Finished 29242 requests


Server Software:
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /reduceStock?id=1&sum=1
Document Length:        4 bytes

Concurrency Level:      32
Time taken for tests:   300.094 seconds
Complete requests:      29242
Failed requests:        42
   (Connect: 0, Receive: 0, Length: 42, Exceptions: 0)
Non-2xx responses:      42
Total transferred:      3193912 bytes
HTML transferred:       123502 bytes
Requests per second:    97.44 [#/sec] (mean)
Time per request:       328.397 [ms] (mean)
Time per request:       10.262 [ms] (mean, across all concurrent requests)
Transfer rate:          10.39 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.4      1       3
Processing:    92  327  83.9    309    1214
Waiting:       82  326  83.8    307    1211
Total:         92  328  83.9    309    1214
ERROR: The median and mean for the initial connection time are more than twice the standard
       deviation apart. These results are NOT reliable.

Percentage of the requests served within a certain time (ms)
  50%    309
  66%    336
  75%    356
  80%    371
  90%    422
  95%    485
  98%    569
  99%    656
 100%   1214 (longest request)

Pure raft mode

Benchmarking 127.0.0.1 (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Completed 50000 requests
Finished 50000 requests


Server Software:
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /reduceStock?id=1&sum=1
Document Length:        4 bytes

Concurrency Level:      32
Time taken for tests:   282.720 seconds
Complete requests:      50000
Failed requests:        45
   (Connect: 0, Receive: 0, Length: 45, Exceptions: 0)
Non-2xx responses:      45
Total transferred:      5456177 bytes
HTML transferred:       206177 bytes
Requests per second:    176.85 [#/sec] (mean)
Time per request:       180.941 [ms] (mean)
Time per request:       5.654 [ms] (mean, across all concurrent requests)
Transfer rate:          18.85 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.7      0      21
Processing:    48  180  51.6    170    1384
Waiting:       37  179  51.7    168    1384
Total:         48  181  51.6    170    1385

Percentage of the requests served within a certain time (ms)
  50%    170
  66%    193
  75%    209
  80%    219
  90%    245
  95%    268
  98%    299
  99%    327
 100%   1385 (longest request)

redis + raft cluster mode

Benchmarking 127.0.0.1 (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Finished 31088 requests


Server Software:
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /reduceStock?id=1&sum=1
Document Length:        4 bytes

Concurrency Level:      32
Time taken for tests:   300.050 seconds
Complete requests:      31088
Failed requests:        32
   (Connect: 0, Receive: 0, Length: 32, Exceptions: 0)
Non-2xx responses:      32
Total transferred:      3393092 bytes
HTML transferred:       128852 bytes
Requests per second:    103.61 [#/sec] (mean)
Time per request:       308.852 [ms] (mean)
Time per request:       9.652 [ms] (mean, across all concurrent requests)
Transfer rate:          11.04 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.4      1       4
Processing:    60  308  81.7    293    1595
Waiting:       47  306  81.7    292    1593
Total:         60  308  81.8    294    1595

Percentage of the requests served within a certain time (ms)
  50%    294
  66%    322
  75%    342
  80%    356
  90%    401
  95%    449
  98%    516
  99%    567
 100%   1595 (longest request)

Pure empty-begin load test: 3 TC nodes (2c2g), with ~0.5ms latency between TCs and ~1ms between TM and TC. The leader was manually pinned to the same node for every run to keep the results comparable; 200,000 requests at 50 concurrency.

jdk19 virtual threads

    ThreadPoolExecutor workingThreads = new ThreadPoolExecutor(
        NettyServerConfig.getMinServerPoolSize(), NettyServerConfig.getMaxServerPoolSize(),
        NettyServerConfig.getKeepAliveTime(), TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(NettyServerConfig.getMaxTaskQueueSize()),
        Thread.ofVirtual().name("ServerHandlerThread").factory(),
        new ThreadPoolExecutor.CallerRunsPolicy());

Document Path:          /api/buy/test
Document Length:        4 bytes

Concurrency Level:      32
Time taken for tests:   351.786 seconds
Complete requests:      200000
Failed requests:        0
Write errors:           0
Total transferred:      25600000 bytes
HTML transferred:       800000 bytes
Requests per second:    568.53 [#/sec] (mean)
Time per request:       56.286 [ms] (mean)
Time per request:       1.759 [ms] (mean, across all concurrent requests)
Transfer rate:          71.07 [Kbytes/sec] received

jdk17

Document Path:          /api/buy/test
Document Length:        4 bytes

Concurrency Level:      32
Time taken for tests:   442.175 seconds
Complete requests:      200000
Failed requests:        0
Write errors:           0
Total transferred:      25600000 bytes
HTML transferred:       800000 bytes
Requests per second:    452.31 [#/sec] (mean)
Time per request:       70.748 [ms] (mean)
Time per request:       2.211 [ms] (mean, across all concurrent requests)
Transfer rate:          56.54 [Kbytes/sec] received

jdk8

Document Path:          /api/buy/test
Document Length:        4 bytes

Concurrency Level:      32
Time taken for tests:   567.178 seconds
Complete requests:      200000
Failed requests:        0
Write errors:           0
Total transferred:      25600256 bytes
HTML transferred:       800008 bytes
Requests per second:    352.62 [#/sec] (mean)
Time per request:       90.748 [ms] (mean)
Time per request:       2.836 [ms] (mean, across all concurrent requests)
Transfer rate:          44.08 [Kbytes/sec] received

Ⅱ. Does this pull request fix one issue?

Ⅲ. Why don't you add test cases (unit test/integration test)?

Ⅳ. Describe how to verify it

Ⅴ. Special notes for reviews

@funky-eyes funky-eyes changed the title feature: support raft cluster mode feature: support raft cluster and store mode Oct 23, 2023
/**
* The instance of DefaultCoordinator
*/
private static final DefaultCoordinator COORDINATOR = DefaultCoordinator.getInstance();

private static final boolean DELAY_HANDLE_SESSION = StoreConfig.getSessionMode() != SessionMode.FILE;
private static final String STORE_MODE =
Member

todo

Contributor Author

done

@@ -99,84 +96,133 @@ public static void init(SessionMode sessionMode) {
if (null == sessionMode) {
sessionMode = StoreConfig.getSessionMode();
}
String group = CONFIG.getConfig(ConfigurationKeys.SERVER_RAFT_GROUP, DEFAULT_SEATA_GROUP);
RaftServerFactory.getInstance().init();
Member
todo

if (ROOT_SESSION_MANAGER instanceof Reloadable) {
((Reloadable) ROOT_SESSION_MANAGER).reload();
if (sessionMode == SessionMode.FILE) {
((Reloadable)ROOT_SESSION_MANAGER).reload();
Member
todo

Member

@slievrly slievrly left a comment

LGTM

@slievrly slievrly changed the title feature: support raft cluster and store mode feature: support raft cluster Oct 25, 2023
@slievrly slievrly merged commit e9919e4 into apache:2.x Oct 25, 2023
6 of 7 checks passed
Labels
module/discovery discovery module module/server server module TC/store store mode theme: HA High Availability