TiDB cluster is much slower using sysbench #5328
Comments
@zzx8170 We do not encourage running PD/TiKV on virtual machines. Please provide your hardware info. How many tables are in your test, and how many rows per table? Please refer to our test results: https://github.com/pingcap/docs/blob/master/benchmark/sysbench.md
tcount=16. VM configuration: 10 CPU cores, CPU MHz: 2299.996, memory: 32 GB, disk: SSD
Would you check the value of "innodb_flush_log_at_trx_commit" in MySQL configuration and the value of "sync-log" in TiKV configuration for a fair comparison? If the value of "innodb_flush_log_at_trx_commit" is "0" or "2", the value of "sync-log" should be "false".
If you don't want to change the default value of "sync-log", the value of "innodb_flush_log_at_trx_commit" should be "1".
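To spell out the two fair pairings, here is a sketch showing only the relevant keys (the surrounding settings are omitted):

```toml
# Pairing A (both sides fsync on every commit):
#   MySQL my.cnf:  innodb_flush_log_at_trx_commit = 1
#   TiKV tikv.toml:
[raftstore]
sync-log = true

# Pairing B (neither side fsyncs on every commit):
#   MySQL my.cnf:  innodb_flush_log_at_trx_commit = 0  (or 2)
#   TiKV tikv.toml:
#   [raftstore]
#   sync-log = false
```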
The TiKV config file is as follows, with sync-log = false:

```toml
# TiKV config template
# Human-readable big numbers:
# File size (based on bytes): KB, MB, GB, TB, PB
# e.g.: 1_048_576 = "1MB"
# Time (based on ms): ms, s, m, h
# e.g.: 78_000 = "1.3m"
# log level: trace, debug, info, warn, error, off.
# log-level = "info"
# file to store log, write to stderr if it's empty.
# log-file = ""

[server]
labels = { host = "tikv2" }

[storage]

[pd]
# This section will be overwritten by command line parameters

[metric]
address = "10.1.13.210:9091"
interval = "15s"
job = "tikv"

[raftstore]
raftdb-path = ""
sync-log = false

[rocksdb]
wal-dir = ""

[rocksdb.defaultcf]

[rocksdb.lockcf]

[rocksdb.writecf]
block-cache-size = "1GB"

[raftdb]

[raftdb.defaultcf]
```

MySQL is on the default setting, innodb_flush_log_at_trx_commit = 1. Even so, its results are far higher than TiDB's, and changing it to 0 or 2 widens the gap further. Also, TiKV shows almost no difference between the earlier 3-node setup and the current 12-node one. Why? Adding TiDB servers does scale throughput roughly linearly, but the gap with MySQL remains huge. Could there be another cause? Feel free to add my QQ to discuss: 41517897
Can anyone help with this?
@zzx8170 Could I access your Grafana to see where the bottleneck of your cluster is?
@shenli Of course, but it is on an internal IP and not reachable from outside. You can add my QQ: 41517897 and I will send you the monitoring screenshots.
@shenli I read those two documents, but they did not really help. Is there anything else I can try?
@zzx8170 Some advice:
top output from one of the machines: [screenshot] sysbench OLTP result: [screenshot]
TiKV uses Raft to replicate data and 2PC for distributed transactions. So in your scenario, TiDB cannot be faster than a single-node MySQL.
> TiKV uses Raft to replicate data and 2PC for distributed transactions. So in your scenario, TiDB cannot be faster than a single-node MySQL.

@shenli Thanks for the reply, but that explanation does not help me understand why my scenario is slower than a single instance. Under what conditions do Raft and 2PC become fast?
Your data set is not that large. MySQL can cache most of it in memory, so it can beat any distributed database system in this scenario.
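To illustrate the point, here is a back-of-envelope latency model (all numbers below are assumptions chosen for illustration, not measurements): a fully cached single-node MySQL commit pays roughly one local log write, while a TiDB commit pays two 2PC phases, each of which must be replicated through Raft (a network round trip plus a log write).

```python
# Back-of-envelope model of per-commit latency. The constants are
# illustrative assumptions, not measured values.
NETWORK_RTT_MS = 0.5   # assumed intra-cluster round-trip time
LOG_WRITE_MS = 0.1     # assumed cost of one (possibly buffered) log write

# Single-node MySQL: a commit is roughly one local redo-log write.
mysql_commit_ms = LOG_WRITE_MS

# TiDB/TiKV: two-phase commit (prewrite + commit), and each phase is a
# Raft-replicated write (round trip to a quorum plus a log write).
tidb_commit_ms = 2 * (NETWORK_RTT_MS + LOG_WRITE_MS)

print(f"MySQL ~{mysql_commit_ms:.1f} ms/commit, TiDB ~{tidb_commit_ms:.1f} ms/commit")
```

On this model the fixed per-transaction overhead is what caps TPS at a given concurrency; adding more TiKV nodes adds capacity but does not lower per-commit latency, which is consistent with the 3-node and 12-node setups showing similar results.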
OK, I'll retry on a higher-performance machine. Thanks again!
Please answer these questions before submitting your issue. Thanks!
What did you do?
Reference docs: https://github.com/pingcap/docs-cn/blob/master/benchmark/sysbench.md
Setup: 2 tidb-server / 3 PD / 12 TiKV (3 virtual machines, 4 TiKV instances on each)
Test script: https://github.com/pingcap/tidb-bench/tree/cwen/not_prepared_statement/sysbench
new_oltp.sh
Results: at concurrency 1, 2, 4, 8, 16, 32, 64, 128, 256, the peak was about 530 TPS and 10,000 QPS, while a single MySQL instance on a VM at concurrency 256 reached about 3,000 TPS and 50,000 QPS, more than 5x TiDB. This differs greatly from your published results. IO wait was low during the test, so IO is not the bottleneck. What could the cause be?
What did you expect to see?
What did you see instead?
What version of TiDB are you using (tidb-server -V)? 1.0