Performance reduced when upgrading to RocksDB 6.4 #5578

Open
siddontang opened this issue Oct 6, 2019 · 6 comments
@siddontang commented Oct 6, 2019

Questions

#5324 introduces a performance regression for sysbench point get; we need to investigate it ASAP.

@yiwu-arbug commented Oct 16, 2019

I'm not able to reproduce this on an IDC server. I ran sysbench oltp_point_select with the following parameters, running prepare once and then run repeatedly against different tikv-server binaries.

threads=32
time=300
tables=32
table_size=1M
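
For reference, the full invocation is roughly the following; the host, port, and user are placeholders here, not the exact benchbot command:

sysbench oltp_point_select --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=root \
    --threads=32 --time=300 --tables=32 --table_size=1000000 prepare
sysbench oltp_point_select --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=root \
    --threads=32 --time=300 --tables=32 --table_size=1000000 run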

What I'm getting:

log-6.2-1:    queries:                             23150385 (77165.94 per sec.)
log-6.2-2:    queries:                             23123219 (77075.68 per sec.)
log-6.2-3:    queries:                             22915529 (76383.51 per sec.)
log-6.3-1:    queries:                             22940664 (76467.33 per sec.)
log-6.3-2:    queries:                             23007736 (76690.60 per sec.)
log-6.3-3:    queries:                             23083423 (76942.68 per sec.)
log-6.4-1:    queries:                             23495496 (78316.71 per sec.)
log-6.4-2:    queries:                             23629755 (78764.22 per sec.)

It seems 6.4 even has ~2.5% higher QPS than 6.3, and 6.2 and 6.3 perform similarly. I'll try to reproduce on the benchbot host instead.

@yiwu-arbug commented Oct 17, 2019

Confirmed that benchbot uses 256 threads and tmpfs as the deploy directory. Will try the same setup.

@yiwu-arbug commented Oct 17, 2019

Ran the same benchmark with 256 threads and tmpfs, in the order 6.2, 6.3, 6.4, then 6.2 and 6.4 again, 3 times each:

log-6.2-1:    queries:                             30710967 (102346.27 per sec.)
log-6.2-2:    queries:                             30779147 (102580.73 per sec.)
log-6.2-3:    queries:                             30620468 (102051.88 per sec.)

log-6.3-1:    queries:                             28484981 (94923.17 per sec.)
log-6.3-2:    queries:                             28542322 (95119.59 per sec.)
log-6.3-3:    queries:                             28619801 (95383.47 per sec.)

log-6.4-1:    queries:                             27306904 (91004.28 per sec.)
log-6.4-2:    queries:                             28187468 (93942.26 per sec.)
log-6.4-3:    queries:                             28134926 (93765.36 per sec.)

log-6.2-4:    queries:                             28283283 (94258.44 per sec.)
log-6.2-5:    queries:                             28489094 (94942.40 per sec.)
log-6.2-6:    queries:                             28627407 (95409.18 per sec.)

log-6.4-4:    queries:                             30086395 (100269.60 per sec.)
log-6.4-5:    queries:                             30148394 (100477.40 per sec.)
log-6.4-6:    queries:                             30188486 (100613.67 per sec.)

The results don't seem very stable.

@yiwu-arbug commented Nov 4, 2019

The regression can be reproduced on the benchbot host. It seems to be related to block cache changes:

  • the recent change to cache only block bytes instead of the C++ object incurs C++ object construction cost on reads;
  • the recent addition of the block cache tracer adds some overhead.

Still trying to figure out what exactly causes the regression and how to fix it.

@siddontang commented Nov 5, 2019

Do we need to report this to the RocksDB team and let them investigate?

@yiwu-arbug commented Nov 7, 2019

So far I've only reproduced it with sysbench (benchbot). The RocksDB team requires db_bench results to investigate, so I'll work on a db_bench repro before filing a RocksDB issue.
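
A db_bench sketch of the point-get pattern might look something like the following; the flag values and DB path are assumptions to illustrate the shape, not the final repro:

./db_bench --benchmarks=filluniquerandom --num=10000000 --db=/path/to/tmpfs/db
./db_bench --benchmarks=readrandom --use_existing_db=1 --num=10000000 --threads=256 --duration=300 --db=/path/to/tmpfs/db

Running that against binaries linked with RocksDB 6.2 and 6.4 should show whether the readrandom path alone reproduces the regression.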
