
Document insert scaling of UUID (well distributed) vs Serial #4234

robert-s-lee opened this issue Jan 10, 2019 · 4 comments


@robert-s-lee robert-s-lee commented Jan 10, 2019

Workload distribution affects scaling. This is a simple insert example: UUID keys are well distributed, while SERIAL keys increase sequentially. Sequential keys create a localized hotspot that prevents scaling at high concurrency. Below is a short sample of the scaling behavior.

(graph: insert scaling of unique_rowid vs uuid)

TODO: sysbench script procedure used to produce the result set above.
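As a rough illustration of why sequential keys hotspot (this is a sketch, not CockroachDB's actual implementation; the bit layout and constants are assumptions), a serial-style ID places a logical timestamp in the most-significant bits, so a burst of concurrent inserts produces numerically adjacent keys no matter which node generates them, and all land at the tail of the key space:

```go
package main

import (
	"fmt"
	"math/rand"
)

// uniqueRowID is a simplified serial-style ID: a logical timestamp in the
// high bits and the node ID in the low bits (the real CockroachDB layout
// differs in detail). Because the timestamp dominates the ordering, IDs
// generated close together in time are numerically adjacent regardless of
// which node produced them.
func uniqueRowID(timestamp, nodeID uint64) uint64 {
	const nodeIDBits = 15
	return timestamp<<nodeIDBits | nodeID
}

func main() {
	// Serial-style: a burst of inserts produces strictly increasing IDs,
	// so every write targets the tail range -> one node does all the work.
	prev := uint64(0)
	for ts := uint64(1); ts <= 10; ts++ {
		id := uniqueRowID(ts, ts%3+1) // round-robin across 3 nodes
		if id <= prev {
			panic("serial-style IDs should be monotonically increasing")
		}
		prev = id
	}
	fmt.Println("serial-style IDs are monotonic -> tail-range hotspot")

	// UUID-style: random keys scatter across the whole key space, so
	// concurrent inserts hit many ranges (and therefore many nodes).
	rng := rand.New(rand.NewSource(42))
	fmt.Printf("random key sample: %016x %016x\n", rng.Uint64(), rng.Uint64())
}
```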


@hueypark hueypark commented Jun 18, 2019

Which workload shows the performance degradation in the graph you shared? If you let me know, I'll take a closer look.

At first I thought the speed of the GenerateUniqueInt function was the problem, but that does not seem to be the case.


@hueypark hueypark commented Jun 30, 2019

To reproduce the graph you shared, I changed the code and measured the performance as below.


// Modified so the node ID occupies the most-significant bits
// instead of the timestamp.
func GenerateUniqueID(nodeID int32, timestamp uint64) tree.DInt {
	id := (uint64(nodeID) << TimestampBits) ^ timestamp
	return tree.DInt(id)
}

But there was no significant difference, and @bdarnell's comment remains valid.

Performance Measurement Results

BenchmarkIssue4234/100-16 2 769922108 ns/op
BenchmarkIssue4234/100#01-16 2 740884949 ns/op
BenchmarkIssue4234/100#02-16 2 794033482 ns/op
BenchmarkIssue4234/100#03-16 2 778814176 ns/op
BenchmarkIssue4234/100#04-16 2 734334384 ns/op
BenchmarkIssue4234/100#05-16 2 732287343 ns/op
BenchmarkIssue4234/100#06-16 2 718965703 ns/op
BenchmarkIssue4234/100#07-16 2 750783191 ns/op
BenchmarkIssue4234/100#08-16 2 717592265 ns/op
BenchmarkIssue4234/100#09-16 2 754199910 ns/op
BenchmarkIssue4234/100#10-16 2 779262461 ns/op
BenchmarkIssue4234/100#11-16 2 742781762 ns/op
BenchmarkIssue4234/100#12-16 2 755679509 ns/op
BenchmarkIssue4234/100#13-16 2 760276421 ns/op
BenchmarkIssue4234/100#14-16 2 752905362 ns/op
BenchmarkIssue4234/100#15-16 2 740751411 ns/op
BenchmarkIssue4234/100#16-16 2 753700862 ns/op
BenchmarkIssue4234/100#17-16 2 732161154 ns/op
BenchmarkIssue4234/100#18-16 2 762167341 ns/op
BenchmarkIssue4234/100#19-16 2 781000318 ns/op
BenchmarkIssue4234/100#20-16 2 728199898 ns/op

BenchmarkIssue4234/100-16 1 1003090031 ns/op
BenchmarkIssue4234/100#01-16 1 1061212066 ns/op
BenchmarkIssue4234/100#02-16 1 1041456399 ns/op
BenchmarkIssue4234/100#03-16 1 1021949572 ns/op
BenchmarkIssue4234/100#04-16 1 1038222748 ns/op
BenchmarkIssue4234/100#05-16 2 995622749 ns/op
BenchmarkIssue4234/100#06-16 1 1069704650 ns/op
BenchmarkIssue4234/100#07-16 1 1026090132 ns/op
BenchmarkIssue4234/100#08-16 1 1017747874 ns/op
BenchmarkIssue4234/100#09-16 1 1021918587 ns/op
BenchmarkIssue4234/100#10-16 1 1071479203 ns/op
BenchmarkIssue4234/100#11-16 1 1043181858 ns/op
BenchmarkIssue4234/100#12-16 1 1043871567 ns/op
BenchmarkIssue4234/100#13-16 1 1075590044 ns/op
BenchmarkIssue4234/100#14-16 1 1039865336 ns/op
BenchmarkIssue4234/100#15-16 1 1082923314 ns/op
BenchmarkIssue4234/100#16-16 1 1063735395 ns/op
BenchmarkIssue4234/100#17-16 1 1031141732 ns/op
BenchmarkIssue4234/100#18-16 1 1064227742 ns/op
BenchmarkIssue4234/100#19-16 1 1012070333 ns/op
BenchmarkIssue4234/100#20-16 1 1027658439 ns/op



@robert-s-lee robert-s-lee commented Jun 30, 2019

@hueypark yes, the issue is not the speed of generating the numbers; rather, the sequential nature of the generated numbers tends to create a rolling CPU hotspot when the rows are inserted into the database. This behavior can be observed in the Admin UI's Hardware tab under CPU usage. Even though the application is connected to all 3 CRDB nodes, one node's CPU will be higher than the others; this can be seen even on 4 vCPU systems. All of the inserts are performed on a single node.

The original curve was created by modifying sysbench to use UUID and SERIAL.
#4221 has a small script to run sysbench. CRDB SERIAL is mostly sequential.

When using UUID, the following statements can pre-split the table so that ranges are evenly distributed across nodes:

  alter table t1 split at select gen_random_uuid() from generate_series(1, 16);
  alter table t1 scatter;

When using SERIAL, a split will still group the keys generated at the same time onto a single node, creating the hotspot. With the node ID in the most-significant bits, the table can instead be split on (uint64(nodeID) << TimestampBits), so that inserts from each node are localized to that node's own range.

For testing the modified serial, try the following, replacing TimestampBits with the number of bits shifted to match the code:

  alter table t1 split at select generate_series(1, 16)<<TimestampBits::bigint;
  alter table t1 scatter;

With the modified serial, the inserts are spread across separate nodes and CPU usage should be even across the nodes.
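The split scheme above can be sanity-checked in a few lines of Go. This is a sketch under assumptions: TimestampBits = 48 is a made-up value (match it to the modified code), and modifiedID mirrors the changed GenerateUniqueID above, not the shipped implementation. The point is that every ID generated by node n lands between split key n<<TimestampBits and split key (n+1)<<TimestampBits:

```go
package main

import "fmt"

// Assumed bit layout for the modified serial ID: node ID in the
// most-significant bits. timestampBits = 48 is a hypothetical value;
// match it to whatever the modified code actually uses.
const timestampBits = 48

// modifiedID mirrors the changed GenerateUniqueID from the comment above.
func modifiedID(nodeID, timestamp uint64) uint64 {
	return nodeID<<timestampBits ^ timestamp
}

// splitKey mirrors "split at select n << TimestampBits": the lower
// boundary of node n's key range.
func splitKey(nodeID uint64) uint64 {
	return nodeID << timestampBits
}

func main() {
	// Every ID generated by node n stays between splitKey(n) and
	// splitKey(n+1), so each node's inserts hit its own pre-split range.
	for node := uint64(1); node <= 3; node++ {
		for ts := uint64(0); ts < 1000; ts++ {
			id := modifiedID(node, ts)
			if id < splitKey(node) || id >= splitKey(node+1) {
				panic("ID escaped its node's range")
			}
		}
	}
	fmt.Println("each node's IDs stay inside its own pre-split range")
}
```

This holds as long as the timestamp fits in the low timestampBits bits; the XOR then only perturbs the low bits and can never change the node-ID prefix.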

A more advanced approach to try later is geo-partitioned data. The CRDB PARTITION command can be used to control which nodes serve which ranges. This strategy is used to scale open-source YCSB; a copy-paste from YCSB is shown below. The PARTITION command requires an enterprise license.

  PARTITION user1 VALUES FROM ('user001') TO ('user002'),
  PARTITION user2 VALUES FROM ('user002') TO ('user003'),
  PARTITION user3 VALUES FROM ('user003') TO ('user004'),
  PARTITION user4 VALUES FROM ('user004') TO ('user005'),
  PARTITION user5 VALUES FROM ('user005') TO ('user006'),
  PARTITION usermaxvalue VALUES FROM ('user006') TO (MAXVALUE)


@hueypark hueypark commented Jun 30, 2019

@robert-s-lee I see! Your intention was not to change the implementation of unique_rowid, but to document the insert scaling of UUID (well distributed) vs Serial. Thank you for your response.
