
A number of questions on details of workloads #548

Closed

Hadi14 opened this issue Dec 21, 2015 · 67 comments

Comments

Hadi14 commented Dec 21, 2015

Hi all,
Excuse me, I have a few more questions:

1. In workloads we have a recordcount argument. What is the difference between it and operationcount? They seem identical.

2. If we set readallfields=false and writeallfields=false, what happens?

3. What are the fieldlengthdistribution, scanlengthdistribution, and hotspotdatafraction arguments?

kruthar (Collaborator) commented Dec 22, 2015

1. In workloads we have a recordcount argument. What is the difference between it and operationcount? They seem identical.

  • recordcount - the number of records inserted into the database during the load phase, and the number of records that run-phase operations may reference.
  • operationcount - the total number of operations (read, update, insert, scan) performed during the run phase.
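As an illustration, a minimal workload file combining the two (these are real YCSB core property names; the values are arbitrary):

```properties
# Load phase: insert 1000 records.
recordcount=1000
# Run phase: perform 600 operations against those records.
operationcount=600
```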

2. If we set readallfields=false and writeallfields=false, what happens?

  • readallfields=false - each read operation will specify a single field to read from the row (as opposed to reading all fields of the row).
  • writeallfields=false - each update operation will specify a single field to update for the row (as opposed to updating all fields of the row).

These settings will affect different databases differently.

3. What are the fieldlengthdistribution, scanlengthdistribution, and hotspotdatafraction arguments?

Writing this I realized that only some of these properties are listed on the core properties page: https://github.com/brianfrankcooper/YCSB/wiki/Core-Properties. I will open an issue to rectify that.

kruthar (Collaborator) commented Dec 22, 2015

For reference #550 has been created to update documentation on core properties.

Hadi14 (Author) commented Dec 22, 2015

If we have 1000 records and we want 200 reads, 100 updates, 50 inserts, and 250 scans, should we set recordcount=1000 and operationcount=600?

cmccoy (Collaborator) commented Dec 22, 2015

Roughly, yes, assuming you've set the {read,update,insert,scan}proportions to match. But each operation is randomly selected according to those proportions, so you most likely won't get exactly 200, 100, 50, 250.
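For example, a sketch of the matching workload fragment (the fractions are rounded to three decimals, so the mix is approximate by construction):

```properties
recordcount=1000
operationcount=600
# 200/600 reads, 100/600 updates, 50/600 inserts, 250/600 scans
readproportion=0.333
updateproportion=0.167
insertproportion=0.083
scanproportion=0.417
```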

Hadi14 (Author) commented Dec 22, 2015

With readallfields=true, how do we specify one of the fields to read?

kruthar (Collaborator) commented Dec 23, 2015

You cannot specify which field to read or update. The field read on each read operation, and the field updated on each update operation, are chosen randomly per operation.

Hadi14 (Author) commented Dec 23, 2015

@kruthar OK.
In YCSB, can we see how many operations completed in how much time, bucket by bucket?
For example, 100 of 200 total operations took 10 ms, 50 took 15 ms, and 50 took 20 ms
(for computing the standard deviation, etc.).

kruthar (Collaborator) commented Dec 23, 2015

I think you are asking if you can see the operation latencies in buckets? Yes, you can. Take a look at Core Properties for more details on these two properties https://github.com/brianfrankcooper/YCSB/wiki/Core-Properties.

  • measurementtype=histogram - this will show results in buckets of latency.
  • histogram.buckets=1000 - specifies the number of buckets

The default hdrhistogram measurementtype also has useful information such as latency min, max, avg and percentiles.
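For example (as far as I know the histogram buckets are one millisecond wide; treat that width as an assumption):

```properties
# Report a count of operations per latency bucket
# instead of the hdrhistogram summary.
measurementtype=histogram
histogram.buckets=1000
```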

Hadi14 (Author) commented Dec 24, 2015

When running YCSB on a single machine, can we define the number of replications and measure their impact?

kruthar (Collaborator) commented Dec 24, 2015

Do you mean replications of data in your database? or replicating the workload via threads?

Hadi14 (Author) commented Dec 24, 2015

I mean simulating replication of data in YCSB. Or, when a workload runs, do all threads perform it?

With insertorder, aren't all the records the same anyway? So what difference does it make to how they are read?

kruthar (Collaborator) commented Dec 24, 2015

YCSB doesn't directly deal with replication of data. How your database will replicate data depends on how you configure the database. Some YCSB clients do come with write consistency settings which determine how replicated data has to be for an insert or update operation to be considered successful.

YCSB threads are just a way to increase load on the database by spinning up multiple YCSB client threads that act against the database at the same time.

insertorder just deals with what the record keys look like. insertorder=hashed will hash the key id before storing it, so that if you were to line all the keys up they would not be 'ordered'. insertorder=ordered does not hash the keys, so they are ordered.

Hadi14 (Author) commented Dec 24, 2015

Meaning, if we want to use replication, we should set up replication consistency on our database, e.g. Cassandra?

When we use insertorder=hashed, are all the keys sorted first and then inserted?

Hadi14 (Author) commented Dec 24, 2015

Another question: when most benchmarks talk about the number of cores, do they mean only physical cores, or virtual cores too?

kruthar (Collaborator) commented Dec 24, 2015

Meaning, if we want to use replication, we should set up replication consistency on our database, e.g. Cassandra?

Yes, you should look into how to set up the type of replication you want on the database you are using. Then check the respective YCSB binding README for consistency settings.

When we use insertorder=hashed, are all the keys sorted first and then inserted?

They are not sorted per se. When you do your load, YCSB starts at 0 and counts up to however many records you are inserting. Each value (0, 1, 2, ...) is then hashed and appended to 'user'. The hashed values will not be sorted.
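To illustrate the idea (this sketch uses standard FNV-1a; YCSB's actual key hash is an FNV-style 64-bit hash, so the exact key values here are illustrative, not YCSB's output):

```python
FNV_OFFSET = 0xcbf29ce484222325  # FNV-1a 64-bit offset basis
FNV_PRIME = 0x100000001b3        # FNV-1a 64-bit prime

def fnv1a_64(n: int) -> int:
    """Hash an integer key id byte-by-byte with FNV-1a."""
    h = FNV_OFFSET
    for b in n.to_bytes(8, "little"):
        h ^= b
        h = (h * FNV_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h

# insertorder=ordered: keys keep their natural order.
ordered_keys = [f"user{i}" for i in range(5)]

# insertorder=hashed: the id is hashed first, scattering the keys.
hashed_keys = [f"user{fnv1a_64(i)}" for i in range(5)]

print(ordered_keys)
print(hashed_keys)
```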

Another question: when most benchmarks talk about the number of cores, do they mean only physical cores, or virtual cores too?

I can't say for sure; it depends on who is publishing the information and what is written there.

Hadi14 (Author) commented Dec 25, 2015

Meaning that when threads do inserts, they select records randomly?

In YCSB, if we want, for example, 50% read and 50% insert, what is the use case for this? And what about 50% insert, 50% readmodifywrite proportions?

Hadi14 (Author) commented Dec 25, 2015

With the values in workloadd changed like this, can I use it in YCSB with 50% insert and 50% read?

workloadh.txt

Do I need a license for it?

kruthar (Collaborator) commented Dec 25, 2015

Meaning that when threads do inserts, they select records randomly?

No. If you are doing the load with multiple threads, then each thread is assigned a range of consecutive record keys to insert, and it inserts that group in order.

In YCSB, if we want, for example, 50% read and 50% insert, what is the use case for this? And what about 50% insert, 50% readmodifywrite proportions?

workloada is 50% read, 50% update, which is a common starting point. As for specific use cases, the predefined workloads are just starting points; you really need to identify what type of load you want to simulate and then design a workload around that. The sample workload files each have a short description of a possible use case.

With the values in workloadd changed like this, can I use it in YCSB with 50% insert and 50% read?

workloadh.txt

Do I need a license for it?

It looks like this is just a copy of workloadd with different percentage values? If so I don't see any issue with it.

Hadi14 (Author) commented Dec 26, 2015

In meteorology, the data is heavy, right?
In your opinion, what proportion of operations would suit a meteorology workload?

kruthar (Collaborator) commented Dec 27, 2015

You'll really have to do your own research here to see what data use cases look like.

Hadi14 (Author) commented Dec 27, 2015

OK. If I use a CPU that has 4 threads, can I define threads=100 on the load command?

I ran load and run with this:
workloadh.txt

but got these messages:

237 [Thread-1] INFO com.datastax.driver.core.NettyUtil - Did not find Netty's native epoll transport in the classpath, defaulting to NIO.
909 [Thread-1] INFO com.datastax.driver.core.policies.DCAwareRoundRobinPolicy - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
911 [Thread-1] INFO com.datastax.driver.core.Cluster - New Cassandra host localhost/127.0.0.1:9042 added
Connected to cluster: Test Cluster

run:
YCSB Client 0.1
Command line: -db com.yahoo.ycsb.db.CassandraCQLClient -p hosts=localhost -P workloads/workloadh -t
Datacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
[OVERALL], RunTime(ms), 4796.0
[OVERALL], Throughput(ops/sec), 208.5070892410342
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 21640.0
[CLEANUP], MinLatency(us), 21632.0
[CLEANUP], MaxLatency(us), 21647.0
[CLEANUP], 95thPercentileLatency(us), 21647.0
[CLEANUP], 99thPercentileLatency(us), 21647.0
[INSERT], Operations, 536.0
[INSERT], AverageLatency(us), 3408.8022388059703
[INSERT], MinLatency(us), 1216.0
[INSERT], MaxLatency(us), 214911.0
[INSERT], 95thPercentileLatency(us), 5807.0
[INSERT], 99thPercentileLatency(us), 19359.0
[INSERT], Return=OK, 536
[READ], Operations, 464.0
[READ], AverageLatency(us), 3552.4547413793102
[READ], MinLatency(us), 1262.0
[READ], MaxLatency(us), 93887.0
[READ], 95thPercentileLatency(us), 7435.0
[READ], 99thPercentileLatency(us), 22207.0
[READ], Return=OK, 464

and load:


YCSB Client 0.1
Command line: -db com.yahoo.ycsb.db.CassandraCQLClient -p hosts=localhost -P workloads/workloadh -load
Datacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
[OVERALL], RunTime(ms), 5141.0
[OVERALL], Throughput(ops/sec), 194.51468585878234
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 27880.0
[CLEANUP], MinLatency(us), 27872.0
[CLEANUP], MaxLatency(us), 27887.0
[CLEANUP], 95thPercentileLatency(us), 27887.0
[CLEANUP], 99thPercentileLatency(us), 27887.0
[INSERT], Operations, 1000.0
[INSERT], AverageLatency(us), 2948.825
[INSERT], MinLatency(us), 1230.0
[INSERT], MaxLatency(us), 135423.0
[INSERT], 95thPercentileLatency(us), 5019.0
[INSERT], 99thPercentileLatency(us), 11367.0
[INSERT], Return=OK, 1000

Hadi14 (Author) commented Jan 1, 2016

Hi,
Excuse me, if I set operationcount=1000 but set "-p recordcount=80000" on the load command line, will it still work? That is, must operationcount and recordcount have the same value?

cmccoy (Collaborator) commented Jan 1, 2016

Yes, operation count can be larger or smaller than record count.

Hadi14 (Author) commented Jan 1, 2016

If the operation count can be larger than the record count, how does it run? There are fewer records than operations.

kruthar (Collaborator) commented Jan 2, 2016

@Hadi14 - operationcount and recordcount are unrelated. You can have operationcount be larger than recordcount; you would run that the same way as you normally would.

YCSB uses number generators to pick a key value for each operation. This means that the same key value may be chosen more than once. So, if your operationcount is higher than your recordcount, then certain record keys will be operated on more than once.

There should be no issue with having an operationcount higher than the recordcount.
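A rough sketch of why keys repeat when operationcount exceeds recordcount (uniform random choice here; YCSB's actual key selection is configurable via requestdistribution):

```python
import random

random.seed(0)  # deterministic for the example
recordcount = 10
operationcount = 30

# Each operation independently draws a key from the loaded records,
# so the same key can be drawn more than once.
keys = [random.randrange(recordcount) for _ in range(operationcount)]

# 30 draws from only 10 keys guarantees repeats (pigeonhole).
print(len(keys), len(set(keys)))
```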

Hadi14 (Author) commented Jan 3, 2016

OK. What is the benefit of setting a target when we always want maximum throughput?

kruthar (Collaborator) commented Jan 4, 2016

I'm sorry, I don't understand the question.

Hadi14 (Author) commented Jan 4, 2016

In YCSB we usually benchmark to measure throughput, so what is the advantage of the target switch?

kruthar (Collaborator) commented Jan 4, 2016

Ah. So yes, you may be trying to measure throughput under certain workloads, in which case throttling throughput seems counterproductive.

But YCSB can also conceivably be used as a constant load generator, in which case you would set your target ops/sec to send to your database. This could be useful for testing other things, like how well your database handles prolonged constant load, or failover scenarios under load.
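For example, with the ycsb launcher's -target option (the binding name and paths here are illustrative):

```sh
# Throttle the client to roughly 100 operations/second overall
bin/ycsb run cassandra-cql -P workloads/workloada -p hosts=localhost -target 100
```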

Hadi14 (Author) commented Jan 4, 2016

Thank you.
Well, when we set, for example, target=100, it must do 100 ops/sec across all load types, right? Excuse me, what does "prolonged constant load" mean?

2. What are the 95th and 99th percentile latencies? I don't understand them.

Hadi14 (Author) commented Jan 5, 2016

@kruthar Another question: why are the numbers of read and update operations not exactly the same?

kruthar (Collaborator) commented Jan 5, 2016

YCSB uses a discrete generator which takes in the different operation proportions you specify and probabilistically chooses which operation should happen next depending on the relative proportions.

What this means is that with properties such as:

readproportion=0.5
updateproportion=0.5

each operation has a 50-50 chance of being a read or an update. YCSB effectively flips a coin to decide what each operation should be in this case. You are not guaranteed that you will get a perfect 50-50 operation split. It will be close but not exact.
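A minimal sketch of that selection (a plain uniform draw per operation; YCSB's actual DiscreteGenerator is more general, but the effect is the same):

```python
import random

random.seed(42)  # deterministic for the example

# Flip a fair coin for each of 1000 operations.
ops = ["read" if random.random() < 0.5 else "update" for _ in range(1000)]
reads = ops.count("read")
updates = ops.count("update")

# Close to 500/500, but almost never exactly equal.
print(reads, updates)
```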

Hadi14 (Author) commented Jan 6, 2016

If we set a parameter, for example recordcount, on the command line, does the command line take priority, or the parameter in the workload file? Does it make a difference which comes first or last?

kruthar (Collaborator) commented Jan 6, 2016

Yes, the command line takes priority.

For questions like this, it doesn't hurt to just go ahead and try it yourself.

Hadi14 (Author) commented Jan 6, 2016

Excuse me, I don't understand your question completely, but I ran a benchmark with 1 thread and an operation count of 2000:
[OVERALL], RunTime(ms), 5659.0
[OVERALL], Throughput(ops/sec), 353.4193320374624
and another benchmark with 2 threads and an operation count of 2000:
[OVERALL], RunTime(ms), 8933.0
[OVERALL], Throughput(ops/sec), 223.88895108026418

Hadi14 (Author) commented Jan 6, 2016

The benchmark with 2 threads has a larger runtime than with 1 thread. Is that natural?

kruthar (Collaborator) commented Jan 6, 2016

Yes, that seems possible. It all depends on your workload configuration and the performance of your database.

When you add a second thread you are doubling the load on your database. Each thread is going to perform the number of operations you specify in operationcount. The results seem to say that with double the workload it takes a little longer to complete all of the operations.

Hadi14 (Author) commented Jan 6, 2016

OK, meaning that if we set operationcount=1000 and -threads 2, each thread runs 1000 operations separately? Then why, in the following result with 4000 operations, is the total still 4000? Unless these results are per thread.

[OVERALL], RunTime(ms), 26733.0
[OVERALL], Throughput(ops/sec), 149.62780084539708
[CLEANUP], Operations, 2.0
[CLEANUP], AverageLatency(us), 22923.5
[CLEANUP], MinLatency(us), 7.0
[CLEANUP], MaxLatency(us), 45855.0
[CLEANUP], 95thPercentileLatency(us), 45855.0
[CLEANUP], 99thPercentileLatency(us), 45855.0
[READ], Operations, 1983.0
[READ], AverageLatency(us), 24254.0131114473
[READ], MinLatency(us), 1129.0
[READ], MaxLatency(us), 271103.0
[READ], 95thPercentileLatency(us), 92287.0
[READ], 99thPercentileLatency(us), 132095.0
[READ], Return=OK, 1983
[UPDATE], Operations, 2017.0
[UPDATE], AverageLatency(us), 1180.3966286564205
[UPDATE], MinLatency(us), 551.0
[UPDATE], MaxLatency(us), 50623.0
[UPDATE], 95thPercentileLatency(us), 1871.0
[UPDATE], 99thPercentileLatency(us), 3261.0
[UPDATE], Return=OK, 2017

kruthar (Collaborator) commented Jan 6, 2016

Are you sure you changed the operation count to 1000? Earlier you said you were working with 2000 operationcount:

and other benchmark with 2 thread and 2000 operation count :

busbey (Collaborator) commented Jan 6, 2016

The operation count is the total for the given client run; the total is split up amongst the number of threads you specify. So with threads=2 and count=2k, a total of 2k operations are performed. This still might take longer than with 1 thread and 2k operations because there is overhead to running multiple threads (in YCSB and possibly in the data store driver), and 2 thousand is a small enough number that you may not overcome that overhead in throughput savings.
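A sketch of that split (simple integer division with the remainder given to one thread; how YCSB distributes any leftover operations exactly is an assumption here):

```python
def split_ops(operationcount: int, threads: int) -> list[int]:
    """Divide the total operation count across client threads."""
    base = operationcount // threads
    counts = [base] * threads
    counts[-1] += operationcount - base * threads  # leftover ops
    return counts

print(split_ops(2000, 2))  # → [1000, 1000]: two threads share the 2000 ops
```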

Hadi14 (Author) commented Jan 7, 2016

@kruthar I meant 1000, for example.

kruthar (Collaborator) commented Jan 7, 2016

@busbey explained threaded functionality. At this point I'm not sure what the question is?

Hadi14 (Author) commented Jan 7, 2016

Sorry, when we load and run a workload many times, is the data loaded in previous runs erased?

busbey (Collaborator) commented Jan 7, 2016

No, YCSB doesn't do anything to clean up data already in a datastore, whether from prior YCSB runs or elsewhere.

Hadi14 (Author) commented Jan 7, 2016

I got this error. Is it caused by low memory?


13:16:26 Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out, especially with mmapped I/O enabled. Increase RLIMIT_MEMLOCK or run Cassandra as root.
WARN  13:16:26 JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
WARN  13:16:26 OpenJDK is not recommended. Please upgrade to the newest Oracle Java release
INFO  13:16:26 Initializing SIGAR library
WARN  13:16:27 Cassandra server running in degraded mode. Is swap disabled? : false,  Address space adequate? : true,  nofile limit adequate? : true, nproc limit adequate? : false

Excuse me.

busbey (Collaborator) commented Jan 7, 2016

You missed the error text.

Hadi14 (Author) commented Jan 8, 2016

busbey (Collaborator) commented Jan 8, 2016

If you're having trouble getting Cassandra to run, you should seek help from the Cassandra community.

Their user mailing list details are here:

http://mail-archives.apache.org/mod_mbox/cassandra-user/

Hadi14 (Author) commented Jan 8, 2016

@busbey OK, thank you.

Hadi14 (Author) commented Jan 9, 2016

Hi, with workload F I got this output.
1. In addition to READ-MODIFY-WRITE, why are there READ, CLEANUP, and UPDATE entries?
2. Why only 989 for READ-MODIFY-WRITE and UPDATE?

[OVERALL], RunTime(ms), 4132.0
[OVERALL], Throughput(ops/sec), 484.027105517909
[READ], Operations, 2000.0
[READ], AverageLatency(us), 1219.5885
[READ], MinLatency(us), 370.0
[READ], MaxLatency(us), 99263.0
[READ], 95thPercentileLatency(us), 3177.0
[READ], 99thPercentileLatency(us), 7739.0
[READ], Return=OK, 2000
[READ-MODIFY-WRITE], Operations, 989.0
[READ-MODIFY-WRITE], AverageLatency(us), 2380.6511627906975
[READ-MODIFY-WRITE], MinLatency(us), 816.0
[READ-MODIFY-WRITE], MaxLatency(us), 101759.0
[READ-MODIFY-WRITE], 95thPercentileLatency(us), 5883.0
[READ-MODIFY-WRITE], 99thPercentileLatency(us), 10399.0
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 9036.0
[CLEANUP], MinLatency(us), 9032.0
[CLEANUP], MaxLatency(us), 9039.0
[CLEANUP], 95thPercentileLatency(us), 9039.0
[CLEANUP], 99thPercentileLatency(us), 9039.0
[UPDATE], Operations, 989.0
[UPDATE], AverageLatency(us), 1092.4924165824066
[UPDATE], MinLatency(us), 392.0
[UPDATE], MaxLatency(us), 14423.0
[UPDATE], 95thPercentileLatency(us), 2631.0
[UPDATE], 99thPercentileLatency(us), 6683.0
[UPDATE], Return=OK, 989

kruthar (Collaborator) commented Jan 9, 2016

1. In addition to READ-MODIFY-WRITE, why are there READ, CLEANUP, and UPDATE entries?

readmodifywrite is actually just a measure of a read operation and a write operation on the same key. All read and update operations report their performance metrics, and readmodifywrite reports its total performance as well. This is why you have readmodifywrite, read, and update metrics. We have already talked about cleanup in this thread: each thread of the workload runs a cleanup operation at the end.

2. Why only 989 for READ-MODIFY-WRITE and UPDATE?

This is another case of probabilistic operation selection. We talked about this a few posts up. Please check my answer to your question about why the number of read and update operations are not exactly the same with 50-50 proportions.

Hadi14 (Author) commented Jan 9, 2016

But wasn't workloada operating on the same key? In this test we have 2000 reads but 989 READ-MODIFY-WRITE and 989 UPDATE (989 + 989 = 1978), so what are the other 22 operations?

kruthar (Collaborator) commented Jan 9, 2016

As I said, the readmodifywrite operations are actually double counts of the read and update operations that are contained in the readmodifywrite. This means that 989 readmodifywrite operations consist of 989 read operations and 989 update operations.

As you can see, there are 989 update operations recorded. There are 2000 read operations recorded because 989 of them came from readmodifywrite and 1,011 were plain read operations: workloadf is 50% readmodifywrite and 50% read. 1,011 is very close to 989, but not exact. This is to be expected.

So, 989 + 1,011 = 2,000 total operations, which should be your operationcount, correct?
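Spelled out with the numbers from the run output above (a sketch of the accounting, not YCSB code):

```python
total_reads = 2000  # [READ], Operations
rmw_ops = 989       # [READ-MODIFY-WRITE], Operations
update_ops = 989    # [UPDATE], Operations

# Reads that were not part of a readmodifywrite.
plain_reads = total_reads - rmw_ops
assert plain_reads == 1011

# In workloadf every update comes from a readmodifywrite.
assert update_ops == rmw_ops

# Plain reads + readmodifywrites = the 2000-operation run.
assert plain_reads + rmw_ops == 2000
```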

Hadi14 (Author) commented Jan 10, 2016

Excuse me, meaning in this workload (read/readmodifywrite 50%/50%) there are two steps: (1) simple read (50%) and (2) readmodifywrite (50%).
In step 2 of this example we have 989 read-modify-write operations, i.e.:

[READ-MODIFY-WRITE], Operations, 989.0
[READ-MODIFY-WRITE], AverageLatency(us), 2380.6511627906975
[READ-MODIFY-WRITE], MinLatency(us), 816.0
[READ-MODIFY-WRITE], MaxLatency(us), 101759.0
[READ-MODIFY-WRITE], 95thPercentileLatency(us), 5883.0
[READ-MODIFY-WRITE], 99thPercentileLatency(us), 10399.0
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 9036.0
[CLEANUP], MinLatency(us), 9032.0
[CLEANUP], MaxLatency(us), 9039.0
[CLEANUP], 95thPercentileLatency(us), 9039.0
[CLEANUP], 99thPercentileLatency(us), 9039.0
[UPDATE], Operations, 989.0
[UPDATE], AverageLatency(us), 1092.4924165824066
[UPDATE], MinLatency(us), 392.0
[UPDATE], MaxLatency(us), 14423.0
[UPDATE], 95thPercentileLatency(us), 2631.0
[UPDATE], 99thPercentileLatency(us), 6683.0
[UPDATE], Return=OK, 989

Correct?
But for step 1 we have 2000 operations?

[READ], Operations, 2000.0
[READ], AverageLatency(us), 1219.5885
[READ], MinLatency(us), 370.0
[READ], MaxLatency(us), 99263.0
[READ], 95thPercentileLatency(us), 3177.0
[READ], 99thPercentileLatency(us), 7739.0
[READ], Return=OK, 2000

kruthar (Collaborator) commented Jan 10, 2016

The ordering of the print outs is arbitrary. YCSB performs all the operations in a random order, so there really is no notion of 'step1 and step2'.

Each readmodifywrite operation counts one for [READ-MODIFY-WRITE] and also one each for [READ] and [UPDATE]. So, because there were 989 readmodifywrite operations, that means there were 989 operations each for read and update.

But as you pointed out, there are actually: [READ], Operations, 2000.0. As I just explained, 989 of those came from readmodifywrite, which means that 1,011 more read operations were just simple reads. We know this makes sense because 1,011 is very close to 989; workloadf has a 50-50 ratio between simple reads and readmodifywrite operations.

Hadi14 (Author) commented Jan 10, 2016

OK, understood. Thank you. :)

busbey (Collaborator) commented Jan 15, 2016

Closing out, since it seems like we've covered everything relevant to what the workload parameters mean.

metonymic-smokey commented

The ordering of the print outs is arbitrary. YCSB performs all the operations in a random order, so there really is no notion of 'step1 and step2'.

@kruthar , considering the ordering of the prints is arbitrary, is getting cleanup before the operations valid, or is it a sign there's something wrong?
I'm asking this since I'm making some modifications to the source code of an existing DB binding in YCSB and get the cleanup before the operations on running certain operations.
Would love your inputs on the same!
Thanks!
