
Fixes write benchmarks #111

Merged: 8 commits merged into polarsignals:main on Jul 4, 2022

Conversation

cyriltovena (Contributor)

This fixes the write benchmarks to account for recent changes in the code and corrects some wrong expectations.

However, running them shows that we are missing data when inserting in parallel, but not when running sequentially.

cyriltovena (Contributor, Author)

Adding a sleep after the sync fixes the issue. So it seems that Sync() does not properly wait for all writes.

cyriltovena (Contributor, Author)

After adding some logs, it looks like we don't get the latest transaction even after a table Sync:

    /Users/cyril/work/frostdb/table_test.go:649: write: 300
    /Users/cyril/work/frostdb/table_test.go:663: read: 298

cyriltovena (Contributor, Author)

I added the wait as recommended, but data is still missing with 100 workers.

brancz (Member) commented on Jul 1, 2022

Could you add a metric for aborts? Then we could see whether, for some reason, writes are being aborted. If not, this is very likely a correctness issue that we haven't fixed yet.

cc @thorfour
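
A minimal sketch of what such an abort counter could look like with the Prometheus Go client; the metric name, package, and registration point below are assumptions for illustration, not the actual change made for this PR:

    package frostdb

    import "github.com/prometheus/client_golang/prometheus"

    // txAborts counts write transactions that were aborted, so the benchmark
    // can distinguish deliberately aborted writes from genuinely lost ones.
    var txAborts = prometheus.NewCounter(prometheus.CounterOpts{
        Name: "frostdb_tx_aborts_total", // hypothetical metric name
        Help: "Total number of aborted write transactions.",
    })

    func init() {
        prometheus.MustRegister(txAborts) // register with the default registry
    }

    // At the abort site, the transaction path would then call:
    //     txAborts.Inc()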

cyriltovena (Contributor, Author)

Yes, I will.

cyriltovena (Contributor, Author)

It doesn't seem like it's an abort.

goos: darwin
goarch: amd64
pkg: github.com/polarsignals/frostdb
cpu: Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
Benchmark_Table_Insert_100Row_100Iter_100Writers
    /Users/cyril/work/frostdb/table_test.go:675: aborted tx:  0
    /Users/cyril/work/frostdb/table_test.go:678:
        	Error Trace:	table_test.go:678
        	            				table_test.go:589
        	Error:      	Not equal:
        	            	expected: 1000000
        	            	actual  : 970380
        	Test:       	Benchmark_Table_Insert_100Row_100Iter_100Writers
--- FAIL: Benchmark_Table_Insert_100Row_100Iter_100Writers
FAIL
coverage: 53.5% of statements
exit status 1
FAIL	github.com/polarsignals/frostdb	53.540s
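
(For scale: the benchmark name implies 100 writers × 100 iterations × 100 rows per insert = 1,000,000 expected rows, so roughly 3% of rows are missing.)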

However, it sounds like I'm hitting level.Error(t.logger).Log("duplicate insert performed"); going to investigate that.

cyriltovena (Contributor, Author)

Sounds like the issue was duplicate rows. What I don't understand is why inserting duplicates with a concurrency of 10 was fine (all duplicates accepted) but not with 100 (some duplicates rejected).

Anyway, now we can benchmark inserts.

cyriltovena marked this pull request as ready for review on July 4, 2022, 07:39
brancz (Member) left a comment

That makes sense now! rand is pseudorandom, and when two writers seed on the same microsecond (or maybe even millisecond, I don't remember) tick, they produce the same values, so increasing concurrency increases the chance of collisions.
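
A small, self-contained sketch of the collision mechanism described above; the seeding shown is illustrative and may not match the benchmark's exact code:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        // Two "writers" that seed their own source on the same clock tick get
        // identical pseudorandom sequences, so their "random" timestamps collide
        // and the second row is rejected as a duplicate.
        seed := time.Now().UnixNano()
        a := rand.New(rand.NewSource(seed))
        b := rand.New(rand.NewSource(seed))
        fmt.Println(a.Int63() == b.Int63()) // prints: true
    }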

{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1},
{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x2},
},
Timestamp: rand.Int63(),
brancz (Member), on the diff context above:

Probably fine for a start, but this isn't really representative of our typical workloads: inserts don't happen with random timestamps, but with roughly monotonically increasing ones. It's probably good to optimize these cases as well, but we'll likely see CPU time spent on things that a real workload we use FrostDB for doesn't exercise.
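
A hedged sketch of a timestamp generator closer to that workload: values that only move forward with a small amount of jitter, shared across writers. The names here are illustrative, not part of the FrostDB API:

    package main

    import (
        "fmt"
        "math/rand"
        "sync/atomic"
        "time"
    )

    var lastTS = time.Now().UnixNano()

    // nextTimestamp returns roughly monotonically increasing timestamps with a
    // small random increment, closer to a real ingest workload than rand.Int63().
    func nextTimestamp() int64 {
        return atomic.AddInt64(&lastTS, 1+rand.Int63n(1_000_000)) // up to ~1ms of jitter
    }

    func main() {
        for i := 0; i < 3; i++ {
            fmt.Println(nextTimestamp())
        }
    }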

brancz merged commit e094c4f into polarsignals:main on Jul 4, 2022