
db_crashtest.py use cheaper settings #9476

Closed

Conversation


@ajkr ajkr commented Jan 31, 2022

Despite attempts to optimize the `db_stress` setup phase (i.e.,
pre-`OperateDb()`) latency in #9470 and #9475, it still always took tens
of seconds. Since we still aren't able to set up a 100M key `db_stress`
quickly, we should reduce the number of keys. This PR reduces it 4x
while increasing `value_size_mult` 4x (from its default value of 8) so
that memtables and SST files fill at a similar rate compared to before this PR.

Also disabled bzip2 compression since we'll probably never use it and
I noticed many CI runs spending the majority of CPU on bzip2 decompression.
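The change can be sketched roughly as below. This is a hedged illustration, not a verbatim diff of `tools/db_crashtest.py`: the `default_params` dict name and the `random.choice` pattern follow the script's general style, but the exact keys, prior values, and compression candidate list here are assumptions.

```python
import random

# Illustrative subset of db_crashtest.py-style defaults (names/values assumed).
default_params = {
    # Before: 100M keys, which made the setup phase take tens of seconds.
    # After: 4x fewer keys.
    "max_key": 100000000 // 4,  # 25M keys

    # value_size_mult default was 8; 4x larger values compensate for the
    # 4x fewer keys, so memtables and SST files fill at a similar rate.
    "value_size_mult": 8 * 4,  # 32

    # bzip2 removed from the candidate compression types, since its slow
    # decompression dominated CPU in many CI runs (candidate list assumed).
    "compression_type": lambda: random.choice(
        ["none", "snappy", "zlib", "lz4", "zstd"]
    ),
}
```

The point of scaling both knobs together is that total bytes written stays roughly constant (`keys * value_size` is unchanged), so flush and compaction behavior in the test remains comparable while setup touches far fewer keys.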

@facebook-github-bot

@ajkr has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.


@siying siying left a comment


I thought of removing bzip2, but figured keeping it might be helpful to simulate the case where there is a large gap in compression speed among files. But I guess you are right that improving test speed can help us increase coverage, which would be a far better trade-off.


ajkr commented Jan 31, 2022

Thanks for the review! Yeah, after a `db_stress` run using bzip2 files, I was seeing several subsequent `db_stress` runs get killed before finishing the setup phase because `Get()`/`MultiGet()` were taking too long to decompress. I think it wasn't an infinite cycle, though - eventually there'd be a `db_stress` run whose options caused compaction with a different compression algorithm during the setup phase.
