
kv: Enabled option to change levels in RocksDB #22025

Closed

Conversation

xelexin
Contributor

@xelexin xelexin commented May 16, 2018

In our cluster, after a partial data-removal process, we noticed that RocksDB keeps data in L3 or L4 even though it could fit in L2. This is important for manual compaction, when RocksDB changes the level of the data and ends up not using BDEV 2.

@tchaikov
Contributor

retest this please.

@xelexin
Contributor Author

xelexin commented May 16, 2018

I checked "make check" and there is a problem with jenkins executor - "Agent went offline during the build". Can you run job again?

@tchaikov
Contributor

retest this please

@tchaikov
Contributor

retest this please.

@tchaikov
Contributor

could you prefix the title of your commit message with the subcomponent you are changing? see https://github.com/ceph/ceph/blob/master/SubmittingPatches.rst#3-describe-your-changes

also, could you add a "Signed-off-by" line at the end of your commit message? "git commit -s" will do the trick for you. see https://github.com/ceph/ceph/blob/master/SubmittingPatches.rst#1-sign-your-work

please note that, in addition to the monitor, this change also impacts the OSD, and BlueStore specifically.

@tchaikov tchaikov requested a review from jdurgin May 31, 2018 09:29
Signed-off-by: Rafal Wadolowski <rwadolowski@cloudferro.com>
@xelexin xelexin force-pushed the wip-rocksdb-level-autochange branch from 365824b to 9a549c2 Compare June 12, 2018 11:33
@xelexin
Contributor Author

xelexin commented Jun 12, 2018

@tchaikov I've edited the commit message :)

@tchaikov
Contributor

retest this please.

@stale

stale bot commented Oct 17, 2018

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
If you are a maintainer or core committer, please follow-up on this issue to identify what steps should be taken by the author to move this proposed change forward.
If you are the author of this pull request, thank you for your proposed contribution. If you believe this change is still appropriate, please ensure that any feedback has been addressed and ask for a code review.

@stale stale bot added the stale label Oct 17, 2018
@liewegas
Member

@markhpc can you look at this one?

@stale stale bot removed the stale label Oct 18, 2018
@xelexin
Contributor Author

xelexin commented Nov 20, 2018

@markhpc are you alive? :)
We are encountering RocksDB problems on several clusters and I think this option will help.

@liewegas liewegas changed the title Enabled option to change levels in RocksDB kv: Enabled option to change levels in RocksDB Nov 20, 2018
@liewegas
Member

  bool change_level = false;
  // If change_level is true and target_level have non-negative value, compacted
  // files will be moved to target_level.
  int target_level = -1;
  // Compaction outputs will be placed in options.db_paths[target_path_id].
  // Behavior is undefined if target_path_id is out of range.

is the expectation that the user would also manually specify target_level via the options string? It doesn't sound like this should have any effect when set by itself.
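
For reference, here is a minimal sketch of how these two fields are typically handed to a manual compaction through RocksDB's C++ API; the field names come from the options.h snippet above, and how this change actually wires them into Ceph's RocksDBStore is not shown here and may differ:

  #include <rocksdb/db.h>

  // Sketch only: `db` is assumed to be an already-open rocksdb::DB*.
  void manual_compact_whole_db(rocksdb::DB* db) {
    rocksdb::CompactRangeOptions cro;
    cro.change_level = true;  // allow the output to be re-fit to a smaller-numbered level
    cro.target_level = -1;    // -1: pick the minimum level whose max_bytes can hold the output
    // nullptr begin/end means compact the entire key range
    db->CompactRange(cro, nullptr, nullptr);
  }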

@stale

stale bot commented Jan 19, 2019


@stale stale bot added the stale label Jan 19, 2019
@neha-ojha
Member

@markhpc Can you take a look at this?

@stale stale bot removed the stale label Feb 8, 2019
@gregsfortytwo
Member

I'm not sure I understand what this option does — is it trying to make compacted levels move down to a lower level's storage if there's room after compaction which wasn't present before? Is this an option that RocksDB recommends using upstream (if so, why isn't it on by default)?
@markhpc do you have any thoughts here?

@stale

stale bot commented Apr 9, 2019


@stale stale bot added the stale label Apr 9, 2019
@markhpc
Member

markhpc commented Apr 15, 2019

This option isn't super well documented, but there are some hints in the code:

https://github.com/facebook/rocksdb/blob/master/include/rocksdb/options.h#L1311-L1316

And in the manual compaction section of the wiki:

"CompactRangeOptions::change_level,CompactRangeOptions::target_level Together, these options control the level where the compacted files will be placed. If target_level is -1, the compacted files will be moved to the minimum level whose computed max_bytes is still large enough to hold the files. Intermediate levels must be empty. For example, if the files were initially compacted to L5 and L2 is the minimum level large enough to hold the files, they will be placed in L2 if L3 and L4 are empty or in L4 if L3 is non-empty. If target_level is positive, the compacted files will be placed in that level provided intermediate levels are empty. If any any of the intermediate levels are not empty, the compacted files will be left where they are."

Based on that description, it sounds to me like the data is initially compacted into Ln, and if change_level is set to true it is then moved into Lm, where Lm is the lowest level where the data can fit, subject to the skipping rules mentioned above. I.e., for data to move from L5 into L2, both L3 and L4 have to be empty.
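
To make the level-selection rule concrete, here is a purely illustrative sketch of that placement decision as I read the wiki text (this is not RocksDB's actual implementation; per-level sizes and computed max_bytes are assumed to be known):

  #include <cstdint>
  #include <vector>

  // Illustrative only: where compacted files would land with change_level=true
  // and target_level=-1, per the wiki description quoted above.
  //   initial_level: level the compaction output was written to (e.g. 5)
  //   output_bytes:  total size of the compacted files
  //   level_bytes:   current bytes stored in each level (index = level number)
  //   level_max:     computed max_bytes capacity of each level (index = level number)
  int placement_level(int initial_level, uint64_t output_bytes,
                      const std::vector<uint64_t>& level_bytes,
                      const std::vector<uint64_t>& level_max)
  {
    int best = initial_level;
    // Walk from just below the initial level toward L1.
    for (int lvl = initial_level - 1; lvl >= 1; --lvl) {
      if (level_bytes[lvl] != 0)
        break;                         // can't move into or past a non-empty level
      if (level_max[lvl] >= output_bytes)
        best = lvl;                    // empty and large enough to hold the files
    }
    return best;  // e.g. L2 if L3 and L4 are empty, L4 if only L3 is non-empty
  }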

I'd be a little nervous about making this change without a lot of testing to make sure it works as expected (and that it doesn't cause any kind of regressive behavior with extra copying in the event it can do so).

@stale stale bot removed the stale label Apr 15, 2019
@stale

stale bot commented Jun 14, 2019

This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days.
If you are a maintainer or core committer, please follow-up on this pull request to identify what steps should be taken by the author to move this proposed change forward.
If you are the author of this pull request, thank you for your proposed contribution. If you believe this change is still appropriate, please ensure that any feedback has been addressed and ask for a code review.

@stale stale bot added the stale label Jun 14, 2019
@stale

stale bot commented Sep 12, 2019

This pull request has been automatically closed because there has been no activity for 90 days. Please feel free to reopen this pull request (or open a new one) if the proposed change is still appropriate. Thank you for your contribution!

@stale stale bot closed this Sep 12, 2019