
293-offload-bloom-filters #301

Merged: 13 commits, Nov 18, 2021

Conversation

@vovac12 (Contributor) commented Aug 19, 2021

Closes #293


pub(crate) async fn filter_memory_allocated(&self) -> usize {
let mut memory = 0;
for holder in self.groups.read().await.iter() {
Contributor:

Maybe it's better to use FuturesUnordered here, so the requests are performed in parallel?

Contributor:

Also, it looks like this should be group, not holder.


pub(crate) async fn filter_memory_allocated(&self) -> usize {
let mut memory = 0;
for holder in self.holders().read().await.iter() {
Contributor:

Same here: FuturesUnordered.

) -> Vec<(SimpleHolder, usize)> {
let mut res = vec![];
for dc in iter {
for group in dc.groups().read().await.iter() {
Contributor:

Again, this loop's iterations don't seem to depend on each other, so you can process them in parallel.

bob/src/cleaner.rs (outdated; resolved)
bob/src/version_helpers.rs (outdated; resolved)
@ikopylov ikopylov marked this pull request as draft August 25, 2021 22:08
@ikopylov ikopylov added this to the v2.0 milestone Nov 11, 2021
bob-common/src/configs/node.rs (outdated; resolved)
bob-backend/src/pearl/core.rs (outdated; resolved)
@@ -196,4 +203,27 @@ impl BackendStorage for Pearl {
dc.close_unneeded_active_blobs(soft, hard).await;
}
}

async fn offload_old_filters(&self, limit: usize) {
Stuff::offload_old_filters(
Member:

Stuff is a bad name. You should give it a more descriptive one.

Contributor (author):

It's not my naming :) I can rename it to Utils.

Member:

Utils is a little bit better, but still a bad name. I don't see the whole picture, so maybe @pyakushin or @idruzhitskiy can suggest a name

bob-backend/src/pearl/core.rs (outdated; resolved)
memory
}

pub(crate) fn groups(&self) -> Arc<RwLock<Vec<Group>>> {
Member:

Does this need to be exposed as public?

Contributor (author):

pub(crate) makes it visible only in the current crate. It's needed for stuff.rs.
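A minimal illustration of the visibility rule being discussed (the module and function here are made up, not the actual bob modules). pub(crate) is crate-wide visibility, not fully public:

```rust
mod pearl {
    // pub(crate): callable from anywhere inside the current crate,
    // but invisible to downstream crates that depend on it.
    pub(crate) fn groups() -> Vec<String> {
        Vec::new()
    }
}

// Same crate, different module: this call compiles. From an external
// crate, `pearl::groups()` would be a compile error.
fn same_crate_caller() -> usize {
    pearl::groups().len()
}
```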

bob-backend/src/pearl/postprocessor.rs (outdated; resolved)
bob-backend/src/pearl/postprocessor.rs (outdated; resolved)
bob-backend/src/pearl/postprocessor.rs (outdated; resolved)
size_before.saturating_sub(size_after)
);
self.allocated_size
.fetch_sub(size_before.saturating_sub(size_after), Ordering::Relaxed);
Member:

Could this just be self.allocated_size = size_after? What is the reason for this math?

Member:

Sorry, but my suggestion was incorrect.

bob/src/cleaner.rs (outdated; resolved)
async fn collect_holders(
iter: impl IntoIterator<Item = &Arc<DiskController>>,
) -> Vec<(SimpleHolder, usize)> {
iter.into_iter()
Member:

This approach violates the SOLID principles. It would be better to expose a holder_iter() function at each level that returns an iterator over holders and is built from the holder_iter() of the level below. With that approach, here you could simply call the top-level holder_iter() and convert its result into SimpleHolder.
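A rough sketch of that layering with hypothetical, simplified types. The real types are async and wrapped in Arc&lt;RwLock&lt;...&gt;&gt;; locking is omitted here so the shape of the API stays visible:

```rust
// Hypothetical simplified hierarchy: a controller owns groups,
// a group owns holders.
struct Holder;

struct Group {
    holders: Vec<Holder>,
}

struct DiskController {
    groups: Vec<Group>,
}

impl Group {
    // Each level exposes an iterator over its own holders...
    fn holder_iter(&self) -> impl Iterator<Item = &Holder> {
        self.holders.iter()
    }
}

impl DiskController {
    // ...built from the holder_iter() of the level below, so callers
    // never reach through groups() into another level's internals.
    fn holder_iter(&self) -> impl Iterator<Item = &Holder> {
        self.groups.iter().flat_map(|g| g.holder_iter())
    }
}
```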

size_after,
size_before.saturating_sub(size_after)
);
self.allocated_size.store(size_after, Ordering::Relaxed);
Member:

I reread the code. My previous suggestion to just store the value is incorrect: size_after contains the filter size of a single holder. You need to revert the change and perform the math: allocated_size = allocated_size - (size_before - size_after).
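A sketch of that delta-based accounting, assuming allocated_size is an AtomicUsize as in the snippets above: allocated_size tracks the total across all holders, while size_before/size_after describe one holder, so only the per-holder delta may be subtracted from the total.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Subtract only the memory freed by one holder (size_before - size_after)
// from the crate-wide total; storing size_after directly would wrongly
// replace the total with a single holder's size.
fn record_offload(allocated_size: &AtomicUsize, size_before: usize, size_after: usize) {
    let freed = size_before.saturating_sub(size_after);
    allocated_size.fetch_sub(freed, Ordering::Relaxed);
}
```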

@vovac12 vovac12 marked this pull request as ready for review November 18, 2021 02:07
@piakushin piakushin merged commit a19dbae into master Nov 18, 2021
@piakushin piakushin deleted the 293-offload-bloom-filters branch November 18, 2021 14:13

Merging this pull request may close: Offload bloom filters when memory limit is reached
4 participants