
React to memory pressure. #1552

Merged: 3 commits merged into servo:master on Aug 9, 2017

Conversation

@nical (Collaborator) commented Aug 4, 2017

Memory pressure events are sent when the amount of available memory is so low that we risk crashing very soon if we don't do anything about it. When this happens we don't really care about frame budgets and smoothness anymore; we can only make a best effort to free memory and hope not to crash.

This PR adds memory pressure notifications to the API and reacts to them by completely clearing the texture cache and freeing some of the allocations that we normally recycle for performance.
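The overall shape of that reaction could look something like the following minimal Rust sketch. All type and method names here are illustrative stand-ins, not WebRender's actual API: the embedder forwards an OS low-memory signal, and the renderer responds by dropping everything it can rebuild later.

```rust
use std::collections::HashMap;

// Stand-in for the texture cache: texture id -> pixel data.
struct TextureCache {
    entries: HashMap<u64, Vec<u8>>,
}

impl TextureCache {
    fn new() -> Self {
        TextureCache { entries: HashMap::new() }
    }

    fn insert(&mut self, id: u64, data: Vec<u8>) {
        self.entries.insert(id, data);
    }

    // Assign a fresh map instead of calling clear() so the map's reserved
    // capacity is actually released, which is the point under pressure.
    fn evict_all(&mut self) {
        self.entries = HashMap::new();
    }

    fn len(&self) -> usize {
        self.entries.len()
    }
}

struct Renderer {
    texture_cache: TextureCache,
}

impl Renderer {
    // Hypothetical entry point, called when available memory is critically
    // low; frame budget and smoothness no longer matter, only survival.
    fn on_memory_pressure(&mut self) {
        self.texture_cache.evict_all();
    }
}
```

The key design point is that the handler trades performance for footprint: everything it drops can be regenerated on a later frame, just more slowly.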

@nical (Author) commented Aug 4, 2017

This is built on top of #1532, so it carries some extra commits until that PR is merged.

@nical mentioned this pull request Aug 4, 2017
@bors-servo commented Aug 6, 2017

The latest upstream changes (presumably #1545) made this pull request unmergeable. Please resolve the merge conflicts.

@nical force-pushed the nical:mem-pressure branch 2 times, most recently from bc0b9df to 1ff0b58, Aug 8, 2017
@nical (Author) commented Aug 8, 2017

Rebased and tested. You can use the 'm' key in the sample and in wrench to trigger memory pressure events.
I reverted the patch that cleared the frame builder's allocations during memory pressure, because frames would not be rendered again until a new frame was built from a new display list, which is probably not the behavior one would expect. Fixing this so that the frame is rebuilt automatically wouldn't be very hard, but it requires reorganizing some more code and I wasn't sure it was worth it at this point. We can revisit that later.

@nical (Author) commented Aug 8, 2017

r? @kvark or @glennw

@kvark (Member) left a comment:

What happened to the idea of having the expiration time be a non-linear function of the available memory? It seems like that would ultimately replace the proposed memory pressure mechanics.

@@ -89,6 +90,14 @@ impl GlyphCache {
        }
    }

    pub fn clear(&mut self, texture_cache: &mut TextureCache) {
        let caches = mem::replace(&mut self.glyph_key_caches, FastHashMap::default());

@kvark (Member) commented Aug 8, 2017:
this could be rewritten more cleanly without mem::replace, which I consider highly overused in our codebase, e.g.:

for (_, glyph_key_cache) in &mut self.glyph_key_caches {
    glyph_key_cache.clear(texture_cache);
}
self.glyph_key_caches.clear();

Bonus points - space is kept reserved.

@nical (Author) commented Aug 8, 2017:
> Bonus points - space is kept reserved.

Well, this is the one situation where keeping the space reserved is undesirable. But I can do the loop as you suggest and assign an empty hash map to it instead of calling clear, if you prefer.
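The compromise being discussed (clear each cache in place, then release the outer map's reserved capacity by assigning a fresh map rather than calling clear) might look like the following sketch. The types are stand-ins, not the real GlyphKeyCache / TextureCache:

```rust
use std::collections::HashMap;

// Stand-in for the per-font glyph cache.
type GlyphKeyCache = Vec<u32>;

fn clear_glyph_caches(glyph_key_caches: &mut HashMap<u32, GlyphKeyCache>) {
    // Clear each cache in place (in the real code this step would also
    // evict the corresponding texture cache entries)...
    for (_, glyph_key_cache) in glyph_key_caches.iter_mut() {
        glyph_key_cache.clear();
    }
    // ...then assign a fresh map rather than calling clear(), so the
    // map's reserved capacity is released too, which is what we want
    // under memory pressure.
    *glyph_key_caches = HashMap::new();
}
```

This keeps the readability benefit of the loop while still giving memory back, since `HashMap::clear` retains the allocated buckets but `HashMap::new` starts with zero capacity.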

@kvark (Member) commented Aug 8, 2017:
hah, good point :)

@@ -268,6 +268,7 @@ impl RendererFrame {
pub enum ResultMsg {
    RefreshShader(PathBuf),
    NewFrame(DocumentId, RendererFrame, TextureUpdateList, BackendProfileCounters),
    UpdateResources(TextureUpdateList, bool /* cancel rendering */),

@kvark (Member) commented Aug 8, 2017:
let's just use named fields here
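A sketch of what the named-fields suggestion could look like: replacing the positional bool (documented only by a comment) with a struct-like variant whose field names are self-describing. The types here are illustrative stand-ins, not WebRender's real definitions:

```rust
use std::path::PathBuf;

// Stand-in for the real TextureUpdateList.
type TextureUpdateList = Vec<u32>;

enum ResultMsg {
    RefreshShader(PathBuf),
    // Named fields make call sites readable without a /* comment */.
    UpdateResources {
        updates: TextureUpdateList,
        cancel_rendering: bool,
    },
}

fn should_cancel(msg: &ResultMsg) -> bool {
    match *msg {
        ResultMsg::UpdateResources { cancel_rendering, .. } => cancel_rendering,
        _ => false,
    }
}
```

With named fields, a sender writes `cancel_rendering: true` explicitly instead of a bare `true` whose meaning is only recoverable from the enum declaration.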

@nical (Author) commented Aug 8, 2017

> What happened to the idea of having the expiration time be a non-linear function of the available memory? It seems like that would ultimately replace the proposed memory pressure mechanics.

I think we should do both. There are other places beyond the texture cache where we could free up memory (I started with the texture cache because it was the obvious one, but we should extend this to other places).

Also, because of memory fragmentation, the amount of available memory alone doesn't tell us whether we are really in trouble: allocations can start failing (if we are lucky, triggering memory pressure events) before the available-memory figure alone would reveal the problem.

I see adjusting expiration times as a function of the available memory more as a way to keep memory usage "reasonable" (whatever that means) in general, while memory pressure events are a last-resort stability mechanism.
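For illustration, the complementary policy kvark mentions, shrinking the cache expiration window non-linearly as available memory drops, could be sketched as follows. The function name, the quadratic curve, and the numbers are all hypothetical, not from the codebase:

```rust
/// Map the fraction of available memory to a cache-entry expiration
/// window in frames. Quadratic falloff: with plenty of free memory we
/// keep the full window; as memory tightens, entries expire almost
/// immediately (but never in less than one frame).
fn expiration_frames(available_bytes: u64, total_bytes: u64, max_frames: u32) -> u32 {
    let ratio = available_bytes as f64 / total_bytes as f64;
    ((ratio * ratio) * max_frames as f64).max(1.0) as u32
}
```

Under this kind of policy the cache shrinks gradually as memory gets tight, whereas the memory pressure event in this PR is the abrupt, clear-everything fallback for when that isn't enough.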

@nical force-pushed the nical:mem-pressure branch from 1ff0b58 to 993612c, Aug 9, 2017
@kvark (Member) commented Aug 9, 2017

@bors-servo commented Aug 9, 2017

📌 Commit 993612c has been approved by kvark

@bors-servo commented Aug 9, 2017

Testing commit 993612c with merge 20aa0dc...

bors-servo added a commit that referenced this pull request Aug 9, 2017
React to memory pressure.


@bors-servo commented Aug 9, 2017

☀️ Test successful - status-travis
Approved by: kvark
Pushing 20aa0dc to master...

@bors-servo merged commit 993612c into servo:master Aug 9, 2017
2 checks passed:
continuous-integration/travis-ci/pr: The Travis CI build passed
homu: Test successful