Avoid reading the entire asset into memory during asset processing. #21925
Conversation
Nice step towards https://zeux.io/2025/09/30/billions-of-triangles-in-minutes/ ! Some questions:
@JMS55 Yup haha, I think you posted this a while ago in the Discord and it's been rattling around in my brain as something we simply can't do with Bevy today. I intend to fix that!
JMS55 left a comment:
Looks pretty good to me, some last feedback.
```rust
// Inside `impl Process for Type`
let reader = context.asset_reader();
```
Can we add a nice helper for this?
I intentionally don't want to. We should be encouraging users to engage with the "buffered" API of reading and writing, to reduce how much memory we're using. Today a lot of our loaders and sources just read everything into memory, and that can really limit how much data we can process. Besides, this is exactly how users should be doing it in loaders anyway, so this is no different. This migration is just weird because the previous behavior of having all the bytes is weird.
crates/bevy_asset/src/meta.rs (outdated):
```rust
loop {
    let bytes_read = asset_reader.read(&mut buffer).await?;
    if bytes_read == 0 {
        // This means we've reached EOF, so we're done consume asset bytes.
```
```suggestion
// This means we've reached EOF, so we're done consuming asset bytes.
```
Done! I also realized this was slightly wrong - the loop should keep processing even if we didn't completely fill the buffer; only a zero-byte read means EOF.
JMS55 left a comment:
Approved with some minor doc suggestions.
Objective
Solution
This means the processing code itself never needs the whole asset in memory at any point, so we can now process much bigger assets.
However, there are some risks. Asset sources which can't read chunks of an asset - which need to read the whole asset into memory anyway - now have to do so twice (not at the same time though). An example of this kind of asset source is the default Wasm source, or the HTTP asset source. In practice, I don't think this is a big issue - processing is likely to be happening on local assets anyway - it seems unlikely that users will want to download large assets from an HTTP asset source multiple times.
Another risk is that copying files during asset processing may be slower - previously we read the whole asset in and then wrote the whole asset out, whereas now we read the file in small 1 KiB chunks, writing each chunk as we go. This introduces some scheduling overhead into the copy. We can tune the chunk size if it turns out to be too slow.
Testing