
hal: dedicated buffer/image memory #2929

Open · wants to merge 1 commit into base: master

Conversation

@kvark (Member) commented Jul 31, 2019

Fixes #2511
Closes #2513

The idea is that backends can implement this more efficiently than the default implementation.
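To illustrate the shape of the proposal, here is a minimal sketch: a default `allocate_buffer_memory` composed from the existing calls, which a backend may override with a true dedicated allocation. The trait and type names below are illustrative stand-ins, not the actual gfx-hal API.

```rust
// Illustrative sketch only: names are stand-ins for the real gfx-hal items.

struct Requirements {
    size: u64,
}

trait Device {
    type Memory;

    fn get_buffer_requirements(&self) -> Requirements;
    fn allocate_memory(&self, size: u64) -> Self::Memory;

    // Default implementation: a plain allocation sized by the requirements.
    // A backend (e.g. Vulkan with VK_KHR_dedicated_allocation) can override
    // this to allocate memory tied to the specific buffer.
    fn allocate_buffer_memory(&self) -> Self::Memory {
        let req = self.get_buffer_requirements();
        self.allocate_memory(req.size)
    }
}

// Toy backend used only to exercise the default implementation.
struct MockBackend;

impl Device for MockBackend {
    type Memory = u64; // "memory" is just the allocated size here

    fn get_buffer_requirements(&self) -> Requirements {
        Requirements { size: 256 }
    }

    fn allocate_memory(&self, size: u64) -> Self::Memory {
        size
    }
}
```

A backend that cannot do better simply inherits the default; one that can, overrides `allocate_buffer_memory` without the frontend changing.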

PR checklist:

  • make succeeds (on *nix)
  • make reftests succeeds
  • tested examples with the following backends:
  • rustfmt run on changed code
kvark requested review from omni-viral and msiglreith, Jul 31, 2019
@@ -29,6 +29,7 @@ ash = "0.29.0"
gfx-hal = { path = "../../hal", version = "0.2" }
smallvec = "0.6"
winit = { version = "0.19", optional = true }
#vk-mem = { version = "0.1", optional = true }

@msiglreith (Member) commented Jul 31, 2019

should be removed

@kvark (Member, Author) commented Jul 31, 2019

/// Create a dedicated allocation for a buffer.
///
/// Returns a memory object that can't be used for binding any resources.
unsafe fn allocate_buffer_memory(

@omni-viral commented Jul 31, 2019

To actually do that more efficiently, the function must return a view into memory with offset and size.

@kvark (Member, Author) commented Jul 31, 2019

The user is expected to still call get_buffer_requirements before doing this, so the size is known. And the offset is known to be zero. Is this not enough?

@omni-viral commented Aug 2, 2019

Oh, so this will work only for dedicated allocations?

@omni-viral commented Aug 2, 2019

I thought we wanted to integrate VMA into the Vulkan backend this way.

@kvark (Member, Author) commented Aug 2, 2019

I think what we can do with VMA is:

  1. make it a compile-time optional dependency
  2. turn Memory into an enum, with one variant for normal allocations and another for VMA-owned ones. The code dealing with memory would need to match on the enum, but I don't think that's a problem.
  3. the user is then expected to treat the Memory as a dedicated allocation

Does that sound reasonable?

@kvark (Member, Author) commented Aug 6, 2019

So the memory type exposed by the Vulkan backend would be:

enum Memory {
  Physical(vk::Memory),
  SubAllocated(vma::Allocation),
}

Mapping a Memory::SubAllocated will go through VMA and just hand out a pointer without doing any more Vulkan mapping, if I understand correctly, much like with rendy-memory.
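The mapping dispatch described here could look roughly like the sketch below. This is a toy model: `VkMemory` and `VmaAllocation` are stand-ins for the real `vk`/`vma` types, and the addresses are fake.

```rust
// Toy model of mapping over the proposed enum; nothing here touches Vulkan.

struct VkMemory;

struct VmaAllocation {
    // VMA keeps the persistently mapped pointer around for us.
    mapped_ptr: usize,
}

enum Memory {
    Physical(VkMemory),
    SubAllocated(VmaAllocation),
}

// Pretend vkMapMemory: in the real backend this would be an actual Vulkan call.
fn vk_map_memory(_memory: &VkMemory) -> usize {
    0x1000
}

fn map(memory: &Memory) -> usize {
    match memory {
        // A physical allocation needs a real vkMapMemory call.
        Memory::Physical(m) => vk_map_memory(m),
        // A VMA sub-allocation just hands back its existing pointer,
        // with no additional Vulkan mapping.
        Memory::SubAllocated(a) => a.mapped_ptr,
    }
}
```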

@omni-viral commented Aug 8, 2019

That works if VMA has no limitations here. Rendy has to map all non-dedicated mappable memory persistently to allow this, which is not optimal. I wonder how VMA does that.

@kvark (Member, Author) commented Aug 8, 2019

> which is not optimal

Is it, really? My understanding is that it's both optimal and recommended for CPU-visible memory.

@omni-viral commented Aug 8, 2019

It's not a problem when RAM is abundant, as on a typical modern PC, but in other cases it may not be the best approach.

@kvark (Member, Author) commented Aug 8, 2019

Looks like it doesn't map memory on creation by default, unless a specific flag is provided. Relevant docs: https://gpuopen-librariesandsdks.github.io/VulkanMemoryAllocator/html/memory_mapping.html
They confirm that VMA acts roughly like rendy-memory, and that the approach this PR takes should work.
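The behavior described in the linked docs can be modeled in a few lines: an allocation is only persistently mapped when the caller asks for it. `CREATE_MAPPED` below stands in for VMA's real VMA_ALLOCATION_CREATE_MAPPED_BIT; everything else is mocked, not the vk-mem API.

```rust
// Toy model of VMA's opt-in persistent mapping; no real allocator involved.

const CREATE_MAPPED: u32 = 1 << 0; // stand-in for VMA_ALLOCATION_CREATE_MAPPED_BIT

struct Allocation {
    // Some(ptr) only when the allocation was created with CREATE_MAPPED.
    mapped_ptr: Option<usize>,
}

fn vma_allocate(flags: u32) -> Allocation {
    Allocation {
        mapped_ptr: if flags & CREATE_MAPPED != 0 {
            Some(0x4000) // fake address standing in for the mapped pointer
        } else {
            None // not mapped by default, matching the linked docs
        },
    }
}
```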

@kvark (Member, Author) commented Aug 9, 2019

I suppose I can integrate VMA in this PR as a validation of the API.
