How to deal with large ash::Device type in libraries #731

Open
i509VCB opened this issue Mar 31, 2023 · 3 comments

@i509VCB
Contributor

i509VCB commented Mar 31, 2023

I opened Traverse-Research/gpu-allocator#159 a week ago, and while thinking about how to resolve that issue on a larger scale across the ecosystem where ash is used, I kept running into the fact that the ash::Device type is quite large. In Smithay's WIP Vulkan renderer, having the renderer own an ash::Device in addition to the one inside gpu_allocator essentially doubles the size of my VulkanRenderer type. If I brought in a descriptor pool allocator as well, that would be a third ash::Device.

Clearly tripling the size of the type (without any Boxes) is quite bad. I believe ash is probably the best place to provide a solution so that the rest of the ecosystem can adopt it.

Possible solutions

Most libraries and apps use only a fraction of the Vulkan commands. It would make sense, then, to reduce the size of these types by only loading what the app actually uses. A few possible solutions come to mind:

  1. Allow generating a type like ash::Device that contains only the commands the app/library needs. This would help apps that use a fraction of the API, but it requires code generation and would be quite complicated.
  2. Give every command its own function loader type, like the extensions. This would mean a safer wrapper can be provided for every command, and it mirrors how extension function loading is already done in ash. However, the current approach of storing a vk::Instance or vk::Device handle costs 8 bytes per loader instance on 64-bit systems; that is far less than hundreds of unused commands, but it can add up quickly.
  3. Load the raw function pointers by hand and call them directly (see the sketch after this list). This is what C libraries do and it works, but I'd really like something better.
  4. Change libraries to not own a device and instead borrow one. This can cause some problems with borrow checking.
  5. Just Box/Arc it and call it a day. This works, but it wastes heap memory, and libraries need to coordinate who destroys the device.
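
For concreteness, here is a rough sketch of option 3: a library copies just the raw function pointers it needs out of ash's dispatch tables instead of owning a whole ash::Device. The BufferFns type is invented for this example; device.handle(), device.fp_v1_0() and result_with_success() exist in ash (as of ~0.37), but treat the exact field names as an assumption against your ash version.

```rust
use ash::vk;

/// Hypothetical per-library table holding only the commands this library calls.
struct BufferFns {
    handle: vk::Device,
    create_buffer: vk::PFN_vkCreateBuffer,
    destroy_buffer: vk::PFN_vkDestroyBuffer,
}

impl BufferFns {
    fn new(device: &ash::Device) -> Self {
        Self {
            handle: device.handle(),
            // Copy only the two pointers we need out of ash's core 1.0 table.
            create_buffer: device.fp_v1_0().create_buffer,
            destroy_buffer: device.fp_v1_0().destroy_buffer,
        }
    }

    unsafe fn create_buffer(
        &self,
        info: &vk::BufferCreateInfo,
    ) -> Result<vk::Buffer, vk::Result> {
        let mut buffer = vk::Buffer::null();
        (self.create_buffer)(self.handle, info, std::ptr::null(), &mut buffer)
            .result_with_success(buffer)
    }

    unsafe fn destroy_buffer(&self, buffer: vk::Buffer) {
        (self.destroy_buffer)(self.handle, buffer, std::ptr::null());
    }
}
```

This is essentially what option 2 would generate automatically, at the cost of writing the table by hand.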
@Ralith
Collaborator

Ralith commented Apr 1, 2023

I pass &Device around. I've never had any issue with borrow checking as a result. Don't try to capture the reference.
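
For illustration, a minimal sketch of that pattern (the BufferPool type is invented for this example, not any library's actual API): the device is borrowed only for the duration of each call, so the struct stores neither function pointers nor a lifetime.

```rust
use ash::{vk, Device};

/// Hypothetical pool that records the buffers it creates but never stores the device.
struct BufferPool {
    buffers: Vec<vk::Buffer>,
}

impl BufferPool {
    fn new() -> Self {
        Self { buffers: Vec::new() }
    }

    /// Borrows the device only for this call; nothing device-sized is captured.
    unsafe fn create_buffer(
        &mut self,
        device: &Device,
        info: &vk::BufferCreateInfo,
    ) -> ash::prelude::VkResult<vk::Buffer> {
        let buffer = device.create_buffer(info, None)?;
        self.buffers.push(buffer);
        Ok(buffer)
    }

    /// The caller decides when the device itself is destroyed; the pool only
    /// needs it borrowed once more to clean up its own objects.
    unsafe fn destroy(&mut self, device: &Device) {
        for buffer in self.buffers.drain(..) {
            device.destroy_buffer(buffer, None);
        }
    }
}
```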

> Just Box/Arc it and call it a day. This works but it does waste heap memory

Arc does not waste memory.

> Plus libraries need to coordinate who destroys the device.

If you're sharing a logical Vulkan device, you need to carefully coordinate ownership regardless of how you get function pointers.

@i509VCB
Contributor Author

i509VCB commented Apr 2, 2023

> Arc does not waste memory.

If one library owns an ash::Device and another owns an Arc<ash::Device>, you have to clone the device, which doubles the memory use.

@MarijnS95
Collaborator

> If the library owns a ash::Device and another owns a Arc<ash::Device> you'll have to clone the device and therefore double the memory use.

Sounds like the library in question (gpu-allocator) needs to stop owning the handle and function pointers, perhaps via an Arc or whatever "standard" we decide to settle on in this issue.


Fwiw, in our codebase we very carefully guard drop order. Despite using Arcs to share everything across the implementations efficiently, when it comes time to destroy everything there's a lot of Arc::try_unwrap().unwrap() (via an aptly named ArcFinalOwner wrapper type) guarding final ownership before calling functions like destroy_device(). If any Arc is unexpectedly still held more than once, we get notified about it :)
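
For reference, a minimal sketch of what such a wrapper could look like (the real ArcFinalOwner is internal to that codebase, so its exact shape here is an assumption):

```rust
use std::sync::Arc;

/// Marks the handle that is responsible for final destruction; every other
/// user holds a plain `Arc<T>` clone.
struct ArcFinalOwner<T>(Arc<T>);

impl<T> ArcFinalOwner<T> {
    fn new(value: T) -> Self {
        Self(Arc::new(value))
    }

    /// Hand out a shared clone to a library that needs the value.
    fn share(&self) -> Arc<T> {
        Arc::clone(&self.0)
    }

    /// Reclaim sole ownership at teardown. Panics if any clone is still
    /// alive, surfacing drop-order bugs instead of silently leaking.
    fn take(self) -> T {
        Arc::try_unwrap(self.0)
            .unwrap_or_else(|_| panic!("Arc still shared at teardown"))
    }
}

// Teardown: reclaim sole ownership, then destroy the Vulkan object.
// let device: ash::Device = final_owner.take();
// unsafe { device.destroy_device(None) };
```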
