
Improve use of unified memory #86

Closed
maleadt opened this issue Feb 8, 2023 · 8 comments
Labels: arrays (Things about the array abstraction), enhancement (New feature or request)

Comments

maleadt (Member) commented Feb 8, 2023

Our buffers are currently allocated as GPU-only buffers by choosing the Private* storage mode. That's OK given our current CUDA-style programming model, where we perform explicit copies to and from the GPU, but it would be nice if we also properly supported buffers that are shared between the CPU and GPU, by selecting the Shared storage mode: https://developer.apple.com/documentation/metal/resource_fundamentals/choosing_a_resource_storage_mode_for_apple_gpus. This should probably be a kwarg to the MtlArray constructor.

*Since we choose Private storage mode, I'm not sure how the unified memory examples work...

maleadt added the enhancement (New feature or request) label on Feb 8, 2023
habemus-papadum (Contributor) commented
Hi -- this looks to be done already:

function MtlArray{T,N}(::UndefInitializer, dims::Dims{N}; storage=Shared) where {T,N}

(MtlBuffer defaults to Private, but MtlArray defaults to Shared; the kwarg lets you choose a different storage mode.)

If this seems correct, let me know and I will add documentation for others (and our future selves).
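
For concreteness, a minimal usage sketch of that kwarg (assuming the storage modes are reachable as Metal.Shared and Metal.Private; adjust to however they are actually exported):

using Metal

# Default: Shared storage, so the data is accessible from both the CPU and the GPU.
A = MtlArray{Float32,1}(undef, (1024,))

# Explicitly request GPU-only storage via the kwarg shown above.
B = MtlArray{Float32,1}(undef, (1024,); storage=Metal.Private)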

maleadt (Member, Author) commented Feb 24, 2023

Ah right, that's where it's set. That doesn't seem great: per Apple's guidance we should be using Private storage.

habemus-papadum (Contributor) commented
I agree with that. I will try changing the default and updating the unified memory example (and the GTK example).

maleadt added the arrays (Things about the array abstraction) label on May 22, 2023
jvkersch commented
Just wanted to add that I stumbled across this issue when using Metal.jl through KernelAbstractions. I wanted to do some experiments with shared arrays, but quickly found that the default allocator allocates arrays in private storage mode. I ended up just adding some defaults to the allocator (jvkersch@08ea259), but as I'm neither a Julia nor a GPU programmer this is probably not the right approach. Still interested in seeing how this issue evolves, however!
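
For context, that path looks roughly like this (allocate and MetalBackend are the standard KernelAbstractions/Metal.jl entry points; this is just a sketch of the situation, not a fix):

using Metal, KernelAbstractions

backend = Metal.MetalBackend()

# Allocates an MtlArray through the backend's allocator; the storage mode is
# whatever the allocator defaults to, and there is no kwarg here to override it.
A = KernelAbstractions.allocate(backend, Float32, 1024)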

maleadt (Member, Author) commented Nov 16, 2023

CUDA.jl has recently seen a bunch of unified memory-related improvements (https://info.juliahub.com/cuda-jl-5-1-unified-memory); we should probably backport some of those here (e.g. the ability to conveniently unsafe_wrap a Julia Array into an MtlArray, if possible).
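
For reference, CUDA.jl 5.1 lets you unsafe_wrap a host Array for GPU use without copying; a hypothetical Metal.jl counterpart (the MtlArray method below does not exist yet, it's just what the backport could look like) would be:

using Metal

A = rand(Float32, 1024)        # ordinary CPU Array

# Hypothetical: wrap the host memory as an MtlArray without copying, so kernels
# can operate on it directly and mutations stay visible through A.
dA = unsafe_wrap(MtlArray, A)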

tgymnich (Member) commented
Fixed by #305.

maleadt (Member, Author) commented Mar 12, 2024

There are still a couple of important improvements to make, e.g. the ability to cheaply wrap an Array with an MtlArray and vice versa. That should make it much easier to use Metal.jl in an existing application.
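
For the other direction, a rough sketch of what that wrapping could look like (purely hypothetical API; with Shared storage the underlying buffer is already CPU-visible, so exposing it as an Array should be cheap):

using Metal

dA = MtlArray{Float32,1}(undef, (1024,))   # Shared storage by default

# Hypothetical: view the same shared buffer as a plain Array, without copying,
# so it can be handed to CPU-only code in an existing application.
A = unsafe_wrap(Array, dA)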

tgymnich (Member) commented
Also tracked here: #62
