MacOS support #262
Comments
I think there is none.
Right now we would need a way to port the ROCm tools to the Mac, which means an exposed driver API and also an LLVM IR interface. AIR could be the path to getting our compiler working, but better would be support for our standardized loader interface, so we could also support assembly. The latter would depend on Apple's tools team.
+1. It would be nice to expose ROCm, i.e. improved compute, on macOS. It has been OpenCL 1.2 for too many years now, while Nvidia supports CUDA.
Hola! Just double-checking things here. Is it possible at all to get ROCm working in a Docker container running on macOS? Or can the ROC kernel not work that way?
@hery Docker on macOS runs in a virtual machine. Since the Radeon GPUs used in Macs are used by the host OS, and Hypervisor.framework does not support PCIe passthrough, Docker probably does not change the situation.
Just wanted to leave a vote here for a Mac version. Thanks for your efforts to make AMD GPUs work for deep learning.
I vote for this too! Mac devices should have more GPU support, like Windows does.
Hello! I would like to throw in my request for a Mac version as well!
If you've come to this page looking for a Mac version, just keep commenting and leaving votes. I don't know if it will change anything, but at least the developers will get a sense that the demand is real. Though I also think Apple needs to step up and dedicate resources if they are going to stick with AMD.
I've been watching this issue for a while. Now I'm wondering if there's anything we can do to help make this happen. The kernel code is more extensive than I would have expected.
Still no progress? I thought it would be treated as urgent and solved months ago...
Leaving a vote. It's a surprise this hasn't happened yet, considering we don't get a choice of GPU on the Mac.
I ended up getting an Nvidia GPU. I feel like even if AMD makes decent GPUs, the software stack is just so far behind.
Yet another request for macOS. It doesn't have to be native; if you can get it to work in such a way that I can run it in a VirtualBox Linux VM, that would be just fine too.
It is simply a sin not to support macOS.
Please do something for macOS support. I don't think it's such a complex task to cover this UNIX, given that Linux support is finished.
One more request for macOS support. Please. How is this not a chief commercial concern?
Apple is embracing AMD GPUs, so the lack of macOS support would be a pity. Two years is long enough for PyTorch to grow mature toward production. Though compilation from source is required for PyTorch to support Nvidia GPUs on macOS, it definitely works. So what's actually going on with ROCm? At least in terms of driver releases, it should be easier for AMD GPUs than for Nvidia's.
Really surprised that things are like this. Please do something: support the Mac.
+1 to macOS support, especially because Nvidia can't even work with Mojave.
Hi 👋🏽 +1 over here on the macOS support. Would be dreamy to accomplish all the tasks on one machine.
Same here
Same here. I upgraded to a new MacBook Pro with a Vega 56 as an eGPU and it's great, but I'd love to be able to proudly show how awesome it is by making some kick-ass art robots, which are currently going to use an Nvidia box.
Why is this issue still even open? It's entirely Apple's responsibility to maintain their compute stack, so the people in this issue need to contact Apple instead to get a meaningful response. For any of you interested in machine learning: why not ask Apple for a single-source C++ compute API like CUDA, HCC, or SYCL, given that their Metal API is total junk?
The recent response from Degerz is unhelpful, and in fact highlights the disparity between the support from Nvidia and AMD. Nvidia's CUDA has taken over the industry and driven much of this kind of work away from AMD products, even where good hardware and APIs are available from AMD and competitors. Nvidia continues to supply drivers for their video cards on the Mac, as well as CUDA and OpenCL stacks, despite a lack of support and cooperation from Apple. Meanwhile, Apple supplies AMD drivers and puts AMD cards in all of their products, but drivers and support from AMD themselves are starkly missing. I realize that this issue is unlikely to be resolved with anything other than a "won't fix", because ROCm depends so much on its custom Linux kernel, and Mac support would require a new effort to integrate with the Mac. However, Nvidia has already done this type of work and more. Nvidia products are the biggest reason this project will fail or succeed, as they continually drive the parallel compute industry. Apple is just a platform; negativity toward them or their APIs isn't very productive.
No, what's unhelpful are posts like yours, complacent in accepting mediocrity from Apple. You can kiss goodbye to CUDA support on macOS starting from Mojave, so it's high time that you, along with the others, start holding Apple responsible for their subpar compute stack! It's exactly as you said: Apple are the ones who provide the drivers, so they had better provide drivers adequate for machine learning and other high-end compute applications. Demanding that AMD support a platform over which they have no control whatsoever is not productive in any form, and all the work Nvidia did to bypass the macOS kernel has already been dropped. The fundamental issue with Apple's Metal API is that it's NOT single-source like CUDA or HIP; despite that design flaw AMD will gladly use Metal for some of their projects, but it'll never become a viable option for machine learning. It's amazing that you're quick to point out AMD's failings but will ignore Apple making the very same mistake twice already, once with OpenCL and again with Metal, both of which are designed as separate-source. How do you expect the community to cope with a useless tool?
Contrary to the complete rubbish the previous commenter spewed, there are many other projects that were once exclusive to
Another +1 for macOS support, even if we have to compile from source (I'm looking at you, PyTorch with GPU support for macOS!).
With the recent announcement of the new Mac Pros with up to 4x Vega 7s in them, which would be a dream ROCm workstation, have there been any plans made to address ROCm on the Mac?
@maddyscientist Don't ask AMD; go ask Apple instead, and ask them when they're going to support a single-source programming model. If your interest, like everyone else's here, is in machine learning frameworks such as TensorFlow, PyTorch, etc., then ROCm only works because its APIs support template kernels/shaders. No matter how many people ask AMD or plead for them to bring ROCm over to macOS, it'll never happen, because Apple only wants to support inferior programming models.
@Degerz so how come CUDA works on Mac but not ROCm?
@maddyscientist Not anymore it doesn't, starting from macOS 10.14. There's no more CUDA support starting with Mojave, and subsequently no CUDA on Catalina either. Again: no single-source programming model means no advanced machine learning frameworks like TensorFlow or PyTorch. You can tell there's an echo chamber in here, with people not willing to admit that what Apple is doing with the Metal API is total rubbish. They should give Metal the ability to do templated kernels/shaders, or they can get lost. Metal = pile of hot junk.
On the practical/Linux side of things, ROCm is either integrated into the newest kernel or inserted as a DKMS module into the kernel at runtime. If Apple's Mach kernel disallows that, it will be impossible to run. OS X's kernel is closed off in practice: you need Apple's keys and/or permission to work with it. If Apple wanted ROCm implemented, it could be done in a month with the code here that is freely available. But they don't want to. From their recent announcements, they are going to reinvent the wheel with Metal, and I understand why: it must integrate with their kernel, which is in fact a vertical/closed package. So they literally have to do it themselves, unless NDAs/permission are granted to an outside vendor, as was done with Nvidia (for a short time).

I'm sure Apple will be releasing some kind of compute layer with Metal shortly; I just doubt it'll be called ROCm. And yes, that new Navi GPU is an amazing piece of hardware. I would be shocked if they didn't support compute on that crazy overpowered card. If they are charging $1000 for a monitor stand, the least they can do is let you use your own GPU for compute tasks.
They won't, because Apple only intends for Metal to be a graphics API, and even then it's missing tons of features. Literally every time some developer uses it beyond games, it fails pretty hard, and miserably at that. For years since its release, a former AMD employee has made the same complaint. There are currently ZERO, and EXACTLY ZERO, ways to do CUDA-style templates on Apple systems, since Metal is a dumpster fire of a compute API. If Metal actually had the ability to execute CUDA-style templates, then having CUDA or ROCm on Apple's platforms would be redundant and serious professionals with Apple cash wouldn't care, but this is not the case, so here we are, with people guilty of putting up with an inferior programming model from Apple. Metal is just another abomination to ML framework specialists, and unless somebody demands that Apple support a single-source programming model like CUDA, nothing will change.
Currently, one way to do deep learning on a Mac with an external AMD GPU is to use PlaidML, which supports Keras. I have successfully tested it with an external RX 580 GPU. It might also be possible to add support to TensorFlow 2.0, as shown in this comment: Plaidml_issue_281. PlaidML supports Metal or OpenCL as a backend. That being said, it would be exciting if ROCm could support macOS as well, so there would be more options, especially after the release of the Apple cheese grater.
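For anyone wanting to try the PlaidML route described above: the backend is selected through an environment variable that must be set before Keras is first imported. A minimal sketch, assuming `plaidml-keras` and `keras` are installed and `plaidml-setup` has already been run to pick the Metal or OpenCL device:

```python
import os

# Keras reads KERAS_BACKEND at import time, so set it before importing keras.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

# With the variable set, ordinary Keras code is routed through PlaidML,
# which dispatches to the GPU device chosen via `plaidml-setup`.
# Requires: pip install plaidml-keras keras
# import keras
# model = keras.Sequential([keras.layers.Dense(10, input_shape=(784,))])

print(os.environ["KERAS_BACKEND"])  # → plaidml.keras.backend
```

The Keras imports are commented out here so the sketch stands alone; uncomment them once the packages are installed.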
+1 for AMD GPGPU support; I don't really care if it's ROCm or HIP. Just let me write GPU-accelerated programs like CUDA does. This is a problem of lack of vision from both Apple, with its shader-file approach in the Metal API, and AMD, with ROCm being tied to Linux.
@paradox56 The acceleration in PlaidML works by reverting to graphics-based approaches from yesteryear. It is a rather sad testament that it is easier to regress to previous techniques than to get AMD and Apple on the same page here.
I just found https://developer.apple.com/documentation/metal/hello_compute
It is not single-source, and the API is ridiculously complex just to call a single function, sadly.
- Ludvig
This is true, but having worked with CUDA on a Linux system, the PlaidML framework has been much more stable and easier to use; it regulates itself far better with available RAM, as one example. It is frustrating to think of the lost potential here.
The TVM deep learning compiler stack should also have Metal support. PlaidML doesn't have support for every type of operator yet (or at least not for my more advanced model; normal ones should work).
Everyone should ask Apple to implement the driver changes to make ROCm possible on macOS. Send them feedback; I will be doing so as well.
@kenthinson That, or find a journalist to investigate and publish whether Apple develops their ML using non-Apple workflows...!
It's finally official: Nvidia will drop CUDA support for macOS (not that it mattered since Mojave) after release 10.2, and no amount of shouting or screaming at AMD will reverse this decision, since they were never at fault to begin with, as some people here seem to have implied. Their trash Metal API will have to meet these 3 demands to even be considered again, or else ...
Until these demands are met by Apple, the Metal API will be straight garbage. Even Intel's oneAPI compute stack with its DPC++ API is a better idea than Metal, which shows just how pathetic Apple's support has been for this community thus far.
Looks like PlaidML is the only viable option moving forward
PlaidML is a "program". You mean OpenCL as the equivalent API to ROCm, for eventually talking to the GPU.
PlaidML works with Metal too, so I don't anticipate any problems. Because it is a program, it is a little more robust to Apple's baffling decisions. I don't know how well it will work with Keras now that Keras is integrated into TensorFlow in v2.
@Degerz Will MLIR make any difference in the future?
@bofeizhu Unfortunately, I don't think MLIR is going to help the situation on macOS all that much. It looks to be an even higher-level IR than SPIR-V! SPIR-V compute shaders in Vulkan already have too many limitations for targeting advanced machine learning frameworks like TensorFlow or PyTorch, so MLIR targeting SPIR-V won't end well, IMO.

My main motivation behind demanding an offline compilation model for Metal (or any other API, really) is that nobody wants to deal with the weaknesses of high-level abstractions such as SPIR-V or MSL (Metal Shading Language). An offline compilation model would offer more powerful programming features to enable complex frameworks like machine learning, and more powerful lower-level optimizations. As things stand, shading languages and bytecode aren't very powerful abstractions of the hardware we use. CUDA proves that a less portable standard is a more powerful tool, and the industry is slowly starting to acknowledge this:

CUDA kernel language -> PTX ISA -> SASS (Nvidia)

Apple should embrace CUDA's programming/compilation model for the Metal API, rather than Direct3D's or Vulkan's, if they want any chance of success with complex machine learning frameworks. MSL is unacceptable as it is, but if Apple doesn't want an offline compilation model for portability reasons, they should at least give us something lower level, like a virtual ISA such as PTX, because things can't keep going on like this.
@Degerz Correct me if I'm wrong, but I think MLIR is supposed to work with Swift for TensorFlow, and we can use Swift to write every layer of the deep learning stack and compile it directly to hardware.
@bofeizhu Think of it like this: MLIR uses a source language (such as Swift, as you mentioned) as the front-end input for the intermediate representation, and then compiles it into the target back-end intermediate representation or native ISA (like PTX or GCN ISA). For example:

Swift source language -> (source-to-IR compiler) -> MLIR -> (IR-to-IR/ISA compiler) -> PTX ISA

MLIR is simply another layer in the compilation process. IMO the biggest hurdle to TensorFlow support on macOS/Metal isn't going to be the source language or the higher-level IR; it's going to be the back-end IR/ISA. Right now we can't simply use the Metal Shading Language to support the full-featured TensorFlow framework, because it doesn't have enough features. At best it's only good enough for TensorFlow Lite, and it's the same story with SPIR-V.

We need a lower-level target from Apple to be able to have TensorFlow use the Metal API. If we had access to some sort of virtual ISA like PTX, or even a native ISA like GCN bytecode, in the Metal API, then we could start porting existing CUDA/HIP kernels by writing custom hand-optimized Metal shaders in assembly code, instead of only using the high-level MSL source, which is not nearly as powerful or as featured as the assembly language.

The biggest gripe with the Metal API is that it's not nearly low-level enough compared to CUDA, where you can write kernels using Nvidia's assembly language for their GPUs, known as PTX; with HIP, in AMD's case, you can write kernels using GCN assembly. Where is Metal's equivalent to PTX or GCN ISA in Apple's case?
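To make the layering concrete, here is a toy Python sketch of the pipeline the comment above describes. The function names are purely illustrative, not a real compiler API: each stage just rewrites one representation into the next, with MLIR sitting between the source frontend and the backend IR/ISA.

```python
# Purely illustrative model of the lowering chain
# (Swift -> MLIR -> PTX / GCN ISA); not a real compiler API.

def frontend_to_mlir(source: str) -> str:
    """Stage 1: a source frontend (e.g. Swift for TensorFlow) emits MLIR."""
    return f"mlir({source})"

def mlir_to_backend(ir: str, target: str) -> str:
    """Stage 2: MLIR is lowered to a backend IR or native ISA.

    The thread's complaint is that Metal exposes nothing analogous
    to PTX or GCN ISA to serve as this target.
    """
    return f"{target}({ir})"

artifact = mlir_to_backend(frontend_to_mlir("swift_kernel"), "ptx")
print(artifact)  # → ptx(mlir(swift_kernel))
```

The point the sketch makes is structural: swapping the second stage's target is cheap once a low-level target exists, which is exactly what macOS lacks.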
Has anyone tried using Parallels Desktop with Ubuntu to make this work? Or nGraph + PlaidML to use TensorFlow on an AMD GPU?
It is not a problem with Apple. Apple should not use AMD.
Is there a timeline for ROCm on macOS?