This repository has been archived by the owner on Dec 21, 2023. It is now read-only.
Apple's installed GPU base is made up almost entirely of Intel and AMD hardware. Yet you've chosen to follow the irrational trend of targeting the ridiculously small minority that is NVIDIA GPUs (see the JPR market-share statistics), because 1) NVIDIA invests a ton of money in marketing to developers, and 2) some tools like TensorFlow do not yet support the open standards, largely as a consequence of 1).
The second reason might excuse a small developer team with no alternative, BUT you are APPLE: you have virtually unlimited money, and you are the creators of OpenCL!
If you weren't aware, AMD has built an impressive tool, HIP, for translating existing modern CUDA code into portable HIP C++ that runs on AMD hardware. It often takes less than one week to convert a project with this tool: https://github.com/ROCm-Developer-Tools/HIP
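To give an idea of why the conversion is usually quick: a minimal, illustrative sketch (not taken from any real project) of what the `hipify-perl` script in that repository does to a trivial CUDA vector-add kernel. The kernel body itself typically needs no changes; only headers and `cuda*` runtime calls are rewritten to their `hip*` equivalents:

```cuda
// Original CUDA source (illustrative example, assumed for this sketch)
#include <cuda_runtime.h>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// After translation with hipify-perl, the kernel body is unchanged;
// only the runtime header and API prefixes differ, roughly:
//
//   #include <hip/hip_runtime.h>
//   hipMalloc(...)   instead of   cudaMalloc(...)
//   hipMemcpy(...)   instead of   cudaMemcpy(...)
//   hipLaunchKernelGGL(vector_add, grid, block, 0, 0, a, b, c, n)
//       instead of   vector_add<<<grid, block>>>(a, b, c, n)
```

Since the device code is largely untouched, most of the porting effort goes into the host-side API calls, which is why small and medium projects convert so quickly.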
I'm not even talking about the fact that AMD hardware is arguably better suited to machine learning than NVIDIA's, since, for example, it supports full-rate FP16...
So, will you choose the rational path that leaves the choice to the consumer and to the PC maker?