[CUDAExtension] support all visible cards when building a cudaextension (#48891)

Summary:
Currently `CUDAExtension` assumes that all cards on the same machine are of the same type and builds the extension with the compute capability of the 0th card. This breaks later at runtime if the machine has cards of different types, specifically resulting in:
```
RuntimeError: CUDA error: no kernel image is available for execution on the device
```
when cards of types that weren't compiled for are used (and the error does little to tell the uninitiated what the actual problem is).

My current setup is:
```
$ CUDA_VISIBLE_DEVICES=0 python -c "import torch; print(torch.cuda.get_device_capability())"
(8, 6)
$ CUDA_VISIBLE_DEVICES=1 python -c "import torch; print(torch.cuda.get_device_capability())"
(6, 1)
```
but the extension was getting built with only `-gencode=arch=compute_80,code=sm_80`.

This PR:
* [x] introduces a loop over all devices visible at build time to ensure the extension will run on all of them (it sorts the list generated by the loop, so that the output is easier to debug should a card with lower capability come last)
* [x] adds `+PTX` to the last entry of the compute capabilities derived from the local cards (`if not _arch_list:`) to support other archs
* [x] adds a digest of my conversation with ptrblck on Slack, in the form of docs that hopefully help others know which archs to support, how to override the defaults, and when and how to add PTX

Please kindly review that my prose is clear and easy to understand. ptrblck

Pull Request resolved: #48891

Reviewed By: ngimel

Differential Revision: D25358285

Pulled By: ezyang

fbshipit-source-id: 8160f3adebffbc8e592ddfcc3adf153a9dc91557
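The loop and `+PTX` behavior described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual diff: the helper name `cuda_arch_flags` is made up, and it assumes the capabilities were gathered by calling `torch.cuda.get_device_capability(i)` for each visible device `i`.

```python
# Sketch: turn the compute capabilities of all build-time-visible GPUs into
# nvcc -gencode flags, sorted, with PTX embedded for the highest arch.
def cuda_arch_flags(capabilities):
    """capabilities: list of (major, minor) tuples, e.g. collected via
    torch.cuda.get_device_capability(i) for i in range(torch.cuda.device_count())."""
    caps = sorted(set(capabilities))  # sorted, as the PR does, for debuggable output
    flags = []
    for i, (major, minor) in enumerate(caps):
        num = f"{major}{minor}"
        # SASS (native machine code) for this exact architecture
        flags.append(f"-gencode=arch=compute_{num},code=sm_{num}")
        if i == len(caps) - 1:
            # Emulate "+PTX" on the last (highest) entry: also embed PTX so the
            # driver can JIT-compile the kernel for newer, unseen archs.
            flags.append(f"-gencode=arch=compute_{num},code=compute_{num}")
    return flags

# With the setup above, cards of capability (8, 6) and (6, 1):
# cuda_arch_flags([(8, 6), (6, 1)])
# → ['-gencode=arch=compute_61,code=sm_61',
#    '-gencode=arch=compute_86,code=sm_86',
#    '-gencode=arch=compute_86,code=compute_86']
```

To override the detected defaults instead of relying on local-card detection, PyTorch's extension machinery honors the `TORCH_CUDA_ARCH_LIST` environment variable, e.g. `TORCH_CUDA_ARCH_LIST="6.1;8.6+PTX" python setup.py install`.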