@@ -156,14 +156,13 @@ pip install torch_xla[tpu] -f https://storage.googleapis.com/libtpu-releases/ind

 GPU and nightly builds are available in our public GCS bucket.

-| Version | Cloud TPU/ GPU VMs Wheel |
+| Version | Cloud GPU VM Wheels |
 | --- | ----------- |
-| 2.3 (Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.3.0-cp38-cp38-manylinux_2_28_x86_64.whl` |
-| 2.3 (Python 3.10) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.3.0-cp310-cp310-manylinux_2_28_x86_64.whl` |
-| 2.3 (Python 3.11) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.3.0-cp311-cp311-manylinux_2_28_x86_64.whl` |
 | 2.3 (CUDA 12.1 + Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-2.3.0-cp38-cp38-manylinux_2_28_x86_64.whl` |
 | 2.3 (CUDA 12.1 + Python 3.10) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-2.3.0-cp310-cp310-manylinux_2_28_x86_64.whl` |
 | 2.3 (CUDA 12.1 + Python 3.11) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-2.3.0-cp311-cp311-manylinux_2_28_x86_64.whl` |
+| 2.2 (CUDA 12.1 + Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-2.2.0-cp38-cp38-manylinux_2_28_x86_64.whl` |
+| 2.2 (CUDA 12.1 + Python 3.10) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl` |
 | nightly (Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-nightly-cp38-cp38-linux_x86_64.whl` |
 | nightly (Python 3.10) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-nightly-cp310-cp310-linux_x86_64.whl` |
 | nightly (CUDA 12.1 + Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-nightly-cp38-cp38-linux_x86_64.whl` |
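The release wheel URLs in the tables above follow a consistent naming scheme (`{bucket}/{tpuvm|cuda/X.Y}/torch_xla-{version}-{pytag}-{pytag}-manylinux_2_28_x86_64.whl`). As an illustration only, a small hypothetical helper (not part of torch_xla, and covering only the versioned release wheels, not nightlies, which use a `linux_x86_64` platform tag) can build the URL for a given combination:

```python
from typing import Optional

# Base bucket for torch_xla release wheels, as listed in the tables above.
BASE = "https://storage.googleapis.com/pytorch-xla-releases/wheels"

def wheel_url(version: str, py: str, cuda: Optional[str] = None) -> str:
    """Construct a torch_xla release wheel URL (hypothetical helper).

    version: release version, e.g. "2.3.0"
    py:      CPython ABI tag, e.g. "cp310"
    cuda:    CUDA version such as "12.1" for GPU builds, or None for TPU VM builds
    """
    subdir = f"cuda/{cuda}" if cuda else "tpuvm"
    return (f"{BASE}/{subdir}/"
            f"torch_xla-{version}-{py}-{py}-manylinux_2_28_x86_64.whl")
```

The resulting URL can then be passed straight to `pip install`, e.g. `pip install "$(python -c '...')"`, instead of copying it from the table.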
@@ -183,54 +182,17 @@ The torch wheel version `2.5.0.dev20240613+cpu` can be found at https://download

 | Version | Cloud TPU VMs Wheel |
 | ---------| -------------------|
-| 2.2 (Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.2.0-cp38-cp38-manylinux_2_28_x86_64.whl` |
-| 2.2 (Python 3.10) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl` |
-| 2.2 (CUDA 12.1 + Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-2.2.0-cp38-cp38-manylinux_2_28_x86_64.whl` |
-| 2.2 (CUDA 12.1 + Python 3.10) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl` |
 | 2.1 (XRT + Python 3.10) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/xrt/tpuvm/torch_xla-2.1.0%2Bxrt-cp310-cp310-manylinux_2_28_x86_64.whl` |
 | 2.1 (Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.1.0-cp38-cp38-linux_x86_64.whl` |
-| 2.0 (Python 3.8) | `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-2.0-cp38-cp38-linux_x86_64.whl` |
-| 1.13 | `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.13-cp38-cp38-linux_x86_64.whl` |
-| 1.12 | `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.12-cp38-cp38-linux_x86_64.whl` |
-| 1.11 | `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.11-cp38-cp38-linux_x86_64.whl` |
-| 1.10 | `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.10-cp38-cp38-linux_x86_64.whl` |
-
-<br />
-
-Note: For TPU Pod customers using XRT (our legacy runtime), we have custom
-wheels for `torch` and `torch_xla` at
-`https://storage.googleapis.com/tpu-pytorch/wheels/xrt`.
-
-| Package | Cloud TPU VMs Wheel (XRT on Pod, Legacy Only) |
-| --- | ----------- |
-| torch_xla | `https://storage.googleapis.com/pytorch-xla-releases/wheels/xrt/tpuvm/torch_xla-2.1.0%2Bxrt-cp310-cp310-manylinux_2_28_x86_64.whl` |
-| torch | `https://storage.googleapis.com/pytorch-xla-releases/wheels/xrt/tpuvm/torch-2.1.0%2Bxrt-cp310-cp310-linux_x86_64.whl` |
+| 2.0 (Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.0-cp38-cp38-linux_x86_64.whl` |
+| 1.13 | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-1.13-cp38-cp38-linux_x86_64.whl` |

 <br />

 | Version | GPU Wheel + Python 3.8 |
 | --- | ----------- |
-| 2.1+ CUDA 11.8 | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/11.8/torch_xla-2.1.0-cp38-cp38-manylinux_2_28_x86_64.whl` |
-| 2.0 + CUDA 11.8 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/118/torch_xla-2.0-cp38-cp38-linux_x86_64.whl` |
-| 2.0 + CUDA 11.7 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/117/torch_xla-2.0-cp38-cp38-linux_x86_64.whl` |
-| 1.13 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.13-cp38-cp38-linux_x86_64.whl` |
+| 2.1 + CUDA 11.8 | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/11.8/torch_xla-2.1.0-cp38-cp38-manylinux_2_28_x86_64.whl` |
 | nightly + CUDA 12.0 >= 2023/06/27| `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.0/torch_xla-nightly-cp38-cp38-linux_x86_64.whl` |
-| nightly + CUDA 11.8 <= 2023/04/25| `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/118/torch_xla-nightly-cp38-cp38-linux_x86_64.whl` |
-| nightly + CUDA 11.8 >= 2023/04/25| `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/11.8/torch_xla-nightly-cp38-cp38-linux_x86_64.whl` |
-
-<br />
-
-| Version | GPU Wheel + Python 3.7 |
-| --- | ----------- |
-| 1.13 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.13-cp37-cp37m-linux_x86_64.whl` |
-| 1.12 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.12-cp37-cp37m-linux_x86_64.whl` |
-| 1.11 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.11-cp37-cp37m-linux_x86_64.whl` |
-
-<br />
-
-| Version | Colab TPU Wheel |
-| --- | ----------- |
-| 2.0 | `https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-2.0-cp310-cp310-linux_x86_64.whl` |

 </details>

@@ -241,8 +203,8 @@ wheels for `torch` and `torch_xla` at
 | 2.3 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.3.0_3.10_tpuvm` |
 | 2.2 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.2.0_3.10_tpuvm` |
 | 2.1 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.1.0_3.10_tpuvm` |
-| 2.0 | `gcr.io/tpu-pytorch/xla:r2.0_3.8_tpuvm` |
-| 1.13 | `gcr.io/tpu-pytorch/xla:r1.13_3.8_tpuvm` |
+| 2.0 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.0_3.8_tpuvm` |
+| 1.13 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r1.13_3.8_tpuvm` |
 | nightly python | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.10_tpuvm` |

 To use the above dockers, please pass `--privileged --net host --shm-size=16G` along. Here is an example:
@@ -265,34 +227,10 @@ docker run --privileged --net host --shm-size=16G -it us-central1-docker.pkg.dev
 | Version | GPU CUDA 11.8 + Docker |
 | --- | ----------- |
 | 2.1 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.1.0_3.10_cuda_11.8` |
-| 2.0 | `gcr.io/tpu-pytorch/xla:r2.0_3.8_cuda_11.8` |
-| nightly | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_11.8` |
-| nightly at date | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_11.8_YYYYMMDD` |
-
-<br />
-
-<details>
-
-<summary>older versions</summary>
-
-| Version | GPU CUDA 11.7 + Docker |
-| --- | ----------- |
-| 2.0 | `gcr.io/tpu-pytorch/xla:r2.0_3.8_cuda_11.7` |
-
-<br />
-
-| Version | GPU CUDA 11.2 + Docker |
-| --- | ----------- |
-| 1.13 | `gcr.io/tpu-pytorch/xla:r1.13_3.8_cuda_11.2` |
+| 2.0 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.0_3.8_cuda_11.8` |

 <br />

-| Version | GPU CUDA 11.2 + Docker |
-| --- | ----------- |
-| 1.13 | `gcr.io/tpu-pytorch/xla:r1.13_3.7_cuda_11.2` |
-| 1.12 | `gcr.io/tpu-pytorch/xla:r1.12_3.7_cuda_11.2` |
-
-</details>

 To run on [compute instances with
 GPUs](https://cloud.google.com/compute/docs/gpus/create-vm-with-gpus).