
[ROCm] packaging.version.parse(torch.version.hip) yields InvalidVersion: Invalid version: '6.4.43482-0f2d60242' while version.parse(torch.version.cuda) succeeds #166068

@fxmarty-amd

Description

🐛 Describe the bug

As per title:

import torch
from packaging import version

print(version.parse(torch.version.hip))

gives

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/miniforge3/lib/python3.12/site-packages/packaging/version.py", line 56, in parse
    return Version(version)
           ^^^^^^^^^^^^^^^^
  File "/root/miniforge3/lib/python3.12/site-packages/packaging/version.py", line 202, in __init__
    raise InvalidVersion(f"Invalid version: {version!r}")
packaging.version.InvalidVersion: Invalid version: '6.4.43482-0f2d60242'
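
For context (not from the original report): packaging rejects the string because PEP 440 has no place for an arbitrary "-<build hash>" suffix, whereas the same hash expressed as a "+" local-version segment parses fine. A minimal illustration:

from packaging import version

# '6.4.43482-0f2d60242' is not PEP 440 compliant, so Version() raises InvalidVersion.
# The same build hash written as a local version segment ('+') is accepted.
print(version.parse("6.4.43482+0f2d60242"))  # prints: 6.4.43482+0f2d60242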

Everything works fine with the CUDA distribution of PyTorch (torch.version.cuda is "12.6").
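
A possible workaround until torch.version.hip is PEP 440 compliant (just a sketch, not an official API): strip the build-hash suffix before parsing, keeping the numeric release segment comparable.

import torch
from packaging import version

# Workaround sketch: keep only the numeric release segment before the '-<build hash>'.
hip = torch.version.hip or ""                 # e.g. '6.4.43482-0f2d60242' on ROCm builds
hip_version = version.parse(hip.split("-")[0])
print(hip_version)                            # 6.4.43482
print(hip_version >= version.parse("6.4"))    # True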

Versions

PyTorch version: 2.8.0+rocm6.4
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.4.43482-0f2d60242

OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.39

Python version: 3.12.11 | packaged by conda-forge | (main, Jun  4 2025, 14:45:31) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-60-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI325X (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: 6.4.43482
MIOpen runtime version: 3.4.0
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        52 bits physical, 57 bits virtual
Byte Order:                           Little Endian
CPU(s):                               128
On-line CPU(s) list:                  0-127
Vendor ID:                            AuthenticAMD
Model name:                           AMD EPYC 9575F 64-Core Processor
CPU family:                           26
Model:                                2
Thread(s) per core:                   1
Core(s) per socket:                   64
Socket(s):                            2
Stepping:                             1
Frequency boost:                      enabled
CPU(s) scaling MHz:                   40%
CPU max MHz:                          5008.0068
CPU min MHz:                          1500.0000
BogoMIPS:                             6600.01
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx_vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc amd_ibpb_ret arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d debug_swap
L1d cache:                            6 MiB (128 instances)
L1i cache:                            4 MiB (128 instances)
L2 cache:                             128 MiB (128 instances)
L3 cache:                             512 MiB (16 instances)
NUMA node(s):                         2
NUMA node0 CPU(s):                    0-63
NUMA node1 CPU(s):                    64-127
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] onnx==1.19.0
[pip3] onnxruntime==1.22.1
[pip3] onnxruntime_extensions==0.14.0
[pip3] onnxruntime-genai==0.9.2
[pip3] onnxsim==0.4.36
[pip3] pytorch-triton-rocm==3.4.0
[pip3] torch==2.8.0+rocm6.4
[pip3] torchaudio==2.8.0+rocm6.4
[pip3] torchvision==0.23.0+rocm6.4
[conda] numpy                     2.1.3                    pypi_0    pypi
[conda] pytorch-triton-rocm       3.4.0                    pypi_0    pypi
[conda] torch                     2.8.0+rocm6.4            pypi_0    pypi
[conda] torchaudio                2.8.0+rocm6.4            pypi_0    pypi
[conda] torchvision               0.23.0+rocm6.4           pypi_0    pypi

cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd

Metadata

Labels

module: rocm (AMD GPU support for PyTorch), triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)


Status: Done
