🐛 Describe the bug
torchvision.ops.roi_align on CPU segfaults (ASAN DEADLYSIGNAL) when any ROI contains NaN coordinates.
The crash occurs in the C++ kernel roi_align_forward_kernel_impl due to missing finite-value checks on rois[:, 1:5]. With NaN in x1/x2 (or y1/y2), roi_start_*, roi_end_*, and bin_size_* become NaN, causing invalid indices to be precomputed and eventually an out-of-bounds read from the input feature map.
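Until the kernels validate their inputs, the only mitigation I know of is a caller-side guard that drops non-finite boxes before they reach the op. A minimal sketch (the helper name filter_finite_boxes is mine, not a torchvision API):
import torch

def filter_finite_boxes(boxes):
    # boxes: list of [K, 4] tensors in (x1, y1, x2, y2) format.
    # Keep only rows whose four coordinates are all finite (no NaN/Inf).
    return [b[torch.isfinite(b).all(dim=1)] for b in boxes]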
PoC:
The PoC below reproduces the issue: run python3 poc.py against a torchvision build compiled with ASan.
# poc.py
import torch
import torchvision
from torchvision.ops import roi_align

torch.set_num_threads(1)
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
# The input tensor values are irrelevant; use zeros.
x = torch.zeros(2, 10, 15, 19, dtype=torch.float32)
# Key: The 0th image in the list has 1 ROI containing NaN; the 1st image has no ROI.
boxes = [
torch.tensor([[float('nan'), 0.0, float('nan'), 1.0]], dtype=torch.float32),
torch.empty(0, 4, dtype=torch.float32),
]
y = roi_align(
x, boxes, (7, 7),
spatial_scale=0.1,
sampling_ratio=1,
aligned=True
)
print("roi_align output shape:", tuple(y.shape))ASAN-report:
AddressSanitizer:DEADLYSIGNAL
=================================================================
==2278719==ERROR: AddressSanitizer: SEGV on unknown address 0x629e0010e200 (pc 0x7f729a212a19 bp 0x7ffc4aa79c90 sp 0x7ffc4aa79aa0 T0)
==2278719==The signal is caused by a READ memory access.
#0 0x7f729a212a19 in void vision::ops::(anonymous namespace)::roi_align_forward_kernel_impl<float>(int, float const*, float const&, int, int, int, int, int, int, bool, float const*, float*) roi_align_kernel.cpp
#1 0x7f729a20fc81 in vision::ops::(anonymous namespace)::roi_align_forward_kernel(at::Tensor const&, at::Tensor const&, double, long, long, long, bool)::$_0::operator()() const::'lambda0'()::operator()() const roi_align_kernel.cpp
#2 0x7f729a20f774 in vision::ops::(anonymous namespace)::roi_align_forward_kernel(at::Tensor const&, at::Tensor const&, double, long, long, long, bool)::$_0::operator()() const roi_align_kernel.cpp
#3 0x7f729a20f41f in vision::ops::(anonymous namespace)::roi_align_forward_kernel(at::Tensor const&, at::Tensor const&, double, long, long, long, bool) roi_align_kernel.cpp
#4 0x7f729a20ed88 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&, double, long, long, long, bool), &(vision::ops::(anonymous namespace)::roi_align_forward_kernel(at::Tensor const&, at::Tensor const&, double, long, long, long, bool))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, double, long, long, long, bool> >, at::Tensor (at::Tensor const&, at::Tensor const&, double, long, long, long, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, double, long, long, long, bool) roi_align_kernel.cpp
#5 0x7f729a28787a in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, at::Tensor const&, double, long, long, long, bool>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, double&&, long&&, long&&, long&&, bool&&) (/root/vision/torchvision/_C.so+0x33687a) (BuildId: a3e21323fc0c2ebde4de928df041991e8eee55d6)
#6 0x7f729a282191 in vision::ops::roi_align_symint(at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool) (/root/vision/torchvision/_C.so+0x331191) (BuildId: a3e21323fc0c2ebde4de928df041991e8eee55d6)
#7 0x7f729a1adf70 in vision::ops::(anonymous namespace)::ROIAlignFunction::forward(torch::autograd::AutogradContext*, at::Tensor const&, at::Tensor const&, double, c10::SymInt const&, c10::SymInt const&, long, bool) roi_align_kernel.cpp
#8 0x7f729a1ac62d in std::enable_if<std::is_same_v<vision::ops::(anonymous namespace)::ROIAlignFunction, vision::ops::(anonymous namespace)::ROIAlignFunction>, decltype(vision::ops::(anonymous namespace)::ROIAlignFunction::forward(nullptr, std::declval<at::Tensor const&>(), std::declval<at::Tensor const&>(), std::declval<double&>(), std::declval<c10::SymInt&>(), std::declval<c10::SymInt&>(), std::declval<long&>(), std::declval<bool&>()))>::type torch::autograd::Function<vision::ops::(anonymous namespace)::ROIAlignFunction>::apply<vision::ops::(anonymous namespace)::ROIAlignFunction, at::Tensor const&, at::Tensor const&, double&, c10::SymInt&, c10::SymInt&, long&, bool&>(at::Tensor const&, at::Tensor const&, double&, c10::SymInt&, c10::SymInt&, long&, bool&) roi_align_kernel.cpp
#9 0x7f729a1abeff in vision::ops::(anonymous namespace)::roi_align_autograd(at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool) roi_align_kernel.cpp
#10 0x7f729a1abbe1 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool), &(vision::ops::(anonymous namespace)::roi_align_autograd(at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool> >, at::Tensor (at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool) roi_align_kernel.cpp
#11 0x7f729a1b5612 in std::decay<c10::guts::infer_function_traits<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool), &(vision::ops::(anonymous namespace)::roi_align_autograd(at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool> > >::type::return_type>::type c10::impl::call_functor_with_args_from_stack_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool), &(vision::ops::(anonymous namespace)::roi_align_autograd(at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool> >, false, 0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul, at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool>(c10::OperatorKernel*, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<std::vector> >*, std::integer_sequence<unsigned long, 0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul>, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool>*) roi_align_kernel.cpp
#12 0x7f729a1b538c in std::decay<c10::guts::infer_function_traits<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool), &(vision::ops::(anonymous namespace)::roi_align_autograd(at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool> > >::type::return_type>::type c10::impl::call_functor_with_args_from_stack<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool), &(vision::ops::(anonymous namespace)::roi_align_autograd(at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool> >, false>(c10::OperatorKernel*, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<std::vector> >*) roi_align_kernel.cpp
#13 0x7f729a1abd5b in c10::impl::make_boxed_from_unboxed_functor<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool), &(vision::ops::(anonymous namespace)::roi_align_autograd(at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, double, c10::SymInt, c10::SymInt, long, bool> >, false>::call(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) roi_align_kernel.cpp
#14 0x7f734a22d8a7 in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /root/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:48:3
#15 0x7f734a22d8a7 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /root/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:92:22
#16 0x7f734a22d8a7 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /root/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:893:10
#17 0x7f736afe7c45 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/std_function.h:590:9
#18 0x7f736afe7c45 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /root/pytorch/aten/src/ATen/core/stack.h:41:5
#19 0x7f736afe7c45 in torch::jit::invokeOperatorFromPython(c10::ArrayRef<std::shared_ptr<torch::jit::Operator> >, pybind11::args const&, pybind11::kwargs const&, std::optional<c10::DispatchKey>) /root/pytorch/torch/csrc/jit/python/pybind_utils.cpp:860:7
#20 0x7f736afe92d2 in torch::jit::_get_operation_for_overload_or_packet(c10::ArrayRef<std::shared_ptr<torch::jit::Operator> >, c10::Symbol, pybind11::args const&, pybind11::kwargs const&, bool, std::optional<c10::DispatchKey>) /root/pytorch/torch/csrc/jit/python/pybind_utils.cpp:968:9
#21 0x7f736afe9098 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args const&, pybind11::kwargs const&, bool, std::optional<c10::DispatchKey>) /root/pytorch/torch/csrc/jit/python/pybind_utils.cpp:949:10
#22 0x7f736ae0f847 in torch::jit::initJITBindings(_object*)::$_229::operator()(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const::'lambda'(pybind11::args const&, pybind11::kwargs const&)::operator()(pybind11::args const&, pybind11::kwargs const&) const /root/pytorch/torch/csrc/jit/python/init.cpp:1815:24
#23 0x7f736ae0f847 in pybind11::object pybind11::detail::argument_loader<pybind11::args const&, pybind11::kwargs const&>::call_impl<pybind11::object, torch::jit::initJITBindings(_object*)::$_229::operator()(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const::'lambda'(pybind11::args const&, pybind11::kwargs const&)&, 0ul, 1ul, pybind11::detail::void_type>(torch::jit::initJITBindings(_object*)::$_229::operator()(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const::'lambda'(pybind11::args const&, pybind11::kwargs const&)&, std::integer_sequence<unsigned long, 0ul, 1ul>, pybind11::detail::void_type&&) && /root/pytorch/cmake/../third_party/pybind11/include/pybind11/cast.h:2137:16
#24 0x7f736ae0f847 in std::enable_if<!(std::is_void<pybind11::object>::value), pybind11::object>::type pybind11::detail::argument_loader<pybind11::args const&, pybind11::kwargs const&>::call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(_object*)::$_229::operator()(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const::'lambda'(pybind11::args const&, pybind11::kwargs const&)&>(torch::jit::initJITBindings(_object*)::$_229::operator()(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const::'lambda'(pybind11::args const&, pybind11::kwargs const&)&) && /root/pytorch/cmake/../third_party/pybind11/include/pybind11/cast.h:2105:42
#25 0x7f736ae0f48e in void pybind11::cpp_function::initialize<torch::jit::initJITBindings(_object*)::$_229::operator()(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const::'lambda'(pybind11::args const&, pybind11::kwargs const&), pybind11::object, pybind11::args const&, pybind11::kwargs const&, pybind11::name, pybind11::doc>(torch::jit::initJITBindings(_object*)::$_229::operator()(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const::'lambda'(pybind11::args const&, pybind11::kwargs const&)&&, pybind11::object (*)(pybind11::args const&, pybind11::kwargs const&), pybind11::name const&, pybind11::doc const&)::'lambda'(pybind11::detail::function_call&)::operator()(pybind11::detail::function_call&) const /root/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:430:56
#26 0x7f736ae0f48e in void pybind11::cpp_function::initialize<torch::jit::initJITBindings(_object*)::$_229::operator()(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const::'lambda'(pybind11::args const&, pybind11::kwargs const&), pybind11::object, pybind11::args const&, pybind11::kwargs const&, pybind11::name, pybind11::doc>(torch::jit::initJITBindings(_object*)::$_229::operator()(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const::'lambda'(pybind11::args const&, pybind11::kwargs const&)&&, pybind11::object (*)(pybind11::args const&, pybind11::kwargs const&), pybind11::name const&, pybind11::doc const&)::'lambda'(pybind11::detail::function_call&)::__invoke(pybind11::detail::function_call&) /root/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:400:21
#27 0x7f7369e43a0a in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /root/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:1063:30
#28 0x55d0e0881961 (/usr/bin/python3.10+0x18a961) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#29 0x55d0e088ff4a in PyObject_Call (/usr/bin/python3.10+0x198f4a) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#30 0x55d0e0872722 in _PyEval_EvalFrameDefault (/usr/bin/python3.10+0x17b722) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#31 0x55d0e0877603 in _PyObject_FastCallDictTstate (/usr/bin/python3.10+0x180603) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#32 0x55d0e088c4d0 in _PyObject_Call_Prepend (/usr/bin/python3.10+0x1954d0) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#33 0x55d0e09946b3 (/usr/bin/python3.10+0x29d6b3) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#34 0x55d0e087842a in _PyObject_MakeTpCall (/usr/bin/python3.10+0x18142a) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#35 0x55d0e087212d in _PyEval_EvalFrameDefault (/usr/bin/python3.10+0x17b12d) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#36 0x55d0e08821bb in _PyFunction_Vectorcall (/usr/bin/python3.10+0x18b1bb) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#37 0x55d0e086daef in _PyEval_EvalFrameDefault (/usr/bin/python3.10+0x176aef) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#38 0x55d0e0951565 (/usr/bin/python3.10+0x25a565) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#39 0x55d0e0951435 in PyEval_EvalCode (/usr/bin/python3.10+0x25a435) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#40 0x55d0e0977ed7 (/usr/bin/python3.10+0x280ed7) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#41 0x55d0e09726de (/usr/bin/python3.10+0x27b6de) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#42 0x55d0e0977c74 (/usr/bin/python3.10+0x280c74) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#43 0x55d0e0977257 in _PyRun_SimpleFileObject (/usr/bin/python3.10+0x280257) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#44 0x55d0e0976f36 in _PyRun_AnyFileObject (/usr/bin/python3.10+0x27ff36) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#45 0x55d0e096b3ad in Py_RunMain (/usr/bin/python3.10+0x2743ad) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#46 0x55d0e094547c in Py_BytesMain (/usr/bin/python3.10+0x24e47c) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
#47 0x7f7370063d8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f) (BuildId: 4f7b0c955c3d81d7cac1501a2498b69d1d82bfe7)
#48 0x7f7370063e3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f) (BuildId: 4f7b0c955c3d81d7cac1501a2498b69d1d82bfe7)
#49 0x55d0e0945374 in _start (/usr/bin/python3.10+0x24e374) (BuildId: b2fd9010dc75aa747aee5296c31a07d210d124ad)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV roi_align_kernel.cpp in void vision::ops::(anonymous namespace)::roi_align_forward_kernel_impl<float>(int, float const*, float const&, int, int, int, int, int, int, bool, float const*, float*)
==2278719==ABORTING
Suspected root cause:
In torchvision/csrc/ops/cpu/roi_align_kernel.cpp (CPU path), roi_align_forward_kernel only checks device and shape ([K,5]) but does not verify that coordinates are finite. With NaN in x1/x2 or y1/y2, the following chain happens:
T roi_start_w = offset_rois[1] * spatial_scale - offset; // NaN
T roi_end_w = offset_rois[3] * spatial_scale - offset;   // NaN
T roi_width = roi_end_w - roi_start_w;                   // NaN
T bin_size_w = roi_width / pooled_width;                 // NaN
detail::pre_calc_for_bilinear_interpolate(..., roi_start_h, roi_start_w,
                                           bin_size_h, bin_size_w, ..., pre_calc);
// later uses the precomputed indices:
output_val += pc.w1 * offset_input[pc.pos1] + ...; // pos* invalid => OOB read => SEGV
(When sampling_ratio <= 0 there is an additional hazard: ceil(roi_width / pooled_width) yields NaN, and casting NaN to int is undefined behavior.)
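A straightforward hardening would be to reject non-finite coordinates before they reach the kernel. Sketched here in Python at the wrapper level purely for illustration (the real fix would presumably be a check in the C++/CUDA kernels, e.g. a TORCH_CHECK over rois[:, 1:5]); validate_rois is a hypothetical helper, not existing torchvision code:
import torch

def validate_rois(rois):
    # rois: [K, 5] tensor of (batch_idx, x1, y1, x2, y2), as seen by the kernel.
    # Hypothetical pre-flight check mirroring what the C++ kernel could assert.
    if rois.numel() > 0 and not torch.isfinite(rois[:, 1:5]).all():
        raise ValueError("roi_align: ROI coordinates must be finite (got NaN/Inf)")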
Versions
PyTorch version: 2.10.0a0+gitf2bb22f
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 4.1.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
Nvidia driver version: Could not collect
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 5995WX 64-Cores
CPU family: 25
Model: 8
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 2700.0000
CPU min MHz: 1800.0000
BogoMIPS: 5390.04
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-nccl-cu12==2.27.3
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] optree==0.17.0
[pip3] torch==2.10.0a0+gitf2bb22f
[pip3] torchvision==0.25.0a0
[pip3] triton==3.4.0
[conda] Could not collect