In this LLVM IR

```ll
define void @foo(ptr noalias readonly align 16 %0, ptr noalias readonly align 16 %1) {
  %3 = getelementptr half, ptr %0, i64 0
  %4 = load <8 x half>, ptr %3, align 16
  %5 = fpext <8 x half> %4 to <8 x float>
  store <8 x float> %5, ptr %1, align 16
  ret void
}

!nvvm.annotations = !{!0}
!0 = !{ptr @foo, !"kernel", i32 1}
```

running

```
bin/llc < ~/test.ll -march=nvptx -mcpu=sm_80 --debug
```

produces the following failure: https://gist.github.com/nirvedhmeshram/59912f126126f00a22920b770c330782

However, if you remove the `nvvm.annotations` metadata, compilation succeeds and selects a `LDV_f32_v2_ari`. So my first question is: is it by design that we don't want to use this code for addrspace(1)?

https://github.com/llvm/llvm-project/blob/main/llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.cpp#L1071-L1085

Because of that, selection instead hits this part, which is not meant to handle `NVPTXISD::LoadV` nodes:

https://github.com/llvm/llvm-project/blob/main/llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.cpp#L1599-L1622

Should we extend its handling there, if that is the right thing to do?
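For reference, this is the variant I tested that compiles successfully (the same IR with the kernel annotation dropped, so `@foo` is treated as a device function rather than a kernel and the loads stay out of addrspace(1)):

```ll
define void @foo(ptr noalias readonly align 16 %0, ptr noalias readonly align 16 %1) {
  %3 = getelementptr half, ptr %0, i64 0
  %4 = load <8 x half>, ptr %3, align 16
  %5 = fpext <8 x half> %4 to <8 x float>
  store <8 x float> %5, ptr %1, align 16
  ret void
}
; no !nvvm.annotations, so %0/%1 are not inferred as global-space pointers
```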