
Enable LLVM-12 #802

Closed
wants to merge 1 commit into from

Conversation

r-barnes

@r-barnes r-barnes commented Dec 1, 2021

No description provided.


@esc
Member

esc commented Dec 1, 2021

@r-barnes thank you for submitting this! Your effort is appreciated. I have labelled this as ready for review. I would like to set some expectations ahead of time, however: a) we are currently in a release candidate phase followed by holidays, so it is unlikely that anyone will take a closer look before mid/end Q1 2022; b) we have discussed going straight to LLVM 13 in a recent developer meeting, which means this PR may never actually be looked at, but no-one can say for certain at this stage.
Thank you again for your efforts. Perhaps they will be useful sometime soon!

If you want to repeat this for LLVM 13, that may actually turn out to be really useful, especially once LLVM 13 is released.

I also noticed that the conda recipe was not updated to LLVM 12. We run our LLVM with a set of custom patches needed for Numba. To that end, we compile LLVM with conda-build so as to manage our patches. Usually, porting llvmlite to a new LLVM version also includes updating these patches so that they apply cleanly to the LLVM codebase.

You can take a look at what is needed, as I have done this for LLVM 11.1.0 here: https://github.com/numba/llvmlite/pull/715/files

Hope it helps!

@r-barnes
Author

r-barnes commented Dec 1, 2021

@esc: I've updated the PR to include LLVM-13 as well as LLVM-12. I'm looking at the conda stuff now.

@r-barnes
Author

r-barnes commented Dec 1, 2021

@esc: Are you suggesting replacing LLVM-11 with LLVM-12 (or 13) in the conda files or adding a different set of conda tests for LLVM-12 to be run in addition to the existing ones for LLVM-11?

@esc
Member

esc commented Dec 2, 2021

@esc: Are you suggesting replacing LLVM-11 with LLVM-12 (or 13) in the conda files or adding a different set of conda tests for LLVM-12 to be run in addition to the existing ones for LLVM-11?

Correct. Historically, llvmlite has supported only one LLVM version at a given time. My suggestion would be to replace existing LLVM 11 support with LLVM 13 support.

A further note (a 'heads up' if you will): one of the goals of the LLVM 13 upgrade is to migrate from MCJIT to ORCJIT https://llvm.org/docs/ORCv2.html -- while this task is not yet concretely specified (I think there is not even a ticket for this) -- it will be part of the upgrade. And from what I can tell, this is unlikely to be a trivial task. Anyway, one thing at a time.

@esc
Member

esc commented Dec 2, 2021

@esc: I've updated the PR to include LLVM-13 as well as LLVM-12. I'm looking at the conda stuff now.

Thank you!! 🙏🙏 There is some documentation here: https://llvmlite.readthedocs.io/en/latest/admin-guide/install.html#compiling-llvm -- however, after giving it a quick skim, it looks very out of date, so please be careful when consulting it and don't hesitate to ask questions. I am also available at: https://gitter.im/numba/numba-dev in case a more real-time communication channel suits you better.

@r-barnes
Author

r-barnes commented Dec 2, 2021

@esc: I'm working in an environment pinned to LLVM-12, so support for that's kind of important to me. So far, there are no differences between what's required to make llvmlite work on LLVM-12 versus LLVM-13. Could we:

  1. Aim to cut a release that provides both the LLVM-12 and LLVM-13 functionality, with the understanding that LLVM-12 will probably be deprecated at some point, depending on your development cycle, when you make the MCJIT to ORCJIT change.
  2. Set up conda to test with LLVM-13.

@r-barnes
Author

r-barnes commented Dec 2, 2021

(Current numba test failures are due to an inability to find the llvm-config utility.)

@esc
Member

esc commented Dec 3, 2021

@esc: I'm working in an environment pinned to LLVM-12, so support for that's kind of important to me. So far, there are no differences between what's required to make llvmlite work on LLVM-12 versus LLVM-13. Could we:

  1. Aim to cut a release that provides both the LLVM-12 and LLVM-13 functionality, with the understanding that LLVM-12 will probably be deprecated at some point, depending on your development cycle, when you make the MCJIT to ORCJIT change.

Supporting multiple LLVMs would be a deviation from current policy. We will need to achieve consensus with the other stakeholders of this project, but I think it is a very reasonable ask. Historically, llvmlite was strongly coupled to Numba. However, now that it is more and more becoming a project in its own right, we may need to think about supporting multiple LLVMs, if that is what folks want. You can, of course, always apply simple patches, but Numba requires a patched variant of LLVM, and most of the trouble when porting LLVM comes from porting these patches. If you don't run llvmlite for the sake of Numba, those patches are of course not relevant and you can use whatever llvmlite you can get your hands on. I presume this is your use-case?

Having said all of that, I think this is a good thing to start tackling in 2022. The Numba team is currently wrapping up a Numba release candidate, so all hands are occupied with that. My suggestion would be for you to come and join our public developer meeting. The announcement for 2021 is here:

https://numba.discourse.group/t/weekly-public-meeting-every-tuesday-for-2021/658

And I am assuming that this will continue in 2022 with the same link and calendar invite. It would be awesome to see you there!!

@esc
Member

esc commented Dec 3, 2021

(Current numba test failures are due to an inability to find the llvm-config utility.)

If your LLVM is installed in a nonstandard location, set the LLVM_CONFIG environment variable to the location of the corresponding llvm-config or llvm-config.exe executable. This variable must persist into the installation of llvmlite—for example, into a Python environment.

From: https://llvmlite.readthedocs.io/en/latest/admin-guide/install.html

Does that help?
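The gate that ffi/build.py applies to the output of `llvm-config --version` (the binary named by LLVM_CONFIG) can be sketched in isolation. This is a minimal sketch, not the actual build.py code: the helper name is hypothetical, and the accepted majors mirror the patch discussed in this PR.

```python
def llvm_version_ok(version: str, accepted=("11", "12", "13")) -> bool:
    """Return True if an llvm-config version string has an accepted major.

    llvm-config prints e.g. "13.0.1"; build.py only checks the prefix,
    so comparing the major component is enough for this sketch.
    """
    major = version.strip().split(".")[0]
    return major in accepted

print(llvm_version_ok("13.0.1"))  # True
print(llvm_version_ok("10.0.0"))  # False
```

If the check fails during a build, it is usually because LLVM_CONFIG points at (or the PATH resolves to) a different LLVM installation than intended.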

@esc
Member

esc commented Dec 6, 2021

We will discuss the use-case of llvmlite to support multiple LLVM versions during the developer meeting tomorrow.

@esc
Member

esc commented Dec 7, 2021

@r-barnes we discussed this today: https://github.com/numba/numba/wiki/Minutes_2021_12_07 -- it looks like I will have time to work on ways to support multiple LLVMs in January/February, hopefully.

@r-barnes
Author

r-barnes commented Dec 7, 2021

@esc: Awesome. Let me know if there are things I can do to be useful.

@esc
Member

esc commented Dec 8, 2021

@esc: Awesome. Let me know if there are things I can do to be useful.

Great, that sounds very good! Thank you in advance for your kind offer. I'll probably need some user feedback on this use-case shortly after the holidays, so I'll be reaching out to you then.

@gmarkall
Member

it looks like I will have time to work on ways to support multiple LLVMs in January/February, hopefully.

Based on this plan, I'll include this in the 0.39RC milestone for now, with the hope that we see some progress on this PR in time.

@gmarkall
Member

(Marked as "waiting on reviewer" as I believe the next step is when @esc looks into support for multiple LLVMs)

@r-barnes
Author

r-barnes commented Jan 2, 2022

Sounds good, all.

@esc
Member

esc commented Feb 20, 2022

Just a quick heads up here: the work to support Apple M1 Silicon is taking more time than expected, so I am having some delays. However, this is the next item in my queue. (My (experimental) queue is now here: https://github.com/orgs/numba/projects/4/views/1)

@r-barnes
Author

r-barnes commented Mar 3, 2022

Great, thanks!

@apmasell
Contributor

apmasell commented Apr 1, 2022

We plan to discuss supporting multiple LLVM versions on Tuesday April 5, 2022. See meeting calendar for the time and meeting video conferencing link.

@esc
Member

esc commented Apr 6, 2022

@r-barnes thank you again for bringing this up. I have passed on this issue to @apmasell and we (the community) discussed this yesterday in the developer meeting and the results can be found at:

https://github.com/numba/numba/wiki/Minutes_2022_04_05

@detrout

detrout commented Oct 24, 2022

Hi,

I was trying to compile the Debian llvmlite 0.39.1 version with the patch from here against llvm-13, but ran into a problem.

passmanagers.cpp:21:10: fatal error: llvm/IR/RemarkStreamer.h: No such file or directory
   21 | #include "llvm/IR/RemarkStreamer.h"
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

That include is wrapped in a guard:

#if LLVM_VERSION_MAJOR > 11
#include "llvm/IR/RemarkStreamer.h"
#endif
#include "llvm/IR/LLVMRemarkStreamer.h"

But RemarkStreamer.h doesn't appear to be present in the llvm-13 package, though LLVMRemarkStreamer.h does exist.

Since I'm using the released version 0.39.1, I made sure the line I'm having trouble with is also present in HEAD.
https://github.com/numba/llvmlite/blob/main/ffi/passmanagers.cpp#L20

I forced the above pull request to apply against 0.39.1 and refreshed it, dropping one change, and then added the removal of the #include "llvm/IR/RemarkStreamer.h" block to the patch.

This leaves me with the following patch to make 0.39.1 compatible with llvm-13:

From 1d928ebcd59b23b5050234a2bf71f9be7f5f6bd1 Mon Sep 17 00:00:00 2001
From: Richard Barnes <rbarnes@...>
Date: Wed, 1 Dec 2021 10:29:08 -0700
Subject: [PATCH] Enable LLVM-12 and LLVM-13

---
 ffi/build.py                   |  5 ++---
 ffi/targets.cpp                |  2 ++
 llvmlite/tests/test_binding.py | 19 ++++++++++++++++---
 3 files changed, 20 insertions(+), 6 deletions(-)

--- a/ffi/build.py
+++ b/ffi/build.py
@@ -163,9 +163,8 @@
         print(msg)
         print(warning + '\n')
     else:
-
-        if not out.startswith('11'):
-            msg = ("Building llvmlite requires LLVM 11.x.x, got "
+        if not (out.startswith('11') or out.startswith('12') or out.startswith('13')):
+            msg = ("Building llvmlite requires LLVM 11-13.x.x, got "
                    "{!r}. Be sure to set LLVM_CONFIG to the right executable "
                    "path.\nRead the documentation at "
                    "http://llvmlite.pydata.org/ for more information about "
--- a/ffi/targets.cpp
+++ b/ffi/targets.cpp
@@ -204,7 +204,9 @@
         rm = Reloc::DynamicNoPIC;
 
     TargetOptions opt;
+#if LLVM_VERSION_MAJOR < 12
     opt.PrintMachineCode = PrintMC;
+#endif
     opt.MCOptions.ABIName = ABIName;
 
     bool jit = JIT;
--- a/llvmlite/tests/test_binding.py
+++ b/llvmlite/tests/test_binding.py
@@ -18,6 +18,16 @@
 from llvmlite.tests import TestCase
 
 
+def clean_string_whitespace(x: str) -> str:
+    # Remove trailing whitespace from the end of each line
+    x = re.sub(r"\s+$", "", x, flags=re.MULTILINE)
+    # Remove intermediate blank lines
+    x = re.sub(r"\n\s*\n", r"\n", x, flags=re.MULTILINE)
+    # Remove extraneous whitespace from the beginning and end of the string
+    x = x.strip()
+    return x
+
+
 # arvm7l needs extra ABI symbols to link successfully
 if platform.machine() == 'armv7l':
     llvm.load_library_permanently('libgcc_s.so.1')
@@ -555,7 +565,10 @@
         bd = ir.IRBuilder(fn.append_basic_block(name="<>!*''#"))
         bd.ret(ir.Constant(ir.IntType(32), 12345))
         asm = str(mod)
-        self.assertEqual(asm, asm_nonalphanum_blocklabel)
+        self.assertEqual(
+            clean_string_whitespace(asm),
+            clean_string_whitespace(asm_nonalphanum_blocklabel)
+        )
 
     def test_global_context(self):
         gcontext1 = llvm.context.get_global_context()
@@ -640,7 +653,7 @@
     def test_version(self):
         major, minor, patch = llvm.llvm_version_info
         # one of these can be valid
-        valid = [(11,)]
+        valid = [(11,), (12,), (13,)]
         self.assertIn((major,), valid)
         self.assertIn(patch, range(10))
 
--- a/ffi/passmanagers.cpp
+++ b/ffi/passmanagers.cpp
@@ -17,9 +17,6 @@
 #include "llvm-c/Transforms/IPO.h"
 #include "llvm-c/Transforms/Scalar.h"
 #include "llvm/IR/LegacyPassManager.h"
-#if LLVM_VERSION_MAJOR > 11
-#include "llvm/IR/RemarkStreamer.h"
-#endif
 #include "llvm/IR/LLVMRemarkStreamer.h"
 #include "llvm/Remarks/RemarkStreamer.h"
 #include "llvm/Transforms/IPO.h"

With the above patch the debian package for 0.39.1 builds and passes all of its tests cases. (on x86_64)
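The clean_string_whitespace helper that the patch adds to test_binding.py can be exercised on its own to see why it makes the asm comparison robust across LLVM versions. This is a standalone sketch of the same function, with illustrative inputs:

```python
import re

def clean_string_whitespace(x: str) -> str:
    # Same normalization as the test helper in the patch: strip trailing
    # whitespace from each line, collapse blank lines, and trim both ends,
    # so textual IR comparisons ignore formatting drift between LLVM versions.
    x = re.sub(r"\s+$", "", x, flags=re.MULTILINE)
    x = re.sub(r"\n\s*\n", r"\n", x, flags=re.MULTILINE)
    return x.strip()

# Two renderings of the same module that differ only in whitespace:
a = 'define i32 @f() {  \n\n  ret i32 0\n}\n'
b = 'define i32 @f() {\n  ret i32 0\n}'
print(clean_string_whitespace(a) == clean_string_whitespace(b))  # True
```

Comparing normalized strings this way lets a single expected-output literal serve for LLVM 11 through 13, at the cost of no longer catching pure-whitespace regressions in the printer.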

For the real test I then built numba using the version of llvmlite 0.39.1 modified to use llvm-13. I do have some failures, though I think most of them are due to Debian's test environment being different from what the tests expect.

FAILED (failures=28, errors=1, skipped=337 expected failures=13)

One chunk of failures is problems with gdb not behaving as numba expects.
Another batch of failures looks like the caching code can't find the cached compiled files.

I've got a couple of memory-leak errors:

======================================================================
FAIL: test_argmax_axis_out_of_range (numba.tests.test_array_reductions.TestArrayReductions)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/numba/tests/support.py", line 841, in tearDown
    self.memory_leak_teardown()
  File "/usr/lib/python3/dist-packages/numba/tests/support.py", line 815, in memory_leak_teardown
    self.assert_no_memory_leak()
  File "/usr/lib/python3/dist-packages/numba/tests/support.py", line 824, in assert_no_memory_leak
    self.assertEqual(total_alloc, total_free)
AssertionError: 4 != 2

And a couple of long error messages like this one, which seem most likely to have been caused by the update to llvm-13.

======================================================================
FAIL: test_instcombine_effect (numba.tests.test_vectorization.TestVectorization)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/numba/tests/test_vectorization.py", line 72, in test_instcombine_effect
    self.assertIn("vector.body", llvm_ir)
AssertionError: 'vector.body' not found in '; ModuleID = \'TestVectorization.test_instcombine_effect.<locals>.sum_sqrt_list\' ...
[full LLVM IR module dump omitted: the generated module for sum_sqrt_list contains no "vector.body" label]
ddr\n\ndeclare void @numba_gil_release(i32*) local_unnamed_addr\n\n; Function Attrs: alwaysinline nofree nounwind re
adonly\ndeclare i64 @numba_list_size_address(i8*) local_unnamed_addr #0\n\n; Function Attrs: mustprogress nofree noi
nline norecurse nounwind willreturn\ndefine linkonce_odr void @NRT_incref(i8* %.1) local_unnamed_addr #2 {\n.3:\n  %
.4 = icmp eq i8* %.1, null\n  br i1 %.4, label %common.ret, label %.3.endif, !prof !9\n\ncommon.ret:                
                       ; preds = %.3.endif, %.3\n  ret void\n\n.3.endif:                                         ; p
reds = %.3\n  %.7 = bitcast i8* %.1 to i64*\n  %.4.i = atomicrmw add i64* %.7, i64 1 monotonic, align 8\n  br label 
%common.ret\n}\n\n; Function Attrs: noinline\ndefine linkonce_odr void @NRT_decref(i8* %.1) local_unnamed_addr #3 {\
n.3:\n  %.4 = icmp eq i8* %.1, null\n  br i1 %.4, label %common.ret1, label %.3.endif, !prof !9\n\ncommon.ret1:     
                                 ; preds = %.3, %.3.endif\n  ret void\n\n.3.endif:                                  
       ; preds = %.3\n  fence release\n  %.8 = bitcast i8* %.1 to i64*\n  %.4.i = atomicrmw sub i64* %.8, i64 1 mono
tonic, align 8\n  %.10 = icmp eq i64 %.4.i, 1\n  br i1 %.10, label %.3.endif.if, label %common.ret1, !prof !9\n\n.3.
endif.if:                                      ; preds = %.3.endif\n  fence acquire\n  tail call void @NRT_MemInfo_c
all_dtor(i8* nonnull %.1)\n  ret void\n}\n\ndeclare void @NRT_MemInfo_call_dtor(i8*) local_unnamed_addr\n\n; Functio
n Attrs: nofree nosync nounwind readnone speculatable willreturn\ndeclare i64 @llvm.smax.i64(i64, i64) #4\n\nattribu
tes #0 = { alwaysinline nofree nounwind readonly }\nattributes #1 = { mustprogress nofree nounwind readonly willretu
rn }\nattributes #2 = { mustprogress nofree noinline norecurse nounwind willreturn }\nattributes #3 = { noinline }\n
attributes #4 = { nofree nosync nounwind readnone speculatable willreturn }\nattributes #5 = { nounwind }\n\n!numba_
args_may_always_need_nrt = !{!0, !1, !1, !1}\n\n!0 = !{i32 (double*, { i8*, i32, i8* }**, i8*, i8*)* @_ZN5numba5test
s18test_vectorization17TestVectorization23test_instcombine_effect12_3clocals_3e13sum_sqrt_listB3v25B44c8tJTC_2fWgLnC
Ft0Z1eogKfVaTWBIFJXYgtKEJgA_3dE21ListType_5bfloat64_5d}\n!1 = distinct !{null}\n!2 = !{!3}\n!3 = distinct !{!3, !4, 
!"_ZN5numba5typed10listobject8impl_len12_3clocals_3e4implB3v26B62c8tJTIcFHzwl2ILiXkcBV0KBSsOcbovu9mp1kJR6rSYw_2bDAiG
KoZGEBgliYAE21ListType_5bfloat64_5d: %retptr"}\n!4 = distinct !{!4, !"_ZN5numba5typed10listobject8impl_len12_3clocal
s_3e4implB3v26B62c8tJTIcFHzwl2ILiXkcBV0KBSsOcbovu9mp1kJR6rSYw_2bDAiGKoZGEBgliYAE21ListType_5bfloat64_5d"}\n!5 = !{!6
}\n!6 = distinct !{!6, !7, !"_ZN5numba5typed10listobject8impl_len12_3clocals_3e4implB3v26B62c8tJTIcFHzwl2ILiXkcBV0KB
SsOcbovu9mp1kJR6rSYw_2bDAiGKoZGEBgliYAE21ListType_5bfloat64_5d: %retptr"}\n!7 = distinct !{!7, !"_ZN5numba5typed10li
stobject8impl_len12_3clocals_3e4implB3v26B62c8tJTIcFHzwl2ILiXkcBV0KBSsOcbovu9mp1kJR6rSYw_2bDAiGKoZGEBgliYAE21ListTyp
e_5bfloat64_5d"}\n!8 = !{!"branch_weights", i32 99, i32 1}\n!9 = !{!"branch_weights", i32 1, i32 99}\n'
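The `NRT_incref`/`NRT_decref` functions near the end of the dump implement Numba's runtime (NRT) reference counting: a monotonic atomic add on increment, and on decrement a release fence, an atomic subtract, and, when the old count was 1 (i.e. the count just hit zero), an acquire fence followed by a call to `NRT_MemInfo_call_dtor`. A minimal Python sketch of that logic (the `MemInfo` class and `dtor` callback are illustrative names, and a lock stands in for the lock-free atomics):

```python
import threading

class MemInfo:
    """Toy model of the refcount logic in NRT_incref/NRT_decref above.

    The real NRT uses lock-free atomicrmw add/sub with release/acquire
    fences; this sketch uses a lock for clarity. `dtor` stands in for
    NRT_MemInfo_call_dtor.
    """

    def __init__(self, dtor):
        self._lock = threading.Lock()
        self._refct = 1          # a fresh meminfo starts with one reference
        self._dtor = dtor

    def incref(self):
        with self._lock:
            self._refct += 1     # atomicrmw add ..., i64 1 monotonic

    def decref(self):
        with self._lock:
            old = self._refct    # atomicrmw sub returns the *old* value
            self._refct -= 1
        if old == 1:             # count just dropped to zero
            self._dtor()         # NRT_MemInfo_call_dtor

freed = []
mi = MemInfo(dtor=lambda: freed.append(True))
mi.incref()
mi.decref()   # 2 -> 1, dtor not called
mi.decref()   # 1 -> 0, dtor runs
```

Note that in the IR the zero test compares the atomicrmw result against 1, because LLVM's `atomicrmw sub` yields the value *before* the subtraction.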

@apmasell
Contributor

Yes, I'm working on these fixes in #830, but it's going rather slowly.

@esc
Member

esc commented Oct 27, 2022

@apmasell do we want to close this issue, since we don't intend to support LLVM 12 or 13?

@apmasell
Contributor

I think so. Most of this work has been superseded by the LLVM 14 changes.

@esc
Member

esc commented Oct 27, 2022

@apmasell thank you!

@detrout thank you again for asking about this, unfortunately, I am not sure we'll be able to help you out here due to resource constraints. llvmlite 0.39.1 only supports LLVM 11. llvmlite 0.40.0 will most likely support only LLVM 14 (or stay at 11). There is also some text about this on the FAQ of the llvmlite docs here:

https://llvmlite.readthedocs.io/en/latest/faqs.html

FWIW: using Numba with llvmlite REQUIRES a patched version of LLVM to pass the whole test suite and function correctly, so we discourage using this setup with a dynamically linked LLVM. Skipping the patches can lead to strange bugs that manifest on user systems and end up on our tracker, but which are actually caused by incorrect redistribution of the software:

numba/numba#8215

But of course, the terms of the OSS licence mean you have all the freedom. 🎉
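Since a given llvmlite build is tied to one specific (patched) LLVM, a first debugging step for such redistribution issues is to check which LLVM a llvmlite install was actually built against. A minimal sketch, assuming llvmlite exposes this as the `binding.llvm_version_info` tuple (the helper name itself is made up here):

```python
def bundled_llvm_version():
    """Return the LLVM version llvmlite was built against, or None
    if llvmlite is not installed. Hypothetical helper for illustration."""
    try:
        from llvmlite import binding
    except ImportError:
        return None
    # llvm_version_info is a version tuple populated at import time
    return binding.llvm_version_info

print(bundled_llvm_version())
```

Comparing this output against the version of any system LLVM is a quick way to spot a mismatched, dynamically linked setup.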

@esc esc closed this Oct 27, 2022
@esc
Member

esc commented Oct 27, 2022

@r-barnes apologies here as well; my schedule ended up changing quite a bit, and as you will have noticed, @apmasell has taken over the LLVM front and we are (hopefully) going straight to LLVM 14 (if we can make it compatible with Numba/llvmlite 🤞).
