Version 0.47 with llvmlite 0.31 #41
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR (…).
Can you also include https://github.com/conda-forge/numba-feedstock/pull/38/files#diff-074883f7d957e193a6bcdd3c93e34b76 and rerender, so that this also solves #38?
Sure; I wasn't sure it would be picked up properly in the migration if I did.
The migration will continue some time after #38 is closed.
Force-pushed from 325b2fc to f33392d (rebase onto conda-forge-pinning 2019.12.18).
For Linux, (…). And for Windows, llvmlite doesn't support 2.7, I believe.
According to the documentation, it is optional.
It is better described here, actually.
The others pass.
Why does (…)?
Yes, but we want to have it in the tests to make sure it works if installed.
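A minimal sketch of that "test it only if installed" pattern, using `unittest.skipUnless`; the package name `tbb` is just an illustrative stand-in for whichever optional dependency is meant:

```python
import importlib.util
import unittest

# Detect the optional dependency without importing it; "tbb" is an
# illustrative name, not necessarily the actual package in question.
HAVE_OPTIONAL = importlib.util.find_spec("tbb") is not None

class TestOptionalFeature(unittest.TestCase):
    @unittest.skipUnless(HAVE_OPTIONAL, "optional package not installed")
    def test_feature_with_optional_package(self):
        # Only exercised when the optional package is actually present.
        self.assertTrue(HAVE_OPTIONAL)

    def test_always_runs(self):
        # Baseline behaviour that must work either way.
        self.assertEqual(1 + 1, 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestOptionalFeature)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

This way the suite stays green without the optional package, but exercises it whenever it happens to be installed.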
This one is odd and probably needs some more investigation...
This seems to be a failure that hadn't been caught by Numba's CI due to some unfortunate circumstances. I'll open a PR for Numba; not to fix it, but to make them aware of this.
IMHO, we can ignore this for now. If there is demand to get this working, someone can tackle this on the (…).
This comes from numba/numba@fed89e3:
I couldn't reproduce the (…).
The previous timeouts were indeed only hiccups, apparently.
Re 1.: This is an upstream issue, and possibly (probably?!) not important enough to warrant blocking Numba builds for Python 3.8 on conda-forge. Re 2.: We can either merge as is and just rerun the CI once Python 2.7 builds of (…).
The threading backend tests are compute heavy (they run many forking/threading/multiprocessing things along with the compiler, plus threaded execution, all at the same time); Numba does not run them on public CI: https://github.com/numba/numba/blob/88e3dad3d43a59c79e6295f72fc344d77f13330c/buildscripts/incremental/test.sh#L75 (…)
The fail is because a loop didn't lift and the test was asserting that a loop did; annotations should still work fine. This is the sample code, with the annotations call added; it runs fine and reports correctly:

```python
from numba import jit

def bar(x):
    return x

@jit
def foo(x):
    h = 0.
    for k in range(x):
        h = h + k
    if x:
        h = h - bar(x)
    return h

foo(10)
cres = foo.overloads[foo.signatures[0]]
ta = cres.type_annotation
with open("annotated.html", 'wt') as f:
    ta.html_annotate(f)
```

Will fix for 0.48.0.
TBB is optional; however, if you build Numba without (…).
I assume conda-forge/llvmlite-feedstock#13 is where Python 2.7 got removed, as a result of not being able to resolve these build problems: https://ci.appveyor.com/project/conda-forge/llvmlite-feedstock/builds/22085156/job/7vrdo4g30l9wwesn ? The errors there look like they could be from mixed tooling. Is conda-forge's LLVM perhaps built with VS2017 and then llvmlite with VS2015?
Yeah, I figured the CI might've had a general slowdown and/or didn't handle the concurrency well. Locally, the two reported tests ran in about (…).
Great, thanks for taking a look at it!
I have very limited knowledge on that topic. If he's available, @isuruf would know, I guess.
@henryiii, you may want to take a look at those two:
Those seem to additionally add (…).
I think we should drop Windows + 2.7 for now, then add it back if llvmlite gets a Windows 2.7 build in the future; I don't want to block this for 2.7 on Windows in 2020. We shouldn't even need to bump the build number, I believe (if that's all that changes).
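For reference, dropping a single platform/Python combination in a conda-forge recipe is typically done with a build selector in `meta.yaml`. A sketch of what that could look like (the exact placement in this feedstock's recipe may differ):

```yaml
build:
  skip: true  # [win and py27]
```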
Rebased onto conda-forge-pinning 2020.01.05.
LGTM
Does anyone else have objections?
At least one test needs to be restarted to avoid the timeout. Is there a way to increase that timeout?
The timeout seems to be 600 seconds / 10 minutes: https://github.com/numba/numba/blob/0.47.0/numba/testing/main.py#L699-L700 Edit: I apparently can't read; this is listed above.
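For context, a per-test timeout like the one in the linked runner can be sketched with a watchdog thread. This is an illustrative stand-in, not Numba's actual implementation; the function name and return convention are made up:

```python
import threading

def run_with_timeout(fn, timeout, *args, **kwargs):
    """Run fn(*args, **kwargs) in a worker thread, waiting at most `timeout`
    seconds. Returns (result, timed_out). On timeout the daemon worker keeps
    running in the background, which is why real test runners often kill the
    whole worker process instead of just abandoning the thread."""
    box = {}

    def worker():
        box["value"] = fn(*args, **kwargs)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout)
    if t.is_alive():
        return None, True  # the watchdog gave up waiting
    return box.get("value"), False

# Fast work completes well inside the timeout.
value, timed_out = run_with_timeout(lambda: sum(range(1000)), timeout=5.0)
```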
Thanks, restarted.
Sure, I'd be okay with it.
Great! Could you please add yourself to the maintainers' list in the recipe? |
Let's see what @mbargull says; then I'll add either myself or both of us. I'm especially interested in the result of his PR conda-forge/llvmlite-feedstock#26, which looks like it is building for Windows + 2.7. Edit: Looks like that works, but a CFEP would need to be written in order for it to be merged (VS2017 + Python 2.7 is not allowed). So this cannot be provided for Windows + 2.7 unless it (it == LLVM) can be built with MSVC 2008, which I assume is not likely.
Given the time the tests take on my local run, I'm really baffled that we are hitting that timeout... |
Force-pushed from dea7dde to ab9ea3e.
Note that Numba CI does not run the tests that are timing out (…).
Force-pushed from ab9ea3e to 068efe0.
There's a bug that we (Numba core devs) have internally named "Frosty Thompson", after the Docker container that first got stuck with it. Frosty Thompsons are basically cases where something goes wrong in TBB and it "gets stuck"; we've been trying to pin it down for a while. It seems to happen most often on heavily loaded systems, and always on Linux. The place it "gets stuck" is here: https://github.com/intel/tbb/blob/18070344d755ece04d169e6cc40775cae9288cee/src/tbbmalloc/backend.cpp#L301-L327; essentially, the loop invokes longer and longer pauses (and eventually ends up yielding) whilst waiting for a condition to change that never does. Needless to say, we're trying to debug and get a reproducer for this. If longer timeouts still don't "fix" the failing test case(s), I'd be very suspicious that you've hit the first case of a Frosty Thompson outside the Numba build farm. It should also be noted that the test cases in (…)
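The pattern described in that backend.cpp loop — re-checking a condition with progressively longer pauses, then yielding once the pause cap is reached — can be sketched in Python roughly like this. It is purely illustrative, not TBB's code; the names, the pause cap, and the deadline escape hatch are inventions (the real loop has no such escape hatch, which is exactly why it can get stuck when the condition never changes):

```python
import time

def backoff_wait(condition, max_pause=0.016, deadline=1.0):
    """Wait for condition() to become true, sleeping with exponentially
    growing pauses and switching to yields (sleep(0)) once the pause cap
    is hit. Returns True if the condition became true before `deadline`
    seconds elapsed, False otherwise."""
    pause = 0.0001
    start = time.monotonic()
    while not condition():
        if time.monotonic() - start > deadline:
            return False  # escape hatch for this sketch only
        if pause < max_pause:
            time.sleep(pause)  # short pause, doubled on each iteration
            pause *= 2
        else:
            time.sleep(0)      # "yield" the rest of the timeslice
    return True
```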
The way in which 3.6/3.7/3.8 are all failing, different TBB tests timing out, suggests Frosty Thompson. CC @seibert. |
I was thinking an additional 50% / 5 minutes timeout increase should really suffice in case things were only "slow". But I was also suspicious if some potential locking or contention issue may be at hand and whether we should just "Frosty Thompson" it is then :D. Thanks for your clarifications! |
I've no idea if you can shell into the build containers, but if you can, Frosty Thompson symptoms include:
conda-forge/llvmlite-feedstock#26 is being blocked and I can't tell what will come of it. |
Checklist
- Reset the build number to 0 (if the version changed)
- Re-rendered with the latest conda-smithy (use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)

Closes #39
Also closes #38 by including the commit from that PR.
Uses latest llvmlite.