
Codegen change during 0.6 development cycle #21305

Closed
TotalVerb opened this issue Apr 6, 2017 · 4 comments
Labels
compiler:codegen (Generation of LLVM IR and native code), compiler:inference (Type inference), performance (Must go faster)

Comments

@TotalVerb
Contributor

The output of this file

eltypestuple() = Tuple{}
eltypestuple(x, xs...) = Tuple{eltype(x), eltypestuple(xs...).types...}

null_safe_op(::Any, ::Type, ::Type...) = false
Base.@pure null_safe_eltype_op(f, xs...) =
    null_safe_op(f, eltypestuple(xs...).types...)

@inline function broadcast_c(f, a...)
    if null_safe_eltype_op(f, a...)
        0
    else
        1
    end
end

f() = broadcast_c(1:10, 1:10, 1:10)

code_llvm(STDOUT, f, Tuple{})
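As a side note (not part of the original report), a quicker way to see whether the branch folds, without reading the full LLVM IR, is to inspect the typed code:

```julia
# If inference proves null_safe_eltype_op returns false here, the typed code
# for f() collapses to a constant `return 1`.
# (On Julia 1.x, use `stdout` instead of the 0.6-era `STDOUT`.)
@code_typed f()
```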

is quite long on 0.5, and still quite long on the latest 0.6.

But briefly, around when #16961 was merged, it was just

define i64 @julia_f_61750() #0 !dbg !5 {
top:
  ret i64 1
}

so something must have changed since. (The latest commit I know to have the short LLVM is 495114c, but I'll bisect to find a better bound.)

cc @JeffBezanson

@Keno
Member

Keno commented Apr 6, 2017

Yes, I've seen this too. Something regressed in inference's ability to constant fold type parameters.

@ararslan added the compiler:codegen (Generation of LLVM IR and native code) label on Apr 6, 2017
@TotalVerb
Contributor Author

TotalVerb commented Apr 6, 2017

Although I haven't finished the bisect yet, 8fee77f (#20726) is tentatively the most likely cause; the symptoms match up in this particular case. While there is no actual risk of a MethodError here, the compiler has no way to know that, so it must perform the call anyway. So I think this is probably not fixable?
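The point about method errors can be illustrated with a small sketch (not from the issue; `pure_sum`, `g`, and `h` are hypothetical names): after the change in #20726, a `@pure` call is only constant-folded when inference can prove it cannot throw a MethodError.

```julia
# Hypothetical illustration of why `@pure` no longer skips method errors.
Base.@pure pure_sum(x) = x + 1

g() = pure_sum(1)   # argument type is concrete; inference can fold the call
h(x) = pure_sum(x)  # for x::Any a MethodError is possible (e.g. h("a")),
                    # so the call must actually be executed at run time
```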

@TotalVerb
Contributor Author

Yep, the bisect confirms that

8fee77f296f5626fbcebfd87364f0ca2a3f17c87 is the first bad commit
commit 8fee77f296f5626fbcebfd87364f0ca2a3f17c87
Author: Jeff Bezanson <jeff.bezanson@gmail.com>
Date:   Tue Feb 21 15:40:09 2017 -0500

    fix #20704, `pure` annotation should not skip method errors

:040000 040000 ee47f6abb5f79d4eb4a8896eb8f5dcd22cffd8b9 ac7168b69b6c7ff5afa55c0189373d9afe36fa3c M	base
:040000 040000 ef05c2698c4935b9694e04726682b36b74f7bebb a738e9e235b557d81ebaa5d972c3ea0ab89d4e1c M	test

@JeffBezanson added the compiler:inference (Type inference) and performance (Must go faster) labels on Apr 7, 2017
@JeffBezanson
Sponsor Member

Fixed by #21310.

JeffBezanson added a commit that referenced this issue Apr 7, 2017
fix #21305, inference of splatting T.types or T.parameters
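The pattern that commit addresses is splatting a type's `.types` or `.parameters` field into another type, as `eltypestuple` in the reproduction does. A minimal sketch of the pattern (hypothetical function name, not from the thread):

```julia
# Inference must be able to compute the result type when a DataType's
# `.types` field is splatted into a new Tuple type.
widen_to_tuple(::Type{T}) where {T} = Tuple{T.types...}
```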