
Cartesianmap type error #129

Closed
ehsantn opened this issue Nov 19, 2016 · 8 comments

ehsantn (Contributor) commented Nov 19, 2016

I have two versions of the same data-parallel program (below). Both crash in DomainIR, probably because types are not being inferred properly. Any suggestions?

@acc function score2()
    samples = Float64[i for i in 1:6]
    points = [-1.0, 2.0, 5.0]
    b = 3.0
    N = size(points,1)
    dis::Vector{Vector{Float64}} = [ -(x-points).^2/(2*b.^2) for x in samples]
    exps::Vector{Float64} = [ minimum(d)-log(b*N)+log(sum(exp(d-minimum(d)))) for d in dis]
    return sum(exps)
end
@acc function score2()
    samples = Float64[i for i in 1:6]
    points = [-1.0, 2.0, 5.0]
    b = 3.0
    N = size(points,1)
    exps = 0.0
    @par exps(+) for x in samples
        d = -(x-points).^2/(2*b.^2)
        exps += minimum(d)-log(b*N)+log(sum(exp(d-minimum(d))))
    end
    return exps
end
ninegua (Contributor) commented Nov 21, 2016

Ranging over an array is not supported in comprehensions or @par for loops; see #114. The difficulty in supporting this kind of syntax is exactly the type inference issue you are hitting here, i.e., x is given an Any type.

I suggest you rewrite the above using integer ranges. For example:

@acc function score2()
    samples = Float64[i for i in 1:6]
    points = [-1.0, 2.0, 5.0]
    b = 3.0
    N = size(points,1)
    dis::Vector{Vector{Float64}} = [ -(samples[i].-points).^2./(2*b^2) for i in 1:length(samples)]
    exps::Vector{Float64} = [ minimum(dis[i])-log(b*N)+log(sum(exp(dis[i]-minimum(dis[i])))) for i in 1:length(dis)]
    return sum(exps)
end

or

@acc function score2()
    samples = Float64[i for i in 1:6]
    points = [-1.0, 2.0, 5.0]
    b = 3.0
    N = size(points,1)
    exps = 0.0
    @par exps(+) for i in 1:length(samples)
        x = samples[i]
        d = -(x-points).^2./(2*b^2)
        exps += minimum(d)-log(b*N)+log(sum(exp(d-minimum(d))))
    end
    return exps
end

Or, if we still prefer to avoid indexing, we can write it using map. There is a recently introduced convert issue when the map function deals with nested arrays; I'm fixing it.

ninegua (Contributor) commented Nov 21, 2016

The above program can also be written using map:

@acc function score2()
    samples = Float64[i for i in 1:6]
    points = [-1.0, 2.0, 5.0]
    b = 3.0
    N = size(points,1)
    exps::Vector{Float64} = map(samples) do x
        d = -(x.-points).^2./(2*b^2)
        minimum(d)-log(b*N)+log(sum(exp(d-minimum(d))))
    end
    return sum(exps)
end

This works for Julia 0.5. Theoretically we could translate comprehensions to map instead of cartesianmap. The challenge, again, is to get around type inference limitations and have Julia figure out concrete types for all variables before the AST gets passed to ParallelAccelerator.
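
As a rough illustration of the idea in plain Julia (not ParallelAccelerator's actual DomainIR rewrite): a comprehension over an array corresponds to a map with an anonymous function, where the element type of the result comes from inferring the closure's return type.

xs = [1.0, 2.0, 3.0]

# A comprehension over an array...
ys = [x^2 + 1 for x in xs]

# ...corresponds to a map with an anonymous function; the element type
# of ys2 is derived from the closure's inferred return type.
ys2 = map(x -> x^2 + 1, xs)

@assert ys == ys2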

Since convert works on Julia 0.5, I'm not going to spend time fixing it for 0.4 unless you really need it.

ehsantn (Author) commented Nov 21, 2016

Thanks, Paul. The map version works, which gets me going.

The @par version still fails, though; that might be worth fixing.

ehsantn closed this as completed Nov 21, 2016
ninegua (Contributor) commented Nov 21, 2016

How come? The @par version works for me.

ehsantn (Author) commented Nov 21, 2016

I copied and pasted your @par code. This is the error with the latest ParallelAccelerator and Julia 0.5:

ERROR: LoadError: "Could not determine type for arg 2 to call .ParallelAccelerator.API.+ with name _60"
 in from_call(::Array{Any,1}, ::CompilerTools.LambdaHandling.LambdaVarInfo) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/cgen.jl:1737
 in from_expr(::Expr, ::CompilerTools.LambdaHandling.LambdaVarInfo) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/cgen.jl:2309
 in from_assignment(::Array{Any,1}, ::CompilerTools.LambdaHandling.LambdaVarInfo) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/cgen.jl:694
 in from_expr(::Expr, ::CompilerTools.LambdaHandling.LambdaVarInfo) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/cgen.jl:2299
 in from_exprs(::Array{Any,1}, ::CompilerTools.LambdaHandling.LambdaVarInfo) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/cgen.jl:588
 in from_reductionFunc(::ParallelAccelerator.ParallelIR.DelayedFunc, ::TypedSlot, ::TypedSlot, ::CompilerTools.LambdaHandling.LambdaVarInfo) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/cgen.jl:1984
 in from_parforend(::Array{Any,1}, ::CompilerTools.LambdaHandling.LambdaVarInfo) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/cgen.jl:1929
 in from_expr(::Expr, ::CompilerTools.LambdaHandling.LambdaVarInfo) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/cgen.jl:2338
 in from_exprs(::Array{Any,1}, ::CompilerTools.LambdaHandling.LambdaVarInfo) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/cgen.jl:588
 in from_expr(::Expr, ::CompilerTools.LambdaHandling.LambdaVarInfo) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/cgen.jl:2287
 in from_lambda(::CompilerTools.LambdaHandling.LambdaVarInfo, ::Expr) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/cgen.jl:570
 in from_root_entry(::Tuple{CompilerTools.LambdaHandling.LambdaVarInfo,Expr}, ::String, ::Tuple{}, ::Dict{DataType,Int64}) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/cgen.jl:2916
 in toCGen(::GlobalRef, ::Tuple{CompilerTools.LambdaHandling.LambdaVarInfo,Expr}, ::Tuple{}) at /home/etotoni/.julia/v0.5/ParallelAccelerator/src/driver.jl:267
 in processFuncCall(::Any, ::Any, ::Any) at /home/etotoni/.julia/v0.5/CompilerTools/src/OptFramework.jl:463
 in score2() at /home/etotoni/.julia/v0.5/CompilerTools/src/OptFramework.jl:578
 in include_from_node1(::String) at ./loading.jl:488
 in process_options(::Base.JLOptions) at ./client.jl:262
 in _start() at ./client.jl:318
while loading /home/etotoni/tmp_code/kernelscore4.jl, in expression starting on line 17

ninegua (Contributor) commented Nov 21, 2016

I only tried that one in Julia 0.4. I'm seeing the same error on 0.5; I will investigate.

ninegua (Contributor) commented Nov 21, 2016

Annotating the type of exps, i.e. writing exps::Float64 = 0.0, does the trick to get past 0.5.

This kind of extra annotation is annoying, but a major departure from 0.4 is that variables captured by closures are given an opaque boxed type. I filed an issue about it (JuliaLang/julia#16431); the decision in the end was just to go with explicit annotation.
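
For reference, a sketch of the @par version with that annotation applied, assuming the annotation is the only change needed:

@acc function score2()
    samples = Float64[i for i in 1:6]
    points = [-1.0, 2.0, 5.0]
    b = 3.0
    N = size(points,1)
    # Explicit annotation so the reduction variable is not given a boxed type:
    exps::Float64 = 0.0
    @par exps(+) for i in 1:length(samples)
        x = samples[i]
        d = -(x-points).^2./(2*b^2)
        exps += minimum(d)-log(b*N)+log(sum(exp(d-minimum(d))))
    end
    return exps
end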

ehsantn (Author) commented Nov 21, 2016

Ok, makes sense. Thanks.
