llvmcall requires the compiler #53680

Closed
chrstphrbrns opened this issue Mar 9, 2024 · 4 comments · Fixed by #54816

Comments

@chrstphrbrns
Contributor

chrstphrbrns commented Mar 9, 2024

I've recently been seeing this error when using TiffImages.jl (dev branch) with multi-threading enabled:

julia> VERSION
v"1.12.0-DEV.149"

julia> using TiffImages

julia> @time TiffImages.load("image.tif");
...
    nested task error: `llvmcall` requires the compiler
    Stacktrace:
     [1] macro expansion
       @ ~/.julia/packages/SIMD/2fAdM/src/LLVM_intrinsics.jl:673 [inlined]
     [2] shufflevector(x::NTuple{…}, ::Val{…})
       @ SIMD.Intrinsics ~/.julia/packages/SIMD/2fAdM/src/LLVM_intrinsics.jl:664
...

julia> @time TiffImages.load("image.tif");
  0.283151 seconds (164.76 k allocations: 282.782 MiB, 17.25% gc time, 38.50% compilation time)

Sometimes it works on the first try, but it always works on the second try.

I don't see this error with v1.10.2.
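For reference, a minimal sketch of the failing pattern, assuming the trigger is concurrent first-time compilation of a SIMD.jl kernel (the `reverse4` function and task count are illustrative, not from the issue). `shufflevector` takes 0-based, LLVM-style indices and lowers to `llvmcall`, so racing tasks all hit that path on a not-yet-compiled method (run with `julia -t auto`):

    using SIMD

    # Hypothetical kernel; shufflevector lowers to llvmcall inside SIMD.jl
    reverse4(v::Vec{4,Int32}) = shufflevector(v, Val((3, 2, 1, 0)))

    v = Vec{4,Int32}((1, 2, 3, 4))
    @sync for _ in 1:Threads.nthreads()
        # First run: tasks race to compile the same method; on the affected
        # 1.12-DEV builds this is where the error above surfaced.
        Threads.@spawn reverse4(v)
    end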

@PallHaraldsson
Contributor

PallHaraldsson commented Mar 9, 2024

I tried to debug this. The package itself is pure Julia, so the problem isn't strictly there; it seemingly lies in this less-pure dependency (or at least that dependency is triggering a bug in Julia?):

https://github.com/search?q=repo%3Aeschnett%2FSIMD.jl+llvmcall&type=code

Until this is fixed, maybe you can use some workaround: an older Julia, or e.g. a package with non-pure (native) code:
JuliaIO/ImageMagick.jl#99

There's even some very old pure-Julia TIFF code (not registered; it isn't even a package):
https://github.com/rephorm/TIFF.jl

@vtjnash
Sponsor Member

vtjnash commented Mar 9, 2024

Likely one of several known regressions tracked in #53498.

@chrstphrbrns
Contributor Author

chrstphrbrns commented Mar 27, 2024

Another error with the same symptom (it fails on the first attempt when multi-threading is enabled and succeeds on subsequent attempts):

`recode` is a generated function that also uses SIMD.

   nested task error: LoadError: UndefRefError: access to undefined reference
    Stacktrace:
      [1] getindex
        @ ./essentials.jl:375 [inlined]
      [2] ht_keyindex
        @ ./dict.jl:248 [inlined]
      [3] haskey
        @ ./dict.jl:548 [inlined]
      [4] in(x::Symbol, s::Set{Symbol})
        @ Base ./set.jl:92
      [5] log_record_id(_module::Any, level::Any, message::Any, log_kws::Any)
        @ Base.CoreLogging ./logging.jl:302
      [6] process_logmsg_exs(::Any, ::Any, ::Any, ::Any, ::Any)
        @ Base.CoreLogging ./logging.jl:461
      [7] logmsg_code(::Any, ::Any, ::Any, ::Any, ::Any)
        @ Base.CoreLogging ./logging.jl:348
      [8] var"@debug"(__source__::LineNumberNode, __module__::Module, exs::Vararg{Any})
        @ Base.CoreLogging ./logging.jl:520
      [9] recode(v::AbstractVector, r::Any, c::Any, n::Val{N}) where N
        @ TiffImages ~/.julia/dev/TiffImages/src/ifds.jl:560
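The shape of the failing code, sketched under the assumption that the trigger is a `@debug` statement reached from worker threads (`process_strip` and the data are hypothetical, not TiffImages internals):

    # Hypothetical: logging reached from threaded tasks on the first run
    function process_strip(buf)
        @debug "decoding strip" length(buf)  # @debug touches shared logging state
        return sum(buf)
    end

    strips = [rand(UInt8, 64) for _ in 1:8]
    @sync for s in strips
        Threads.@spawn process_strip(s)
    end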

@vtjnash
Sponsor Member

vtjnash commented Mar 27, 2024

Using logging is currently not permitted when threads are enabled, due to data races in its implementation.

vtjnash added a commit that referenced this issue Jun 15, 2024
Continuing from previous PRs toward making CodeInstance the primary means of
tracking compilation, this introduces an "engine" which keeps track
externally of whether a particular inference result is in progress and
where. At present, this handles unexpected cycles by permitting both
threads to work on it. This is likely to be optimal most of the time
currently, until we have the ability to do work-stealing of the results.

To assist with that, CodeInstance is now primarily allocated by
`jl_engine_reserve`, which also tracks that it is currently being
inferred. This creates a sort of per-(MI,owner) tuple lock mechanism,
which can be used with the double-check pattern to see if inference was
completed while waiting on that. The `world` value is not included since
that is inferred later, so there is a possibility that a thread waits
only to discover that the result was already invalid before it could use
it (though this should be unlikely).

The process then can notify when it has finished and wants to release
the reservation lock on that identity pair. When doing so, it may also
provide source code, allowing the process to potentially begin a
threadpool to compile that result while the main thread continues with
the job of inference.

Includes fix for #53434, by ensuring SOURCE_MODE_ABI results in the item
going into the global cache.

Fixes #53433, as `inInference` is computed by the engine and
protected by a lock, which also fixes #53680.
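A minimal sketch of the double-checked reservation pattern the commit message describes, assuming a simplified all-Julia model (the real engine lives in C in the runtime, with `jl_engine_reserve` as its entry point; `DONE`, `INFLIGHT`, `reserve_or_wait`, and `finish!` are illustrative names):

    const ENGINE_LOCK = ReentrantLock()
    const DONE = IdDict{Any,Any}()                  # completed results, keyed by (mi, owner)
    const INFLIGHT = Dict{Any,Threads.Condition}()  # reservations currently being inferred

    # Reserve `key` for inference, or wait for whoever already holds it.
    function reserve_or_wait(key)
        lock(ENGINE_LOCK)
        try
            while true
                haskey(DONE, key) && return DONE[key]  # double-check: finished while we waited
                cond = get(INFLIGHT, key, nothing)
                if cond === nothing
                    INFLIGHT[key] = Threads.Condition(ENGINE_LOCK)
                    return nothing                     # caller owns the reservation; must infer
                end
                wait(cond)  # releases ENGINE_LOCK while blocked, then loops to re-check
            end
        finally
            unlock(ENGINE_LOCK)
        end
    end

    # Publish a result and release the reservation, waking any waiters.
    function finish!(key, result)
        lock(ENGINE_LOCK)
        try
            DONE[key] = result
            notify(pop!(INFLIGHT, key))  # waiters wake and re-check DONE
        finally
            unlock(ENGINE_LOCK)
        end
    end

Two threads asking for the same key either take ownership (get `nothing`, infer, then call `finish!`) or block until the owner publishes; the `while` loop re-checks after every wakeup, which is the double-check part.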