Currently we hide the math intrinsics (e.g. `exp`, `log`, etc.) from LLVM to keep it from using the wrong libm for constant folding and in the JIT. However, especially on highly parallel architectures such as KNL, and probably on GPUs as well, LLVM can vectorize those functions, and in fact needs to if we want good performance. That of course doesn't work if LLVM doesn't know those functions exist. We should find a proper solution to the wrong-libm problem and switch back to using the intrinsics.
We don't have as many intrinsics anymore, as many have now been translated into Julia. Additionally, we now use Cassette for this ("should find a proper solution ... and switch back"). Closing in favor of #15265.