LLVM Crash in SelectionDAG #36955
Closed
Automated reduction from the included rr trace:
Proposed upstream fix: https://reviews.llvm.org/D85499
Keno added a commit that referenced this issue on Aug 7, 2020
This is an LLVM bug. See upstream discussion at https://reviews.llvm.org/D85499.
Keno added a commit that referenced this issue on Aug 7, 2020
Fixed upstream, but we still need to bump it here.
Keno added a commit that referenced this issue on Aug 8, 2020
KristofferC added a commit that referenced this issue on Aug 10, 2020 (merged)
simeonschaub added a commit to simeonschaub/julia that referenced this issue on Aug 11, 2020
KristofferC added a commit that referenced this issue on Aug 18, 2020
KristofferC added a commit that referenced this issue on Aug 19, 2020
JohnHolmesII pushed a commit to JohnHolmesII/llvm-project that referenced this issue on Oct 12, 2020
In D85499, I attempted to fix this same issue by canonicalizing andnp for i1 vectors, but since there was some opposition to such a change, this commit just fixes the bug by using two different forms depending on which kind of vector type is in use. We can then always decide to switch the canonical forms later.

Description of the original bug: We have a DAG combine that tries to fold (vselect cond, 0000..., X) -> (andnp cond, x). However, it does so by attempting to create an i64 vector whose element count is the truncating division of the bitwidth by 64. This is bad for mask vectors like v8i1, since that division is just zero. Besides, we don't want i64 vectors anyway. For i1 vectors, switch the pattern to (andnp (not cond), x), which is the canonical form for `kandn` on mask registers.

Fixes JuliaLang/julia#36955.

Differential Revision: https://reviews.llvm.org/D85553
github-actions bot pushed a commit to tstellar/llvm-project that referenced this issue on Nov 24, 2020
(cherry picked from commit c58674d)
github-actions bot pushed a commit to tstellar/llvm-project that referenced this issue on Nov 24, 2020
(cherry picked from commit c58674d)
tstellar added a commit to tstellar/llvm-project that referenced this issue on Nov 25, 2020
(cherry picked from commit c58674d)
arichardson added a commit to arichardson/llvm-project that referenced this issue on Mar 22, 2021
Over in PumasAI/PumasTutorials.jl#56, @chriselrod reports an LLVM crash in SelectionDAG with the following backtrace: