
Revert "Fix CUDA with LLVM9" #606

Closed

Conversation

gmarkall (Member)

This reverts commit 3f66129.

Once numba/numba#6030 is merged, it will no longer be necessary to disable the autoupgrade of atomic intrinsics for NVPTX, because LLVM from llvmlite will not be used to optimize the IR before sending it to NVVM.
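For context, the code paths affected by this appear to be Numba CUDA kernels that perform floating-point atomic adds; it is the IR for these that the auto-upgrade rewrites. A minimal illustrative kernel (an assumption for illustration, not taken from this PR, and requiring a CUDA-capable GPU):

from numba import cuda
import numpy as np

@cuda.jit
def accumulate(out):
    # Each thread atomically adds 1.0 into out[0]; floating-point atomic
    # adds are the operations affected by the atomicrmw auto-upgrade.
    cuda.atomic.add(out, 0, 1.0)

out = np.zeros(1, dtype=np.float64)
accumulate[1, 32](out)  # launch one block of 32 threads
print(out[0])           # expected: 32.0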

stuartarchibald (Contributor) left a comment:

Thanks for the patch.

stuartarchibald (Contributor) commented:

@esc probably best to consume this into the LLVM 10 series work?

esc (Member) commented Aug 4, 2020:

Cherry 🍒 picked as: 0fc90da

esc closed this Aug 4, 2020
esc (Member) commented Aug 5, 2020:

So I ran this through the build farm last night, and received the following errors:

$ conda install -c numba/label/dev llvmlite=0.34.0dev0_llvm10_0_
...
$ python runtests.py numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics
...EEEEEEEEEE.......................................................
======================================================================
ERROR: test_atomic_add_double (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/user/git/numba/numba/cuda/tests/cudapy/test_atomics.py", line 269, in test_atomic_add_double
    cuda_func = cuda.jit('void(int64[:], float64[:])')(atomic_add_double)
  File "/home/user/git/numba/numba/cuda/decorators.py", line 136, in kernel_jit
    return Dispatcher(func, sigs, bind=bind, targetoptions=targetoptions)
  File "/home/user/git/numba/numba/cuda/compiler.py", line 804, in __init__
    self.compile(sigs[0])
  File "/home/user/git/numba/numba/cuda/compiler.py", line 921, in compile
    kernel.bind()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 573, in bind
    self._func.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 443, in get
    ptx = self.ptx.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 412, in get
    ptx = nvvm.llvm_to_ptx(self.llvmir, arch=arch,
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 507, in llvm_to_ptx
    ptx = cu.compile(**opts)
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 232, in compile
    self._try_error(err, 'Failed to compile\n')
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 250, in _try_error
    self.driver.check_error(err, "%s\n%s" % (msg, self.get_log()))
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 140, in check_error
    raise exc
numba.cuda.cudadrv.error.NvvmError: Failed to compile

<unnamed> (405, 17): parse expected binary operation in atomicrmw
NVVM_ERROR_COMPILATION

======================================================================
ERROR: test_atomic_add_double_2 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/user/git/numba/numba/cuda/tests/cudapy/test_atomics.py", line 282, in test_atomic_add_double_2
    cuda_func = cuda.jit('void(float64[:,:])')(atomic_add_double_2)
  File "/home/user/git/numba/numba/cuda/decorators.py", line 136, in kernel_jit
    return Dispatcher(func, sigs, bind=bind, targetoptions=targetoptions)
  File "/home/user/git/numba/numba/cuda/compiler.py", line 804, in __init__
    self.compile(sigs[0])
  File "/home/user/git/numba/numba/cuda/compiler.py", line 921, in compile
    kernel.bind()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 573, in bind
    self._func.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 443, in get
    ptx = self.ptx.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 412, in get
    ptx = nvvm.llvm_to_ptx(self.llvmir, arch=arch,
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 507, in llvm_to_ptx
    ptx = cu.compile(**opts)
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 232, in compile
    self._try_error(err, 'Failed to compile\n')
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 250, in _try_error
    self.driver.check_error(err, "%s\n%s" % (msg, self.get_log()))
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 140, in check_error
    raise exc
numba.cuda.cudadrv.error.NvvmError: Failed to compile

<unnamed> (470, 17): parse expected binary operation in atomicrmw
NVVM_ERROR_COMPILATION

======================================================================
ERROR: test_atomic_add_double_3 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/user/git/numba/numba/cuda/tests/cudapy/test_atomics.py", line 290, in test_atomic_add_double_3
    cuda_func = cuda.jit('void(float64[:,:])')(atomic_add_double_3)
  File "/home/user/git/numba/numba/cuda/decorators.py", line 136, in kernel_jit
    return Dispatcher(func, sigs, bind=bind, targetoptions=targetoptions)
  File "/home/user/git/numba/numba/cuda/compiler.py", line 804, in __init__
    self.compile(sigs[0])
  File "/home/user/git/numba/numba/cuda/compiler.py", line 921, in compile
    kernel.bind()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 573, in bind
    self._func.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 443, in get
    ptx = self.ptx.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 412, in get
    ptx = nvvm.llvm_to_ptx(self.llvmir, arch=arch,
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 507, in llvm_to_ptx
    ptx = cu.compile(**opts)
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 232, in compile
    self._try_error(err, 'Failed to compile\n')
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 250, in _try_error
    self.driver.check_error(err, "%s\n%s" % (msg, self.get_log()))
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 140, in check_error
    raise exc
numba.cuda.cudadrv.error.NvvmError: Failed to compile

<unnamed> (483, 17): parse expected binary operation in atomicrmw
NVVM_ERROR_COMPILATION

======================================================================
ERROR: test_atomic_add_double_global (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/user/git/numba/numba/cuda/tests/cudapy/test_atomics.py", line 300, in test_atomic_add_double_global
    cuda_func = cuda.jit('void(int64[:], float64[:])')(atomic_add_double_global)
  File "/home/user/git/numba/numba/cuda/decorators.py", line 136, in kernel_jit
    return Dispatcher(func, sigs, bind=bind, targetoptions=targetoptions)
  File "/home/user/git/numba/numba/cuda/compiler.py", line 804, in __init__
    self.compile(sigs[0])
  File "/home/user/git/numba/numba/cuda/compiler.py", line 921, in compile
    kernel.bind()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 573, in bind
    self._func.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 443, in get
    ptx = self.ptx.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 412, in get
    ptx = nvvm.llvm_to_ptx(self.llvmir, arch=arch,
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 507, in llvm_to_ptx
    ptx = cu.compile(**opts)
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 232, in compile
    self._try_error(err, 'Failed to compile\n')
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 250, in _try_error
    self.driver.check_error(err, "%s\n%s" % (msg, self.get_log()))
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 140, in check_error
    raise exc
numba.cuda.cudadrv.error.NvvmError: Failed to compile

<unnamed> (265, 17): parse expected binary operation in atomicrmw
NVVM_ERROR_COMPILATION

======================================================================
ERROR: test_atomic_add_double_global_2 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/user/git/numba/numba/cuda/tests/cudapy/test_atomics.py", line 313, in test_atomic_add_double_global_2
    cuda_func = cuda.jit('void(float64[:,:])')(atomic_add_double_global_2)
  File "/home/user/git/numba/numba/cuda/decorators.py", line 136, in kernel_jit
    return Dispatcher(func, sigs, bind=bind, targetoptions=targetoptions)
  File "/home/user/git/numba/numba/cuda/compiler.py", line 804, in __init__
    self.compile(sigs[0])
  File "/home/user/git/numba/numba/cuda/compiler.py", line 921, in compile
    kernel.bind()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 573, in bind
    self._func.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 443, in get
    ptx = self.ptx.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 412, in get
    ptx = nvvm.llvm_to_ptx(self.llvmir, arch=arch,
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 507, in llvm_to_ptx
    ptx = cu.compile(**opts)
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 232, in compile
    self._try_error(err, 'Failed to compile\n')
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 250, in _try_error
    self.driver.check_error(err, "%s\n%s" % (msg, self.get_log()))
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 140, in check_error
    raise exc
numba.cuda.cudadrv.error.NvvmError: Failed to compile

<unnamed> (209, 17): parse expected binary operation in atomicrmw
NVVM_ERROR_COMPILATION

======================================================================
ERROR: test_atomic_add_double_global_3 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/user/git/numba/numba/cuda/tests/cudapy/test_atomics.py", line 321, in test_atomic_add_double_global_3
    cuda_func = cuda.jit('void(float64[:,:])')(atomic_add_double_global_3)
  File "/home/user/git/numba/numba/cuda/decorators.py", line 136, in kernel_jit
    return Dispatcher(func, sigs, bind=bind, targetoptions=targetoptions)
  File "/home/user/git/numba/numba/cuda/compiler.py", line 804, in __init__
    self.compile(sigs[0])
  File "/home/user/git/numba/numba/cuda/compiler.py", line 921, in compile
    kernel.bind()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 573, in bind
    self._func.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 443, in get
    ptx = self.ptx.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 412, in get
    ptx = nvvm.llvm_to_ptx(self.llvmir, arch=arch,
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 507, in llvm_to_ptx
    ptx = cu.compile(**opts)
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 232, in compile
    self._try_error(err, 'Failed to compile\n')
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 250, in _try_error
    self.driver.check_error(err, "%s\n%s" % (msg, self.get_log()))
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 140, in check_error
    raise exc
numba.cuda.cudadrv.error.NvvmError: Failed to compile

<unnamed> (222, 17): parse expected binary operation in atomicrmw
NVVM_ERROR_COMPILATION

======================================================================
ERROR: test_atomic_add_float (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/user/git/numba/numba/cuda/tests/cudapy/test_atomics.py", line 225, in test_atomic_add_float
    cuda_atomic_add_float = cuda.jit('void(float32[:])')(atomic_add_float)
  File "/home/user/git/numba/numba/cuda/decorators.py", line 136, in kernel_jit
    return Dispatcher(func, sigs, bind=bind, targetoptions=targetoptions)
  File "/home/user/git/numba/numba/cuda/compiler.py", line 804, in __init__
    self.compile(sigs[0])
  File "/home/user/git/numba/numba/cuda/compiler.py", line 921, in compile
    kernel.bind()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 573, in bind
    self._func.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 443, in get
    ptx = self.ptx.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 412, in get
    ptx = nvvm.llvm_to_ptx(self.llvmir, arch=arch,
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 507, in llvm_to_ptx
    ptx = cu.compile(**opts)
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 232, in compile
    self._try_error(err, 'Failed to compile\n')
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 250, in _try_error
    self.driver.check_error(err, "%s\n%s" % (msg, self.get_log()))
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 140, in check_error
    raise exc
numba.cuda.cudadrv.error.NvvmError: Failed to compile

<unnamed> (383, 17): parse expected binary operation in atomicrmw
NVVM_ERROR_COMPILATION

======================================================================
ERROR: test_atomic_add_float_2 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/user/git/numba/numba/cuda/tests/cudapy/test_atomics.py", line 237, in test_atomic_add_float_2
    cuda_atomic_add2 = cuda.jit('void(float32[:,:])')(atomic_add_float_2)
  File "/home/user/git/numba/numba/cuda/decorators.py", line 136, in kernel_jit
    return Dispatcher(func, sigs, bind=bind, targetoptions=targetoptions)
  File "/home/user/git/numba/numba/cuda/compiler.py", line 804, in __init__
    self.compile(sigs[0])
  File "/home/user/git/numba/numba/cuda/compiler.py", line 921, in compile
    kernel.bind()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 573, in bind
    self._func.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 443, in get
    ptx = self.ptx.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 412, in get
    ptx = nvvm.llvm_to_ptx(self.llvmir, arch=arch,
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 507, in llvm_to_ptx
    ptx = cu.compile(**opts)
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 232, in compile
    self._try_error(err, 'Failed to compile\n')
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 250, in _try_error
    self.driver.check_error(err, "%s\n%s" % (msg, self.get_log()))
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 140, in check_error
    raise exc
numba.cuda.cudadrv.error.NvvmError: Failed to compile

<unnamed> (470, 17): parse expected binary operation in atomicrmw
NVVM_ERROR_COMPILATION

======================================================================
ERROR: test_atomic_add_float_3 (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/user/git/numba/numba/cuda/tests/cudapy/test_atomics.py", line 244, in test_atomic_add_float_3
    cuda_atomic_add3 = cuda.jit('void(float32[:,:])')(atomic_add_float_3)
  File "/home/user/git/numba/numba/cuda/decorators.py", line 136, in kernel_jit
    return Dispatcher(func, sigs, bind=bind, targetoptions=targetoptions)
  File "/home/user/git/numba/numba/cuda/compiler.py", line 804, in __init__
    self.compile(sigs[0])
  File "/home/user/git/numba/numba/cuda/compiler.py", line 921, in compile
    kernel.bind()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 573, in bind
    self._func.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 443, in get
    ptx = self.ptx.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 412, in get
    ptx = nvvm.llvm_to_ptx(self.llvmir, arch=arch,
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 507, in llvm_to_ptx
    ptx = cu.compile(**opts)
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 232, in compile
    self._try_error(err, 'Failed to compile\n')
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 250, in _try_error
    self.driver.check_error(err, "%s\n%s" % (msg, self.get_log()))
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 140, in check_error
    raise exc
numba.cuda.cudadrv.error.NvvmError: Failed to compile

<unnamed> (483, 17): parse expected binary operation in atomicrmw
NVVM_ERROR_COMPILATION

======================================================================
ERROR: test_atomic_add_returns_old (numba.cuda.tests.cudapy.test_atomics.TestCudaAtomics)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/user/git/numba/numba/cuda/tests/cudapy/test_atomics.py", line 528, in test_atomic_add_returns_old
    self._test_atomic_returns_old(kernel, 10)
  File "/home/user/git/numba/numba/cuda/tests/cudapy/test_atomics.py", line 517, in _test_atomic_returns_old
    kernel[1, 1](x)
  File "/home/user/git/numba/numba/cuda/compiler.py", line 766, in __call__
    return self.dispatcher.call(args, self.griddim, self.blockdim,
  File "/home/user/git/numba/numba/cuda/compiler.py", line 857, in call
    kernel = self.compile(argtypes)
  File "/home/user/git/numba/numba/cuda/compiler.py", line 921, in compile
    kernel.bind()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 573, in bind
    self._func.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 443, in get
    ptx = self.ptx.get()
  File "/home/user/git/numba/numba/cuda/compiler.py", line 412, in get
    ptx = nvvm.llvm_to_ptx(self.llvmir, arch=arch,
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 507, in llvm_to_ptx
    ptx = cu.compile(**opts)
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 232, in compile
    self._try_error(err, 'Failed to compile\n')
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 250, in _try_error
    self.driver.check_error(err, "%s\n%s" % (msg, self.get_log()))
  File "/home/user/git/numba/numba/cuda/cudadrv/nvvm.py", line 140, in check_error
    raise exc
numba.cuda.cudadrv.error.NvvmError: Failed to compile

<unnamed> (123, 17): parse expected binary operation in atomicrmw
NVVM_ERROR_COMPILATION

----------------------------------------------------------------------
Ran 68 tests in 3.552s

FAILED (errors=10)
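All ten failures are floating-point atomic add tests, which is consistent with NVVM (built on an older LLVM) rejecting the atomicrmw fadd form that newer LLVM versions auto-upgrade the llvm.nvvm atomic-add intrinsics into. A hedged sketch of the mismatch using llvmlite's binding layer (the IR below is illustrative, not the IR Numba actually generates):

import llvmlite.binding as llvm

llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()

ir = """
define double @fadd_atomic(double* %p, double %v) {
entry:
  ; LLVM 9+ accepts 'atomicrmw fadd'; NVVM's older parser expects only the
  ; classic integer binary operations here, hence the parse error above.
  %old = atomicrmw fadd double* %p, double %v monotonic
  ret double %old
}
"""
mod = llvm.parse_assembly(ir)  # parses fine with llvmlite built against LLVM >= 9
print(mod)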

esc reopened this Aug 5, 2020
gmarkall (Member, Author) commented Aug 5, 2020:

That looks like Numba without numba/numba#6030 merged - could that be what happened?

esc (Member) commented Aug 5, 2020:

This happens across all {Linux, Windows} x {CUDA 8, 9, 10} x {Python 3.6, 3.8} test configurations.

esc (Member) commented Aug 5, 2020:

I believe the branch I tested this on contains #6030:

$ git --no-pager show fa1d922a48
commit fa1d922a48892e17a8076d33523b3ac70b6c9d6b
Merge: 1c77b13b58 76fc4643c1
Author: Siu Kwan Lam <xxx>
Date:   Thu Jul 30 14:51:30 2020 -0500

    Merge pull request #6030 from gmarkall/grm-disable-o1-for-nvvm

    CUDA: Don't optimize IR before sending it to NVVM

$ git branch --contains fa1d922a48
* 5969
  master

stuartarchibald (Contributor) commented:

Discussion here: https://gitter.im/numba/numba-dev?at=5f2a7a78a8636a6f1690eb68. The accepted solution is to "trick" the auto-upgrader into not upgrading; @gmarkall is producing a patch to this effect.
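A purely hypothetical illustration of that idea, not necessarily what the actual Numba patch does: emit the NVVM atomic intrinsic under a placeholder name that LLVM's auto-upgrade machinery does not match, then restore the real intrinsic name in the textual IR just before handing it to NVVM. Both names below are assumptions for illustration only:

# Hypothetical sketch only; the placeholder name and the rewrite step are
# illustrative assumptions, not the implementation in the actual patch.
PLACEHOLDER = "numba_nvvm_atomic_load_add_f64"      # spelling AutoUpgrade ignores
NVVM_NAME = "llvm.nvvm.atomic.load.add.f64.p0f64"   # spelling NVVM understands

def restore_intrinsic_name(llvm_ir: str) -> str:
    # The IR reaches NVVM as text, so a plain string substitution restores
    # the intrinsic spelling after llvmlite has finished with the module and
    # therefore never had a chance to auto-upgrade the call.
    return llvm_ir.replace(PLACEHOLDER, NVVM_NAME)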

esc (Member) commented Aug 5, 2020:

#6080 fixed it.

esc closed this Aug 5, 2020