Cross target use of target-less @overload leads to lowering errors. #8530

Closed
2 tasks done
stuartarchibald opened this issue Oct 21, 2022 · 0 comments · Fixed by #8554
Labels: bug - incorrect behavior

Comments

@stuartarchibald (Contributor)

Reporting a bug

  • I have tried using the latest released version of Numba (the most recent is
    visible in the change log: https://github.com/numba/numba/blob/main/CHANGE_LOG).
  • I have included a self-contained code sample to reproduce the problem,
    i.e. it's possible to run as 'python bug.py'.

This incorrect use of @overload is a simplified version of the issue in #8529. Note that the @overload has no target= kwarg, so the CUDA target cannot find it; and because the target is merely implied as CPU (to preserve existing behaviour), the failure manifests as a lowering error, since there is no implementation of the bar function in the CUDA target context.

import numpy as np
from numba.cuda import jit
from numba.extending import overload

def bar():
    pass


# NOTE: no target= kwarg, so this overload is only registered for the
# implied CPU target.
@overload(bar)
def ol_bar(x):
    def impl(x):
        return 12
    return impl


# CUDA kernel calling the overloaded bar(); lowering fails because there is
# no implementation of bar in the CUDA target context.
@jit
def call_bar(arr):
    arr[0] = bar(3)

tmp = np.zeros((1,))
call_bar[1, 1](tmp)
print(tmp)

The error message ends with:

NotImplementedError: No definition for lowering <... Closure>(int64,) -> Literal[int](12)

which is not ideal.

If the example is updated so that the overload is declared as CPU-only via @overload(bar, target='cpu'), then the error message is more helpful:

Function resolution cannot find any matches for function '<function bar at 0x<snip>>` for the current target: '<class 'numba.core.target_extension.CUDA'>'.
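For reference, a minimal sketch of the variants above, assuming a Numba version in which numba.extending.overload accepts the target= kwarg with 'cpu' and 'cuda' values and the CUDA target supports @overload-based extensions; only the overload declaration changes relative to the reproducer:

import numpy as np
from numba.cuda import jit
from numba.extending import overload

def bar():
    pass


# Declaring the overload as CPU-only gives the clearer resolution error
# quoted above when the CUDA kernel tries to use it:
#   @overload(bar, target='cpu')
#
# Declaring it for the CUDA target (assumed available in this Numba
# version) lets the kernel find and lower an implementation:
@overload(bar, target='cuda')
def ol_bar(x):
    def impl(x):
        return 12
    return impl


@jit
def call_bar(arr):
    arr[0] = bar(3)

tmp = np.zeros((1,))
call_bar[1, 1](tmp)
print(tmp)  # expected output: [12.]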