Remove macro expansion and replace uses with FE typing + BE lowering #5465

Merged · 25 commits · Sep 7, 2020

Commits
56c0702
[WIP] Typing of threadIdx seems to work a bit but lowering does not
gmarkall Mar 30, 2020
f7e86f9
Typing works, closer to what is needed, but lowering is a problem bec…
gmarkall Mar 30, 2020
df8d5d8
Working implementation of threadIdx.x w/o macro
gmarkall Mar 30, 2020
924851a
Implement tid.y and tid.z
gmarkall Mar 30, 2020
de56e63
Make dim3 signed
gmarkall Mar 30, 2020
c1d1d63
Implement blockIdx, blockDim and gridDim without macros
gmarkall Mar 30, 2020
f959857
Implement laneid and warpsize without macro
gmarkall Mar 30, 2020
2b44afc
Implement grid without a macro
gmarkall Mar 30, 2020
5809893
Implement gridsize without macro
gmarkall Mar 30, 2020
6d5ace6
Fix #5408: test_set_registers_57 fails on Maxwell
gmarkall Mar 26, 2020
7d5e9c9
Implement shared without macro
gmarkall Mar 31, 2020
b490ffc
Implement local array without macro
gmarkall Mar 31, 2020
194f1c9
Implement const.array_like without macros
gmarkall Mar 31, 2020
5c5364a
Rename TestMacro to TestSharedMemoryCreation
gmarkall Mar 31, 2020
9b04dec
Blind attempt to replace macro in HSA backend
gmarkall Mar 31, 2020
c390252
Remove macro expansion pass and associated types
gmarkall Mar 31, 2020
dd488af
Merge remote-tracking branch 'numba/master' into grm-remove-macros
gmarkall Jun 4, 2020
b79c191
Pull some versions of files from master that were missed in the merge
gmarkall Jun 4, 2020
a2f14ed
Remove import of Macro from CUDA stubs
gmarkall Jun 4, 2020
b860e5c
Use parse_shape and parse_dtype in HSA, only accept literals for shme…
gmarkall Jun 4, 2020
aebfdc7
Attempt to remove ir.Intrinsic
gmarkall Jun 4, 2020
a9a3246
Attempt to fix lowering of roc.shared.array
gmarkall Sep 2, 2020
98f987a
Add missing import
gmarkall Sep 2, 2020
6afafcf
Update numba/cuda/tests/cudapy/test_lang.py
gmarkall Sep 2, 2020
5daa538
Update skip message for test on simulator
gmarkall Sep 2, 2020
59 changes: 12 additions & 47 deletions docs/source/developer/architecture.rst
@@ -244,43 +244,9 @@ otherwise, the instruction is a no-op.
In :term:`object mode` each variable contains an owned reference to a PyObject.


Stage 3: Macro expansion
------------------------

Now that the function has been translated into the Numba IR, macro expansion can
be performed. Macro expansion converts specific attributes that are known to
Numba into IR nodes representing function calls. This is initiated in the
``numba.compiler.translate_stage`` function, and is implemented in
``numba.macro``.

Examples of attributes that are macro-expanded include the CUDA intrinsics for
grid, block and thread dimensions and indices. For example, the assignment to
``tx`` in the following function::

@cuda.jit(argtypes=[f4[:]])
def f(a):
tx = cuda.threadIdx.x

has the following representation after translation to Numba IR::

$0.1 = global(cuda: <module 'numba.cuda' from '...'>) ['$0.1']
$0.2 = getattr(value=$0.1, attr=threadIdx) ['$0.1', '$0.2']
del $0.1 []
$0.3 = getattr(value=$0.2, attr=x) ['$0.2', '$0.3']
del $0.2 []
tx = $0.3 ['$0.3', 'tx']

After macro expansion, the ``$0.3 = getattr(value=$0.2, attr=x)`` IR node is
translated into::

$0.3 = call tid.x(, ) ['$0.3']

which represents an instance of the ``Intrinsic`` IR node for calling the
``tid.x`` intrinsic function.
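
For comparison with the approach this PR moves to, a minimal sketch (not code
from this PR; the ``hw_thread_id`` name and its constant return value are made
up) of how a call-style intrinsic can pair frontend typing with backend
lowering using the public ``numba.extending.intrinsic`` API::

    from numba import njit, types
    from numba.extending import intrinsic
    from llvmlite import ir as llvmir

    @intrinsic
    def hw_thread_id(typingctx):
        # Frontend typing: the call takes no arguments and returns an int32.
        sig = types.int32()

        # Backend lowering: emit LLVM IR at the call site.
        def codegen(context, builder, signature, args):
            # Stand-in value; a GPU target would instead call a special-register
            # intrinsic such as llvm.nvvm.read.ptx.sreg.tid.x here.
            return llvmir.Constant(llvmir.IntType(32), 0)

        return sig, codegen

    @njit
    def kernel_like():
        return hw_thread_id()

    print(kernel_like())  # -> 0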

.. _`rewrite-untyped-ir`:

Stage 4: Rewrite untyped IR
Stage 3: Rewrite untyped IR
---------------------------

Before running type inference, it may be desired to run certain
@@ -316,15 +282,14 @@ you'll see the IR being rewritten before the type inference phase::

.. _arch_type_inference:

Stage 5: Infer types
Stage 4: Infer types
--------------------

Now that the Numba IR has been generated and macro-expanded, type analysis
can be performed. The types of the function arguments can be taken either
from the explicit function signature given in the ``@jit`` decorator
(such as ``@jit('float64(float64, float64)')``), or they can be taken from
the types of the actual function arguments if compilation is happening
when the function is first called.
Now that the Numba IR has been generated, type analysis can be performed. The
types of the function arguments can be taken either from the explicit function
signature given in the ``@jit`` decorator (such as ``@jit('float64(float64,
float64)')``), or they can be taken from the types of the actual function
arguments if compilation is happening when the function is first called.
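
For example, both of the following compile the same body; only the source of
the argument types differs (an illustrative sketch, with made-up function
names)::

    from numba import jit

    # Eager compilation: argument types come from the explicit signature.
    @jit("float64(float64, float64)", nopython=True)
    def add_eager(a, b):
        return a + b

    # Lazy compilation: argument types are taken from the values at first call.
    @jit(nopython=True)
    def add_lazy(a, b):
        return a + b

    add_lazy(1.0, 2.0)  # a (float64, float64) specialization is compiled here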

The type inference engine is found in ``numba.typeinfer``. Its job is to
assign a type to every intermediate variable in the Numba IR. The result of
@@ -365,7 +330,7 @@ types, language features, or functions are used in the function body.

.. _`rewrite-typed-ir`:

Stage 6a: Rewrite typed IR
Stage 5a: Rewrite typed IR
--------------------------

This pass's purpose is to perform any high-level optimizations that still
@@ -433,7 +398,7 @@ allocates a single result array.

.. _`parallel-accelerator`:

Stage 6b: Perform Automatic Parallelization
Stage 5b: Perform Automatic Parallelization
-------------------------------------------

This pass is only performed if the ``parallel`` option in the :func:`~numba.jit`
@@ -673,7 +638,7 @@ described in more detail in the following paragraphs.

.. _`lowering`:

Stage 7a: Generate nopython LLVM IR
Stage 6a: Generate nopython LLVM IR
-----------------------------------

If type inference succeeds in finding a Numba type for every intermediate
@@ -827,7 +792,7 @@ can be seen below::
______________________________________________________________________


Stage 7b: Generate object mode LLVM IR
Stage 6b: Generate object mode LLVM IR
--------------------------------------

If type inference fails to find Numba types for all values inside a function,
@@ -915,7 +880,7 @@ function. Loop-lifting helps improve the performance of functions that
need to access uncompilable code (such as I/O or plotting code) but still
contain a time-intensive section of compilable code.
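
A minimal sketch of a loop-lifting candidate (illustrative only; the ``Logger``
class and the input values are made up)::

    import numpy as np
    from numba import jit

    class Logger:                        # plain Python object, not nopython-compilable
        def record(self, msg):
            print(msg)

    # Object-mode compilation; with loop-lifting (enabled by default) Numba can
    # extract the hot loop into a separately compiled nopython function.
    @jit(forceobj=True)
    def sum_of_squares(values, logger):
        logger.record("starting")        # needs object mode
        total = 0.0
        for v in values:                 # candidate for loop-lifting
            total += v * v
        return total

    print(sum_of_squares(np.array([1.0, 2.0, 3.0]), Logger()))  # -> 14.0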

Stage 8: Compile LLVM IR to machine code
Stage 7: Compile LLVM IR to machine code
----------------------------------------

In both :term:`object mode` and :term:`nopython mode`, the generated LLVM IR
2 changes: 0 additions & 2 deletions docs/source/developer/repomap.rst
@@ -134,8 +134,6 @@ Compiler Pipeline
print nodes in the IR
- :ghfile:`numba/core/rewrites/static_raise.py` - Converts exceptions with
static arguments into a special form that can be lowered
- :ghfile:`numba/core/rewrites/macros.py` - Generic support for macro expansion
in the Numba IR
- :ghfile:`numba/core/rewrites/static_getitem.py` - Rewrites getitem and setitem
with constant arguments to allow type inference
- :ghfile:`numba/core/rewrites/static_binop.py` - Rewrites binary operations
1 change: 0 additions & 1 deletion numba/core/datamodel/models.py
@@ -302,7 +302,6 @@ def __init__(self, dmm, fe_type):
@register_default(types.Dummy)
@register_default(types.ExceptionInstance)
@register_default(types.ExternalFunction)
@register_default(types.Macro)
@register_default(types.EnumClass)
@register_default(types.IntEnumClass)
@register_default(types.NumberClass)
7 changes: 0 additions & 7 deletions numba/core/errors.py
@@ -560,13 +560,6 @@ class VerificationError(IRError):
pass


class MacroError(NumbaError):
"""
An error occurred during macro expansion.
"""
pass


class DeprecationError(NumbaError):
"""
Functionality is deprecated.
24 changes: 1 addition & 23 deletions numba/core/ir.py
@@ -419,7 +419,7 @@ def unary(cls, fn, value, loc):

@classmethod
def call(cls, func, args, kws, loc, vararg=None):
assert isinstance(func, (Var, Intrinsic))
assert isinstance(func, Var)
assert isinstance(loc, Loc)
op = 'call'
return cls(op=op, loc=loc, func=func, args=args, kws=kws,
@@ -1033,28 +1033,6 @@ def all_names(self):
"""
return self.versioned_names | {self.unversioned_name,}

class Intrinsic(EqualityCheckMixin):
"""
A low-level "intrinsic" function. Suitable as the callable of a "call"
expression.

The given *name* is backend-defined and will be inserted as-is
in the generated low-level IR.
The *type* is the equivalent Numba signature of calling the intrinsic.
"""

def __init__(self, name, type, args, loc=None):
self.name = name
self.type = type
self.loc = loc
self.args = args

def __repr__(self):
return 'Intrinsic(%s, %s, %s)' % (self.name, self.type, self.loc)

def __str__(self):
return self.name


class Scope(EqualityCheckMixin):
"""
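
With ``ir.Intrinsic`` gone, the callee of a ``call`` expression is always an
``ir.Var``. An illustrative sketch of building such a node directly (the names
are made up; Numba normally constructs these from bytecode)::

    from numba.core import ir

    loc = ir.Loc(filename="example.py", line=1)
    scope = ir.Scope(None, loc)              # module-level scope, no parent
    func_var = ir.Var(scope, "$the_callee", loc)
    arg_var = ir.Var(scope, "$arg0", loc)

    # Satisfies the tightened assertion: the callee must be an ir.Var.
    call_expr = ir.Expr.call(func_var, args=(arg_var,), kws=(), loc=loc)
    print(call_expr.op)  # -> 'call'
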
8 changes: 2 additions & 6 deletions numba/core/lowering.py
@@ -781,10 +781,7 @@ def lower_call(self, resty, expr):
if isinstance(signature.return_type, types.Phantom):
return self.context.get_dummy_value()

if isinstance(expr.func, ir.Intrinsic):
fnty = expr.func.name
else:
fnty = self.typeof(expr.func.name)
fnty = self.typeof(expr.func.name)

if isinstance(fnty, types.ObjModeDispatcher):
res = self._lower_call_ObjModeDispatcher(fnty, expr, signature)
@@ -1023,8 +1020,7 @@ def _lower_call_normal(self, fnty, expr, signature):
# Normal function resolution
self.debug_print("# calling normal function: {0}".format(fnty))
self.debug_print("# signature: {0}".format(signature))
if (isinstance(expr.func, ir.Intrinsic) or
isinstance(fnty, types.ObjModeDispatcher)):
if isinstance(fnty, types.ObjModeDispatcher):
argvals = expr.func.args
else:
argvals = self.fold_call_args(
2 changes: 1 addition & 1 deletion numba/core/rewrites/__init__.py
@@ -5,4 +5,4 @@
from .registry import register_rewrite, rewrite_registry, Rewrite
# Register various built-in rewrite passes
from numba.core.rewrites import (static_getitem, static_raise, static_binop,
ir_print, macros)
ir_print)
131 changes: 0 additions & 131 deletions numba/core/rewrites/macros.py

This file was deleted.

7 changes: 1 addition & 6 deletions numba/core/typeinfer.py
@@ -1514,12 +1514,7 @@ def typeof_global(self, inst, target, gvar):

def typeof_expr(self, inst, target, expr):
if expr.op == 'call':
if isinstance(expr.func, ir.Intrinsic):
sig = expr.func.type
self.add_type(target.name, sig.return_type, loc=inst.loc)
self.add_calltype(expr, sig)
else:
self.typeof_call(inst, target, expr)
self.typeof_call(inst, target, expr)
elif expr.op in ('getiter', 'iternext'):
self.typeof_intrinsic_call(inst, target, expr.op, expr.value)
elif expr.op == 'exhaust_iter':
11 changes: 0 additions & 11 deletions numba/core/types/misc.py
@@ -121,17 +121,6 @@ def key(self):
return self.pymod


class Macro(Type):
def __init__(self, template):
self.template = template
cls = type(self)
super(Macro, self).__init__("%s(%s)" % (cls.__name__, template))

@property
def key(self):
return self.template


class MemInfoPointer(Type):
"""
Pointer to a Numba "meminfo" (i.e. the information for a managed
4 changes: 0 additions & 4 deletions numba/core/typing/templates.py
@@ -932,10 +932,6 @@ def generic(_, args, kws):
return wrapper


class MacroTemplate(object):
pass


# -----------------------------

class Registry(object):
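
With ``MacroTemplate`` removed, new attributes go through the regular typing
and lowering machinery; at the public-API level,
``numba.extending.overload_attribute`` bundles both sides. A sketch (the
``last`` attribute is invented for illustration)::

    import numpy as np
    from numba import njit, types
    from numba.extending import overload_attribute

    @overload_attribute(types.Array, "last")
    def array_last(arr):
        if arr.ndim != 1:
            return None                  # decline: only 1-D arrays in this sketch
        def getter(arr):
            return arr[arr.size - 1]
        return getter

    @njit
    def demo(a):
        return a.last

    print(demo(np.arange(5)))  # -> 4
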
1 change: 0 additions & 1 deletion numba/cuda/stubs.py
@@ -4,7 +4,6 @@
import functools
import llvmlite.llvmpy.core as lc
import operator
from numba.core.rewrites.macros import Macro
from numba.core import types, typing, ir
from .cudadrv import nvvm
