Minutes_2024_03_05
esc edited this page Mar 5, 2024
Attendees:
FPOC (last week):
FPOC (incoming):
NOTE: All communication is subject to the Numba Code of Conduct.
Please refer to this calendar for the next meeting date.
- Numba user survey update.
- 91 responses so far 🔥
- https://numba.discourse.group/t/numba-user-survey-2024/2411
- please continue to share 🙏
- Numba 0.59.1 schedule.
- Aim to land patches this week, test the build over the weekend, and release next week.
- Review NumPy 2.0 community communication note.
- How to stage NumPy 2.0 work, and its impact on maintenance.
- Draft of February's NAN is ready for comments from maintainers.
- Possible "solutions" to last week's question about how to specialise compilation based on a configuration object while still preserving literals:
```python
from numba import jit, types
from numba.extending import overload
import functools


# Some sort of configuration object
class Config():
    def __init__(self, a, b):
        self._a = a
        self._b = b

    @property
    def a(self):
        return self._a

    @property
    def b(self):
        return self._b


# Perhaps use a cache so that identical Config instances return the same
# jit function? This will prevent recompilation of the entry point for two
# identical config instances as the jit function passed as the argument will
# be the same.
@functools.cache
def obj2strkeydict(obj, config_name):
    # unpack object to freevars and close over them
    tmp_a = obj.a
    tmp_b = obj.b
    assert isinstance(config_name, str)
    tmp_force_heterogeneous = config_name

    @jit
    def configurator():
        d = {'a': tmp_a,
             'b': tmp_b,
             'config_name': tmp_force_heterogeneous}
        return d

    # return a configuration function that returns a string-key-dict
    # representation of the configuration object.
    return configurator


# Define some "work"
def work(pred):
    # ... elided, put python implementation here
    pass


@overload(work)
def ol_work(pred):
    assert isinstance(pred, types.Literal)
    print('Calling work with type', pred)
    return lambda pred: pred


@jit
def physics(cfig_func):
    # This is the main entry point to the application, it takes a
    # configuration function as an argument. It will specialise on each
    # configuration function.

    # call the function, config is a string-key-dict
    config = cfig_func()
    # unpack config, these types will be preserved as literals.
    a = config['a']
    b = config['b']
    # call some work to check the types.
    return work(b) + a


# demo
def demo():
    # Create two different Python based configuration objects with literal
    # entries.
    configuration1 = Config(10, True)
    configuration2 = Config(12.3, False)

    # Create corresponding converted configuration objects...
    jit_config1 = obj2strkeydict(configuration1, 'config1')
    jit_config2 = obj2strkeydict(configuration2, 'config2')
    # create "another" configuration1 instance, memoization prevents
    # duplication.
    jit_config1_again = obj2strkeydict(configuration1, 'config1')

    # Call the `physics` application, it will specialize on config.
    physics(jit_config1)
    physics(jit_config2)
    # should not trigger a 3rd compilation, config is memoized.
    physics(jit_config1_again)
    physics.inspect_types()


if __name__ == "__main__":
    demo()
```
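The memoization step above hinges on `functools.cache` keying on the `Config` instance itself, so identical arguments return the exact same closure object. A minimal stdlib-only sketch of just that behavior (no Numba required; `make_fn` is a hypothetical stand-in for `obj2strkeydict`):

```python
import functools


class Config:
    # Hashable by identity (the default), which is what functools.cache uses.
    def __init__(self, a, b):
        self.a, self.b = a, b


@functools.cache
def make_fn(cfg, name):
    # A new closure is built only once per (cfg, name) pair; repeated calls
    # with the same arguments return the exact same function object, so a
    # consumer that specialises per function object will not "recompile".
    def fn():
        return {'a': cfg.a, 'b': cfg.b, 'config_name': name}
    return fn


c1 = Config(10, True)
f1 = make_fn(c1, 'config1')
f2 = make_fn(c1, 'config1')
assert f1 is f2  # memoized: the very same function object
assert f1() == {'a': 10, 'b': True, 'config_name': 'config1'}
```

Note that a second `Config(10, True)` instance would *not* hit the cache, since identity-based hashing treats it as a distinct key; callers must reuse the same object (as `jit_config1_again` does in the demo above).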
- LLVM upgrade:
- discourse thread: https://numba.discourse.group/t/llvmlite-upgrade-cadence/2436
- For LLVM 15: passes not supported on the legacy pass manager:
- ArgumentPromotionPass: https://reviews.llvm.org/D128536
- LoopUnswitchPass: https://reviews.llvm.org/D124376
- Estimates of the effort needed to migrate to the new pass manager?
- ???
- Other tasks:
- llvmdev recipe update
- https://github.com/numba/llvmlite/tree/main/conda-recipes/llvmdev
- Required patch when building LLVM for testing:
- Beyond LLVM 15:
- Opaque pointer support
- Other changes?
- numba#9471 - CUDA crashes when passed complex record array
- numba#9472 - [ANN] Numba User Survey 2024
- numba#9473 - want `int64` rather than `Optional(int64)` when `dict.get(key, default_int64)` and dict value type is `int64`
- numba#9475 - Infer Numba Types from Python Type Hints
- numba#9476 - [feature request] assign a `tuple` to one row of `recarray`
- llvmlite#1034 - Add `numba` and `llvmlite` musl wheels for `alpine` support
- numba#9477 - np.searchsorted no longer work for datetime64[ns] array type
- numba#9470 - Add ability to link CUDA functions with in-memory PTX.
- numba#9474 - (Do Merge) NumPy 2: Removed dead code conflicting with binary builds against NumPy 2
(last numba: 9477; llvmlite 1034)
2024-gantt: TBD
2023-gantt: https://github.com/numba/numba/issues/8971