esc edited this page Mar 5, 2024 · 1 revision

Numba Meeting: 2024-03-05

Attendees:
FPOC (last week):
FPOC (incoming):

NOTE: All communication is subject to the Numba Code of Conduct.

Please refer to this calendar for the next meeting date.

0. Discussion

  • Numba user survey update.

  • Numba 0.59.1 schedule.

    • Aim to land patches this week, test builds over the weekend, and release next week.
  • Review NumPy 2.0 community communication note.

  • How to stage NumPy 2.0 work and impact on maintenance.

  • Draft of February's NAN is ready for comments from maintainers.

  • Possible "solutions" to last week's question about how to specialise compilation based on a configuration object while still preserving literals:

    from numba import jit, types
    from numba.extending import overload
    import functools
    
    
    # Some sort of configuration object
    class Config():
       def __init__(self, a, b):
           self._a = a
           self._b = b
    
       @property
       def a(self):
           return self._a
    
       @property
       def b(self):
           return self._b
    
    
    # Perhaps use a cache so that identical Config instances return the same
    # jit function? This prevents recompilation of the entry point when the
    # same config instance is converted twice, as the jit function passed as
    # the argument will be the same object. Note that functools.cache keys on
    # argument equality/hash, and a plain Config instance hashes by identity.
    @functools.cache
    def obj2strkeydict(obj, config_name):
    
       # unpack object to freevars and close over them
       tmp_a = obj.a
       tmp_b = obj.b
       assert isinstance(config_name, str)
       # a string entry keeps the dict heterogeneous, preserving literal keys
       tmp_force_heterogeneous = config_name
    
       @jit
       def configurator():
           d = {'a': tmp_a,
                'b': tmp_b,
                'config_name': tmp_force_heterogeneous}
           return d
    
       # return a configuration function that returns a string-key-dict
       # representation of the configuration object.
       return configurator
    
    
    # Define some "work"
    def work(pred):
       # ... elided, put python implementation here
       pass
    
    
    @overload(work)
    def ol_work(pred):
       assert isinstance(pred, types.Literal)
       print('Calling work with type', pred)
       return lambda pred: pred
    
    
    @jit
    def physics(cfig_func):
       # This is the main entry point to the application, it takes a configuration
       # function as an argument. It will specialise on each configuration
       # function.
    
       # call the function, config is a string-key-dict
       config = cfig_func()
       # unpack config, these types will be preserved as literals.
       a = config['a']
       b = config['b']
       # call some work to check the types.
       return work(b) + a
    
    
    # demo
    def demo():
    
       # Create two different Python based configuration objects with literal
       # entries.
       configuration1 = Config(10, True)
       configuration2 = Config(12.3, False)
    
       # Create corresponding converted configuration objects...
       jit_config1 = obj2strkeydict(configuration1, 'config1')
       jit_config2 = obj2strkeydict(configuration2, 'config2')
       # create "another" configuration1 instance, memoization prevents
       # duplication.
       jit_config1_again = obj2strkeydict(configuration1, 'config1')
    
       # Call the `physics` application, it will specialize on config.
       physics(jit_config1)
       physics(jit_config2)
       # should not trigger a 3rd compilation, config is memoized.
       physics(jit_config1_again)
       physics.inspect_types()
    
    
    if __name__ == "__main__":
       demo()
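The memoization assumption behind `obj2strkeydict` can be checked with plain `functools.cache`, no Numba required: a cached factory returns the identical closure object for identical arguments, so a `@jit` entry point taking that object as an argument would not recompile. The sketch below uses a hypothetical `make_closure` factory standing in for `obj2strkeydict`; note that `functools.cache` keys on argument equality/hash, so a default `Config` instance (hashed by identity) only hits the cache when the very same instance is passed again.

```python
import functools


@functools.cache
def make_closure(a, name):
    # Close over the unpacked values, mirroring obj2strkeydict above.
    def configurator():
        return {'a': a, 'config_name': name}
    return configurator


f1 = make_closure(10, 'config1')
f2 = make_closure(12.3, 'config2')
f1_again = make_closure(10, 'config1')

# Identical arguments return the cached, identical function object;
# different arguments produce a distinct function.
assert f1 is f1_again
assert f1 is not f2
```

This is why `demo()` above reuses `configuration1` itself for the third call: passing a *new* `Config(10, True)` instance would miss the cache and trigger a third compilation.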

New "Ready for Review" PRs

1. New Issues

  • numba#9471 - CUDA crashes when passed complex record array
  • numba#9472 - [ANN] Numba User Survey 2024
  • numba#9473 - want int64 rather than Optional(int64) when dict.get(key, default_int64) and dict value type is int64
  • numba#9475 - Infer Numba Types from Python Type Hints
  • numba#9476 - [feature request] assign a tuple to one row of recarray
  • llvmlite#1034 - Add numba and llvmlite musl wheels for alpine support

Closed Issues

  • numba#9477 - np.searchsorted no longer work for datetime64[ns] array type

2. New PRs

  • numba#9470 - Add ability to link CUDA functions with in-memory PTX.
  • numba#9474 - (Do Merge) NumPy 2: Removed dead code conflicting with binary builds against NumPy 2

Closed PRs

(last numba: 9477; llvmlite 1034)

3. Short-term Roadmap

2024-gantt: TBD
2023-gantt: https://github.com/numba/numba/issues/8971
