Valentin Haenel edited this page Jun 23, 2020 · 1 revision

Numba Meeting: 2020-06-23

Attendees: Todd, Val, Stuart, Pearu, Aaron, Siu, Guilherme, Hameer. Apologies: Graham

0. Feature Discussion

  • 0.50.1 due Wednesday

    • BUG: get_terminal_size fails in some terminals
    • BUG: deprecation notice bump
    • BUG: inliner issue with literally
    • BUG: error message problems with the CUDA target
  • Discourse feedback?

  • Numba release candidate and llvmlite

    • Pearu reported a pip packaging conflict, but it now works
    • probably hit the window while the wheel upload was still in progress
  • Guilherme questions:

    • can @guvectorize compile new versions for signatures not declared up front?
    • sounds like generalizing DUFunc lazy compilation to work like guvectorize
    • Siu suggested a short-term workaround: wrap the guvectorize function to generate a new gufunc for each new signature.
    • Siu thinks this is worthwhile work to attempt, but not easy.
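The suggested workaround can be sketched as a dispatcher that caches one compiled gufunc per argument signature. This is a minimal illustration of the caching pattern only, not Numba's API: `compile_for` is a hypothetical stand-in which, in real code, would call `numba.guvectorize` with a signature derived from the argument dtypes.

```python
def dispatch_per_signature(compile_for):
    """Wrap a compiler so each previously unseen argument-type signature
    triggers a fresh compilation, mimicking lazy @vectorize-style dispatch
    for guvectorize, which normally requires signatures up front."""
    cache = {}

    def wrapper(*args):
        sig = tuple(type(a) for a in args)   # crude signature key
        if sig not in cache:
            cache[sig] = compile_for(sig)    # e.g. guvectorize([...])(kernel)
        return cache[sig](*args)

    return wrapper


# Usage with a fake "compiler" that records which signatures it built:
calls = []

def fake_compile(sig):
    calls.append(sig)
    return lambda *a: sum(a)

f = dispatch_per_signature(fake_compile)
print(f(1, 2))       # first (int, int) call compiles, prints 3
print(f(1.0, 2.0))   # first (float, float) call compiles, prints 3.0
print(len(calls))    # 2 distinct signatures compiled
```

Repeat calls with an already-seen signature reuse the cached gufunc, so compilation cost is paid once per signature, as with Numba's existing DUFunc dispatch.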

1. New Issues

  • #5902 - cuda.jit does NOT preserve the original __doc__ and __module__ of the decorated function
  • #5901 - cuda: inconsistency between data type specifications in device_array and local.array
  • #5899 - Add support for programming Tensor Cores in CUDA kernel
  • #5898 - isinstance fails with complex numba types
  • #5897 - Reflected-list and typed List produce slow concatenate for list of equal sized arrays
  • #5896 - 'NoneType' object has no attribute 'args'
  • #5895 - Support for numpy.dot in higher dimensions
  • #5894 - Support for np.ix_
  • #5891 - Weird warnings when using @jit on version 0.50
  • #5890 - Seemingly random segfault on macOS if function is in larger library
  • #5887 - Regression with literally and overloads with inline='always'
    • has a patch
  • #5885 - Add support for overloading __call__ on custom types with extending.overload
  • #5883 - Reminder to remove the numba.jitclass shim
  • #5881 - Cannot run on simple for loop
  • #5880 - Bug: Argwhere returns a storage
  • ** #5878 - ConcurrentBag-like data structure for parallel appends
    • Todd:
      • thinks it's not far off
      • does not involve array_analysis
      • gufunc limitation will be a bigger issue
    • Siu:
      • bigger feature to tackle on gufunc replacement/updates
  • ** #5875 - libdispatch-based (GCD) workqueue
  • #5874 - Support literal string in structured dtype
  • #5873 - Implement type key-word-argument for numba.typed.List constructor
  • #5872 - cannot write to numpy array in namedtuple with parallel=True
  • #5871 - Bug on pyapi.call when args is None
  • #5870 - np.linalg.pinv Unintended mutation of input
    • has patch

Closed Issues

  • #5888 - how to specify the number of threads when using prange

2. New PRs

  • #5900 - Enable CUDA Unified Memory
  • #5893 - Allocate 1D iteration space one at a time for more even distribution.
  • #5892 - Better handling of tuple arguments whose fields are modified.
  • #5889 - Fixes literally not forcing re-dispatch for inline='always'
  • #5886 - Add support for overloading the call operator
  • #5884 - Update the deprecation notices for 0.50.1
  • #5879 - Fix erroneous input mutation in linalg routines
  • #5877 - Jitclass builtin methods
  • #5876 - Improve undefined variable error message

Closed PRs

  • #5882 - Type check function in jit decorator

3. Next Release: Version 0.51.0, RC=22 July, Final 29 July?

  • Requests for 0.51

  • high risk stuff for 0.51.

  • 0.51 potential tasks (To be updated)

4. Upcoming tasks

  • Opening up the numba meeting