
Numba Meeting: 2022-08-02

Attendees: Alexander Kalistratov, Graham Markall, Sebastian Ernst, Todd A. Anderson, Siu Kwan Lam, Val, Benjamin Graham, Andre Masella, LI Da, Nicholas Riasanovsky, Shannon Quinn, Guilherme Leobas

NOTE: All communication is subject to the Numba Code of Conduct.

Please refer to this calendar for the next meeting date.

0. Discussion

1. New Issues

  • #8286 - Bytes type fails len when parallel
  • #8291 - parallel=True in nopython mode seems to cause a memory leak
  • #8296 - Add citation support
  • #8298 - Function compilation not being updated for specific types on 32-bit systems
    • Can potentially be fixed easily at the typeof level (see the typeof sketch after this list).
  • #8300 - No implementation of CUDA shared.array error
  • #8301 - Worst case performance of njitted any vs. plain numpy any
    • Numba is faster when the input is not all zeros.
    • Is the np.any version vectorized (SIMD)?
    • Action: check the assembly of both functions (see the benchmark sketch after this list).
  • #8303 - Allow multiple outputs for guvectorize on CUDA target
  • #8304 - Python 3.11
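
Regarding the #8298 note above, a minimal sketch of what "the typeof level" refers to: numba.typeof maps a Python argument value to the Numba type used to select (or reuse) a compiled specialization. The integer-width comments below are assumptions; they differ between 32-bit and 64-bit builds, which is where the reported mismatch would arise.

```python
# Minimal sketch: numba.typeof maps Python values to the Numba types used to
# pick compiled specializations. Integer widths are platform-dependent (assumed).
import numpy as np
from numba import typeof

print(typeof(1))                # plain Python int -> machine-word integer type
print(typeof(np.int32(1)))      # int32
print(typeof(np.float64(1.0)))  # float64
print(typeof(np.zeros(3)))      # array(float64, 1d, C)
```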
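
For the #8301 discussion, a rough benchmark sketch comparing np.any against an njitted equivalent on the worst case (an all-zero input, so no early exit is possible). The array size, dtype, and the early-exit loop are illustrative assumptions, not taken from the issue.

```python
# Rough benchmark sketch (assumptions: array size, dtype, and the early-exit loop).
import numpy as np
from numba import njit
from timeit import timeit

@njit(cache=True)
def any_jit(a):
    for x in a:
        if x:            # early exit on the first truthy element
            return True
    return False

a = np.zeros(10_000_000)     # worst case: the whole array must be scanned
any_jit(a)                   # warm-up call: trigger compilation before timing

print("np.any:", timeit(lambda: np.any(a), number=10))
print("njit  :", timeit(lambda: any_jit(a), number=10))
```

The generated machine code can then be compared against NumPy's loop with any_jit.inspect_asm(), as suggested in the "check the assembly" action item above.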

llvmlite:

  • #868 - Apple M1, poetry and tensorflow

Closed Issues

  • #8288 - main failing in parfors testing as of 6f70229
  • #8293 - Numba appears ~10X slower running in docker versus native on OSX

2. New PRs

  • #8287 - Drop CUDA 10.2
  • #8290 - CUDA: Replace use of deprecated NVVM IR features, questionable constructs
  • #8294 - CUDA: Add trig ufunc support
  • #8295 - Add get_const_mem_size method to Dispatcher
  • #8297 - Add __name__ attribute to CUDAUFuncDispatcher and test case
  • #8299 - Fix build for mingw toolchain
  • #8302 - CUDA: Revert numba_nvvm intrinsic name workaround

llvmlite:

  • #869 - bump max Python version to 3.11

Closed PRs

  • merged - #8289 - Revert #8265.
  • merged - #8292 - update checklist

llvmlite:

  • merged - #866 - Cherry pick for 0.39 release
  • merged - #867 - Update CHANGE_LOG 0.39.0 final.

3. Next Release: Version 0.57.0/0.40.0, RC Jan 2023
