
Numba Meeting: 2021-03-02

Attendees: Val, Siu, Ehsan, Graham, Jim, Luk, P-Ortmann, Stuart, Todd

NOTE: All communication is subject to the Numba Code of Conduct.

0. Feature Discussion/admin

  • RC2 status
  • Discussion on Roles
    • presentation: https://hackmd.io/fApKIH4QSc22wtTbppZ8Fg
    • Val: how to incentivize maintainers to stay?
    • Ehsan: thinks the incentives are there; the hard part is getting started as a maintainer.
    • Val on how to get started:
      • try fixing issues; become a triager to help others fix issues.
      • training materials, courses?
    • Ehsan on training:
      • Good high-level arch doc; the flow
        • expand the numba dev docs
      • Good coding convention
        • e.g. flake8, docstrings, ...
        • cognitive complexity; smaller modules/functions
        • less magic; more explicit structure
    • Graham:
      • contributors get lost not because of the size of the codebase but because of cross-module interactions; hard to follow
      • because of that complexity, only highly invested contributors will stay around
    • Siu: restructuring and more interfaces will be needed but the refactoring will take time
    • Stuart: refactoring is hard.
    • Graham: experience as a triager is a good way to see different parts of Numba
    • Stuart: suggests triage pairing/mentoring

1. New Issues

  • #6774 - parallel bugs with bool arrays?
  • #6773 - Regression in 0.53rc2: axis= parameter in gufuncs
  • #6772 - Numba 0.53.0rc2 function-as-argument caching fails some of the time
    • Function-as-argument caching was never supported.
  • #6768 - numpy.arange and numpy.linspace not accurate in numba
    • difference between the NumPy and Numba implementations
    • the Numba implementation may be accumulating round-off errors (see the sketch after this list)
  • #6754 - Add the ability to alias a function/module within the compiler registry
    • Graham to respond
  • #6752 - After adding nopython=True in @jit() getting error.
    • We may be able to detect the user error of passing a class that Numba doesn't support (see the second sketch after this list).
  • #6749 - Parallel accelerator yields incorrect results for high dimensional arrays
    • Has a PR; small scheduler bug.
  • #6747 - error installing numba
  • #6746 - build fails on AArch64, Fedora 33
  • #6745 - Feature request: np.searchsorted(array, range)
  • #6744 - LiteralSlice.indices errors
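
A minimal sketch of the round-off hypothesis noted for #6768 above (this is not Numba's actual implementation; the function names and array size are illustrative): building a range by repeatedly adding the step lets floating-point error compound, whereas computing start + i*step keeps each element's error independent.

    import numpy as np

    def range_by_accumulation(start, step, n):
        # Repeated addition: each += carries the rounding error of the
        # previous sum forward, so the drift can grow with n.
        out = np.empty(n)
        val = start
        for i in range(n):
            out[i] = val
            val += step
        return out

    def range_by_multiplication(start, step, n):
        # Each point is computed independently, so errors do not accumulate.
        out = np.empty(n)
        for i in range(n):
            out[i] = start + i * step
        return out

    n = 10_000_000
    acc = range_by_accumulation(0.0, 0.1, n)
    mul = range_by_multiplication(0.0, 0.1, n)
    print(abs(acc[-1] - mul[-1]))  # nonzero drift that grows with n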

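For #6752 above, a hedged illustration of the user error being discussed: passing an instance of a plain Python class (not a jitclass or structref) into a nopython-mode function. Numba cannot type arbitrary Python objects, so compilation fails with a TypingError; the class and function here are made up for the example.

    from numba import njit

    class Point:  # ordinary Python class, not a jitclass or structref
        def __init__(self, x, y):
            self.x = x
            self.y = y

    @njit
    def norm2(p):
        return p.x * p.x + p.y * p.y

    try:
        norm2(Point(1.0, 2.0))
    except Exception as exc:
        # Numba raises a TypingError because it cannot determine the
        # Numba type of the plain Python Point instance.
        print(type(exc).__name__)
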
Closed Issues

  • #6771 - Jitclasses and Structrefs Memory Leak Object-Like fields
  • #6765 - CUDA: Cooperative groups in Windows without TCC mode
  • #6764 - Numba 0.53.0rc2 checklist

2. New PRs

  • #6770 - Support changing dispatcher options via a with context
  • #6769 - CUDA: Replace CachedPTX and CachedCUFunction with CUDACodeLibrary functionality
  • #6767 - Build script and CI cleaning
  • #6766 - Fix DeviceNDArray null shape issue
  • #6762 - Glue wrappers to create @overload from split typing and lowering.
  • #6760 - Fix scheduler bug where it rounds to 0 divisions for a chunk.
  • #6753 - The check for internal types in RewriteArrayExprs

Closed PRs

  • #6763 - use an alternative constraint for the conda packages
  • #6761 - Fix llvmlite dep
  • #6759 - Update CHANGE_LOG for 0.53.0rc2
  • #6758 - patch to compile _devicearray.cpp with c++11
  • #6757 - Cherrypick patches for 0.53.0rc2
  • #6756 - Cross 6750 6755
  • #6755 - install llvmlite from numba/label/dev
  • #6751 - Suppress typeguard warnings that affect testing.
  • #6750 - Bump to llvmlite 0.37 series
  • #6748 - Pr 6741 continued
  • #6743 - void

3. Next Release: Version 0.54.0/0.37.0, RC=May 2021

4. Upcoming tasks
