Minutes_2021_12_07
esc edited this page Dec 7, 2021
Attendees: Siu Kwan Lam, Graham Markall, Nick Riasanovsky, stuart, Todd A. Anderson, Val, Guilherme Leobas, Luk, brandon willard
NOTE: All communication is subject to the Numba Code of Conduct.
- PSA: Numba devs' holiday hours
  - Next week is the last meeting of 2021
  - From Dec 17 until Jan 11, 2022, reduced support on Gitter, GitHub, and Discourse
  - Back to normal after Jan 11
- Getting 0.55.0 out the door this side of the holiday
- #7560 - refactor `with` detection for py3.10, merging soon
  - build/packaging problems: missing NumPy packages for Python 3.10
  - we will not ship py3.10 conda packages during the RC because of this
- Pending PRs:
  - last of Py3.10 `isinstance`
  - #7560 refactor
  - and a few others if time allows
- Ask the community about dropping py3.7 support
  - per NEP 29, 0.55.0 can be the last version to support it
- Last to discuss (10 min timebox): should llvmlite support multiple LLVM versions?
  - Support both the Numba JIT use case and non-Numba use cases
  - Numba requires additional patches on top of stock LLVM
  - The plan:
    - make it easier to install llvmlite with a user-provided LLVM
    - maintainers will not make extra builds
    - communicate to users which use cases are and aren't supported, and how they will be prioritized
  - Potentially we could use the conda-forge LLVM for testing "secondary" LLVMs, e.g. on pull requests targeting "secondary" LLVMs
  - If changes start being more on the C++ side rather than the Python side, we may need to abandon ship?
  - Do we have an "exit strategy"? --> going back to "only a single version"
  - We will only accept later secondary versions, e.g. if we now support LLVM 11, we will accept 12 and 13
- Debugging LLVM optimization:
  - Gitter discussion: https://gitter.im/numba/numba?at=61ae7cca197fa95a1cacc28d
  - `conda install numba::llvmdev` installs LLVM along with its command-line utilities, including `opt`
  - `NUMBA_OPT=0` turns optimization off for the process
  - `func.inspect_llvm(signature)` gets the LLVM IR for a function given its signature
  - Useful commands for seeing what LLVM optimization passes did: `opt -O3 -print-after-all`; `opt -view-cfg`
- #7607 - CUDA: Linking PTX only works for eager compilation
- #7611 - njit with parallel=True causes fatal Python error and aborted
- #7612 - CUDA Benchmarking
- #7613 - CUDA typing mode
- #7614 - Make sure types.Type instances are not mutated
- #7615 - Significant slow down on njit'd functions when optional arguments not provided
- #7616 - Local variable gets overwritten causing wrong gdb debugging display
- #7620 - clamp tbb to 2021 on the 0.55 release branch
- #7609 - Test suite on mainline@8be3456 is failing.
- #7608 - [WIP] Updating TBB backend calls for tbb=2022.1
- #7617 - Numba gdb-python extension for printing
  - Graham will be testing this on CUDA
- #7619 - CUDA: Fix linking with PTX when compiling lazily
- #7621 - Add support for linking CUDA C / C++ with `@cuda.jit` kernels
  - Maybe even look into NVRTC for internal use
- #7622 - Support fortran loop ordering for ufunc generation
- #7610 - Fix #7609. Type should not be mutated.
- #7618 - Fix the doc build: docutils 0.18 not compatible with pinned sphinx
- Request for 0.55
  - `broadcast_to` - https://github.com/numba/numba/pull/7119
    - driven by need for `vectorize` support in aesara
    - now merged