Minutes_2019_12_17
Attendees: Stan, Siu, Aaron, Graham, Pearu, Stuart, Todd, Ehsan
- Next meeting: Jan 7
- Threadmask status
  - affected by a bug in array analysis on master
  - Stuart doesn't feel confident including this in this release
  - still needs looking at:
    - MKL interaction with threadmask
- Boundschecking (sketch below)
  - should be able to merge tomorrow
  - Siu needs to re-review
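  A minimal sketch of the opt-in bounds checking being merged, assuming the `boundscheck` flag spelling used on master:

  ```python
  import numpy as np
  from numba import njit

  @njit(boundscheck=True)
  def read_past_end(a):
      # With bounds checking enabled, an out-of-bounds index raises
      # IndexError instead of silently reading arbitrary memory.
      return a[a.shape[0]]

  # read_past_end(np.arange(3))  # raises IndexError
  ```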
- @gmarkall intends to work on #3247, interface for CUDA GPU memory manager, soon
- Release status
  - issue with propagation of the `_disable_reflected_list` flag; may need to "solve" with docs
- Plan for 2020:
  - finish out any PRs that are close but didn't make the 0.47 cut
  - do another release? TBD
  - then start the major codebase refactoring (drop Python 2.7 and 3.5, run everything through black, etc.)
- **** #4970 - Unicode equality overload regression
  - release blocker
  - possible solution: add `UserTypeMixin`
- #4969 - Vectorized function not working in nopython mode
  - docs already updated to clarify (sketch below)
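  For context, a minimal sketch of the supported pattern, assuming the confusion was between NumPy's `np.vectorize` (not supported in nopython mode) and Numba's `@vectorize` ufuncs (which are):

  ```python
  import numpy as np
  from numba import njit, vectorize

  # A Numba ufunc; the explicit signature makes it callable from nopython code.
  @vectorize(["float64(float64)"])
  def plus_one(x):
      return x + 1.0

  @njit
  def apply_ufunc(arr):
      # Calling a Numba @vectorize ufunc works in nopython mode;
      # plain numpy.vectorize functions do not.
      return plus_one(arr)

  apply_ufunc(np.arange(3.0))  # -> array([1., 2., 3.])
  ```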
- #4968 - LoweringError
  - not enough info; likely using a global list
- #4966 - Numpy argmin not compiling
  - can't replicate
- **** #4963 - parfors issue with recent patch
- #4960 - Jitclass with List crashes when pop element
  - refcount bug (reproduction sketch below)
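  A hedged reproduction sketch of the shape of the report; the class and spec here are illustrative, not the reproducer from the issue. At the time of these minutes `jitclass` was imported from the top-level `numba` namespace:

  ```python
  from numba import jitclass, types   # jitclass later moved to numba.experimental
  from numba.typed import List

  spec = [("items", types.ListType(types.float64))]

  @jitclass(spec)
  class Stack:
      def __init__(self):
          self.items = List.empty_list(types.float64)

      def push(self, x):
          self.items.append(x)

      def pop(self):
          # Popping from the typed list held by the jitclass is roughly
          # where the reported refcount bug was triggered.
          return self.items.pop()

  s = Stack()
  s.push(1.0)
  s.pop()
  ```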
- #4959 - Cache look-up for functions with optional arguments fails ...
  - confirmed
- #4956 - Add ability to specify types for lazily compiled function
  - from datashader (sketch below)
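  For context, a minimal sketch of the existing eager/lazy split that the request builds on; the asked-for "attach types later" API did not exist at the time:

  ```python
  from numba import njit

  # Eager compilation: the signature is given up front, so the function
  # is compiled at decoration time for exactly these types.
  @njit("float64(float64, float64)")
  def add_eager(x, y):
      return x + y

  # Lazy compilation: no signature, so each new argument-type combination
  # is compiled on first call. #4956 asks for a way to declare types for
  # a dispatcher like this without forcing eager compilation.
  @njit
  def add_lazy(x, y):
      return x + y
  ```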
- #4954 - Running only CUDA tests results in strange crashes / failures
- #4953 - Why numba decorators is not compatible with some numpy functions?
- #4952 - CUDA guvectorize target does not copy back modified input arrays
  - PR opened to clarify non-support
- #4951 - Slow performance with numpy.record data type (only when passed as parameter)
  - needs investigation
- #4950 - Invalid use of Function
  - replied with fix
- #4949 - `float64()` ctor doesn't support 1d arrays
  - bug: incorrectly casting to a scalar (sketch below)
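  A minimal sketch of the shape of the report; the reproducer is illustrative rather than the one from the issue:

  ```python
  import numpy as np
  from numba import njit

  @njit
  def to_f64(arr):
      # Reported bug: in nopython mode this incorrectly cast the whole
      # 1d array down to a scalar instead of matching NumPy's behaviour.
      return np.float64(arr)

  to_f64(np.ones(3))
  ```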
- #4948 - typed-list fails to refine via setitem
- #4945 - implement `ndarray.flat.__getitem__(slice)`
  - feature request (sketch below)
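  A minimal sketch of what the request would enable; this does not compile until the feature is implemented:

  ```python
  import numpy as np
  from numba import njit

  @njit
  def middle_flat(a):
      # Requested feature: slicing the flat iterator in nopython mode,
      # i.e. a.flat.__getitem__(slice).
      return a.flat[1:4]

  # Plain NumPy equivalent for reference:
  np.arange(6.0).reshape(2, 3).flat[1:4]  # -> array([1., 2., 3.])
  ```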
- **** #4944 - master: @overload_method issues with *args
  - has open PR: https://github.com/numba/numba/pull/4978
- #4943 - DeviceNDArray contiguity differs from Numpy array contiguity
- #4940 - How make a python class jitclass compatible when it contains itself jitclass classes?
  - need to reply
- #4972 - How does jitclass work?
- #4971 - what's the differences between numba.cuda.jit and numba.jit?
- #4965 - Unknown attribute 'set_trace' of type Module(<module 'pdb' from '/home/xxx/.conda/envs/torch12/lib/python3.6/pdb.py'>)
- #4973 - Fixes a bug in the relabelling logic in literal_unroll.
- **** #4967 - A prototype of first-class functions [WIP]
- #4964 - Fix #4628: Add more appropriate typing for CUDA device arrays
- #4957 - Add notes on overwriting gufunc inputs to docs
- **** #4942 - Prevent some parfor aliasing. Rename copied function var to prevent recursive type locking.
- #4975 - Make `device_array_like` create contiguous arrays (Fixes #4832)
- #4941 - `__future__` import for disabling reflected lists (typed `List` sketch below)
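  For context, a minimal sketch of the typed list that replaces the deprecated reflected list (API as shipped in `numba.typed` at the time):

  ```python
  from numba import njit
  from numba.typed import List

  @njit
  def total(lst):
      s = 0.0
      for x in lst:
          s += x
      return s

  tl = List()              # typed list: the replacement for reflected Python lists
  for v in (1.0, 2.0, 3.0):
      tl.append(v)
  total(tl)                # -> 6.0
  ```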
- #4962 - Fix test error on windows
- #4961 - Update hash(tuple) for Python 3.8.
- #4958 - Add docs for `try..except` (sketch below)
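  A minimal sketch of the construct those docs cover; Numba's nopython support is limited to bare `except` and `except Exception` forms:

  ```python
  from numba import njit

  @njit
  def parse_positive(x):
      # try..except in nopython mode supports bare `except` and
      # `except Exception` clauses (the forms the new docs describe).
      try:
          if x < 0:
              raise ValueError("negative input")
          return x
      except Exception:
          return -1.0

  parse_positive(2.0)   # -> 2.0
  parse_positive(-1.0)  # -> -1.0
  ```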
- #4955 - Move overload of literal_unroll to avoid circular dependency that breaks Python 2.7
- #4947 - Document jitclass with numba.typed use.
- #4946 - Improve the error message for `raise <string>` (sketch below)
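  For context, a hedged illustration of the pattern the improved message targets; string exceptions are invalid, an exception class is the supported form:

  ```python
  from numba import njit

  @njit
  def bad(x):
      # The improved error message targets this pattern: raising a plain
      # string, which is not a valid exception (compilation fails on call).
      raise "something went wrong"

  @njit
  def good(x):
      raise ValueError("something went wrong")  # supported form
  ```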
- CPython 3.8
- Requests for 0.47 (last release for the year):
  - jitclass performance issues
  - LLVM 9 trial
  - CTK libcudadevrt.a
  - CI needs to take 50% of current time
    - Val & Stu already looking at this
    - also checking the Azure CI config to avoid wasting compute time
  - Caching:
    - transitive dependency
    - other issues, e.g. function as argument, `with objmode`
    - distributing the cache
  - Immutable list and deprecating reflected list
  - Switch to pytest (see above)
  - Using Numba to generate LLVM/NVVM IR for different targets: https://github.com/numba/numba/issues/4546
  - `@overload` for GPU (sketch below)
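For context on the last item, a minimal CPU-target `@overload` sketch; the agenda item is about extending this mechanism to the GPU target, and the function names here are illustrative:

```python
import numpy as np
from numba import njit
from numba.extending import overload

def clip0(x):
    # Pure-Python reference implementation.
    return max(x, 0.0)

@overload(clip0)
def clip0_overload(x):
    # @overload registers a nopython implementation for the CPU target;
    # the discussion is about making this work for GPU targets too.
    def impl(x):
        return max(x, 0.0)
    return impl

@njit
def demo(v):
    return clip0(v)

demo(-3.0)  # -> 0.0
```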