Minutes_2020_06_23
Valentin Haenel edited this page Jun 23, 2020
Attendees: Todd, Val, Stuart, Pearu, Aaron, Siu, Guilherme, Hameer

Apologies: Graham
- 0.50.1 (Wed)
  - BUG: `get_terminal_size` fails in some terminals
  - BUG: deprecation notice bump
  - BUG: inliner issue w/ `literally`
  - BUG: error messages problem w/ CUDA target
  - BUG:
- Discourse feedback?
  - issue template updated to redirect to Discourse
  - add sub-category: https://numba.discourse.group/t/testing-with-a-suggestion-on-site-organization/28
- Numba release candidate and llvmlite
  - Pearu reported a pip packaging conflict, but it now works
  - probably hit the window before the wheel upload had finished
- Guilherme's questions:
  - can `@guvectorize` compile new versions on demand? Sounds like generalizing `DUFunc` to work like `guvectorize`.
  - Siu suggested a short-term workaround: wrap the `guvectorize` function to generate a new gufunc for each new signature.
  - Siu thinks this is good work to attempt, but not easy.
- #5902 - cuda.jit does NOT preserve the original `__doc__` and `__module__` of the decorated function
- #5901 - cuda: inconsistency between data type specifications in `device_array` and `local.array`
- #5899 - Add support for programming Tensor Cores in CUDA kernel
- #5898 - isinstance fails with complex numba types
- #5897 - Reflected-list and typed List produce slow concatenate for list of equal sized arrays
- #5896 - 'NoneType' object has no attribute 'args'
- #5895 - Support for numpy.dot in higher dimensions
- #5894 - Support for np.ix_
- #5891 - Weird warnings when using @jit on version 0.50
- #5890 - Seemingly random segfault on macOS if function is in larger library
- #5887 - Regression with literally and overloads with inline='always'
  - has a patch
- #5885 - Add support for overloading `__call__` on custom types with `extending.overload`
- #5883 - Reminder to remove the `numba.jitclass` shim
- #5881 - Cannot run on simple for loop
- #5880 - Bug: Argwhere returns `a` storage
- ** #5878 - ConcurrentBag-like data structure for parallel appends
  - Todd:
    - thinks it's not far off
    - does not involve `array_analysis`
    - gufunc limitation will be a bigger issue
  - Siu:
    - bigger feature to tackle on gufunc replacement/updates
  - Todd:
- ** #5875 - libdispatch-based (GCD) workqueue
- #5874 - Support literal string in structured dtype
- #5873 - Implement `type` key-word argument for `numba.typed.List` constructor
- #5872 - cannot write to numpy array in namedtuple with parallel=True
- #5871 - Bug on pyapi.call when args is None
- #5870 - np.linalg.pinv Unintended mutation of input
  - has patch
- #5888 - how to specify the number of threads when using prange
- #5900 - Enable CUDA Unified Memory
- #5893 - Allocate 1D iteration space one at a time for more even distribution.
- #5892 - Better handling of tuple arguments whose fields are modified.
- #5889 - Fixes literally not forcing re-dispatch for inline='always'
- #5886 - Add support for overloading the call operator
- #5884 - Update the deprecation notices for 0.50.1
- #5879 - Fix erroneous input mutation in linalg routines
- #5877 - Jitclass builtin methods
- #5876 - Improve undefined variable error message
- #5882 - Type check function in jit decorator
- Requests for 0.51
  - 0.51 potential tasks (To be updated)
  - Opening up the numba meeting