Numba Meeting: 2019-10-15

Attendees: Ehsan, Pearu, Val, Stan, Siu, Todd, Stuart, Aaron

0. Feature Discussion

  • Numba 0.46.1

    • Python 3.8 was released on Monday
    • Critical bugs
      • inliner bug @stuart
  • Copied from last 2 weeks

    • Requests for 0.47 (the last release of the year)
      • jitclass performance issues
      • LLVM 9 trial
      • CTK (CUDA Toolkit) libcudadevrt.a
      • CI needs to take 50% of its current time
        • Val & Stu already looking at this
        • also checking Azure CI config to avoid wasting compute time
      • Caching:
        • transitive dependency
        • other issues, e.g. function as argument, use with objmode
        • distributing cache
      • Immutable list and deprecating reflected list
      • Switch to pytest (see above)
      • Using Numba to generate LLVM/NVVM IR for different targets https://github.com/numba/numba/issues/4546 (see the IR-inspection sketch after this list)
        • @overload for GPU
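
As background for the item above, issue 4546 asks about reusing Numba's compiler to emit LLVM/NVVM IR for targets other than the default CPU. A minimal sketch of what the public dispatcher API already exposes today (axpy is a made-up example function, not from the issue):

```python
from numba import njit

@njit
def axpy(a, x, y):
    return a * x + y

# Compile for one concrete signature, then inspect the optimized LLVM IR
# Numba produced for the CPU target. The feature request is for a
# similarly supported route to NVVM IR and other targets.
axpy(2.0, 3.0, 4.0)
for sig, llvm_ir in axpy.inspect_llvm().items():
    print(sig)
    print(llvm_ir.splitlines()[0])  # just the first line of the module
```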

1. New issues

  • #4702 - Problem installing numba 0.46 and numpy 1.17 in conda

    • we still build with numpy pinned on the build farm
    • official builds are not affected
    • new CI farm is not affected
  • #4701 - Numerical differences when using numpy.linalg.norm

    • numpy bug?
    • add FAQ entry on floating point precision (see the floating-point sketch after this list)
  • **** #4698 - Serializing typed Lists and Dicts?

    • pickle-able Lists and Dicts make sense
    • Stan thinks:
      • focus on portability not performance
      • make pickle work
    • Val thinks:
      • require reasonable performance
    • Needs more thought.
  • #4697 - nested dictionaries failed

  • **** #4696 - Inconsistent behavior of stencil decorator

    • parfors
    • Todd will investigate
  • #4694 - Default None argument treatment is different in type-inference and in lowering

  • IR inliner/rewrite-passes related

    • Issues:
      • **** #4693 - raising Exceptions in inlined functions fails on compile
        • Need to introduce pass grouping?
      • **** #4692 - generated_jit doesn't get passed correct arguments with inline='always' argument
        • Should generated_jit be deprecated in favor of overload? (see the comparison sketch after this list)
          • or just merge the implementations
      • **** #4691 - tuple slicing fails in inlined functions
    • Actions:
      • Issues are not critical
      • Do a better fix as suggested by Ehsan
      • Better separation of analysis vs. transformation
  • #4690 - numba 0.46, useless(?) code in parfor init block

    • waiting for a reproducer
  • **** #4689 - The fastmath SVML option has been broken since somewhere between 0.43.1 and 0.44

    • initializing the LLVM engine before the SVML option is set freezes the no-SVML setting for subsequent compilations
    • not detected by unit tests because compile_isolated creates a new LLVM engine every time
  • #4688 - Add support for @classmethod in jitclass

    • put on the jitclass wishlist
  • #4687 - inline_closure_call may ignore glbls parameter

    • replied
  • #4684 - Missing arg in TypeError

  • #4683 - Help for VMProf with Numba

    • need more information
    • maybe point them to numba/stacktrace?
  • #4681 - Error in py_call_impl(callable, dots$args, dots$keywords)

    • need more info
  • #4679 - Numba 0.46.0 final checklist

    • closed
  • **** #4676 - CUDA decorator/compiler argument to set literal precision

    • need to fix the integer case in 0.47
      • see NBEP
    • what about float?
      • literal only?
      • __truediv__(int, int) or math.log(int)->float??
      • what does C do?
  • #4674 - Same code works on macOS Python 3.7.2 but not on Ubuntu 16.04 Python 3.5.2

    • Siu is looking into this.
    • Linux py3.5-only bug so far.
    • problem in IR mutation in parfor transformation
  • #4671 - can't pickle DUFunc objects

    • see PR
  • #4670 - I have a question about half precision float support in numba cuda

    • work on half-precision support has only just started
  • #4668 - Round segfaults in cuda jitted function if passed ndigits optional parameter

    • need to catch this in typing
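
To ground the generated_jit vs. overload question raised under issue 4692 above, here is a minimal comparison sketch; the helper as_float and the integer-vs-other dispatch are invented purely for illustration:

```python
import numpy as np
from numba import njit, generated_jit, types
from numba.extending import overload

# Plain Python function that the @overload below implements for
# nopython mode (hypothetical helper, not a Numba API).
def as_float(x):
    return float(x)

# generated_jit: the decorated function receives the argument *types*
# and returns the implementation Numba should compile for them.
@generated_jit(nopython=True)
def as_float_gen(x):
    if isinstance(x, types.Integer):
        return lambda x: np.float64(x)
    return lambda x: x

# overload: the same type-dispatch pattern, but registered against an
# existing function, so jitted code keeps calling as_float() by name.
@overload(as_float)
def ol_as_float(x):
    if isinstance(x, types.Integer):
        return lambda x: np.float64(x)
    return lambda x: x

@njit
def demo(x):
    return as_float_gen(x), as_float(x)

print(demo(3))  # (3.0, 3.0)
```

Both decorators take a type-dispatched implementation factory, which is the overlap behind the deprecate-or-merge question above.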

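Related to the floating-point FAQ entry proposed under issue 4701 above: differences of this kind are usually evaluation-order effects rather than bugs, because floating-point addition is not associative. A small standalone illustration (values chosen for the example, not taken from the issue):

```python
import numpy as np

a = np.float32(1e8)
b = np.float32(-1e8)
c = np.float32(0.1)

# Summing the same three values in a different order gives different
# results, because every intermediate result is rounded to float32.
print((a + b) + c)  # 0.1
print(a + (b + c))  # 0.0
```

NumPy and Numba-compiled code may legitimately use different summation orders or instructions (e.g. SIMD or FMA), so small differences in the last bits are expected.
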
Already Closed

  • #4700 - Numba makes Large Numbers Negative
  • #4699 - Request for f-string support and numpy.isscalar.
  • #4686 - Cannot parallelize a loop
  • #4678 - @jit decorator with signature including List(List(int64)) not working

2. New Open PRs

  • #4703 - Fix numba.jit parameter name signature_or_function
  • **** #4695 - Refactor overload* and support jit_options and inline
  • #4677 - Add support for np.setxor1d
  • #4673 - Extend test timeout and add identifier cmdline
  • #4672 - Fix pickling of dufunc
  • #4669 - Add link to ParallelAccelerator paper.

Already merged/closed

  • #4685 - Apply #4682 to 0.46 release branch
  • #4682 - Update changelog for 0.46 final release
  • #4680 - Apply #4675 to 0.46 release branch
  • #4675 - Bump cuda array interface to version 2

4. Next Release: Version 0.46.1, RC=

  • CPython 3.8