LeiWang1999 (Contributor) commented Aug 23, 2024

This pull request updates dependencies, adds support for the bfloat16 data type, and improves functionality in several modules. The most important changes, grouped by theme:

Dependency Updates:

  • Updated the tvm submodule to a new commit (3c6317a1ea614b7277ffe0b4ede18b4652afad1c).
  • Incremented the version number to 0.0.1.dev14 in VERSION and bitblas/__init__.py.

Support for bfloat16 Data Type:

  • Added bfloat16 support in various functions and modules, including bitblas/base/utils.py, bitblas/gpu/gemv_dequantize.py, and bitblas/ops/general_matmul/__init__.py.
  • Modified type mappings and assertions to include bfloat16 in bitblas/builder/wrapper/tir.py, bitblas/gpu/matmul_analysis.py, and bitblas/wrapper/general.py.
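The type-mapping changes above rest on bfloat16's layout: it keeps float32's 8-bit exponent and sign bit but truncates the mantissa to 7 bits, so a bfloat16 value is simply the upper 16 bits of the corresponding float32. A minimal sketch of that correspondence (illustrative only, not BitBLAS code; real converters typically round-to-nearest-even rather than truncate):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Convert a float to bfloat16 bits by truncating a float32
    to its top 16 bits (sign + 8-bit exponent + 7 mantissa bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_f32(bits: int) -> float:
    """Widen bfloat16 bits back to float32 by zero-filling the
    low 16 mantissa bits; this direction is always exact."""
    (x,) = struct.unpack("<f", struct.pack("<I", bits << 16))
    return x

# 1.0 is exactly representable; pi loses mantissa precision
# but keeps float32's full exponent range.
print(hex(f32_to_bf16_bits(1.0)))                      # 0x3f80
print(bf16_bits_to_f32(f32_to_bf16_bits(3.14159265)))  # 3.140625
```

Because the exponent width matches float32, mixed bfloat16/float32 kernels can share range-handling logic and only need to account for the reduced precision, which is why extending the existing type mappings and assertions suffices here.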

Future TODO Items:

  • Update Supported Matrix in README.md

LeiWang1999 changed the title from "[Dev] Support Numeric Precision BFloat16 as activation type." to "[Dev] Support Numeric Precision BFloat16 as activation type" on Aug 23, 2024
LeiWang1999 merged commit 673290b into microsoft:main on Aug 23, 2024