
Revert "[ParamManager] Use BundleModelParams for transform_dequantize" #1120

Conversation

junrushao
Member

Reverts #1056

This causes a regression impacting all MLC LLM nightlies, as it violates the existing calling convention in the MLC Chat runtime. An example: #1060 (comment)

@junrushao junrushao marked this pull request as ready for review October 24, 2023 15:59
@junrushao junrushao merged commit 2aa6809 into main Oct 24, 2023
10 checks passed
@junrushao junrushao deleted the revert-1056-separate_parameter_bundling_from_transfrom_dequantize branch October 24, 2023 16:32
masahi added a commit to octoml/mlc-llm that referenced this pull request Oct 30, 2023
* [API] Add GenerationConfig (mlc-ai#1024)

* Fix two bugs in kv-cache backtrack loop (mlc-ai#856)

Fix two bugs in kv-cache pop loop

Bug 1: old code would stop early because output_ids was shortened in-place during the loop

Bug 2: off-by-one in backoff size due to break

* [Build] Added --pdb flag to build.py, drop into pdb on error (mlc-ai#1017)

This commit adds an optional `--pdb` flag to the `build.py` script. If
passed, any exception raised that would otherwise terminate the script
will first enter a pdb post-mortem, allowing the error to be
inspected.
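
For readers unfamiliar with the pattern, here is a minimal sketch of what such a flag can do (a hypothetical wrapper, not the actual `build.py` code):

```python
# Minimal sketch of a "--pdb"-style post-mortem hook (hypothetical wrapper,
# not the actual build.py implementation).
import pdb
import sys
import traceback


def main() -> None:
    raise RuntimeError("simulated build failure")


if __name__ == "__main__":
    try:
        main()
    except Exception:
        if "--pdb" in sys.argv:
            traceback.print_exc()
            pdb.post_mortem()  # inspect frames and locals at the failure point
        raise
```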

* [Android] Use `AlertDialog` instead of `Toast` (mlc-ai#1039)

* Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs (mlc-ai#1040)

Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs, build model

* [Android] Add Llama2 q4f16_0 (mlc-ai#1041)

llama2 q4f160

* [Docs] Model prebuilts tracking page revamp (mlc-ai#1000)

* Update compile_models.rst (mlc-ai#1038)

fix permission issue

* Support for the Stable LM 3B model (mlc-ai#1008)

Support for the stablelm-3b-4e1t model

* [Docs] Iterate model prebuilts docs (mlc-ai#1043)

* Iterate model prebuilts docs

* small fix

* Update README.md

* [CPP] Separate common utils out from llm_chat.cc (mlc-ai#1044)

This PR separates the tokenizer creation function and the
random number generator out from `llm_chat.cc` as a preparation
step for batching inference support, since these functions/modules
are also used in the same way in batching inference.

* Update README.md (mlc-ai#1045)

Update README.md

* add verbose stats to mlc-chat REST API (mlc-ai#1049)

* add verbose stats to mlc-chat REST API

* update docs

* [Transform] Apply split_rotary optimization on prefill (mlc-ai#1033)

* [Transform] Apply split_rotary optimization on prefill

Prior to this commit, the `transform.fuse_split_rotary_embedding`
function was only applicable to the `decode` function of a Llama-type
model.  This was due to the sequence length being restricted to one,
both in the pattern-match rule and in the `split_rotary` function, and
the function being restricted to operate only on the `decode`
function.

This commit updates the `transform.fuse_split_rotary_embedding` pass
to be a `tvm.ir.transform.Pass`, operating on all applicable matches
in the `IRModule`.  The `split_rotary` function is now produced as a
fully-generic function, with static parameters substituted in
afterwards.  At this stage, the sequence length is retained as a
dynamic parameter, such that it can be used by the `prefill` function.

* Avoid multiple kernel launches for split_rotary

* [Docs] Add `mlc.ai/package` to `DEPENDENCY INSTALLATION` group (mlc-ai#1055)

Co-authored-by: Junru Shao <junrushao1994@gmail.com>

* Revert "[Transform] Apply split_rotary optimization on prefill (mlc-ai#1033)" (mlc-ai#1058)

This reverts commit b9179cf as elaborated here mlc-ai#1033 (comment)

* [BugFix] Set the right `max_sequence_length` for both Llama-1 and Llama-2 families (mlc-ai#1032)

* fix

* reflect feedback

---------

Co-authored-by: “Sunghyun <sunggg@umich.com>

* [Doc] Use -U instead of --force-reinstall (mlc-ai#1062)

`--force-reinstall` will reinstall all dependencies of a Python package,
which is unnecessary. `-U` is a better choice in this case.

* [Model] Initial batching support for Llama (mlc-ai#1048)

This PR introduces the initial batched input support for llama
models. To make the code manageable, we keep both the single-sequence
handling flow and the batching handling flow in the Llama modeling.

Now, with `--enable-batching` as a build argument, we build Llama
for the batched version.

NOTE: The paged attention kernel/TIR func are not included in this PR,
so currently the built library with batching enabled is not runnable.
We will follow up with the attention kernel in the future.

This PR guarantees that the existing single-sequence inference (Python
API, CLI, etc.) is not broken.

P.S. The batching flow is subject to bug fixes as we integrate with
the attention function and run the e2e flow in the future.

* Fix Stable LM 3B build (mlc-ai#1061)

* [stablelm 3b] Rename dynamic vocab size from "v" to "vocab_size"

* Add get_num_key_value_heads method to StableLM3bConfig

* [Core] Remove duplication in MODEL.get_model calls (mlc-ai#1054)

This commit removes the `if`/`elif` chain in `core.py`, where the body
of each conditional assigns the same `mod, param_manager, params,
model_config`, and is identical except for the choice of model being
built.

* [ParamManager] Cleanup creation of quantization IRModule (mlc-ai#1053)

This commit replaces the single-parameter
`relax_model.param_manager.create_quantize_func` function with a
method on the `ParamManager`, `create_parameter_transformation`.  This
avoids potential typos between `param_manager` as the imported Python
module `mlc_llm.relax_model.param_manager` and an instance of the
`ParamManager` class named `param_manager`, and makes the
functionality easier to find.

This function also takes an optional `optimize_parameter_order` flag,
defaulting to `True`, which applies the `ReorderTransformFunc` pass.
Since the `ReorderTransformFunc` is intended to be used with several
configuration objects owned by `ParamManager`, this simplifies the
common path of producing an optimally-ordered parameter transformation
module.
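
A hedged usage sketch based on the description above; the method and flag names are taken from the text, and the exact `ParamManager` API may differ:

```python
# Illustrative only: produce the parameter-transformation IRModule from an
# already-populated ParamManager, letting it apply ReorderTransformFunc.
from mlc_llm.relax_model.param_manager import ParamManager  # assumed import path


def make_transform_module(param_manager: ParamManager):
    # optimize_parameter_order=True (the default, per the description above)
    # applies the ReorderTransformFunc pass for a better processing order.
    return param_manager.create_parameter_transformation(
        optimize_parameter_order=True,
    )
```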

* Minor typo fix (mlc-ai#1064)

* Add links to Python API Reference (mlc-ai#1068)

* [Fix] ChatModule incorrect temperature buffer shape (mlc-ai#1070)

PR mlc-ai#1048 updated the signature of softmax in the built model library
and changed the temperature buffer shape in ChatModule. This causes
some existing demos to fail to run, since we did not do a round of model
library updates.

This PR reverts the ChatModule change, and adds back the softmax
function in non-batching case. With this PR, the regression should
be fixed.

* [ParamManager] Added progress bar for get_item/set_item (mlc-ai#1063)

* [Python] Extract common device str parse function in ChatModule (mlc-ai#1074)

This PR lifts the device string parsing (just a few lines)
to a standalone function, so that the serving side
can make use of this function as well.

Tested the Python API and it does not seem to incur a regression.

* [Bugfix] Compilation Error in q4f32_1 (mlc-ai#1078)

The pass `fuse-split-rotary` assumes the compute dtype is fp16, which
it usually is, but in certain cases, e.g. `q0f32` and `q4f32_1`, the
compute is based on fp32 instead. This PR strengthens the check guard.

* Establish `mlc_chat.compiler` (mlc-ai#1082)

This PR establishes the compiler components in MLC-Chat Python API,
which currently includes two primary components: models and parameters.

The models are `nn.Module`-based definitions of an LLM, which, as the
very first stab, contains only `LlamaForCasualLM`. It is decomposed into
three files:
- `llama_config.py`: common configurations for Llama, where we define
  relevant configurations of its architecture, as well as include
  standard config file for Llama2-7B/13B/70B for convenient testing;
- `llama.py`: the model architecture of Llama, based on the PyTorch-like
`nn.Module` API;
- `llama_parameter.py`: defines the mapping between MLC parameters and
  pytorch parameters.

The parameters component contains the basic functionality of parameter mapping,
and the loaders that effectively convert parameters from PyTorch to MLC
according to the mapping specified. Currently, only `HFTorchLoader` is
implemented, but loaders like SafeTensor, GPTQ or AWQ should be quite
straightforward according to the existing design.

On top of this PR, on-the-fly quantization could be defined as a loading
time transformation on MLC parameters, while pre-quantized parameter
loading is effectively parameter loading after MLC's `nn.Module` is
quantized.

Two unit tests exemplify how the infrastructure works:
- `./tests/python/model/test_llama.py` shows how to create an `nn.Module`
using the new infra, and then convert it to TVM IRModule;
- `./tests/python/parameter/hf_torch_loader.py` shows how to load
parameters from HuggingFace PyTorch format.

Besides, `mlc_chat.support` is established for utility functions, which
now contains two utils:
- `config.py` which supports reading configurations into dataclasses
from JSON file or Python dict. On top of Python dataclass, it throws
irrelevant fields into `cls.kwargs`, which is helpful when loading
HuggingFace configuration files (a small sketch follows this list);
- `tqdm.py` which contains tqdm-related utilities, primarily redirecting
logging and printing to work nicely with tqdm.
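
As a rough illustration of the `config.py` utility described above (class and field names here are made up; the real helper in `mlc_chat.support` may differ):

```python
# Hedged sketch of a dataclass config loader that keeps unknown fields in
# `kwargs` instead of rejecting them (illustrative, not the actual
# mlc_chat.support.config implementation).
import dataclasses
import json
from typing import Any, Dict


@dataclasses.dataclass
class ExampleConfig:
    hidden_size: int
    num_hidden_layers: int
    vocab_size: int
    kwargs: Dict[str, Any] = dataclasses.field(default_factory=dict)

    @classmethod
    def from_dict(cls, source: Dict[str, Any]) -> "ExampleConfig":
        known = {f.name for f in dataclasses.fields(cls)} - {"kwargs"}
        picked = {k: v for k, v in source.items() if k in known}
        extras = {k: v for k, v in source.items() if k not in known}
        return cls(**picked, kwargs=extras)  # keep HF-specific extras around

    @classmethod
    def from_file(cls, path: str) -> "ExampleConfig":
        with open(path, "r", encoding="utf-8") as f:
            return cls.from_dict(json.load(f))
```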

* Update README.md for Multi-GPU (mlc-ai#1090)

* Support lib_path override in C++. Improvements on docs and error messages (mlc-ai#1086)

* Support lib_path option in C++ CLI. Disable ChatConfig.model_lib override in Python API. Improvements on helper messages and error messages

* Update docs

* Rename lib_path -> model_lib_path

* StreamIterator (mlc-ai#1057)

Co-authored-by: Varshith <varshith.bathini@sprinklr.com>

* Update `benchmark.py` according to mlc-ai#1086 (mlc-ai#1091)

Update `benchmark.py`

* Disable Disco for q4f16_ft and q8f16_ft quantization (mlc-ai#1094)

* [Format] Apply isort and black for `python/` (mlc-ai#1097)

[Format] Apply isort and black on `python/`

The commands I am using are:

```
isort --profile black python/
black python/
```

It is always recommended to format the code before submission, given we
don't have a linter CI yet.

* More formatting (mlc-ai#1099)

* Enable Python Linter (mlc-ai#1098)

This PR enables two Python formatters, "black" and "isort", on the following directories:
- `./python/`
- `./tests/python/`

Enabling pylint and mypy is left for future work.

* Add Basic Pylint and Mypy Tooling (mlc-ai#1100)

Add pylint/mypy tooling into pyproject.toml

This PR establishes the initial Python tooling infra with Pylint and
Mypy. Currently only the newest modules, i.e. `mlc_chat.support` and
`mlc_chat.compiler`, are covered, and we expect to cover the entire
package, as tracked in mlc-ai#1101.

* [CI] Add clang-format (mlc-ai#1103)

* [Slim-LM] Smart path finding for config and weight (mlc-ai#1088)

* [Transform] Provide IRModule transform for rewrite_attention (mlc-ai#1052)

Prior to this commit, `mlc_llm.transform.rewrite_attention` updated a
single function.  This commit modifies it to instead be a transform
operating on any pattern matches within an `IRModule`.

* [ParamManager] Use BundleModelParams for transform_dequantize (mlc-ai#1056)

* [ParamManager] Use BundleModelParams for transform_quantize

Prior to this commit, the `ParamManager.transform_quantize` function took
as input functions with separate parameters for each weight tensor,
and produced output functions with a tuple parameter for all weights.
Because `LiftTransformParams` had the same convention, neither could
be applied as part of the same build flow.

This commit updates the `ParamManager.transform_quantize` pass to produce
outputs with separate tensor parameters, using the `BundleModelParams`
transform to later combine them into a single tuple parameter.  The
analogous change was also performed for `LiftTransformParams` as part
of apache/tvm#15657.

In addition, prior to this commit, the
`ParamManager.transform_dequantize` function operated directly on an
`IRModule` object.  As a result, any debug instrumentation
(e.g. before/after printouts for each pass, before/after verification
with `relax.analysis.well_formed`, etc.) did not apply to this
`transform_dequantize`.  This commit updates
`ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

* Correct type annotation

* [Slim-LM] Introduce HFLoad for loading Pytorch and SafeTensor weights (mlc-ai#1113)

* [WINDOWS] reduce noise in windows build (mlc-ai#1115)

* Add CLI commands for compilation (mlc-ai#1109)

* Auto updated submodule references

* fix mismatched argument name (mlc-ai#1117)

fix error introduced by recent code changes

fixes mlc-ai#1116

* [Docs] Add doc for max and mean gen len, shift factor; and buildArgs (mlc-ai#1119)

* Add doc for max and mean gen len, shift factor

* Update python docs for BuildArgs

* Revert "[ParamManager] Use BundleModelParams for transform_dequantize" (mlc-ai#1120)

Revert "[ParamManager] Use BundleModelParams for transform_dequantize (mlc-ai#1056)"

This reverts commit e5927ce.

This causes a regression impacting all MLC LLM nightlies, as it violates the existing calling convention in the MLC Chat runtime. An example: mlc-ai#1060 (comment)

* Remove inaccurate warning message (mlc-ai#1121)

This PR removes an inaccurate warning from mlc-ai#1086, which warns about
`model_lib` overriding regardless of whether or not it's actually
overridden. With this commit, we only warn if its value is not None.

* [REST] OpenAI compatible Rest API (mlc-ai#1107)

* add presence and frequency penalty

* Added support for passing conversation history in /v1/chat/completions endpoint

* Added support for RestAPI parameters max_gen_len, n, and stop_str

* * add presence and frequency penalty to generation config
* refactor generation config

* Added documentation for parameters

* replace lib_path with model_lib_path in rest.py

* fixed black isort issues

* fix lib_path

* Add --opt flag parsing to CLI (mlc-ai#1123)

* [ParamManager][Redo] Use BundleModelParams for transform_dequantize (mlc-ai#1127)

Prior to this commit, the `ParamManager.transform_quantize` function took
as input functions with separate parameters for each weight tensor,
and produced output functions with a tuple parameter for all weights.
Because `LiftTransformParams` had the same convention, neither could
be applied as part of the same build flow.

This commit updates the `ParamManager.transform_quantize` pass to produce
outputs with separate tensor parameters, using the `BundleModelParams`
transform to later combine them into a single tuple parameter.  The
analogous change was also performed for `LiftTransformParams` as part
of apache/tvm#15657.

In addition, prior to this commit, the
`ParamManager.transform_dequantize` function operated directly on an
`IRModule` object.  As a result, any debug instrumentation
(e.g. before/after printouts for each pass, before/after verification
with `relax.analysis.well_formed`, etc.) did not apply to this
`transform_dequantize`.  This commit updates
`ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

This commit is a repeat of the reverted PR
mlc-ai#1056.  This PR resolves the bug
in the earlier implementation by removing the call to
`.without_attr("num_input")` in `ParamReplacer.rewrite_func`.  This
follows an analogous update in `LiftTransformParams`, preserving the
`"num_input"` attribute for use in `BundleModelParams`.

* added details to windows installation (mlc-ai#1133)

The 32-bit version of the zstd.dll library was causing issues, so the doc was updated to be more specific and point to the 64-bit version.

* Grammatical and Typographical improvements (mlc-ai#1139)

* Update faq.rst

* Update guideline.rst

* Update compile_models.rst

* Update distribute_compiled_models.rst

* Update get-vicuna-weight.rst

* Update python.rst

* Update android.rst

* Update cli.rst

* Update ios.rst

* Update javascript.rst

* Update python.rst

* Update rest.rst

* Minor enhancements to `ChatModule` (mlc-ai#1132)

Some minor enhancements to `ChatModule`: mainly, handle device parsing solely in `_parse_device_str` instead of in both the member function and the `__init__` function to avoid redundancy, plus some type annotation fixes.

* Updating tvm install docs (mlc-ai#1143)

Updating the tvm install docs to assist a user in finding and copying zstd.dll to the correct folder.

* Make the help info consistent with program name (mlc-ai#1137)

When a user runs the command `mlc_chat_cli --help`, the output will be
something like

Usage: mlc_chat [--help] ...

That's because the program name specified in `cli_main.cc` is "mlc_chat".
It would be less confusing if the help output showed

Usage: mlc_chat_cli [--help] ...

* Support parameter packing (mlc-ai#1146)

* [Slim-LM] Enable Group Quant (mlc-ai#1129)

* Enable group quant via new interface.

* Minor fix.

* Linting.

* Fix isort.

* Fix mypy.

* TE compute working.

* Skip embed.

* Support cpu+gpu quantization.

* Add target option to tests.

* Linting.

* Enable Mypy and Pylint in mlc_chat Python Package (mlc-ai#1149)

* Migrate Compiler Passes (mlc-ai#1150)

* Compile Model Preset without External `config.json` (mlc-ai#1151)

This PR adds support for compiling a preset of models without
having to provide a `config.json` on disk using the commands below:

```diff
python -m mlc_chat.cli.compile \
       --quantization q4f16_1 -o /tmp/1.so \
-       --config /models/Llama-2-7b-chat-hf
+       --config llama2_7b
```

This allows easier testing and binary distribution without having to
depend on an external model directory.

* Update attention layer (mlc-ai#1153)

The existing dlight optimization only works for NT matmul, not NN. As a
result, the new `nn.Module`-based implementation, which uses NN matmul,
fails compilation at HEAD for now. This PR fixes this issue by tweaking
`k` to the preferred layout.

The following commands now work with the new compilation pipeline:

```bash
python -m mlc_chat.cli.compile --config llama2_7b  --quantization q4f16_1 -o /tmp/1.so
python -m mlc_chat.cli.compile --config llama2_13b --quantization q4f16_1 -o /tmp/1.so
python -m mlc_chat.cli.compile --config llama2_70b --quantization q4f16_1 -o /tmp/1.so
```

Note that the quantization algorithm per se, `q4f16_1`, has not been
implemented yet, so this code path is not yet ready for use.
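
For context, a plain numpy illustration of the NT-vs-NN distinction (unrelated to the actual dlight schedules): in the "NT" form the second operand is stored with the reduction axis last, which is the pattern the optimized rules match.

```python
# Hedged illustration of NT vs NN matmul layouts (numpy only).
import numpy as np

x = np.random.rand(8, 64).astype("float32")        # activations: (M, K)
w_nt = np.random.rand(128, 64).astype("float32")   # weight stored as (N, K)

y_nt = x @ w_nt.T                     # "NT" matmul: second operand transposed
w_nn = np.ascontiguousarray(w_nt.T)   # same weight laid out as (K, N)
y_nn = x @ w_nn                       # "NN" matmul: same math, different layout
assert np.allclose(y_nt, y_nn)
```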

* Add batched Llama model definition using vLLM paged attention (mlc-ai#1134)

* Add batched Llama model with vllm paged attention

* update core.py

* doc

* minor

* add e2e test

* mv file

* clean

* Check if TVM has been built with USE_VLLM

* update BuildArgs docstring

* [Transform][Redo] Apply split_rotary optimization on prefill (mlc-ai#1125)

Prior to this commit, the `transform.fuse_split_rotary_embedding`
function was only applicable to the `decode` function of a Llama-type
model.  This was due to the sequence length being restricted to one,
both in the pattern-match rule and in the `split_rotary` function, and
the function being restricted to operate only on the `decode`
function.

This commit updates the `transform.fuse_split_rotary_embedding` pass
to be a `tvm.ir.transform.Pass`, operating on all applicable matches
in the `IRModule`.  The `split_rotary` function is now produced as a
fully-generic function, with static parameters substituted in
afterwards.  At this stage, the sequence length is retained as a
dynamic parameter, such that it can be used by the `prefill` function.

This commit reapplies the reverted commit
mlc-ai#1033.  The error in the
previous implementation was in the definition of
`rotary_embedding_offset`, which provided the `query_sequence_length`
instead of `kv_sequence_length`.  This was able to pass the validity
tests described
[here](mlc-ai#1058 (comment)),
as these two sequence lengths are identical for the first call.

* Apply rewrite for normal attention and MQA (mlc-ai#1138)

Fixes a bug introduced in mlc-ai#1052,
where use of the `--use-flash-attn-mqa` flag on a model that doesn't
use MQA would prevent the use of CUTLASS attention at all.

* [Rest] Fix emoji handling in Rest API. (mlc-ai#1142)

* [Utility] Check for isinstance(exc, Exception) before entering pdb (mlc-ai#1095)

This is a follow-up to mlc-ai#1017, which added a `--pdb` flag to enter a
debugger on exit.  This commit checks the type of the raised
exception, and only enters the debugger if it is a subclass of
`Exception`.  This ensures that implementation details, such as a
thrown `SystemExit` or `KeyboardInterrupt`, do not cause an erroneous
entry to pdb.
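
A hedged sketch of that guard (not the actual mlc-llm code):

```python
# Only open the post-mortem debugger for genuine errors, never for control-flow
# exceptions such as SystemExit or KeyboardInterrupt.
import pdb


def post_mortem_if_error(exc: BaseException) -> None:
    if isinstance(exc, Exception):  # SystemExit/KeyboardInterrupt are excluded
        pdb.post_mortem(exc.__traceback__)


try:
    raise SystemExit(0)
except BaseException as exc:
    post_mortem_if_error(exc)  # does nothing here: SystemExit is not an Exception
    raise
```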

* [Utils] Remove conversion to numpy array in utils.save_params (mlc-ai#1083)

Prior to this commit, each parameter was converted to a numpy-owned
array as part of a total size computation.  This commit computes the
size directly, removing the conversion.
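
A hedged sketch of the metadata-based size computation (illustrative; the real `utils.save_params` bookkeeping may differ):

```python
# Compute total parameter size from shape/dtype metadata, without converting
# each array to numpy. Works for any array-like object exposing `.shape` and a
# numpy-compatible `.dtype` string, e.g. tvm.nd.NDArray.
import math

import numpy as np


def total_param_bytes(params) -> int:
    total = 0
    for arr in params:
        itemsize = np.dtype(str(arr.dtype)).itemsize  # no copy of the data
        total += math.prod(arr.shape) * itemsize
    return total
```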

* [Fix][REST] Use lowered-cased "app" (mlc-ai#1159)

* sync llama def

* fi

---------

Co-authored-by: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
Co-authored-by: Roee Shenberg <shenberg@gmail.com>
Co-authored-by: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Co-authored-by: Yaxing Cai <caiyaxing666@gmail.com>
Co-authored-by: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Co-authored-by: Bohan Hou <bohanhou@andrew.cmu.edu>
Co-authored-by: yongjer <54315206+yongjer@users.noreply.github.com>
Co-authored-by: Jeethu Rao <jeethu@jeethurao.com>
Co-authored-by: Junru Shao <junrushao@apache.org>
Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>
Co-authored-by: Denise Kutnick <boca.denise@gmail.com>
Co-authored-by: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Co-authored-by: Junru Shao <junrushao1994@gmail.com>
Co-authored-by: Sunghyun Park <sunggg@umich.edu>
Co-authored-by: “Sunghyun <sunggg@umich.com>
Co-authored-by: Rick Zhou <riczhou@linkedin.com>
Co-authored-by: Varshith Bathini <varshith15@gmail.com>
Co-authored-by: Varshith <varshith.bathini@sprinklr.com>
Co-authored-by: Tianqi Chen <tqchen@users.noreply.github.com>
Co-authored-by: Git bot <bot@noreply.github.com>
Co-authored-by: SingLi <Sing-Li@users.noreply.github.com>
Co-authored-by: Kartik Khandelwal <kartikkhandelwal1998@gmail.com>
Co-authored-by: Goutham Tamilselvan <goutham2688@gmail.com>
Co-authored-by: S A G A R <110724849+tmsagarofficial@users.noreply.github.com>
Co-authored-by: Yuchen Jin <yuchenj@cs.washington.edu>
Co-authored-by: DavidSharma <68979667+David-Sharma@users.noreply.github.com>
Co-authored-by: fennecJ <hwahwa649@gmail.com>
Co-authored-by: Xiyou Zhou <xiyou.zhou@gmail.com>
masahi added a commit to octoml/mlc-llm that referenced this pull request Nov 7, 2023
* [API] Add GenerationConfig (mlc-ai#1024)

* Fix two bugs in kv-cache backtrack loop (mlc-ai#856)

Fix two bugs in kv-cache pop loop

Bug 1: old code would stop early because output_ids was shortened in-place during the loop

Bug 2: off-by-one in backoff size due to break

* [Build] Added --pdb flag to build.py, drop into pdb on error (mlc-ai#1017)

This commit adds an optional `--pdb` flag to the `build.py` script. If
passed, any exception raised that would otherwise terminate the script
will first enter a pdb post-mortem, allowing the error to be
inspected.

* [Android] Use `AlertDialog` instead of `Toast` (mlc-ai#1039)

* Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs (mlc-ai#1040)

Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs, build model

* [Android] Add Llama2 q4f16_0 (mlc-ai#1041)

llama2 q4f160

* [Docs] Model prebuilts tracking page revamp (mlc-ai#1000)

* Update compile_models.rst (mlc-ai#1038)

fix permission issue

* Support for the Stable LM 3B model (mlc-ai#1008)

Support for the stablelm-3b-4e1t model

* [Docs] Iterate model prebuilts docs (mlc-ai#1043)

* Iterate model prebuilts docs

* small fix

* Update README.md

* [CPP] Separate common utils out from llm_chat.cc (mlc-ai#1044)

This PR separates the tokenizer creation function and the
random number generator out from `llm_chat.cc` as a preparation
step for batching inference support, since these functions/modules
are also used in the same way in batching inference.

* Update README.md (mlc-ai#1045)

Update README.md

* add verbose stats to mlc-chat REST API (mlc-ai#1049)

* add verbose stats to mlc-chat REST API

* update docs

* [Transform] Apply split_rotary optimization on prefill (mlc-ai#1033)

* [Transform] Apply split_rotary optimization on prefill

Prior to this commit, the `transform.fuse_split_rotary_embedding`
function was only applicable to the `decode` function of a Llama-type
model.  This was due to the sequence length being restricted to one,
both in the pattern-match rule and in the `split_rotary` function, and
the function being restricted to operate only on the `decode`
function.

This commit updates the `transform.fuse_split_rotary_embedding` pass
to be a `tvm.ir.transform.Pass`, operating on all applicable matches
in the `IRModule`.  The `split_rotary` function is now produced as a
fully-generic function, with static parameters substituted in
afterwards.  At this stage, the sequence length is retained as a
dynamic parameter, such that it can be used by the `prefill` function.

* Avoid multiple kernel launches for split_rotary

* [Docs] Add `mlc.ai/package` to `DEPENDENCY INSTALLATION` group (mlc-ai#1055)

Co-authored-by: Junru Shao <junrushao1994@gmail.com>

* Revert "[Transform] Apply split_rotary optimization on prefill (mlc-ai#1033)" (mlc-ai#1058)

This reverts commit b9179cf as elaborated here mlc-ai#1033 (comment)

* [BugFix] Set the right `max_sequence_length` for both Llama-1 and Llama-2 families (mlc-ai#1032)

* fix

* reflect feedback

---------

Co-authored-by: “Sunghyun <sunggg@umich.com>

* [Doc] Use -U instead of --force-reinstall (mlc-ai#1062)

`--force-reinstall` will reinstall all dependencies of a Python package,
which is unnecessary. `-U` is a better choice in this case.

* [Model] Initial batching support for Llama (mlc-ai#1048)

This PR introduces the initial batched input support for llama
models. To make the code manageable, we keep both the single-sequence
handling flow and the batching handling flow in the Llama modeling.

Now, with `--enable-batching` as a build argument, we build Llama
for the batched version.

NOTE: The paged attention kernel/TIR func are not included in this PR,
so currently the built library with batching enabled is not runnable.
We will follow up with the attention kernel in the future.

This PR guarantees that the existing single-sequence inference (Python
API, CLI, etc.) is not broken.

P.S. The batching flow is subject to bug fixes as we integrate with
the attention function and run the e2e flow in the future.

* Fix Stable LM 3B build (mlc-ai#1061)

* [stablelm 3b] Rename dynamic vocab size from "v" to "vocab_size"

* Add get_num_key_value_heads method to StableLM3bConfig

* [Core] Remove duplication in MODEL.get_model calls (mlc-ai#1054)

This commit removes the `if`/`elif` chain in `core.py`, where the body
of each conditional assigns the same `mod, param_manager, params,
model_config`, and is identical except for the choice of model being
built.

* [ParamManager] Cleanup creation of quantization IRModule (mlc-ai#1053)

This commit replaces the single-parameter
`relax_model.param_manager.create_quantize_func` function with a
method on the `ParamManager`, `create_parameter_transformation`.  This
avoids potential typos between `param_manager` as the imported Python
module `mlc_llm.relax_model.param_manager` and an instance of the
`ParamManager` class named `param_manager`, and makes the
functionality easier to find.

This function also takes an optional `optimize_parameter_order` flag,
defaulting to `True`, which applies the `ReorderTransformFunc` pass.
Since the `ReorderTransformFunc` is intended to be used with several
configuration objects owned by `ParamManager`, this simplifies the
common path of producing an optimally-ordered parameter transformation
module.

* Minor typo fix (mlc-ai#1064)

* Add links to Python API Reference (mlc-ai#1068)

* [Fix] ChatModule incorrect temperature buffer shape (mlc-ai#1070)

PR mlc-ai#1048 updated the signature of softmax in the built model library
and changed the temperature buffer shape in ChatModule. This causes
some existing demos to fail to run, since we did not do a round of model
library updates.

This PR reverts the ChatModule change, and adds back the softmax
function in non-batching case. With this PR, the regression should
be fixed.

* [ParamManager] Added progress bar for get_item/set_item (mlc-ai#1063)

* [Python] Extract common device str parse function in ChatModule (mlc-ai#1074)

This PR lifts the device string parsing (just a few lines)
to a standalone function, so that the serving side
can make use of this function as well.

Tested the Python API and it does not seem to incur a regression.

* [Bugfix] Compilation Error in q4f32_1 (mlc-ai#1078)

The pass `fuse-split-rotary` assumes the compute dtype is fp16, which
it usually is, but in certain cases, e.g. `q0f32` and `q4f32_1`, the
compute is based on fp32 instead. This PR strengthens the check guard.

* Establish `mlc_chat.compiler` (mlc-ai#1082)

This PR establishes the compiler components in MLC-Chat Python API,
which currently includes two primary components: models and parameters.

The models are `nn.Module`-based definitions of an LLM, which, as the
very first stab, contains only `LlamaForCasualLM`. It is decomposed into
three files:
- `llama_config.py`: common configurations for Llama, where we define
  relevant configurations of its architecture, as well as include
  standard config file for Llama2-7B/13B/70B for convenient testing;
- `llama.py`: the model architecture of Llama, based on the PyTorch-like
`nn.Module` API;
- `llama_parameter.py`: defines the mapping between MLC parameters and
  pytorch parameters.

The parameters component contains the basic functionality of parameter mapping,
and the loaders that effectively convert parameters from PyTorch to MLC
according to the mapping specified. Currently, only `HFTorchLoader` is
implemented, but loaders like SafeTensor, GPTQ or AWQ should be quite
straightforward according to the existing design.

On top of this PR, on-the-fly quantization could be defined as a loading
time transformation on MLC parameters, while pre-quantized parameter
loading is effectively parameter loading after MLC's `nn.Module` is
quantized.

Two unit tests exemplify how the infrastructure works:
- `./tests/python/model/test_llama.py` shows how to create an `nn.Module`
using the new infra, and then convert it to TVM IRModule;
- `./tests/python/parameter/hf_torch_loader.py` shows how to load
parameters from HuggingFace PyTorch format.

Besides, `mlc_chat.support` is established for utility functions, which
now contains two utils:
- `config.py` which supports reading configurations into dataclasses
from JSON file or Python dict. On top of Python dataclass, it throws
irrelevant fields into `cls.kwargs`, which is helpful when loading
HuggingFace configuration file;
- `tqdm.py` which contains tqdm-related utilities, primarily redirecting
logging and printing to work nicely with tqdm.

* Update README.md for Multi-GPU (mlc-ai#1090)

* Support lib_path override in C++. Improvements on docs and error messages (mlc-ai#1086)

* Support lib_path option in C++ CLI. Disable ChatConfig.model_lib override in Python API. Improvements on helper messages and error messages

* Update docs

* Rename lib_path -> model_lib_path

* StreamIterator (mlc-ai#1057)

Co-authored-by: Varshith <varshith.bathini@sprinklr.com>

* Update `benchmark.py` according to mlc-ai#1086 (mlc-ai#1091)

Update `benchmark.py`

* Disable Disco for q4f16_ft and q8f16_ft quantization (mlc-ai#1094)

* [Format] Apply isort and black for `python/` (mlc-ai#1097)

[Format] Apply isort and black on `python/`

The commands I am using are:

```
isort --profile black python/
black python/
```

It is always recommended to format the code before submission, given we
don't have a linter CI yet.

* More formatting (mlc-ai#1099)

* Enable Python Linter (mlc-ai#1098)

This PR enables two Python formatters, "black" and "isort", on the following directories:
- `./python/`
- `./tests/python/`

Enabling pylint and mypy is left for future work.

* Add Basic Pylint and Mypy Tooling (mlc-ai#1100)

Add pylint/mypy tooling into pyproject.toml

This PR establishes the initial Python tooling infra with Pylint and
Mypy. Currently only the newest modules, i.e. `mlc_chat.support` and
`mlc_chat.compiler`, are covered, and we expect to cover the entire
package, as tracked in mlc-ai#1101.

* [CI] Add clang-format (mlc-ai#1103)

* [Slim-LM] Smart path finding for config and weight (mlc-ai#1088)

* [Transform] Provide IRModule transform for rewrite_attention (mlc-ai#1052)

Prior to this commit, `mlc_llm.transform.rewrite_attention` updated a
single function.  This commit modifies it to instead be a transform
operating on any pattern matches within an `IRModule`.

* [ParamManager] Use BundleModelParams for transform_dequantize (mlc-ai#1056)

* [ParamManager] Use BundleModelParams for transform_quantize

Prior to this commit, the `ParamManager.transform_quantize` function took
as input functions with separate parameters for each weight tensor,
and produced output functions with a tuple parameter for all weights.
Because `LiftTransformParams` had the same convention, neither could
be applied as part of the same build flow.

This commit updates the `ParamManager.transform_quantize` pass to produce
outputs with separate tensor parameters, using the `BundleModelParams`
transform to later combine them into a single tuple parameter.  The
analogous change was also performed for `LiftTransformParams` as part
of apache/tvm#15657.

In addition, prior to this commit, the
`ParamManager.transform_dequantize` function operated directly on an
`IRModule` object.  As a result, any debug instrumentation
(e.g. before/after printouts for each pass, before/after verification
with `relax.analysis.well_formed`, etc.) did not apply to this
`transform_dequantize`.  This commit updates
`ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

* Correct type annotation

* [Slim-LM] Introduce HFLoad for loading Pytorch and SafeTensor weights (mlc-ai#1113)

* [WINDOWS] reduce noise in windows build (mlc-ai#1115)

* Add CLI commands for compilation (mlc-ai#1109)

* Auto updated submodule references

* fix mismatched argument name (mlc-ai#1117)

fix error introduced by recent code changes

fixes mlc-ai#1116

* [Docs] Add doc for max and mean gen len, shift factor; and buildArgs (mlc-ai#1119)

* Add doc for max and mean gen len, shift factor

* Update python docs for BuildArgs

* Revert "[ParamManager] Use BundleModelParams for transform_dequantize" (mlc-ai#1120)

Revert "[ParamManager] Use BundleModelParams for transform_dequantize (mlc-ai#1056)"

This reverts commit e5927ce.

This causes a regression impacting all MLC LLM nightlies, as it violates the existing calling convention in the MLC Chat runtime. An example: mlc-ai#1060 (comment)

* Remove inaccurate warning message (mlc-ai#1121)

This PR removes an inaccurate warning from mlc-ai#1086, which warns about
`model_lib` overriding regardless of whether or not it's actually
overridden. With this commit, we only warn if its value is not None.

* [REST] OpenAI compatible Rest API (mlc-ai#1107)

* add presence and frequency penalty

* Added support for passing conversation history in /v1/chat/completions endpoint

* Added support for RestAPI parameters max_gen_len, n, and stop_str

* * add presence and frequency penalty to generation config
* refactor generation config

* Added documentation for parameters

* replace lib_path with model_lib_path in rest.py

* fixed black isort issues

* fix lib_path

* Add --opt flag parsing to CLI (mlc-ai#1123)

* [ParamManager][Redo] Use BundleModelParams for transform_dequantize (mlc-ai#1127)

Prior to this commit, the `ParamManager.transform_quantize` function took
as input functions with separate parameters for each weight tensor,
and produced output functions with a tuple parameter for all weights.
Because `LiftTransformParams` had the same convention, neither could
be applied as part of the same build flow.

This commit updates the `ParamManager.transform_quantize` pass to produce
outputs with separate tensor parameters, using the `BundleModelParams`
transform to later combine them into a single tuple parameter.  The
analogous change was also performed for `LiftTransformParams` as part
of apache/tvm#15657.

In addition, prior to this commit, the
`ParamManager.transform_dequantize` function operated directly on an
`IRModule` object.  As a result, any debug instrumentation
(e.g. before/after printouts for each pass, before/after verification
with `relax.analysis.well_formed`, etc.) did not apply to this
`transform_dequantize`.  This commit updates
`ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

This commit is a repeat of the reverted PR
mlc-ai#1056.  This PR resolves the bug
in the earlier implementation by removing the call to
`.without_attr("num_input")` in `ParamReplacer.rewrite_func`.  This
follows an analogous update in `LiftTransformParams`, preserving the
`"num_input"` attribute for use in `BundleModelParams`.

* added details to windows installation (mlc-ai#1133)

The 32-bit version of the zstd.dll library was causing issues, so the doc was updated to be more specific and point to the 64-bit version.

* Grammatical and Typographical improvements (mlc-ai#1139)

* Update faq.rst

* Update guideline.rst

* Update compile_models.rst

* Update distribute_compiled_models.rst

* Update get-vicuna-weight.rst

* Update python.rst

* Update android.rst

* Update cli.rst

* Update ios.rst

* Update javascript.rst

* Update python.rst

* Update rest.rst

* Minor enhancements to `ChatModule` (mlc-ai#1132)

Some minor enhancements to `ChatModule`: mainly, handle device parsing solely in `_parse_device_str` instead of in both the member function and the `__init__` function to avoid redundancy, plus some type annotation fixes.

* Updating tvm install docs (mlc-ai#1143)

Updating the tvm install docs to assist a user in finding and copying zstd.dll to the correct folder.

* Make the help info consistent with program name (mlc-ai#1137)

When a user runs the command `mlc_chat_cli --help`, the output will be
something like

Usage: mlc_chat [--help] ...

That's because the program name specified in `cli_main.cc` is "mlc_chat".
It would be less confusing if the help output showed

Usage: mlc_chat_cli [--help] ...

* Support parameter packing (mlc-ai#1146)

* [Slim-LM] Enable Group Quant (mlc-ai#1129)

* Enable group quant via new interface.

* Minor fix.

* Linting.

* Fix isort.

* Fix mypy.

* TE compute working.

* Skip embed.

* Support cpu+gpu quantization.

* Add target option to tests.

* Linting.

* Enable Mypy and Pylint in mlc_chat Python Package (mlc-ai#1149)

* Migrate Compiler Passes (mlc-ai#1150)

* Compile Model Preset without External `config.json` (mlc-ai#1151)

This PR adds support for compiling a preset of models without
having to provide a `config.json` on disk using the commands below:

```diff
python -m mlc_chat.cli.compile \
       --quantization q4f16_1 -o /tmp/1.so \
-       --config /models/Llama-2-7b-chat-hf
+       --config llama2_7b
```

This allows easier testing and binary distribution without having to
depend on an external model directory.

* Update attention layer (mlc-ai#1153)

The existing dlight optimization only works for NT matmul, not NN. As a
result, the new `nn.Module`-based implementation, which uses NN matmul,
fails compilation at HEAD for now. This PR fixes this issue by tweaking
`k` to the preferred layout.

The following commands now work with the new compilation pipeline:

```bash
python -m mlc_chat.cli.compile --config llama2_7b  --quantization q4f16_1 -o /tmp/1.so
python -m mlc_chat.cli.compile --config llama2_13b --quantization q4f16_1 -o /tmp/1.so
python -m mlc_chat.cli.compile --config llama2_70b --quantization q4f16_1 -o /tmp/1.so
```

Note that the quantization algorithm per se, `q4f16_1`, has not been
implemented yet, so this code path is not yet ready for use.

* Add batched Llama model definition using vLLM paged attention (mlc-ai#1134)

* Add batched Llama model with vllm paged attention

* update core.py

* doc

* minor

* add e2e test

* mv file

* clean

* Check if TVM has been built with USE_VLLM

* update BuildArgs docstring

* [Transform][Redo] Apply split_rotary optimization on prefill (mlc-ai#1125)

Prior to this commit, the `transform.fuse_split_rotary_embedding`
function was only applicable to the `decode` function of a Llama-type
model.  This was due to the sequence length being restricted to one,
both in the pattern-match rule and in the `split_rotary` function, and
the function being restricted to operate only on the `decode`
function.

This commit updates the `transform.fuse_split_rotary_embedding` pass
to be a `tvm.ir.transform.Pass`, operating on all applicable matches
in the `IRModule`.  The `split_rotary` function is now produced as a
fully-generic function, with static parameters substituted in
afterwards.  At this stage, the sequence length is retained as a
dynamic parameter, such that it can be used by the `prefill` function.

This commit reapplies the reverted commit
mlc-ai#1033.  The error in the
previous implementation was in the definition of
`rotary_embedding_offset`, which provided the `query_sequence_length`
instead of `kv_sequence_length`.  This was able to pass the validity
tests described
[here](mlc-ai#1058 (comment)),
as these two sequence lengths are identical for the first call.

* Apply rewrite for normal attention and MQA (mlc-ai#1138)

Fixes a bug introduced in mlc-ai#1052,
where use of the `--use-flash-attn-mqa` flag on a model that doesn't
use MQA would prevent the use of CUTLASS attention at all.

* [Rest] Fix emoji handling in Rest API. (mlc-ai#1142)

* [Utility] Check for isinstance(exc, Exception) before entering pdb (mlc-ai#1095)

This is a follow-up to mlc-ai#1017, which added a `--pdb` flag to enter a
debugger on exit.  This commit checks the type of the raised
exception, and only enters the debugger if it is a subclass of
`Exception`.  This ensures that implementation details, such as a
thrown `SystemExit` or `KeyboardInterrupt`, do not cause an erroneous
entry to pdb.

* [Utils] Remove conversion to numpy array in utils.save_params (mlc-ai#1083)

Prior to this commit, each parameter was converted to a numpy-owned
array as part of a total size computation.  This commit computes the
size directly, removing the conversion.

* [Fix][REST] Use lowered-cased "app" (mlc-ai#1159)

* [Rest] Document emoji handling (mlc-ai#1160)

Followup PR of mlc-ai#1142 to document the emoji handling.

* Enable group quant transform with nn.Module (mlc-ai#1154)

* Enable group quant transform with nn.Module

This PR completes the group quantization support for `nn.Module`-based models.

* remove deprecated tests

* Update

* wip

* remove deprecated test

* fix lint

* fix lint

* fix lint

---------

Co-authored-by: Junru Shao <junrushao@apache.org>

* Misc Cleanups of Compilation Pipeline (mlc-ai#1165)

* Support CUDA Multi-Arch Compilation (mlc-ai#1166)

* [Bugfix] Cannot find global function `mlc.llm_chat_create` (mlc-ai#1167)

* Fix RWKV Support (mlc-ai#1136)

I successfully ran the rwkv-world-3b fp16 model on my Xiaomi phone. This PR fixes a bug on the main branch where the rwkv model outputs only one word and then stops.

![image](https://github.com/mlc-ai/mlc-llm/assets/35585791/6514d6ef-c93c-4ad2-8e76-8ffa0663080f)

* Auto updated submodule references

* Fix Android app Permission denied error on Android 10  (mlc-ai#1175)

Use scoped storage instead of Downloads directory

Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

* [SLM] Fix group quantization (mlc-ai#1172)

This PR fixes the group quantization and adds related unit tests.

* [Fix] TIR block name of dequantization (mlc-ai#1177)

* [SLM][AutoLLM] Enable Command Line Weight Conversion (mlc-ai#1170)

This PR enables weight conversion on the command line.
Sample command: `python3 -m mlc_chat.cli.convert_weight --config dist/models/llama-2-13b-chat-hf/ --quantization "q4f16_1" --output dist/test/`

* [Fix][SLM] Update q4f16 quantization with the new mutator name rule (mlc-ai#1178)

[Fix] Update q4f16 quantization with the new mutator name rule

* [Model Support][SWA] Add support for sliding window attention for Mistral (mlc-ai#1087)

* mistral base

* Add sliding window mask making and its tests (a small mask sketch follows below)

* Small changes for sliding window mask

* Clean up mask making

* Remove kv_seq_len

* Add prefill chunking, handle max window size in SWA

* Add interleave kv

* Temporary fix for kv seq len

* Pass in more shapes to SWA prefill and decode in runtime

* mistral var fix

* Small changes regarding shape passing

* Small fix on chunk size

* Add build args, fix mlc chat config dump

* mistral system prompt
---------

Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>
Co-authored-by: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
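
A hedged numpy sketch of the sliding-window masking rule mentioned in this commit (the actual mask construction in mlc-llm is written in TIR/Relax and also handles prefill chunking and cache interleaving):

```python
# mask[i, j] is True when query i may attend to kv position j: causal and
# within the most recent `window` positions.
import numpy as np


def sliding_window_mask(q_len: int, kv_len: int, window: int) -> np.ndarray:
    offset = kv_len - q_len                  # queries sit at the end of the kv range
    q_pos = np.arange(q_len)[:, None] + offset
    kv_pos = np.arange(kv_len)[None, :]
    return (kv_pos <= q_pos) & (kv_pos > q_pos - window)
```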

* Add Python API for Weight Conversion (mlc-ai#1182)

This PR primarily does a major refactoring to introduce a Python API that
is consistent with the CLI API. Besides, it includes the following
fixes and enhancements:

- More info provided to `isort` for better formatting in `pyproject.toml`;
- Print out the default value of all arguments in argparse command line;
- Ensure `--device` is always available locally when doing weight
  conversion;
- Add argument echoing in weight conversion to be consistent with its
  counterpart in compilation;
- Add a consistency checker to make sure the shapes/dtypes of all
  tensors from weight conversion are consistent with compilation (a
  sketch of this kind of check follows the list);
- Echo the total size of parameters;
- Better logging of each parameter's shape and dtype, and whether or not
  it is quantized;
- More structural robustification, renaming `parameter/` to `loader/` to
  be more explicit about its intent;
- Inline and remove `ParamQuantizer` into the loader to improve logging
  and the logic flow;
- Always add instructions "Use `--xxx` to override" for any options that
  are auto detected to be more informative to end users;
- Fix wrong shape calculation when quantizing `nn.Embedding`;
- Fix wrong dtype calculation in group quantization when the input dtype
  is different from model dtype (e.g. "float32" in torch, but the model
  dtype in quantization is fp16 in `q4f16_1`);
- Fix inconsistent param names in layers such as `GroupQuantizeLinear`;
- Fix dtype inconsistency when a parameter is not quantized;
- Fix existing unittests.
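
A hedged sketch of the kind of shape/dtype consistency check mentioned in the list above (names and data structures are illustrative, not the actual `mlc_chat` API):

```python
# Compare what compilation expects against what weight conversion produced.
from typing import Dict, Tuple

ParamSpec = Tuple[Tuple[int, ...], str]  # (shape, dtype)


def check_consistency(
    expected: Dict[str, ParamSpec], converted: Dict[str, ParamSpec]
) -> None:
    missing = expected.keys() - converted.keys()
    extra = converted.keys() - expected.keys()
    if missing or extra:
        raise ValueError(f"parameter name mismatch: missing={missing}, extra={extra}")
    for name, spec in expected.items():
        if converted[name] != spec:
            raise ValueError(f"{name}: expected {spec}, got {converted[name]}")
```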

* Merge `llama_config.CONFIG` into `MODEL_PRESETS` (mlc-ai#1188)

* Merge llama_config.py into llama_model.py (mlc-ai#1189)

* Add CodeLlama as part of model presets (mlc-ai#1190)

* [Docs] Clarify zstd installation on Windows (mlc-ai#1191)

* [Docs] Clarify zstd installation on Windows (mlc-ai#1196)

Update zstd installation

* Support overriding `--max-sequence-length` in command line (mlc-ai#1197)

* [RestAPI] Added docs (mlc-ai#1193)

Add docs for RestAPI

Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

* [API] ```llm-vscode``` extension support (mlc-ai#1198)

This PR enables ```llm-vscode``` extension API support for copilot-like code completion, following [HF's LSP](https://github.com/huggingface/llm-ls). Fully compatible with ```CodeLlama``` and ```starcoder``` on mlc-llm. 

- huggingface/llm-vscode#103 enhances extension user experience when used with mlc-llm rest api.

Thanks @ pacman100, who came up with this in his latest blog post: https://huggingface.co/blog/personal-copilot

* [Fix] Use `fabs` as floating point abs function in C++ (mlc-ai#1202)

* Integrating MLC runtime with the new compilation workflow (mlc-ai#1203)

* [Fix] Remove Redundant Warnings (mlc-ai#1204)

PR mlc-ai#1203 introduces some unnecessary and redundant logging messages.
This PR removes them.

* Try fix macOS build with picojson (mlc-ai#1206)

The error message below

```
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h: In member function 'std::string picojson::value::to_str() const':
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:494:37: error: expected ')' before 'PRId64'
  494 |       SNPRINTF(buf, sizeof(buf), "%" PRId64, u_.int64_);
      |               ~                     ^~~~~~~
      |                                     )
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:81:1: note: 'PRId64' is defined in header '<cinttypes>'; did you forget to '#include <cinttypes>'?
   80 | #include <errno.h>
  +++ |+#include <cinttypes>
   81 | #include <inttypes.h>

```

indicates that the `__STDC_FORMAT_MACROS` flag is not turned on for some
reason.

* Try fix macOS build with picojson again (mlc-ai#1207)

Try fix macOS build with picojson

* Auto updated submodule references

* [Fix] Keep update-to-date with upstream API change (mlc-ai#1209)

* Detect `mtriple` via LLVM (mlc-ai#1211)

* Fix Python3.8 compatibility breakage (mlc-ai#1210)

The breakage was resulting from newer syntax being used for type
annotations, as part of mlc-ai#592.
So long as `mlc_chat.interface.openai_api` wasn't imported, the
breaking changes were not encountered.  In
mlc-ai#1107, the addition of `from
.interface.openai_api import ChatMessage` caused this module to be
imported, breaking compatibility of `mlc_chat.ChatModule` with
Python3.8.

This commit updates the type annotations to the supported syntax.
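
A hedged illustration of the kind of annotation change involved (the actual symbols in `mlc_chat.interface.openai_api` may differ):

```python
# Python 3.8 rejects PEP 585/604 annotations such as `list[Message]` or
# `str | None` when they are evaluated at class-definition time, so the module
# falls back to typing.List / typing.Optional.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Message:
    role: str
    content: str


@dataclass
class ChatRequest:
    # Python 3.9+/3.10+ spelling would be: messages: list[Message], stop: str | None
    messages: List[Message] = field(default_factory=list)
    stop: Optional[str] = None
```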

* [Slim-LM] Enable loading from AWQ pre-quantized weight. (mlc-ai#1114)

* [SLM] Enable loading from AWQ pre-quantized weight.

* remove awq_loader.py

* Update to the latest commit

* Delete llama_parameter.py

* update unittest

* fix lint

* upd

* add Llama-2-7B-AWQ

* fix

* rm

---------

Co-authored-by: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
Co-authored-by: Roee Shenberg <shenberg@gmail.com>
Co-authored-by: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Co-authored-by: Yaxing Cai <caiyaxing666@gmail.com>
Co-authored-by: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Co-authored-by: Bohan Hou <bohanhou@andrew.cmu.edu>
Co-authored-by: yongjer <54315206+yongjer@users.noreply.github.com>
Co-authored-by: Jeethu Rao <jeethu@jeethurao.com>
Co-authored-by: Junru Shao <junrushao@apache.org>
Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>
Co-authored-by: Denise Kutnick <boca.denise@gmail.com>
Co-authored-by: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Co-authored-by: Junru Shao <junrushao1994@gmail.com>
Co-authored-by: Sunghyun Park <sunggg@umich.edu>
Co-authored-by: “Sunghyun <sunggg@umich.com>
Co-authored-by: Rick Zhou <riczhou@linkedin.com>
Co-authored-by: Varshith Bathini <varshith15@gmail.com>
Co-authored-by: Varshith <varshith.bathini@sprinklr.com>
Co-authored-by: Tianqi Chen <tqchen@users.noreply.github.com>
Co-authored-by: Git bot <bot@noreply.github.com>
Co-authored-by: SingLi <Sing-Li@users.noreply.github.com>
Co-authored-by: Kartik Khandelwal <kartikkhandelwal1998@gmail.com>
Co-authored-by: Goutham Tamilselvan <goutham2688@gmail.com>
Co-authored-by: S A G A R <110724849+tmsagarofficial@users.noreply.github.com>
Co-authored-by: Yuchen Jin <yuchenj@cs.washington.edu>
Co-authored-by: DavidSharma <68979667+David-Sharma@users.noreply.github.com>
Co-authored-by: fennecJ <hwahwa649@gmail.com>
Co-authored-by: Xiyou Zhou <xiyou.zhou@gmail.com>
Co-authored-by: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com>
Co-authored-by: Animesh Bohara <ani.bohara@gmail.com>
Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>
Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>
masahi added a commit to masahi/mlc-llm that referenced this pull request Nov 7, 2023
commit 44f4cbdfed7941e7ed060d74b23d870d026a57c9
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Fri Oct 20 13:41:40 2023 +0000

    Support execution using pre-sharded weights

commit 35644870e2daf829c29ebb5431d357ceaa4e2793
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Fri Oct 13 20:35:39 2023 +0000

    Support writing of pre-sharded weights

commit 97572226d331ebd9ef49e4a2c1dad42344d09bac
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Fri Oct 13 20:55:34 2023 +0000

    Extract ParamManager.create_parameter_transformation call from convert_weights

commit e1d3217f7b0c87c49ba3721567cb24df436618a3
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Fri Oct 13 22:39:00 2023 +0000

    Extract a ParamManager.optimize_transform_param_order method

commit b2a9e1c7e83c0886e3a0ebed02f4c8416dfbfb5f
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Fri Oct 13 18:30:59 2023 +0000

    [MultiGPU] Cleanup create_shard_info_func

    - De-duplicate the `if param.shard_strategy == foo` if/else chain

    - Return a `tvm.IRModule` instead of modifying an existing module

commit f67d47a57fbca21df48867a9dfc10e430a3a3b04
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Mon Oct 16 16:59:43 2023 +0000

    [Bugfix] Include LegalizeOps in utils.convert_weights

    Prior to this commit, `utils.convert_weights` assumes that the
    parameter transformation module is already legalized, and uses no
    relax operations that require legalization.  This commit adds a call
    to `relax.transform.LegalizeOps` to remove this assumption.

commit a98f9cf45a4cc7a2412f68a4f4306d97e7239a13
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Mon Oct 16 16:49:44 2023 +0000

    [Bugfix] Correct input shape for shard info function

    Prior to this commit, the sharding functions converted the sharded axis
    from `orig_size * num_shards` to `orig_size // num_shards`.  This
    commit updates the sharding functions to instead convert from
    `orig_size` to `orig_size // num_shards`.

commit 4042626
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Mon Nov 6 15:43:21 2023 -0800

    [Slim-LM] Enable loading from AWQ pre-quantized weight. (mlc-ai#1114)

    * [SLM] Enable loading from AWQ pre-quantized weight.

    * remove awq_loader.py

    * Update to the latest commit

    * Delete llama_parameter.py

    * update unittest

    * fix lint

    * upd

    * add Llama-2-7B-AWQ

commit 9869ca6
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Nov 6 16:03:12 2023 -0600

    Fix Python3.8 compatibility breakage (mlc-ai#1210)

    The breakage was resulting from newer syntax being used for type
    annotations, as part of mlc-ai#592.
    So long as `mlc_chat.interface.openai_api` wasn't imported, the
    breaking changes were not encountered.  In
    mlc-ai#1107, the addition of `from
    .interface.openai_api import ChatMessage` caused this module to be
    imported, breaking compatibility of `mlc_chat.ChatModule` with
    Python3.8.

    This commit updates the type annotations to the supported syntax.

commit e00220c
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Nov 6 13:04:36 2023 -0800

    Detect `mtriple` via LLVM (mlc-ai#1211)

commit e2c99a8
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Nov 6 12:01:51 2023 -0800

    [Fix] Keep update-to-date with upstream API change (mlc-ai#1209)

commit a7f1183
Author: Git bot <bot@noreply.github.com>
Date:   Mon Nov 6 18:53:07 2023 +0000

    Auto updated submodule references

commit 51d6f9c
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Nov 6 09:28:57 2023 -0800

    Try fix macOS build with picojson again (mlc-ai#1207)

    Try fix macOS build with picojson

commit 01d4339
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Nov 6 09:08:58 2023 -0800

    Try fix macOS build with picojson (mlc-ai#1206)

    The error message below

    ```
    /usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h: In member function 'std::string picojson::value::to_str() const':
    /usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:494:37: error: expected ')' before 'PRId64'
      494 |       SNPRINTF(buf, sizeof(buf), "%" PRId64, u_.int64_);
          |               ~                     ^~~~~~~
          |                                     )
    /usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:81:1: note: 'PRId64' is defined in header '<cinttypes>'; did you forget to '#include <cinttypes>'?
       80 | #include <errno.h>
      +++ |+#include <cinttypes>
       81 | #include <inttypes.h>

    ```

    indicates that the `__STDC_FORMAT_MACROS` flag is not turned on for some
    reason.

commit 65478c8
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Nov 5 19:52:53 2023 -0800

    [Fix] Remove Redundant Warnings (mlc-ai#1204)

    PR mlc-ai#1203 introduces some unnecessary and redundant logging messages.
    This PR gets them removed.

commit 7ccb51a
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Nov 5 18:33:49 2023 -0800

    Integrating MLC runtime with the new compilation workflow (mlc-ai#1203)

commit 3413d17
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Nov 5 12:03:33 2023 -0800

    [Fix] Use `fabs` as floating point abs function in C++ (mlc-ai#1202)

commit 145a984
Author: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
Date:   Sun Nov 5 06:18:47 2023 +0000

    [API] ```llm-vscode``` extension support (mlc-ai#1198)

    This PR enables ```llm-vscode``` extension API support for copilot-like code completion, following [HF's LSP](https://github.com/huggingface/llm-ls). Fully compatible with ```CodeLlama``` and ```starcoder``` on mlc-llm.

    - huggingface/llm-vscode#103 enhances extension user experience when used with mlc-llm rest api.

    Thanks @ pacman100, who came up with this in his latest blog post: https://huggingface.co/blog/personal-copilot

commit 0e08845
Author: Animesh Bohara <ani.bohara@gmail.com>
Date:   Sun Nov 5 01:01:26 2023 -0400

    [RestAPI] Added docs (mlc-ai#1193)

    Add docs for RestAPI

    Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

commit 3417505
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 19:44:25 2023 -0700

    Support overriding `--max-sequence-length` in command line (mlc-ai#1197)

commit 5d63f7e
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 15:42:19 2023 -0700

    [Docs] Clarify zstd installation on Windows (mlc-ai#1196)

    Update zstd installation

commit 78424f0
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 02:13:21 2023 -0700

    [Docs] Clarify zstd installation on Windows (mlc-ai#1191)

commit 4832c2f
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 01:58:55 2023 -0700

    Add CodeLlama as part of model presets (mlc-ai#1190)

commit 5d1dc34
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 01:46:19 2023 -0700

    Merge llama_config.py into llama_model.py (mlc-ai#1189)

commit 9d20575
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 01:30:15 2023 -0700

    Merge `llama_config.CONFIG` into `MODEL_PRESETS` (mlc-ai#1188)

commit 4716704
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 01:20:43 2023 -0700

    Add Python API for Weight Conversion (mlc-ai#1182)

    This PR primarily does a major refactoring to introduce Python API that
    is consistent with the CLI API. Besides, it includes the following
    fixes and enhancements:

    - More info provided to `isort` for better formatting in `pyproject.toml`;
    - Print out the default value of all arguments in argparse command line;
    - Ensure `--device` is always available locally when doing weight
      conversion;
    - Add argument echoing in weight conversion to be consistent with its
      counterpart in compilation;
    - Add a consistency checker to make sure the shapes/dtypes of all
      tensors from weight conversion are consistent with compilation;
    - Echo the total size of parameters;
    - Better logging of each parameter's shape and dtype, and whether or not
      it is quantized;
    - More robust structure, renaming `parameter/` to `loader/` to
      be more explicit about its intention;
    - Inline `ParamQuantizer` into the loader and remove it, improving logging
      and the logic flow;
    - Always add instructions "Use `--xxx` to override" for any options that
      are auto detected to be more informative to end users;
    - Fix wrong shape calculation when quantizing `nn.Embedding`;
    - Fix wrong dtype calculation in group quantization when the input dtype
      is different from model dtype (e.g. "float32" in torch, but the model
      dtype in quantization is fp16 in `q4f16_1`);
    - Fix inconsistent param names in layers such as `GroupQuantizeLinear`;
    - Fix dtype inconsistency when a parameter is not quantized;
    - Fix existing unittests.

commit 6ae02dd
Author: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Date:   Fri Nov 3 15:34:29 2023 -0400

    [Model Support][SWA] Add support for sliding window attention for Mistral (mlc-ai#1087)

    * mistral base

    * Add sliding window mask making and its tests

    * Small changes for sliding window mask

    * Clean up mask making

    * Remove kv_seq_len

    * Add prefill chunking, handle max window size in SWA

    * Add interleave kv

    * Temporary fix for kv seq len

    * Pass in more shapes to SWA prefill and decode in runtime

    * mistral var fix

    * Small changes regarding shape passing

    * Small fix on chunk size

    * Add build args, fix mlc chat config dump

    * mistral system prompt
    ---------

    Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>
    Co-authored-by: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
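
As background for the sliding-window masking mentioned above, a minimal NumPy sketch of a causal mask limited to the last `window` positions (illustrative only; it is not Mistral's or this repo's actual masking code, which also handles chunked prefill and the rolling cache):

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean mask: position i may attend to positions j with i - window < j <= i."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

# With window=3, the token at position 5 attends to positions 3, 4, and 5 only.
mask = sliding_window_mask(seq_len=8, window=3)
assert list(np.flatnonzero(mask[5])) == [3, 4, 5]
```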

commit 2dc8183
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Fri Nov 3 00:36:52 2023 -0700

    [Fix][SLM] Update q4f16 quantization with the new mutator name rule (mlc-ai#1178)

    [Fix] Update q4f16 quantization with the new mutator name rule

commit 53060af
Author: Xiyou Zhou <xiyou.zhou@gmail.com>
Date:   Thu Nov 2 13:08:11 2023 -0700

    [SLM][AutoLLM] Enable Command Line Weight Conversion (mlc-ai#1170)

    This PR enables weight conversion in command line.
    Sample command: `python3 -m mlc_chat.cli.convert_weight --config dist/models/llama-2-13b-chat-hf/ --quantization "q4f16_1" --output dist/test/`

commit 2ca7d15
Author: Junru Shao <junrushao@apache.org>
Date:   Thu Nov 2 11:30:28 2023 -0700

    [Fix] TIR block name of dequantization (mlc-ai#1177)

commit 1757777
Author: Yaxing Cai <caiyaxing666@gmail.com>
Date:   Wed Nov 1 15:52:19 2023 -0700

    [SLM] Fix group quantization (mlc-ai#1172)

    This PR fixes the group quantization and adds related unit tests.

commit 9831135
Author: Animesh Bohara <ani.bohara@gmail.com>
Date:   Wed Nov 1 15:16:09 2023 -0400

    Fix Android app Permission denied error on Android 10  (mlc-ai#1175)

    Use scoped storage instead of Downloads directory

    Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

commit 200653a
Author: Git bot <bot@noreply.github.com>
Date:   Wed Nov 1 14:53:54 2023 +0000

    Auto updated submodule references

commit f5b2e88
Author: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com>
Date:   Wed Nov 1 12:23:40 2023 +0800

    Fix RWKV Support (mlc-ai#1136)

    I successfully ran the rwkv-world-3b fp16 model on my Xiaomi phone. This PR fixes a bug on the main branch where the rwkv model outputs only one word and then stops.

    ![image](https://github.com/mlc-ai/mlc-llm/assets/35585791/6514d6ef-c93c-4ad2-8e76-8ffa0663080f)

commit e0cd3f6
Author: Junru Shao <junrushao@apache.org>
Date:   Tue Oct 31 12:56:28 2023 -0700

    [Bugfix] Cannot find global function `mlc.llm_chat_create` (mlc-ai#1167)

commit 02d1e57
Author: Junru Shao <junrushao@apache.org>
Date:   Tue Oct 31 12:43:17 2023 -0700

    Support CUDA Multi-Arch Compilation (mlc-ai#1166)

commit 8438b27
Author: Junru Shao <junrushao@apache.org>
Date:   Tue Oct 31 12:09:16 2023 -0700

    Misc Cleanups of Compilation Pipeline (mlc-ai#1165)

commit b5bfa5b
Author: Yaxing Cai <caiyaxing666@gmail.com>
Date:   Tue Oct 31 11:39:44 2023 -0700

    Enable group quant transform with nn.Module (mlc-ai#1154)

    * Enable group quant transform with nn.Module

    This PR completes the group quantization support for `nn.Module` based model.

    * remove deprecated tests

    * Update

    * wip

    * remove deprecated test

    * fix lint

    * fix lint

    * fix lint

    ---------

    Co-authored-by: Junru Shao <junrushao@apache.org>

commit 9076d01
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Mon Oct 30 22:40:53 2023 -0700

    [Rest] Document emoji handling (mlc-ai#1160)

    Followup PR of mlc-ai#1142 to document the emoji handling.

commit 425a2cb
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Oct 30 15:00:24 2023 -0700

    [Fix][REST] Use lower-cased "app" (mlc-ai#1159)

commit 0a9d6c7
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 30 14:44:44 2023 -0500

    [Utils] Remove conversion to numpy array in utils.save_params (mlc-ai#1083)

    Prior to this commit, each parameter was converted to a numpy-owned
    array as part of a total size computation.  This commit computes the
    size directly, removing the conversion.

commit 3cf5605
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 30 14:43:51 2023 -0500

    [Utility] Check for isinstance(exc, Exception) before entering pdb (mlc-ai#1095)

    This is a follow-up to mlc-ai#1017, which added a `--pdb` flag to enter a
    debugger on exit.  This commit checks the type of the raised
    exception, and only enters the debugger if it is a subclass of
    `Exception`.  This ensures that implementation-details, such as a
    thrown `SystemExit` or `KeyboardInterrupt`, do not cause an erroneous
    entry to pdb.
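
A minimal sketch of the pattern described above (a hypothetical wrapper, not the actual `build.py` code): only `Exception` subclasses trigger the post-mortem, so `SystemExit` and `KeyboardInterrupt`, which derive from `BaseException`, propagate untouched.

```python
import pdb
import traceback

def run_with_optional_pdb(main, use_pdb: bool) -> None:
    """Run `main()`; on an ordinary Exception, optionally drop into a pdb post-mortem."""
    try:
        main()
    except Exception:
        if use_pdb:
            traceback.print_exc()
            pdb.post_mortem()
        raise
```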

commit 8ca0176
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Mon Oct 30 12:26:58 2023 -0700

    [Rest] Fix emoji handling in Rest API. (mlc-ai#1142)

commit b190578
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 30 13:58:18 2023 -0500

    Apply rewrite for normal attention and MQA (mlc-ai#1138)

    Fixes a bug introduced in mlc-ai#1052,
    where use of the `--use-flash-attn-mqa` flag on a model that doesn't
    use MQA would prevent the use of CUTLASS attention at all.

commit ece97b1
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 30 13:58:08 2023 -0500

    [Transform][Redo] Apply split_rotary optimization on prefill (mlc-ai#1125)

    Prior to this commit, the `transform.fuse_split_rotary_embedding`
    function was only applicable to the `decode` function of a Llama-type
    model.  This was due to the sequence length being restricted to one,
    both in the pattern-match rule and in the `split_rotary` function, and
    the function being restricted to operate only on the `decode`
    function.

    This commit updates the `transform.fuse_split_rotary_embedding` pass
    to be a `tvm.ir.transform.Pass`, operating on all applicable matches
    in the `IRModule`.  The `split_rotary` function is now produced as a
    fully-generic function, with static parameters substituted in
    afterwards.  At this stage, the sequence length is retained as a
    dynamic parameter, such that it can be used by the `prefill` function.

    This commit reapplies the reverted commit
    mlc-ai#1033.  The error in the
    previous implementation was in the definition of
    `rotary_embedding_offset`, which provided the `query_sequence_length`
    instead of `kv_sequence_length`.  This was able to pass the validity
    tests described
    [here](mlc-ai#1058 (comment)),
    as these two sequence lengths are identical for the first call.
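
To make the offset issue concrete, a small sketch of the position computation (names and layout are assumptions for illustration, not the repo's `split_rotary` code): the rotary position of query token `t` should be based on the total cached length, `kv_sequence_length - query_sequence_length + t`, which coincides with `t` only on the first call.

```python
import numpy as np

def rotary_positions(query_sequence_length: int, kv_sequence_length: int) -> np.ndarray:
    """Absolute positions at which the current query tokens are rotated."""
    offset = kv_sequence_length - query_sequence_length  # tokens already in the KV cache
    return offset + np.arange(query_sequence_length)

# First prefill call: both lengths are equal, so the two choices agree...
assert rotary_positions(4, 4).tolist() == [0, 1, 2, 3]
# ...but once the cache holds earlier tokens, they diverge.
assert rotary_positions(2, 6).tolist() == [4, 5]
```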

commit fee2cb5
Author: masahi <masahi129@gmail.com>
Date:   Tue Oct 31 01:32:06 2023 +0900

    Add batched Llama model definition using vLLM paged attention (mlc-ai#1134)

    * Add batched Llama model with vllm paged attention

    * update core.py

    * doc

    * minor

    * add e2e test

    * mv file

    * clean

    * Check if TVM has been built with USE_VLLM

    * update BuildArgs docstring

commit ba67835
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 29 23:54:12 2023 -0700

    Update attention layer (mlc-ai#1153)

    Existing dlight optimization only works for NT matmul, but not NN. As a
    result, the new `nn.Module`-based implementation, which uses NN matmul,
    fails compilation at HEAD for now. This PR fixes this issue by tweaking
    `k` to the preferred layout.

    The following commands now work with the new compilation pipeline:

    ```bash
    python -m mlc_chat.cli.compile --config llama2_7b  --quantization q4f16_1 -o /tmp/1.so
    python -m mlc_chat.cli.compile --config llama2_13b --quantization q4f16_1 -o /tmp/1.so
    python -m mlc_chat.cli.compile --config llama2_70b --quantization q4f16_1 -o /tmp/1.so
    ```

    Note that the quantization algorithm per se, `q4f16_1`, has not been
    implemented yet, meaning this code path is not yet ready for use.
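
For readers unfamiliar with the NT/NN terminology, a tiny NumPy illustration (not the dlight schedule itself): "NT" means the second operand is transposed, i.e. the weight is stored as `[out_features, in_features]`, while "NN" keeps it as `[in_features, out_features]`.

```python
import numpy as np

x = np.random.rand(2, 8)        # activations: [batch, in_features]
w_nt = np.random.rand(16, 8)    # NT layout:   [out_features, in_features]
w_nn = w_nt.T                   # NN layout:   [in_features, out_features]

y_nt = x @ w_nt.T               # NT matmul: what the existing schedules expect
y_nn = x @ w_nn                 # NN matmul: what the nn.Module-based model emitted
assert np.allclose(y_nt, y_nn)  # same math, different storage layout
```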

commit 1a79a53
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 29 21:51:36 2023 -0700

    Compile Model Preset without External `config.json` (mlc-ai#1151)

    This PR adds support for compiling a preset of models without
    having to provide a `config.json` on disk using the commands below:

    ```diff
    python -m mlc_chat.cli.compile \
           --quantization q4f16_1 -o /tmp/1.so \
    -       --config /models/Llama-2-7b-chat-hf
    +       --config llama2_7b
    ```

    This allows easier testing and binary distribution without having to
    depend on an external model directory.

commit 0a25374
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 29 21:17:38 2023 -0700

    Migrate Compiler Passes (mlc-ai#1150)

commit 2193767
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 29 16:35:07 2023 -0700

    Enable Mypy and Pylint in mlc_chat Python Package (mlc-ai#1149)

commit c0c3a8d
Author: Xiyou Zhou <xiyou.zhou@gmail.com>
Date:   Sun Oct 29 13:16:46 2023 -0700

    [Slim-LM] Enable Group Quant (mlc-ai#1129)

    * Enable group quant via new interface.

    * Minor fix.

    * Linting.

    * Fix isort.

    * Fix mypy.

    * TE compute working.

    * Skip embed.

    * Support cpu+gpu quantization.

    * Add target option to tests.

    * Linting.
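
As background for the group quantization enabled here, a compact NumPy sketch of symmetric 4-bit group quantization (illustrative only; the group size, rounding, zero point, and bit packing differ from mlc-llm's actual `q4f16_1` scheme):

```python
import numpy as np

def group_quantize(w: np.ndarray, group_size: int = 32):
    """Quantize each group of `group_size` values to 4-bit ints, one scale per group."""
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0   # target int range [-7, 7]
    q = np.clip(np.round(groups / scale), -7, 7).astype(np.int8)
    return q, scale

def group_dequantize(q: np.ndarray, scale: np.ndarray, shape) -> np.ndarray:
    return (q.astype(np.float32) * scale).reshape(shape)

w = np.random.randn(4, 64).astype(np.float32)
q, s = group_quantize(w)
w_hat = group_dequantize(q, s, w.shape)
assert np.abs(w - w_hat).max() <= np.abs(w).max() / 14 + 1e-6   # bounded rounding error
```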

commit 878ae84
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 29 00:19:20 2023 -0700

    Support parameter packing (mlc-ai#1146)

commit 2b6d832
Author: fennecJ <hwahwa649@gmail.com>
Date:   Sun Oct 29 14:59:10 2023 +0800

    Make the help info consistent with program name (mlc-ai#1137)

    When a user runs the command `mlc_chat_cli --help`, the output will be
    something like

    Usage: mlc_chat [--help] ...

    That's because the program name specified in `cli_main.cc` is "mlc_chat".
    It will be less confusing if the help output shows

    Usage: mlc_chat_cli [--help] ...

commit 27ac5ac
Author: DavidSharma <68979667+David-Sharma@users.noreply.github.com>
Date:   Sat Oct 28 20:18:16 2023 -0400

    Updating tvm install docs (mlc-ai#1143)

    Updating the tvm install docs to assist a user in finding and copying zstd.dll to the correct folder.

commit 2ec0cc8
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Sat Oct 28 15:13:48 2023 -0700

    Minor enhancements to `ChatModule` (mlc-ai#1132)

    Some minor enhancements to `ChatModule`: mainly, handle the device parsing solely in `_parse_device_str` instead of handling it both in the member function and in `__init__`, to avoid redundancy; plus some type annotation fixes.

commit 2c492e5
Author: S A G A R <110724849+tmsagarofficial@users.noreply.github.com>
Date:   Sun Oct 29 03:43:15 2023 +0530

    Grammatical and Typographical improvements (mlc-ai#1139)

    * Update faq.rst

    * Update guideline.rst

    * Update compile_models.rst

    * Update distribute_compiled_models.rst

    * Update get-vicuna-weight.rst

    * Update python.rst

    * Update android.rst

    * Update cli.rst

    * Update ios.rst

    * Update javascript.rst

    * Update python.rst

    * Update rest.rst

commit 24f795e
Author: Goutham Tamilselvan <goutham2688@gmail.com>
Date:   Fri Oct 27 03:25:59 2023 -0400

    added details to windows installation (mlc-ai#1133)

    The 32-bit version of the zstd.dll library was causing issues, so the doc was updated to be more specific and point to the 64-bit version.

commit 973f9fc
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Wed Oct 25 10:14:46 2023 -0500

    [ParamManager][Redo] Use BundleModelParams for transform_dequantize (mlc-ai#1127)

    Prior to this commit, `ParamManager.transform_quantize` function took
    as input functions with separate parameters for each weight tensor,
    and produced output functions with a tuple parameter for all weights.
    Because `LiftTransformParams` had the same convention, neither could
    be applied as part of the same build flow.

    This commit updates `ParamManager.transform_quantize` pass to produce
    outputs with separate tensor parameters, using the `BundleModelParams`
    transform to later combine them into a single tuple parameter.  The
    analogous change was also performed for `LiftTransformParams` as part
    of apache/tvm#15657.

    In addition, prior to this commit, the
    `ParamManager.transform_dequantize` function operated directly on an
    `IRModule` object.  As a result, any debug instrumentation
    (e.g. before/after printouts for each pass, before/after verification
    with `relax.analysis.well_formed`, etc.) did not apply to this
    `transform_dequantize`.  This commit updates
    `ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

    This commit is a repeat of the reverted PR
    mlc-ai#1056.  This PR resolves the bug
    in the earlier implementation by removing the call to
    `.without_attr("num_input")` in `ParamReplacer.rewrite_func`.  This
    follows an analogous update in `LiftTransformParams`, preserving the
    `"num_input"` attribute for use in `BundleModelParams`.
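
A plain-Python analogy of the calling-convention change discussed above (conceptual only; the real transformation operates on Relax functions via `BundleModelParams`, not on Python callables): functions are produced with separate per-tensor parameters, and the bundling into a single tuple parameter happens as a later, separate step.

```python
from typing import Callable, Sequence

# Convention A: one function parameter per weight tensor.
def dequantize(w0, w1, w2):
    return [w0 * 0.1, w1 * 0.1, w2 * 0.1]   # stand-in for the real dequantization

# Convention B: a single tuple-like parameter holding all weights.
def bundle_model_params(func: Callable, num_params: int) -> Callable:
    """Wrap a per-tensor-parameter function into a single-bundle-parameter one."""
    def bundled(params: Sequence):
        assert len(params) == num_params
        return func(*params)
    return bundled

dequantize_bundled = bundle_model_params(dequantize, num_params=3)
assert dequantize_bundled([1.0, 2.0, 3.0]) == dequantize(1.0, 2.0, 3.0)
```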

commit a4279e3
Author: Junru Shao <junrushao@apache.org>
Date:   Tue Oct 24 21:05:24 2023 -0700

    Add --opt flag parsing to CLI (mlc-ai#1123)

commit 9166edb
Author: Kartik Khandelwal <kartikkhandelwal1998@gmail.com>
Date:   Tue Oct 24 15:07:23 2023 -0400

    [REST] OpenAI compatible Rest API (mlc-ai#1107)

    * add presence and frequency penalty

    * Added support for passing conversation history in /v1/chat/completions endpoint

    * Added support for RestAPI parameters max_gen_len, n, and stop_str

    * * add presence and frequency penalty to generation config
    * refactor generation config

    * Added documentation for parameters

    * replace lib_path with model_lib_path in rest.py

    * fixed black isort issues

    * fix lib_path

commit 9cb8e8e
Author: Junru Shao <junrushao@apache.org>
Date:   Tue Oct 24 09:04:45 2023 -0700

    Remove inaccurate warning message (mlc-ai#1121)

    This PR removes an inaccurate warning from mlc-ai#1086, which warns about
    `model_lib` overriding regardless of whether or not it's actually
    overridden. With this commit, we only warn if its value is not None.
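
The behavior after this change amounts to guarding the warning on the override actually being set, roughly like the following sketch (not the repo's exact code or message):

```python
import logging

logger = logging.getLogger(__name__)

def maybe_warn_model_lib_override(model_lib) -> None:
    # Warn only when the user actually overrides `model_lib`.
    if model_lib is not None:
        logger.warning("Overriding model_lib to %s; make sure it matches the model.", model_lib)
```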

commit 2aa6809
Author: Junru Shao <junrushao@apache.org>
Date:   Tue Oct 24 09:03:38 2023 -0700

    Revert "[ParamManager] Use BundleModelParams for transform_dequantize" (mlc-ai#1120)

    Revert "[ParamManager] Use BundleModelParams for transform_dequantize (mlc-ai#1056)"

    This reverts commit e5927ce.

    This causes a regression impacting all MLC LLM nightlies as it violates the existing calling convention in MLC Chat runtime. An example: mlc-ai#1060 (comment)

commit 206103b
Author: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Date:   Tue Oct 24 11:54:01 2023 -0400

    [Docs] Add doc for max and mean gen len, shift factor; and buildArgs (mlc-ai#1119)

    * Add doc for max and mean gen len, shift factor

    * Update python docs for BuildArgs

commit 488017d
Author: SingLi <Sing-Li@users.noreply.github.com>
Date:   Tue Oct 24 08:19:31 2023 -0500

    fix mismatched argument name (mlc-ai#1117)

    fix error introduced by recent code changes

    fixes mlc-ai#1116

commit 8ce7793
Author: Git bot <bot@noreply.github.com>
Date:   Tue Oct 24 07:30:53 2023 +0000

    Auto updated submodule references

commit 61179a0
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Oct 23 23:58:01 2023 -0700

    Add CLI commands for compilation (mlc-ai#1109)

commit 5a7dcd8
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Tue Oct 24 00:00:41 2023 -0400

    [WINDOWS] reduce noise in windows build (mlc-ai#1115)

commit 7ae8c6d
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Mon Oct 23 15:33:00 2023 -0700

    [Slim-LM] Introduce HFLoad for loading Pytorch and SafeTensor weights (mlc-ai#1113)

commit e5927ce
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 23 13:31:44 2023 -0500

    [ParamManager] Use BundleModelParams for transform_dequantize (mlc-ai#1056)

    * [ParamManager] Use BundleModelParams for transform_quantize

    Prior to this commit, `ParamManager.transform_quantize` function took
    as input functions with separate parameters for each weight tensor,
    and produced output functions with a tuple parameter for all weights.
    Because `LiftTransformParams` had the same convention, neither could
    be applied as part of the same build flow.

    This commit updates `ParamManager.transform_quantize` pass to produce
    outputs with separate tensor parameters, using the `BundleModelParams`
    transform to later combine them into a single tuple parameter.  The
    analogous change was also performed for `LiftTransformParams` as part
    of apache/tvm#15657.

    In addition, prior to this commit, the
    `ParamManager.transform_dequantize` function operated directly on an
    `IRModule` object.  As a result, any debug instrumentation
    (e.g. before/after printouts for each pass, before/after verification
    with `relax.analysis.well_formed`, etc.) did not apply to this
    `transform_dequantize`.  This commit updates
    `ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

    * Correct type annotation

commit f57c9c9
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 23 13:31:24 2023 -0500

    [Transform] Provide IRModule transform for rewrite_attention (mlc-ai#1052)

    Prior to this commit, `mlc_llm.transform.rewrite_attention` updated a
    single function.  This commit modifies it to instead be a transform
    operating on any pattern matches within an `IRModule`.

commit 16dd2ae
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Sun Oct 22 19:51:10 2023 -0700

    [Slim-LM] Smart path finding for config and weight (mlc-ai#1088)

commit 6159cc4
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 22 02:22:55 2023 -0700

    [CI] Add clang-format (mlc-ai#1103)

commit 46d11e6
Author: Junru Shao <junrushao@apache.org>
Date:   Fri Oct 20 23:39:28 2023 -0700

    Add Basic Pylint and Mypy Tooling (mlc-ai#1100)

    Add pylint/mypy tooling into pyproject.toml

    This PR establishes the initial Python tooling infra with Pylint and
    Mypy. Currently only the newest modules, i.e. `mlc_chat.support` and
    `mlc_chat.compiler` are covered, and we expect to cover the entire
    package, as being tracked in mlc-ai#1101.

commit 03c641a
Author: Junru Shao <junrushao@apache.org>
Date:   Fri Oct 20 21:51:51 2023 -0700

    Enable Python Linter (mlc-ai#1098)

    This PR enables two Python formatters "black" and "isort" on the following directory:
    - `./python/`
    - `./tests/python/`

    Enabling pylint and mypy is left for future work

commit e9b85ce
Author: Junru Shao <junrushao@apache.org>
Date:   Fri Oct 20 21:36:28 2023 -0700

    More formatting (mlc-ai#1099)

commit cf39bf6
Author: Junru Shao <junrushao@apache.org>
Date:   Fri Oct 20 15:38:49 2023 -0700

    [Format] Apply isort and black for `python/` (mlc-ai#1097)

    [Format] Apply isort and black on `python/`

    The commands I am using are:

    ```
    isort --profile black python/
    black python/
    ```

    It is always recommended to format the code before submission, given we
    don't have a linter CI yet.

commit 62d0c03
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Fri Oct 20 15:33:51 2023 -0700

    Disable Disco for q4f16_ft and q8f16_ft quantization (mlc-ai#1094)

commit 9bf5723
Author: Junru Shao <junrushao@apache.org>
Date:   Thu Oct 19 15:49:40 2023 -0700

    Update `benchmark.py` according to mlc-ai#1086 (mlc-ai#1091)

    Update `benchmark.py`

commit 830656f
Author: Varshith Bathini <varshith15@gmail.com>
Date:   Fri Oct 20 00:40:14 2023 +0530

    StreamIterator (mlc-ai#1057)

    Co-authored-by: Varshith <varshith.bathini@sprinklr.com>

commit b0373d1
Author: Rick Zhou <riczhou@linkedin.com>
Date:   Thu Oct 19 12:09:12 2023 -0700

    Support lib_path override in C++. Improvements on docs and error messages (mlc-ai#1086)

    * Support lib_path option in C++ CLI. Disable ChatConfig.model_lib override in Python API. Improvements on helper messages and error messages

    * Update docs

    * Rename lib_path -> model_lib_path

commit 56a8004
Author: Junru Shao <junrushao@apache.org>
Date:   Thu Oct 19 10:37:24 2023 -0700

    Update README.md for Multi-GPU (mlc-ai#1090)

commit 2625945
Author: Junru Shao <junrushao@apache.org>
Date:   Thu Oct 19 08:57:50 2023 -0700

    Establish `mlc_chat.compiler` (mlc-ai#1082)

    This PR establishes the compiler components in MLC-Chat Python API,
    which currently includes two primary components: models and parameters.

    The models are `nn.Module`-based definitions of an LLM; as the very first
    stab, only `LlamaForCasualLM` is included. It is decomposed into
    three files:
    - `llama_config.py`: common configurations for Llama, where we define
      relevant configurations of its architecture, as well as include
      standard config file for Llama2-7B/13B/70B for convenient testing;
    - `llama.py`: the model architecture of Llama, based on the PyTorch-like
    `nn.Module` API;
    - `llama_parameter.py`: defines the mapping between MLC parameters and
      pytorch parameters.

    The parameters contains the basic functionality of parameter mapping,
    and the loaders that effectively convert parameters from PyTorch to MLC
    according to the mapping specified. Currently, only `HFTorchLoader` is
    implemented, but loaders like SafeTensor, GPTQ or AWQ should be quite
    straightforward according to the existing design.

    On top of this PR, on-the-fly quantization could be defined as a loading
    time transformation on MLC parameters, while pre-quantized parameter
    loading is effectively parameter loading after MLC's `nn.Module` is
    quantized.

    Two unit tests exemplify how the infrastructure works:
    - `./tests/python/model/test_llama.py` shows how to create an `nn.Module`
    using the new infra, and then convert it to TVM IRModule;
    - `./tests/python/parameter/hf_torch_loader.py` shows how to load
    parameters from HuggingFace PyTorch format.

    Besides, `mlc_chat.support` is established for utility functions, which
    now contains two utils:
    - `config.py`, which supports reading configurations into dataclasses
    from a JSON file or Python dict. On top of the Python dataclass, it throws
    irrelevant fields into `cls.kwargs`, which is helpful when loading a
    HuggingFace configuration file;
    - `tqdm.py` which contains tqdm-related utilities, primarily redirecting
    logging and printing to work nicely with tqdm.
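
A minimal sketch of the `config.py` idea described above, reading a dict into a dataclass while collecting unknown fields into `kwargs` (class and field names here are illustrative, not the actual `mlc_chat.support.config` API):

```python
import dataclasses
from typing import Any, Dict

@dataclasses.dataclass
class ModelConfig:
    hidden_size: int
    num_layers: int
    kwargs: Dict[str, Any] = dataclasses.field(default_factory=dict)

    @classmethod
    def from_dict(cls, source: Dict[str, Any]) -> "ModelConfig":
        known = {f.name for f in dataclasses.fields(cls)} - {"kwargs"}
        fields = {k: v for k, v in source.items() if k in known}
        extras = {k: v for k, v in source.items() if k not in known}
        return cls(**fields, kwargs=extras)

# Irrelevant HuggingFace fields such as "torch_dtype" land in `kwargs`.
cfg = ModelConfig.from_dict({"hidden_size": 4096, "num_layers": 32, "torch_dtype": "float16"})
assert cfg.kwargs == {"torch_dtype": "float16"}
```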

commit 3aefd9f
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Oct 16 21:16:27 2023 -0700

    [Bugfix] Compilation Error in q4f32_1 (mlc-ai#1078)

    The pass `fuse-split-rotary` assumes the compute dtype is fp16, which
    usually is, but in certain cases, e.g. `q0f32` and `q4f32_1`, the
    compute is based on fp32 instead. This PR strengthens the check guard.

commit 9872c48
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Mon Oct 16 14:56:24 2023 -0400

    [Python] Extract common device str parse function in ChatModule (mlc-ai#1074)

    This PR lifts the device string parsing (just a few lines)
    to a standalone function, so that the serving side
    can make use of this function as well.

    Tested the Python API and it does not seem to incur a regression.

commit d202077
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 16 08:06:26 2023 -0500

    [ParamManager] Added progress bar for get_item/set_item (mlc-ai#1063)

commit 204860b
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Sun Oct 15 14:02:12 2023 -0400

    [Fix] ChatModule incorrect temperature buffer shape (mlc-ai#1070)

    PR mlc-ai#1048 updated the signature of softmax in the built model library
    and changed the temperature buffer shape in ChatModule. This caused
    some existing demos to fail to run, since we did not do a round of model
    library updates.

    This PR reverts the ChatModule change, and adds back the softmax
    function in non-batching case. With this PR, the regression should
    be fixed.

commit b0bfc88
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 15 00:24:24 2023 -0700

    Add links to Python API Reference (mlc-ai#1068)

commit 9010d48
Author: Jeethu Rao <jeethu@jeethurao.com>
Date:   Sun Oct 15 06:42:24 2023 +0100

    Minor typo fix (mlc-ai#1064)

commit 8184431
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Sat Oct 14 00:33:15 2023 -0500

    [ParamManager] Cleanup creation of quantization IRModule (mlc-ai#1053)

    This commit replaces the single-parameter
    `relax_model.param_manager.create_quantize_func` function with a
    method on the `ParamManager`, `create_parameter_transformation`.  This
    avoids potential typos between `param_manager` as the imported Python
    module `mlc_llm.relax_model.param_manager` and an instance of the
    `ParamManager` class named `param_manager`, and makes the
    functionality easier to find.

    This function also takes an optional `optimize_parameter_order` flag,
    defaulting to `True`, which applies the `ReorderTransformFunc` pass.
    Since the `ReorderTransformFunc` is intended to be used with several
    configuration objects owned by `ParamManager`, this simplifies the
    common path of producing an optimally-ordered parameter transformation
    module.

commit 481cd92
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Sat Oct 14 00:32:36 2023 -0500

    [Core] Remove duplication in MODEL.get_model calls (mlc-ai#1054)

    This commit removes the `if`/`elif` chain in `core.py`, where the body
    of each conditional assigns the same `mod, param_manager, params,
    model_config`, and is identical except for the choice of model being
    built.
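
One common way to remove such an if/elif chain is table dispatch. A small generic sketch of the idea (illustrative only; the actual registry, names, and signatures in `core.py` may differ):

```python
from typing import Callable, Dict

def get_llama(args, config):    return f"llama({config})"     # stand-ins for the real
def get_gpt_neox(args, config): return f"gpt_neox({config})"  # relax_model constructors

MODEL_BUILDERS: Dict[str, Callable] = {
    "llama": get_llama,
    "gpt_neox": get_gpt_neox,
}

def build_model(model_family: str, args=None, config=None):
    try:
        return MODEL_BUILDERS[model_family](args, config)
    except KeyError as err:
        raise ValueError(f"Unsupported model family: {model_family}") from err

assert build_model("llama", config="7b") == "llama(7b)"
```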

commit c2b8cbc
Author: Jeethu Rao <jeethu@jeethurao.com>
Date:   Sat Oct 14 06:32:05 2023 +0100

    Fix Stable LM 3B build (mlc-ai#1061)

    * [stablelm 3b] Rename dynamic vocab size from "v" to "vocab_size"

    * Add get_num_key_value_heads method to StableLM3bConfig

commit d854105
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Fri Oct 13 20:45:58 2023 -0400

    [Model] Initial batching support for Llama (mlc-ai#1048)

    This PR introduces the initial batched input support for llama
    models. To make the code manageable, we keep both the single-sequence
    handling flow and the batching handling flow in the Llama modeling.

    Now, with `--enable-batching` as a build argument, we build Llama
    for the batched version.

    NOTE: The paged attention kernel/TIR func are not included in this PR,
    so currently the built library with batching enabled is not runnable.
    We will follow up with the attention kernel in the future.

    This PR guarantees that the existing single-sequence inference (Python
    API, CLI, etc.) is not broken.

    P.S. The batching flow is subject to bug fixes as we integrate with
    the attention function and run the e2e flow in the future.

commit edab9b5
Author: Junru Shao <junrushao@apache.org>
Date:   Fri Oct 13 09:57:46 2023 -0700

    [Doc] Use -U instead of --force-reinstall (mlc-ai#1062)

    `--force-reinstall` will reinstall all dependencies of a Python package,
    which is unnecessary. `-U` is a better choice in this case.

commit ca8c11b
Author: Sunghyun Park <sunggg@umich.edu>
Date:   Fri Oct 13 09:00:21 2023 -0700

    [BugFix] Set the right `max_sequence_length` for both Llama-1 and Llama-2 families (mlc-ai#1032)

    * fix

    * reflect feedback

    ---------

    Co-authored-by: Sunghyun <sunggg@umich.com>

commit bfaa5b9
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Thu Oct 12 17:40:54 2023 -0400

    Revert "[Transform] Apply split_rotary optimization on prefill (mlc-ai#1033)" (mlc-ai#1058)

    This reverts commit b9179cf as elaborated here mlc-ai#1033 (comment)

commit 98ebd28
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Thu Oct 12 13:24:10 2023 -0700

    [Docs] Add `mlc.ai/package` to `DEPENDENCY INSTALLATION` group (mlc-ai#1055)

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>

commit b9179cf
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Thu Oct 12 12:15:44 2023 -0500

    [Transform] Apply split_rotary optimization on prefill (mlc-ai#1033)

    * [Transform] Apply split_rotary optimization on prefill

    Prior to this commit, the `transform.fuse_split_rotary_embedding`
    function was only applicable to the `decode` function of a Llama-type
    model.  This was due to the sequence length being restricted to one,
    both in the pattern-match rule and in the `split_rotary` function, and
    the function being restricted to operate only on the `decode`
    function.

    This commit updates the `transform.fuse_split_rotary_embedding` pass
    to be a `tvm.ir.transform.Pass`, operating on all applicable matches
    in the `IRModule`.  The `split_rotary` function is now produced as a
    fully-generic function, with static parameters substituted in
    afterwards.  At this stage, the sequence length is retained as a
    dynamic parameter, such that it can be used by the `prefill` function.

    * Avoid multiple kernel launches for split_rotary

commit 1e6fb11
Author: Denise Kutnick <boca.denise@gmail.com>
Date:   Wed Oct 11 00:06:46 2023 -0700

    add verbose stats to mlc-chat REST API (mlc-ai#1049)

    * add verbose stats to mlc-chat REST API

    * update docs

commit 20131fb
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Oct 9 16:53:56 2023 -0700

    Update README.md (mlc-ai#1045)

    Update README.md

commit bdd9d9b
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Mon Oct 9 19:08:14 2023 -0400

    [CPP] Separate common utils out from llm_chat.cc (mlc-ai#1044)

    This PR separates out the tokenizer creation function, the
    random number generator out from `llm_chat.cc` as a preparation
    step for batching inference support, since these functions/modules
    are also used in the same way in batching inference.

commit a58605f
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Oct 9 15:05:34 2023 -0700

    Update README.md

commit a032d40
Author: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Date:   Mon Oct 9 18:03:24 2023 -0400

    [Docs] Iterate model prebuilts docs (mlc-ai#1043)

    * Iterate model prebuilts docs

    * small fix

commit 85001ed
Author: Jeethu Rao <jeethu@jeethurao.com>
Date:   Mon Oct 9 20:40:52 2023 +0100

    Support for the Stable LM 3B model (mlc-ai#1008)

    Support for the stablelm-3b-4e1t model

commit c02fdaf
Author: yongjer <54315206+yongjer@users.noreply.github.com>
Date:   Tue Oct 10 00:58:51 2023 +0800

    Update compile_models.rst (mlc-ai#1038)

    fix permission issue

commit bed9e60
Author: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Date:   Mon Oct 9 12:58:36 2023 -0400

    [Docs] Model prebuilts tracking page revamp (mlc-ai#1000)

commit 3a9849a
Author: Bohan Hou <bohanhou@andrew.cmu.edu>
Date:   Mon Oct 9 12:27:58 2023 -0400

    [Android] Add Llama2 q4f16_0 (mlc-ai#1041)

    llama2 q4f160

commit b44f679
Author: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Date:   Mon Oct 9 11:35:58 2023 -0400

    Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs (mlc-ai#1040)

    Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs, build model

commit bae37b3
Author: Yaxing Cai <caiyaxing666@gmail.com>
Date:   Sun Oct 8 16:55:16 2023 -0700

    [Android] Use `AlertDialog` instead of `Toast` (mlc-ai#1039)

commit 6e40c21
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Sat Oct 7 22:07:09 2023 -0500

    [Build] Added --pdb flag to build.py, drop into pdb on error (mlc-ai#1017)

    This commit adds an optional `--pdb` flag to the `build.py` script. If
    passed, any exception raised that would otherwise terminate the script
    will first enter a pdb post-mortem, allowing the error to be
    inspected.

commit ad3a6b9
Author: Roee Shenberg <shenberg@gmail.com>
Date:   Sun Oct 8 04:59:15 2023 +0200

    Fix two bugs in kv-cache backtrack loop (mlc-ai#856)

    Fix two bugs in kv-cache pop loop

    Bug 1: old code would stop early because output_ids was shortened in-place during the loop

    Bug 2: off-by-one in backoff size due to break

commit 898db76
Author: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
Date:   Sun Oct 8 03:36:19 2023 +0100

    [API] Add GenerationConfig (mlc-ai#1024)
masahi added a commit to octoml/mlc-llm that referenced this pull request Nov 7, 2023
* Merge Eric PR

commit 44f4cbdfed7941e7ed060d74b23d870d026a57c9
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Fri Oct 20 13:41:40 2023 +0000

    Support execution using pre-sharded weights

commit 35644870e2daf829c29ebb5431d357ceaa4e2793
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Fri Oct 13 20:35:39 2023 +0000

    Support writing of pre-sharded weights

commit 97572226d331ebd9ef49e4a2c1dad42344d09bac
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Fri Oct 13 20:55:34 2023 +0000

    Extract ParamManager.create_parameter_transformation call from convert_weights

commit e1d3217f7b0c87c49ba3721567cb24df436618a3
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Fri Oct 13 22:39:00 2023 +0000

    Extract a ParamManager.optimize_transform_param_order method

commit b2a9e1c7e83c0886e3a0ebed02f4c8416dfbfb5f
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Fri Oct 13 18:30:59 2023 +0000

    [MultiGPU] Cleanup create_shard_info_func

    - De-duplicate the `if param.shard_strategy == foo` if/else chain

    - Return a `tvm.IRModule` instead of modifying an existing module

commit f67d47a57fbca21df48867a9dfc10e430a3a3b04
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Mon Oct 16 16:59:43 2023 +0000

    [Bugfix] Include LegalizeOps in utils.convert_weights

    Prior to this commit, `utils.convert_weights` assumes that the
    parameter transformation module is already legalized, and uses no
    relax operations that require legalization.  This commit adds a call
    to `relax.transform.LegalizeOps` to remove this assumption.

commit a98f9cf45a4cc7a2412f68a4f4306d97e7239a13
Author: Eric Lunderberg <elunderberg@octoml.ai>
Date:   Mon Oct 16 16:49:44 2023 +0000

    [Bugfix] Correct input shape for shard info function

    Prior to this commit, the sharding functions sharded axis converted
    from `orig_size * num_shards` to `orig_size // num_shards`.  This
    commit updates the sharding functions to instead convert from
    `orig_size` to `orig_size // num_shards`.

commit 4042626
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Mon Nov 6 15:43:21 2023 -0800

    [Slim-LM] Enable loading from AWQ pre-quantized weight. (mlc-ai#1114)

    * [SLM] Enable loading from AWQ pre-quantized weight.

    * remove awq_loader.py

    * Update to the latest commit

    * Delete llama_parameter.py

    * update unittest

    * fix lint

    * upd

    * add Llama-2-7B-AWQ

commit 9869ca6
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Nov 6 16:03:12 2023 -0600

    Fix Python3.8 compatibility breakage (mlc-ai#1210)

    The breakage was resulting from newer syntax being used for type
    annotations, as part of mlc-ai#592.
    So long as `mlc_chat.interface.openai_api` wasn't imported, the
    breaking changes were not encountered.  In
    mlc-ai#1107, the addition of `from
    .interface.openai_api import ChatMessage` caused this module to be
    imported, breaking compatibility of `mlc_chat.ChatModule` with
    Python3.8.

    This commit updates the type annotations to the supported syntax.

commit e00220c
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Nov 6 13:04:36 2023 -0800

    Detect `mtriple` via LLVM (mlc-ai#1211)

commit e2c99a8
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Nov 6 12:01:51 2023 -0800

    [Fix] Keep update-to-date with upstream API change (mlc-ai#1209)

commit a7f1183
Author: Git bot <bot@noreply.github.com>
Date:   Mon Nov 6 18:53:07 2023 +0000

    Auto updated submodule references

commit 51d6f9c
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Nov 6 09:28:57 2023 -0800

    Try fix macOS build with picojson again (mlc-ai#1207)

    Try fix macOS build with picojson

commit 01d4339
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Nov 6 09:08:58 2023 -0800

    Try fix macOS build with picojson (mlc-ai#1206)

    The error message below

    ```
    /usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h: In member function 'std::string picojson::value::to_str() const':
    /usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:494:37: error: expected ')' before 'PRId64'
      494 |       SNPRINTF(buf, sizeof(buf), "%" PRId64, u_.int64_);
          |               ~                     ^~~~~~~
          |                                     )
    /usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:81:1: note: 'PRId64' is defined in header '<cinttypes>'; did you forget to '#include <cinttypes>'?
       80 | #include <errno.h>
      +++ |+#include <cinttypes>
       81 | #include <inttypes.h>

    ```

    indicates that the `__STDC_FORMAT_MACROS` flag is not turned on for some
    reason.

commit 65478c8
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Nov 5 19:52:53 2023 -0800

    [Fix] Remove Redundant Warnings (mlc-ai#1204)

    PR mlc-ai#1203 introduces some unnecessary and redundant logging messages.
    This PR gets them removed.

commit 7ccb51a
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Nov 5 18:33:49 2023 -0800

    Integrating MLC runtime with the new compilation workflow (mlc-ai#1203)

commit 3413d17
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Nov 5 12:03:33 2023 -0800

    [Fix] Use `fabs` as floating point abs function in C++ (mlc-ai#1202)

commit 145a984
Author: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
Date:   Sun Nov 5 06:18:47 2023 +0000

    [API] ```llm-vscode``` extension support (mlc-ai#1198)

    This PR enables ```llm-vscode``` extension API support for copilot-like code completion, following [HF's LSP](https://github.com/huggingface/llm-ls). Fully compatible with ```CodeLlama``` and ```starcoder``` on mlc-llm.

    - huggingface/llm-vscode#103 enhances extension user experience when used with mlc-llm rest api.

    Thanks @ pacman100, who came up with this on his latest blogpost: https://huggingface.co/blog/personal-copilot

commit 0e08845
Author: Animesh Bohara <ani.bohara@gmail.com>
Date:   Sun Nov 5 01:01:26 2023 -0400

    [RestAPI] Added docs (mlc-ai#1193)

    Add docs for RestAPI

    Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

commit 3417505
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 19:44:25 2023 -0700

    Support overriding `--max-sequence-length` in command line (mlc-ai#1197)

commit 5d63f7e
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 15:42:19 2023 -0700

    [Docs] Clarify zstd installation on Windows (mlc-ai#1196)

    Update zstd installation

commit 78424f0
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 02:13:21 2023 -0700

    [Docs] Clarify zstd installation on Windows (mlc-ai#1191)

commit 4832c2f
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 01:58:55 2023 -0700

    Add CodeLlama as part of model presets (mlc-ai#1190)

commit 5d1dc34
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 01:46:19 2023 -0700

    Merge llama_config.py into llama_model.py (mlc-ai#1189)

commit 9d20575
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 01:30:15 2023 -0700

    Merge `llama_config.CONFIG` into `MODEL_PRESETS` (mlc-ai#1188)

commit 4716704
Author: Junru Shao <junrushao@apache.org>
Date:   Sat Nov 4 01:20:43 2023 -0700

    Add Python API for Weight Conversion (mlc-ai#1182)

    This PR primarily does a major refactoring to introduce Python API that
    is consistent with the CLI API. Besides, it includes the following
    fixes and enhancements:

    - More info provided to `isort` for better formatting in `pyproject.toml`;
    - Print out the default value of all arguments in argparse command line;
    - Ensure `--device` is always available locally when doing weight
      conversion;
    - Add argument echoing in weight conversion to be consistent with its
      counterpart in compilation;
    - Add a consistency checker to make sure the shapes/dtypes of all
      tensors from weight conversion is consistent with compilation;
    - Echo the total size of parameters;
    - Better logging of each parameter's shape and dtype, and either or not
      its quantized;
    - More structure robustification, renaming `parameter/` to `loader/` to
      be more explicit about its intention;
    - Inline and remove `ParamQuantizer` into the loader to improve logging
      and the logic flow;
    - Always add instructions "Use `--xxx` to override" for any options that
      are auto detected to be more informative to end users;
    - Fix wrong shape calculation when quantizing `nn.Embedding`;
    - Fix wrong dtype calculation in group quantization when the input dtype
      is different from model dtype (e.g. "float32" in torch, but the model
      dtype in quantization is fp16 in `q4f16_1`);
    - Fix inconsistent param names in layers such as `GroupQuantizeLinear`;
    - Fix dtype inconsistency when a parameter is not quantized;
    - Fix existing unittests.

commit 6ae02dd
Author: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Date:   Fri Nov 3 15:34:29 2023 -0400

    [Model Support][SWA] Add support for sliding window attention for Mistral (mlc-ai#1087)

    * mistral base

    * Add sliding window mask making and its tests

    * Small changes for sliding window mask

    * Clean up mask making

    * Remove kv_seq_len

    * Add prefill chunking, handle max window size in SWA

    * Add interleave kv

    * Temporary fix for kv seq len

    * Pass in more shapes to SWA prefill and decode in runtime

    * mistral var fix

    * Small changes regarding shape passing

    * Small fix on chunk size

    * Add build args, fix mlc chat config dump

    * mistral system prompt
    ---------

    Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>
    Co-authored-by: David Pissarra <61968959+davidpissarra@users.noreply.github.com>

commit 2dc8183
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Fri Nov 3 00:36:52 2023 -0700

    [Fix][SLM] Update q4f16 quantization with the new mutator name rule (mlc-ai#1178)

    [Fix] Update q4f16 quantization with the new mutator name rule

commit 53060af
Author: Xiyou Zhou <xiyou.zhou@gmail.com>
Date:   Thu Nov 2 13:08:11 2023 -0700

    [SLM][AutoLLM] Enable Command Line Weight Conversion (mlc-ai#1170)

    This PR enables weight conversion in command line.
    Sample command: `python3 -m mlc_chat.cli.convert_weight --config dist/models/llama-2-13b-chat-hf/ --quantization "q4f16_1" --output dist/test/`

commit 2ca7d15
Author: Junru Shao <junrushao@apache.org>
Date:   Thu Nov 2 11:30:28 2023 -0700

    [Fix] TIR block name of dequantization (mlc-ai#1177)

commit 1757777
Author: Yaxing Cai <caiyaxing666@gmail.com>
Date:   Wed Nov 1 15:52:19 2023 -0700

    [SLM] Fix group quantization (mlc-ai#1172)

    This PR fixes the group quantization and add related unit tests.

commit 9831135
Author: Animesh Bohara <ani.bohara@gmail.com>
Date:   Wed Nov 1 15:16:09 2023 -0400

    Fix Android app Permission denied error on Android 10  (mlc-ai#1175)

    Use scoped storage instead of Downloads directory

    Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

commit 200653a
Author: Git bot <bot@noreply.github.com>
Date:   Wed Nov 1 14:53:54 2023 +0000

    Auto updated submodule references

commit f5b2e88
Author: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com>
Date:   Wed Nov 1 12:23:40 2023 +0800

    Fix RWKV Support (mlc-ai#1136)

    I successfully ran the rwkv-world-3b fp16 model on my Xiaomi phone. This PR is to fix a bug on the main branch where the rwkv model outputs only one word and then stop.

    ![image](https://github.com/mlc-ai/mlc-llm/assets/35585791/6514d6ef-c93c-4ad2-8e76-8ffa0663080f)

commit e0cd3f6
Author: Junru Shao <junrushao@apache.org>
Date:   Tue Oct 31 12:56:28 2023 -0700

    [Bugfix] Cannot find global function `mlc.llm_chat_create` (mlc-ai#1167)

commit 02d1e57
Author: Junru Shao <junrushao@apache.org>
Date:   Tue Oct 31 12:43:17 2023 -0700

    Support CUDA Multi-Arch Compilation (mlc-ai#1166)

commit 8438b27
Author: Junru Shao <junrushao@apache.org>
Date:   Tue Oct 31 12:09:16 2023 -0700

    Misc Cleanups of Compilation Pipeline (mlc-ai#1165)

commit b5bfa5b
Author: Yaxing Cai <caiyaxing666@gmail.com>
Date:   Tue Oct 31 11:39:44 2023 -0700

    Enable group quant transform with nn.Module (mlc-ai#1154)

    * Enable group quant transform with nn.Module

    This PR completes the group quantization support for `nn.Module` based model.

    * remove deprecated tests

    * Update

    * wip

    * remove deprecated test

    * fix lint

    * fix lint

    * fix lint

    ---------

    Co-authored-by: Junru Shao <junrushao@apache.org>

commit 9076d01
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Mon Oct 30 22:40:53 2023 -0700

    [Rest] Document emoji handling (mlc-ai#1160)

    Followup PR of mlc-ai#1142 to document the emoji handling.

commit 425a2cb
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Oct 30 15:00:24 2023 -0700

    [Fix][REST] Use lowered-cased "app" (mlc-ai#1159)

commit 0a9d6c7
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 30 14:44:44 2023 -0500

    [Utils] Remove conversion to numpy array in utils.save_params (mlc-ai#1083)

    Prior to this commit, each parameter was converted to a numpy-owned
    array as part of a total size computation.  This commit computes the
    size directly, removing the conversion.

commit 3cf5605
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 30 14:43:51 2023 -0500

    [Utility] Check for isinstance(exc, Exception) before entering pdb (mlc-ai#1095)

    This is a follow-up to mlc-ai#1017, which added a `--pdb` flag to enter a
    debugger on exit.  This commit checks the type of the raised
    exception, and only enters the debugger if it is a subclass of
    `Exception`.  This ensures that implementation-details, such as a
    thrown `SystemExit` or `KeyboardInterrupt`, do not cause an erroneous
    entry to pdb.

commit 8ca0176
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Mon Oct 30 12:26:58 2023 -0700

    [Rest] Fix emoji handling in Rest API. (mlc-ai#1142)

commit b190578
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 30 13:58:18 2023 -0500

    Apply rewrite for normal attention and MQA (mlc-ai#1138)

    Fixes a bug introduced in mlc-ai#1052,
    where use of the `--use-flash-attn-mqa` flag on a model that doesn't
    use MQA would prevent the use of CUTLASS attention at all.

commit ece97b1
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 30 13:58:08 2023 -0500

    [Transform][Redo] Apply split_rotary optimization on prefill (mlc-ai#1125)

    Prior to this commit, the `transform.fuse_split_rotary_embedding`
    function was only applicable to the `decode` function of a Llama-type
    model.  This was due to the sequence length being restricted to one,
    both in the pattern-match rule and in the `split_rotary` function, and
    the function being restricted to operate only on the `decode`
    function.

    This commit updates the `transform.fuse_split_rotary_embedding` pass
    to be a `tvm.ir.transform.Pass`, operating on all applicable matched
    in the `IRModule`.  The `split_rotary` function is now produced as a
    fully-generic function, with static parameters substituted in
    afterwards.  At this stage, the sequence length is retained as a
    dynamic parameter, such that it can be used by the `prefill` function.

    This commit reapplies the reverted commit
    mlc-ai#1033.  The error in the
    previous implementation was in the definition of
    `rotary_embedding_offset`, which provided the `query_sequence_length`
    instead of `kv_sequence_length`.  This was able to pass the validity
    tests described
    [here](mlc-ai#1058 (comment)),
    as these two sequence lengths are identical for the first call.

commit fee2cb5
Author: masahi <masahi129@gmail.com>
Date:   Tue Oct 31 01:32:06 2023 +0900

    Add batched Llama model definition using vLLM paged attention (mlc-ai#1134)

    * Add batched Llama model with vllm paged attention

    * update core.py

    * doc

    * minor

    * add e2e test

    * mv file

    * clean

    * Check if TVM has been built with USE_VLLM

    * update BuildArgs docstring

commit ba67835
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 29 23:54:12 2023 -0700

    Update attention layer (mlc-ai#1153)

    Existing dlight optimization only works for NT matmul, but not NN. As a
    result, the new `nn.Module`-based implementation, which uses NN matmul,
    fails compilation at HEAD for now. This PR fixes this issue by tweaking
    `k` to the preferred layout.

    The following commands now work with the new compilation pipeline:

    ```bash
    python -m mlc_chat.cli.compile --config llama2_7b  --quantization q4f16_1 -o /tmp/1.so
    python -m mlc_chat.cli.compile --config llama2_13b --quantization q4f16_1 -o /tmp/1.so
    python -m mlc_chat.cli.compile --config llama2_70b --quantization q4f16_1 -o /tmp/1.so
    ```

    Note that the quantization algorithm per se, `q4f16_1`, has not been
    implemented yet, meaning this code path is not yet ready for use so far.

commit 1a79a53
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 29 21:51:36 2023 -0700

    Compile Model Preset without External `config.json` (mlc-ai#1151)

    This PR adds support for compiling a preset of models without
    having to provide a `config.json` on disk using the commands below:

    ```diff
    python -m mlc_chat.cli.compile \
           --quantization q4f16_1 -o /tmp/1.so \
    -       --config /models/Llama-2-7b-chat-hf
    +       --config llama2_7b
    ```

    This allows easier testing and binary distribution without having to
    depend on external model directory.

commit 0a25374
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 29 21:17:38 2023 -0700

    Migrate Compiler Passes (mlc-ai#1150)

commit 2193767
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 29 16:35:07 2023 -0700

    Enable Mypy and Pylint in mlc_chat Python Package (mlc-ai#1149)

commit c0c3a8d
Author: Xiyou Zhou <xiyou.zhou@gmail.com>
Date:   Sun Oct 29 13:16:46 2023 -0700

    [Slim-LM] Enable Group Quant (mlc-ai#1129)

    * Enable group quant via new interface.

    * Minor fix.

    * Linting.

    * Fix isort.

    * Fix mypy.

    * TE compute working.

    * Skip embed.

    * Support cpu+gpu quantization.

    * Add target option to tests.

    * Linting.

commit 878ae84
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 29 00:19:20 2023 -0700

    Support parameter packing (mlc-ai#1146)

commit 2b6d832
Author: fennecJ <hwahwa649@gmail.com>
Date:   Sun Oct 29 14:59:10 2023 +0800

    Make the help info consistent with program name (mlc-ai#1137)

    When a user runs the command `mlc_chat_cli --help`, the output will be
    something like

    Usage: mlc_chat [--help] ...

    That's because the program name specified in `cli_main.cc` is "mlc_chat".
    It will be less confusing if the help output shows

    Usage: mlc_chat_cli [--help] ...

commit 27ac5ac
Author: DavidSharma <68979667+David-Sharma@users.noreply.github.com>
Date:   Sat Oct 28 20:18:16 2023 -0400

    Updating tvm install docs (mlc-ai#1143)

    Updating the tvm install docs to assist a user in finding and copying zstd.dll to the correct folder.

commit 2ec0cc8
Author: Yuchen Jin <yuchenj@cs.washington.edu>
Date:   Sat Oct 28 15:13:48 2023 -0700

    Minor enhancements to `ChatModule` (mlc-ai#1132)

    Some minor enhancements to `ChatModule`: mainly, handle device parsing solely in `_parse_device_str` instead of in both the member function and `__init__` to avoid redundancy, plus some type annotation fixes.

commit 2c492e5
Author: S A G A R <110724849+tmsagarofficial@users.noreply.github.com>
Date:   Sun Oct 29 03:43:15 2023 +0530

    Grammatical and Typographical improvements (mlc-ai#1139)

    * Update faq.rst

    * Update guideline.rst

    * Update compile_models.rst

    * Update distribute_compiled_models.rst

    * Update get-vicuna-weight.rst

    * Update python.rst

    * Update android.rst

    * Update cli.rst

    * Update ios.rst

    * Update javascript.rst

    * Update python.rst

    * Update rest.rst

commit 24f795e
Author: Goutham Tamilselvan <goutham2688@gmail.com>
Date:   Fri Oct 27 03:25:59 2023 -0400

    added details to windows installation (mlc-ai#1133)

    The 32-bit version of the zstd.dll library was causing issues, so the doc was updated to be more specific and point to the 64-bit version.

commit 973f9fc
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Wed Oct 25 10:14:46 2023 -0500

    [ParamManager][Redo] Use BundleModelParams for transform_dequantize (mlc-ai#1127)

    Prior to this commit, the `ParamManager.transform_quantize` function took
    as input functions with separate parameters for each weight tensor,
    and produced output functions with a tuple parameter for all weights.
    Because `LiftTransformParams` had the same convention, neither could
    be applied as part of the same build flow.

    This commit updates the `ParamManager.transform_quantize` pass to produce
    outputs with separate tensor parameters, using the `BundleModelParams`
    transform to later combine them into a single tuple parameter.  The
    analogous change was also performed for `LiftTransformParams` as part
    of apache/tvm#15657.

    In addition, prior to this commit, the
    `ParamManager.transform_dequantize` function operated directly on an
    `IRModule` object.  As a result, any debug instrumentation
    (e.g. before/after printouts for each pass, before/after verification
    with `relax.analysis.well_formed`, etc.) did not apply to this
    `transform_dequantize`.  This commit updates
    `ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

    This commit is a repeat of the reverted PR
    mlc-ai#1056.  This PR resolves the bug
    in the earlier implementation by removing the call to
    `.without_attr("num_input")` in `ParamReplacer.rewrite_func`.  This
    follows an analogous update in `LiftTransformParams`, preserving the
    `"num_input"` attribute for use in `BundleModelParams`.

commit a4279e3
Author: Junru Shao <junrushao@apache.org>
Date:   Tue Oct 24 21:05:24 2023 -0700

    Add --opt flag parsing to CLI (mlc-ai#1123)

commit 9166edb
Author: Kartik Khandelwal <kartikkhandelwal1998@gmail.com>
Date:   Tue Oct 24 15:07:23 2023 -0400

    [REST] OpenAI compatible Rest API (mlc-ai#1107)

    * add presence and frequency penalty

    * Added support for passing conversation history in /v1/chat/completions endpoint

    * Added support for RestAPI parameters max_gen_len, n, and stop_str

    * * add presence and frequency penalty to generation config
    * refactor generation config

    * Added documentation for parameters

    * replace lib_path with model_lib_path in rest.py

    * fixed black isort issues

    * fix lib_path

commit 9cb8e8e
Author: Junru Shao <junrushao@apache.org>
Date:   Tue Oct 24 09:04:45 2023 -0700

    Remove inaccurate warning message (mlc-ai#1121)

    This PR removes an inaccurate warning from mlc-ai#1086, which warns about
    `model_lib` overriding regardless of whether or not it's actually
    overridden. With this commit, we only warn if its value is not None.

commit 2aa6809
Author: Junru Shao <junrushao@apache.org>
Date:   Tue Oct 24 09:03:38 2023 -0700

    Revert "[ParamManager] Use BundleModelParams for transform_dequantize" (mlc-ai#1120)

    Revert "[ParamManager] Use BundleModelParams for transform_dequantize (mlc-ai#1056)"

    This reverts commit e5927ce.

    This causes a regression impacting all MLC LLM nightlies as it violates the existing calling convention in MLC Chat runtime. An example: mlc-ai#1060 (comment)

commit 206103b
Author: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Date:   Tue Oct 24 11:54:01 2023 -0400

    [Docs] Add doc for max and mean gen len, shift factor; and buildArgs (mlc-ai#1119)

    * Add doc for max and mean gen len, shift factor

    * Update python docs for BuildArgs

commit 488017d
Author: SingLi <Sing-Li@users.noreply.github.com>
Date:   Tue Oct 24 08:19:31 2023 -0500

    fix mismatched argument name (mlc-ai#1117)

    fix error introduced by recent code changes

    fixes mlc-ai#1116

commit 8ce7793
Author: Git bot <bot@noreply.github.com>
Date:   Tue Oct 24 07:30:53 2023 +0000

    Auto updated submodule references

commit 61179a0
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Oct 23 23:58:01 2023 -0700

    Add CLI commands for compilation (mlc-ai#1109)

commit 5a7dcd8
Author: Tianqi Chen <tqchen@users.noreply.github.com>
Date:   Tue Oct 24 00:00:41 2023 -0400

    [WINDOWS] reduce noise in windows build (mlc-ai#1115)

commit 7ae8c6d
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Mon Oct 23 15:33:00 2023 -0700

    [Slim-LM] Introduce HFLoad for loading Pytorch and SafeTensor weights (mlc-ai#1113)

commit e5927ce
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 23 13:31:44 2023 -0500

    [ParamManager] Use BundleModelParams for transform_dequantize (mlc-ai#1056)

    * [ParamManager] Use BundleModelParams for transform_quantize

    Prior to this commit, the `ParamManager.transform_quantize` function took
    as input functions with separate parameters for each weight tensor,
    and produced output functions with a tuple parameter for all weights.
    Because `LiftTransformParams` had the same convention, neither could
    be applied as part of the same build flow.

    This commit updates the `ParamManager.transform_quantize` pass to produce
    outputs with separate tensor parameters, using the `BundleModelParams`
    transform to later combine them into a single tuple parameter.  The
    analogous change was also performed for `LiftTransformParams` as part
    of apache/tvm#15657.

    In addition, prior to this commit, the
    `ParamManager.transform_dequantize` function operated directly on an
    `IRModule` object.  As a result, any debug instrumentation
    (e.g. before/after printouts for each pass, before/after verification
    with `relax.analysis.well_formed`, etc.) did not apply to this
    `transform_dequantize`.  This commit updates
    `ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

    * Correct type annotation

commit f57c9c9
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 23 13:31:24 2023 -0500

    [Transform] Provide IRModule transform for rewrite_attention (mlc-ai#1052)

    Prior to this commit, `mlc_llm.transform.rewrite_attention` updated a
    single function.  This commit modifies it to instead be a transform
    operating on any pattern matches within an `IRModule`.

commit 16dd2ae
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Sun Oct 22 19:51:10 2023 -0700

    [Slim-LM] Smart path finding for config and weight (mlc-ai#1088)

commit 6159cc4
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 22 02:22:55 2023 -0700

    [CI] Add clang-format (mlc-ai#1103)

commit 46d11e6
Author: Junru Shao <junrushao@apache.org>
Date:   Fri Oct 20 23:39:28 2023 -0700

    Add Basic Pylint and Mypy Tooling (mlc-ai#1100)

    Add pylint/mypy tooling into pyproject.toml

    This PR establishes the initial Python tooling infra with Pylint and
    Mypy. Currently only the newest modules, i.e. `mlc_chat.support` and
    `mlc_chat.compiler` are covered, and we expect to cover the entire
    package, as being tracked in mlc-ai#1101.

commit 03c641a
Author: Junru Shao <junrushao@apache.org>
Date:   Fri Oct 20 21:51:51 2023 -0700

    Enable Python Linter (mlc-ai#1098)

    This PR enables two Python formatters, "black" and "isort", on the following directories:
    - `./python/`
    - `./tests/python/`

    Enabling pylint and mypy is left for future work

commit e9b85ce
Author: Junru Shao <junrushao@apache.org>
Date:   Fri Oct 20 21:36:28 2023 -0700

    More formatting (mlc-ai#1099)

commit cf39bf6
Author: Junru Shao <junrushao@apache.org>
Date:   Fri Oct 20 15:38:49 2023 -0700

    [Format] Apply isort and black for `python/` (mlc-ai#1097)

    [Format] Apply isort and black on `python/`

    The commands I am using are:

    ```
    isort --profile black python/
    black python/
    ```

    It is always recommended to format the code before submission, given we
    don't have a linter CI yet.

commit 62d0c03
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Fri Oct 20 15:33:51 2023 -0700

    Disable Disco for q4f16_ft and q8f16_ft quantization (mlc-ai#1094)

commit 9bf5723
Author: Junru Shao <junrushao@apache.org>
Date:   Thu Oct 19 15:49:40 2023 -0700

    Update `benchmark.py` according to mlc-ai#1086 (mlc-ai#1091)

    Update `benchmark.py`

commit 830656f
Author: Varshith Bathini <varshith15@gmail.com>
Date:   Fri Oct 20 00:40:14 2023 +0530

    StreamIterator (mlc-ai#1057)

    Co-authored-by: Varshith <varshith.bathini@sprinklr.com>

commit b0373d1
Author: Rick Zhou <riczhou@linkedin.com>
Date:   Thu Oct 19 12:09:12 2023 -0700

    Support lib_path override in C++. Improvements on docs and error messages (mlc-ai#1086)

    * Support lib_path option in C++ CLI. Disable ChatConfig.model_lib override in Python API. Improvements on helper messages and error messages

    * Update docs

    * Rename lib_path -> model_lib_path

commit 56a8004
Author: Junru Shao <junrushao@apache.org>
Date:   Thu Oct 19 10:37:24 2023 -0700

    Update README.md for Multi-GPU (mlc-ai#1090)

commit 2625945
Author: Junru Shao <junrushao@apache.org>
Date:   Thu Oct 19 08:57:50 2023 -0700

    Establish `mlc_chat.compiler` (mlc-ai#1082)

    This PR establishes the compiler components in MLC-Chat Python API,
    which currently includes two primary components: models and parameters.

    The models are `nn.Module`-based definition of an LLM, which, as the
    very first stab, contains only `LlamaForCasualLM`. It is decomposed into
    three files:
    - `llama_config.py`: common configurations for Llama, where we define
      relevant configurations of its architecture, as well as include
      standard config file for Llama2-7B/13B/70B for convenient testing;
    - `llama.py`: the model architecture of Llama, based on the PyTorch-like
    `nn.Module` API;
    - `llama_parameter.py`: defines the mapping between MLC parameters and
      pytorch parameters.

    The parameters component contains the basic functionality of parameter mapping,
    and the loaders that effectively convert parameters from PyTorch to MLC
    according to the mapping specified. Currently, only `HFTorchLoader` is
    implemented, but loaders like SafeTensor, GPTQ or AWQ should be quite
    straightforward according to the existing design.

    On top of this PR, on-the-fly quantization could be defined as a loading
    time transformation on MLC parameters, while pre-quantized parameter
    loading is effectively parameter loading after MLC's `nn.Module` is
    quantized.

    Two unit tests exemplify how the infrastructure works:
    - `./tests/python/model/test_llama.py` shows how to create an `nn.Module`
    using the new infra, and then convert it to TVM IRModule;
    - `./tests/python/parameter/hf_torch_loader.py` shows how to load
    parameters from HuggingFace PyTorch format.

    Besides, `mlc_chat.support` is established for utility functions, which
    now contains two utils:
    - `config.py` which supports reading configurations into dataclasses
    from a JSON file or a Python dict. On top of the Python dataclass, it puts
    irrelevant fields into `cls.kwargs`, which is helpful when loading a
    HuggingFace configuration file;
    - `tqdm.py` which contains tqdm-related utilities, primarily redirecting
    logging and printing to work nicely with tqdm.

commit 3aefd9f
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Oct 16 21:16:27 2023 -0700

    [Bugfix] Compilation Error in q4f32_1 (mlc-ai#1078)

    The pass `fuse-split-rotary` assumes the compute dtype is fp16, which
    usually is, but in certain cases, e.g. `q0f32` and `q4f32_1`, the
    compute is based on fp32 instead. This PR strengthens the check guard.

commit 9872c48
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Mon Oct 16 14:56:24 2023 -0400

    [Python] Extract common device str parse function in ChatModule (mlc-ai#1074)

    This PR lifts the device string parsing (just a few lines)
    to a standalone function, so that the serving side
    can make use of this function as well.

    Tested Python API and it does not seem to incur regression.

commit d202077
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Mon Oct 16 08:06:26 2023 -0500

    [ParamManager] Added progress bar for get_item/set_item (mlc-ai#1063)

commit 204860b
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Sun Oct 15 14:02:12 2023 -0400

    [Fix] ChatModule incorrect temperature buffer shape (mlc-ai#1070)

    PR mlc-ai#1048 updated the signature of softmax in the built model library
    and changed the temperature buffer shape in ChatModule. This causes
    some existing demos to be unable to run since we did not do a round of model
    library update.

    This PR reverts the ChatModule change, and adds back the softmax
    function in non-batching case. With this PR, the regression should
    be fixed.

commit b0bfc88
Author: Junru Shao <junrushao@apache.org>
Date:   Sun Oct 15 00:24:24 2023 -0700

    Add links to Python API Reference (mlc-ai#1068)

commit 9010d48
Author: Jeethu Rao <jeethu@jeethurao.com>
Date:   Sun Oct 15 06:42:24 2023 +0100

    Minor typo fix (mlc-ai#1064)

commit 8184431
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Sat Oct 14 00:33:15 2023 -0500

    [ParamManager] Cleanup creation of quantization IRModule (mlc-ai#1053)

    This commit replaces the single-parameter
    `relax_model.param_manager.create_quantize_func` function with a
    method on the `ParamManager`, `create_parameter_transformation`.  This
    avoids potential typos between `param_manager` as the imported Python
    module `mlc_llm.relax_model.param_manager` and an instance of the
    `ParamManager` class named `param_manager`, and makes the
    functionality easier to find.

    This function also takes an optional `optimize_parameter_order` flag,
    defaulting to `True`, which applies the `ReorderTransformFunc` pass.
    Since the `ReorderTransformFunc` is intended to be used with several
    configuration objects owned by `ParamManager`, this simplifies the
    common path of producing an optimally-ordered parameter transformation
    module.

commit 481cd92
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Sat Oct 14 00:32:36 2023 -0500

    [Core] Remove duplication in MODEL.get_model calls (mlc-ai#1054)

    This commit removes the `if`/`elif` chain in `core.py`, where the body
    of each conditional assigns the same `mod, param_manager, params,
    model_config`, and is identical except for the choice of model being
    built.

commit c2b8cbc
Author: Jeethu Rao <jeethu@jeethurao.com>
Date:   Sat Oct 14 06:32:05 2023 +0100

    Fix Stable LM 3B build (mlc-ai#1061)

    * [stablelm 3b] Rename dynamic vocab size from "v" to "vocab_size"

    * Add get_num_key_value_heads method to StableLM3bConfig

commit d854105
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Fri Oct 13 20:45:58 2023 -0400

    [Model] Initial batching support for Llama (mlc-ai#1048)

    This PR introduces the initial batched input support for llama
    models. To make the code manageable, we keep both the single-sequence
    handling flow and the batching handling flow in the Llama modeling.

    Now, with `--enable-batching` as a build argument, we build Llama
    for the batched version.

    NOTE: The paged attention kernel/TIR func are not included in this PR,
    so currently the built library with batching enabled is not runnable.
    We will follow up with the attention kernel in the future.

    This PR guarantees that the existing single-sequence inference (Python
    API, CLI, etc.) is not broken.

    P.S. The batching flow is subject to bug fixes as we integrate with
    the attention function and run the e2e flow in the future.

commit edab9b5
Author: Junru Shao <junrushao@apache.org>
Date:   Fri Oct 13 09:57:46 2023 -0700

    [Doc] Use -U instead of --force-reinstall (mlc-ai#1062)

    `--force-reinstall` will reinstall all dependencies of a Python package,
    which is unnecessary. `-U` is a better choice in this case.

commit ca8c11b
Author: Sunghyun Park <sunggg@umich.edu>
Date:   Fri Oct 13 09:00:21 2023 -0700

    [BugFix] Set the right `max_sequence_length` for both Llama-1 and Llama-2 families (mlc-ai#1032)

    * fix

    * reflect feedback

    ---------

    Co-authored-by: “Sunghyun <sunggg@umich.com>

commit bfaa5b9
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Thu Oct 12 17:40:54 2023 -0400

    Revert "[Transform] Apply split_rotary optimization on prefill (mlc-ai#1033)" (mlc-ai#1058)

    This reverts commit b9179cf as elaborated here mlc-ai#1033 (comment)

commit 98ebd28
Author: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Date:   Thu Oct 12 13:24:10 2023 -0700

    [Docs] Add `mlc.ai/package` to `DEPENDENCY INSTALLATION` group (mlc-ai#1055)

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>

commit b9179cf
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Thu Oct 12 12:15:44 2023 -0500

    [Transform] Apply split_rotary optimization on prefill (mlc-ai#1033)

    * [Transform] Apply split_rotary optimization on prefill

    Prior to this commit, the `transform.fuse_split_rotary_embedding`
    function was only applicable to the `decode` function of a Llama-type
    model.  This was due to the sequence length being restricted to one,
    both in the pattern-match rule and in the `split_rotary` function, and
    the function being restricted to operate only on the `decode`
    function.

    This commit updates the `transform.fuse_split_rotary_embedding` pass
    to be a `tvm.ir.transform.Pass`, operating on all applicable matches
    in the `IRModule`.  The `split_rotary` function is now produced as a
    fully-generic function, with static parameters substituted in
    afterwards.  At this stage, the sequence length is retained as a
    dynamic parameter, such that it can be used by the `prefill` function.

    * Avoid multiple kernel launches for split_rotary

commit 1e6fb11
Author: Denise Kutnick <boca.denise@gmail.com>
Date:   Wed Oct 11 00:06:46 2023 -0700

    add verbose stats to mlc-chat REST API (mlc-ai#1049)

    * add verbose stats to mlc-chat REST API

    * update docs

commit 20131fb
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Oct 9 16:53:56 2023 -0700

    Update README.md (mlc-ai#1045)

    Update README.md

commit bdd9d9b
Author: Ruihang Lai <ruihangl@cs.cmu.edu>
Date:   Mon Oct 9 19:08:14 2023 -0400

    [CPP] Separate common utils out from llm_chat.cc (mlc-ai#1044)

    This PR separates out the tokenizer creation function and the
    random number generator from `llm_chat.cc` as a preparation
    step for batching inference support, since these functions/modules
    are also used in the same way in batching inference.

commit a58605f
Author: Junru Shao <junrushao@apache.org>
Date:   Mon Oct 9 15:05:34 2023 -0700

    Update README.md

commit a032d40
Author: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Date:   Mon Oct 9 18:03:24 2023 -0400

    [Docs] Iterate model prebuilts docs (mlc-ai#1043)

    * Iterate model prebuilts docs

    * small fix

commit 85001ed
Author: Jeethu Rao <jeethu@jeethurao.com>
Date:   Mon Oct 9 20:40:52 2023 +0100

    Support for the Stable LM 3B model (mlc-ai#1008)

    Support for the stablelm-3b-4e1t model

commit c02fdaf
Author: yongjer <54315206+yongjer@users.noreply.github.com>
Date:   Tue Oct 10 00:58:51 2023 +0800

    Update compile_models.rst (mlc-ai#1038)

    fix permission issue

commit bed9e60
Author: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Date:   Mon Oct 9 12:58:36 2023 -0400

    [Docs] Model prebuilts tracking page revamp (mlc-ai#1000)

commit 3a9849a
Author: Bohan Hou <bohanhou@andrew.cmu.edu>
Date:   Mon Oct 9 12:27:58 2023 -0400

    [Android] Add Llama2 q4f16_0 (mlc-ai#1041)

    llama2 q4f160

commit b44f679
Author: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Date:   Mon Oct 9 11:35:58 2023 -0400

    Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs (mlc-ai#1040)

    Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs, build model

commit bae37b3
Author: Yaxing Cai <caiyaxing666@gmail.com>
Date:   Sun Oct 8 16:55:16 2023 -0700

    [Android] Use `AlertDialog` instead of `Toast` (mlc-ai#1039)

commit 6e40c21
Author: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Date:   Sat Oct 7 22:07:09 2023 -0500

    [Build] Added --pdb flag to build.py, drop into pdb on error (mlc-ai#1017)

    This commit adds an optional `--pdb` flag to the `build.py` script. If
    passed, any exception raised that would otherwise terminate the script
    will first enter a pdb post-mortem, allowing the error to be
    inspected.

commit ad3a6b9
Author: Roee Shenberg <shenberg@gmail.com>
Date:   Sun Oct 8 04:59:15 2023 +0200

    Fix two bugs in kv-cache backtrack loop (mlc-ai#856)

    Fix two bugs in kv-cache pop loop

    Bug 1: old code would stop early because output_ids was shortened in-place during the loop

    Bug 2: off-by-one in backoff size due to break

commit 898db76
Author: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
Date:   Sun Oct 8 03:36:19 2023 +0100

    [API] Add GenerationConfig (mlc-ai#1024)

* clean
masahi added a commit to octoml/mlc-llm that referenced this pull request Nov 9, 2023
* [API] Add GenerationConfig (mlc-ai#1024)

* Fix two bugs in kv-cache backtrack loop (mlc-ai#856)

Fix two bugs in kv-cache pop loop

Bug 1: old code would stop early because output_ids was shortened in-place during the loop

Bug 2: off-by-one in backoff size due to break
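
A small illustrative Python sketch of the two bug classes fixed here (not the runtime's actual code): computing the number of tokens to drop before mutating the list avoids bug 1, and looping an exact count instead of breaking early avoids bug 2.

```python
def backtrack(output_ids, kv_cache_pop, num_to_drop):
    # Decide the count up front so in-place shortening cannot cut the loop short (bug 1).
    drop = min(num_to_drop, len(output_ids))
    for _ in range(drop):   # exact count; no early `break`, hence no off-by-one (bug 2)
        output_ids.pop()
        kv_cache_pop()      # keep the KV cache aligned with output_ids
    return output_ids

popped = []
print(backtrack([1, 2, 3, 4, 5], lambda: popped.append(1), 3))   # [1, 2]
```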

* [Build] Added --pdb flag to build.py, drop into pdb on error (mlc-ai#1017)

This commit adds an optional `--pdb` flag to the `build.py` script. If
passed, any exception raised that would otherwise terminate the script
will first enter a pdb post-mortem, allowing the error to be
inspected.

* [Android] Use `AlertDialog` instead of `Toast` (mlc-ai#1039)

* Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs (mlc-ai#1040)

Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs, build model

* [Android] Add Llama2 q4f16_0 (mlc-ai#1041)

llama2 q4f160

* [Docs] Model prebuilts tracking page revamp (mlc-ai#1000)

* Update compile_models.rst (mlc-ai#1038)

fix permission issue

* Support for the Stable LM 3B model (mlc-ai#1008)

Support for the stablelm-3b-4e1t model

* [Docs] Iterate model prebuilts docs (mlc-ai#1043)

* Iterate model prebuilts docs

* small fix

* Update README.md

* [CPP] Separate common utils out from llm_chat.cc (mlc-ai#1044)

This PR separates out the tokenizer creation function and the
random number generator from `llm_chat.cc` as a preparation
step for batching inference support, since these functions/modules
are also used in the same way in batching inference.

* Update README.md (mlc-ai#1045)

Update README.md

* add verbose stats to mlc-chat REST API (mlc-ai#1049)

* add verbose stats to mlc-chat REST API

* update docs

* [Transform] Apply split_rotary optimization on prefill (mlc-ai#1033)

* [Transform] Apply split_rotary optimization on prefill

Prior to this commit, the `transform.fuse_split_rotary_embedding`
function was only applicable to the `decode` function of a Llama-type
model.  This was due to the sequence length being restricted to one,
both in the pattern-match rule and in the `split_rotary` function, and
the function being restricted to operate only on the `decode`
function.

This commit updates the `transform.fuse_split_rotary_embedding` pass
to be a `tvm.ir.transform.Pass`, operating on all applicable matches
in the `IRModule`.  The `split_rotary` function is now produced as a
fully-generic function, with static parameters substituted in
afterwards.  At this stage, the sequence length is retained as a
dynamic parameter, such that it can be used by the `prefill` function.

* Avoid multiple kernel launches for split_rotary

* [Docs] Add `mlc.ai/package` to `DEPENDENCY INSTALLATION` group (mlc-ai#1055)

Co-authored-by: Junru Shao <junrushao1994@gmail.com>

* Revert "[Transform] Apply split_rotary optimization on prefill (mlc-ai#1033)" (mlc-ai#1058)

This reverts commit b9179cf as elaborated here mlc-ai#1033 (comment)

* [BugFix] Set the right `max_sequence_length` for both Llama-1 and Llama-2 families (mlc-ai#1032)

* fix

* reflect feedback

---------

Co-authored-by: “Sunghyun <sunggg@umich.com>

* [Doc] Use -U instead of --force-reinstall (mlc-ai#1062)

`--force-reinstall` will reinstall all dependencies of a Python package,
which is unnecessary. `-U` is a better choice in this case.

* [Model] Initial batching support for Llama (mlc-ai#1048)

This PR introduces the initial batched input support for llama
models. To make the code manageable, we keep both the single-sequence
handling flow and the batching handling flow in the Llama modeling.

Now, with `--enable-batching` as a build argument, we build Llama
for the batched version.

NOTE: The paged attention kernel/TIR func are not included in this PR,
so currently the built library with batching enabled is not runnable.
We will follow up with the attention kernel in the future.

This PR guarantees that the existing single-sequence inference (Python
API, CLI, etc.) is not broken.

P.S. The batching flow is subject to bug fixes as we integrate with
the attention function and run the e2e flow in the future.

* Fix Stable LM 3B build (mlc-ai#1061)

* [stablelm 3b] Rename dynamic vocab size from "v" to "vocab_size"

* Add get_num_key_value_heads method to StableLM3bConfig

* [Core] Remove duplication in MODEL.get_model calls (mlc-ai#1054)

This commit removes the `if`/`elif` chain in `core.py`, where the body
of each conditional assigns the same `mod, param_manager, params,
model_config`, and is identical except for the choice of model being
built.

* [ParamManager] Cleanup creation of quantization IRModule (mlc-ai#1053)

This commit replaces the single-parameter
`relax_model.param_manager.create_quantize_func` function with a
method on the `ParamManager`, `create_parameter_transformation`.  This
avoids potential typos between `param_manager` as the imported Python
module `mlc_llm.relax_model.param_manager` and an instance of the
`ParamManager` class named `param_manager`, and makes the
functionality easier to find.

This function also takes an optional `optimize_parameter_order` flag,
defaulting to `True`, which applies the `ReorderTransformFunc` pass.
Since the `ReorderTransformFunc` is intended to be used with several
configuration objects owned by `ParamManager`, this simplifies the
common path of producing an optimally-ordered parameter transformation
module.

* Minor typo fix (mlc-ai#1064)

* Add links to Python API Reference (mlc-ai#1068)

* [Fix] ChatModule incorrect temperature buffer shape (mlc-ai#1070)

PR mlc-ai#1048 updated the signature of softmax in the built model library
and changed the temperature buffer shape in ChatModule. This causes
some existing demos to be unable to run since we did not do a round of model
library update.

This PR reverts the ChatModule change, and adds back the softmax
function in non-batching case. With this PR, the regression should
be fixed.

* [ParamManager] Added progress bar for get_item/set_item (mlc-ai#1063)

* [Python] Extract common device str parse function in ChatModule (mlc-ai#1074)

This PR lifts the device string parsing (just a few lines)
to a standalone function, so that the serving side
can make use of this function as well.

Tested Python API and it does not seem to incur regression.
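
A minimal sketch of what such a standalone device-string parser can look like (the helper name is hypothetical; the actual ChatModule implementation may differ):

```python
import tvm

def parse_device_str(device: str) -> tvm.runtime.Device:
    """Parse strings such as "cuda", "cuda:1", or "metal:0" into a TVM device."""
    name, _, index = device.partition(":")
    return tvm.device(name, int(index) if index else 0)

print(parse_device_str("cuda:1"))   # e.g. cuda(1)
```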

* [Bugfix] Compilation Error in q4f32_1 (mlc-ai#1078)

The pass `fuse-split-rotary` assumes the compute dtype is fp16, which
usually is, but in certain cases, e.g. `q0f32` and `q4f32_1`, the
compute is based on fp32 instead. This PR strengthens the check guard.

* Establish `mlc_chat.compiler` (mlc-ai#1082)

This PR establishes the compiler components in MLC-Chat Python API,
which currently includes two primary components: models and parameters.

The models are `nn.Module`-based definition of an LLM, which, as the
very first stab, contains only `LlamaForCasualLM`. It is decomposed into
three files:
- `llama_config.py`: common configurations for Llama, where we define
  relevant configurations of its architecture, as well as include
  standard config file for Llama2-7B/13B/70B for convenient testing;
- `llama.py`: the model architecture of Llama, based on the PyTorch-like
`nn.Module` API;
- `llama_parameter.py`: defines the mapping between MLC parameters and
  pytorch parameters.

The parameters component contains the basic functionality of parameter mapping,
and the loaders that effectively convert parameters from PyTorch to MLC
according to the mapping specified. Currently, only `HFTorchLoader` is
implemented, but loaders like SafeTensor, GPTQ or AWQ should be quite
straightforward according to the existing design.

On top of this PR, on-the-fly quantization could be defined as a loading
time transformation on MLC parameters, while pre-quantized parameter
loading is effectively parameter loading after MLC's `nn.Module` is
quantized.

Two unit tests exemplify how the infrastructure works:
- `./tests/python/model/test_llama.py` shows how to create an `nn.Module`
using the new infra, and then convert it to TVM IRModule;
- `./tests/python/parameter/hf_torch_loader.py` shows how to load
parameters from HuggingFace PyTorch format.

Besides, `mlc_chat.support` is established for utility functions, which
now contains two utils:
- `config.py` which supports reading configurations into dataclasses
from a JSON file or a Python dict. On top of the Python dataclass, it puts
irrelevant fields into `cls.kwargs`, which is helpful when loading a
HuggingFace configuration file (see the sketch after this list);
- `tqdm.py` which contains tqdm-related utilities, primarily redirecting
logging and printing to work nicely with tqdm.
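
A hedged sketch of the `config.py` behaviour described above (illustrative names only, not the actual `mlc_chat.support` API): fields that the dataclass does not declare are collected into `kwargs` instead of raising an error.

```python
import dataclasses
import json
from typing import Any, Dict

@dataclasses.dataclass
class ConfigSketch:
    hidden_size: int
    num_hidden_layers: int
    kwargs: Dict[str, Any] = dataclasses.field(default_factory=dict)

    @classmethod
    def from_dict(cls, source: Dict[str, Any]) -> "ConfigSketch":
        declared = {f.name for f in dataclasses.fields(cls)} - {"kwargs"}
        known = {k: v for k, v in source.items() if k in declared}
        extra = {k: v for k, v in source.items() if k not in declared}
        return cls(**known, kwargs=extra)   # irrelevant fields land in kwargs

cfg = ConfigSketch.from_dict(json.loads('{"hidden_size": 4096, "num_hidden_layers": 32, "rope_theta": 10000.0}'))
print(cfg.kwargs)   # {'rope_theta': 10000.0}
```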

* Update README.md for Multi-GPU (mlc-ai#1090)

* Support lib_path override in C++. Improvements on docs and error messages (mlc-ai#1086)

* Support lib_path option in C++ CLI. Disable ChatConfig.model_lib override in Python API. Improvements on helper messages and error messages

* Update docs

* Rename lib_path -> model_lib_path

* StreamIterator (mlc-ai#1057)

Co-authored-by: Varshith <varshith.bathini@sprinklr.com>

* Update `benchmark.py` according to mlc-ai#1086 (mlc-ai#1091)

Update `benchmark.py`

* Disable Disco for q4f16_ft and q8f16_ft quantization (mlc-ai#1094)

* [Format] Apply isort and black for `python/` (mlc-ai#1097)

[Format] Apply isort and black on `python/`

The commands I am using are:

```
isort --profile black python/
black python/
```

It is always recommended to format the code before submission, given we
don't have a linter CI yet.

* More formatting (mlc-ai#1099)

* Enable Python Linter (mlc-ai#1098)

This PR enables two Python formatters, "black" and "isort", on the following directories:
- `./python/`
- `./tests/python/`

Enabling pylint and mypy is left for future work

* Add Basic Pylint and Mypy Tooling (mlc-ai#1100)

Add pylint/mypy tooling into pyproject.toml

This PR establishes the initial Python tooling infra with Pylint and
Mypy. Currently only the newest modules, i.e. `mlc_chat.support` and
`mlc_chat.compiler` are covered, and we expect to cover the entire
package, as being tracked in mlc-ai#1101.

* [CI] Add clang-format (mlc-ai#1103)

* [Slim-LM] Smart path finding for config and weight (mlc-ai#1088)

* [Transform] Provide IRModule transform for rewrite_attention (mlc-ai#1052)

Prior to this commit, `mlc_llm.transform.rewrite_attention` updated a
single function.  This commit modifies it to instead be a transform
operating on any pattern matches within an `IRModule`.

* [ParamManager] Use BundleModelParams for transform_dequantize (mlc-ai#1056)

* [ParamManager] Use BundleModelParams for transform_quantize

Prior to this commit, the `ParamManager.transform_quantize` function took
as input functions with separate parameters for each weight tensor,
and produced output functions with a tuple parameter for all weights.
Because `LiftTransformParams` had the same convention, neither could
be applied as part of the same build flow.

This commit updates the `ParamManager.transform_quantize` pass to produce
outputs with separate tensor parameters, using the `BundleModelParams`
transform to later combine them into a single tuple parameter.  The
analogous change was also performed for `LiftTransformParams` as part
of apache/tvm#15657.

In addition, prior to this commit, the
`ParamManager.transform_dequantize` function operated directly on an
`IRModule` object.  As a result, any debug instrumentation
(e.g. before/after printouts for each pass, before/after verification
with `relax.analysis.well_formed`, etc.) did not apply to this
`transform_dequantize`.  This commit updates
`ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

* Correct type annotation

* [Slim-LM] Introduce HFLoad for loading Pytorch and SafeTensor weights (mlc-ai#1113)

* [WINDOWS] reduce noise in windows build (mlc-ai#1115)

* Add CLI commands for compilation (mlc-ai#1109)

* Auto updated submodule references

* fix mismatched argument name (mlc-ai#1117)

fix error introduced by recent code changes

fixes mlc-ai#1116

* [Docs] Add doc for max and mean gen len, shift factor; and buildArgs (mlc-ai#1119)

* Add doc for max and mean gen len, shift factor

* Update python docs for BuildArgs

* Revert "[ParamManager] Use BundleModelParams for transform_dequantize" (mlc-ai#1120)

Revert "[ParamManager] Use BundleModelParams for transform_dequantize (mlc-ai#1056)"

This reverts commit e5927ce.

This causes a regression impacting all MLC LLM nightlies as it violates the existing calling convention in MLC Chat runtime. An example: mlc-ai#1060 (comment)

* Remove inaccurate warning message (mlc-ai#1121)

This PR removes an inaccurate warning from mlc-ai#1086, which warns about
`model_lib` overriding regardless of whether or not it's actually
overridden. With this commit, we only warn if its value is not None.

* [REST] OpenAI compatible Rest API (mlc-ai#1107)

* add presence and frequency penalty

* Added support for passing conversation history in /v1/chat/completions endpoint

* Added support for RestAPI parameters max_gen_len, n, and stop_str

* * add presence and frequency penalty to generation config
* refactor generation config

* Added documentation for parameters

* replace lib_path with model_lib_path in rest.py

* fixed black isort issues

* fix lib_path

* Add --opt flag parsing to CLI (mlc-ai#1123)

* [ParamManager][Redo] Use BundleModelParams for transform_dequantize (mlc-ai#1127)

Prior to this commit, the `ParamManager.transform_quantize` function took
as input functions with separate parameters for each weight tensor,
and produced output functions with a tuple parameter for all weights.
Because `LiftTransformParams` had the same convention, neither could
be applied as part of the same build flow.

This commit updates the `ParamManager.transform_quantize` pass to produce
outputs with separate tensor parameters, using the `BundleModelParams`
transform to later combine them into a single tuple parameter.  The
analogous change was also performed for `LiftTransformParams` as part
of apache/tvm#15657.

In addition, prior to this commit, the
`ParamManager.transform_dequantize` function operated directly on an
`IRModule` object.  As a result, any debug instrumentation
(e.g. before/after printouts for each pass, before/after verification
with `relax.analysis.well_formed`, etc.) did not apply to this
`transform_dequantize`.  This commit updates
`ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

This commit is a repeat of the reverted PR
mlc-ai#1056.  This PR resolves the bug
in the earlier implementation by removing the call to
`.without_attr("num_input")` in `ParamReplacer.rewrite_func`.  This
follows an analogous update in `LiftTransformParams`, preserving the
`"num_input"` attribute for use in `BundleModelParams`.

* added details to windows installation (mlc-ai#1133)

The 32-bit version of the zstd.dll library was causing issues, so the doc was updated to be more specific and point to the 64-bit version.

* Grammatical and Typographical improvements (mlc-ai#1139)

* Update faq.rst

* Update guideline.rst

* Update compile_models.rst

* Update distribute_compiled_models.rst

* Update get-vicuna-weight.rst

* Update python.rst

* Update android.rst

* Update cli.rst

* Update ios.rst

* Update javascript.rst

* Update python.rst

* Update rest.rst

* Minor enhancements to `ChatModule` (mlc-ai#1132)

Some minor enhancements to `ChatModule`: mainly, handle device parsing solely in `_parse_device_str` instead of in both the member function and `__init__` to avoid redundancy, plus some type annotation fixes.

* Updating tvm install docs (mlc-ai#1143)

Updating the tvm install docs to assist a user in finding and copying zstd.dll to the correct folder.

* Make the help info consistent with program name (mlc-ai#1137)

When a user runs the command `mlc_chat_cli --help`, the output will be
something like

Usage: mlc_chat [--help] ...

That's because the program name specified in `cli_main.cc` is "mlc_chat".
It will be less confusing if the help output shows

Usage: mlc_chat_cli [--help] ...

* Support parameter packing (mlc-ai#1146)

* [Slim-LM] Enable Group Quant (mlc-ai#1129)

* Enable group quant via new interface.

* Minor fix.

* Linting.

* Fix isort.

* Fix mypy.

* TE compute working.

* Skip embed.

* Support cpu+gpu quantization.

* Add target option to tests.

* Linting.

* Enable Mypy and Pylint in mlc_chat Python Package (mlc-ai#1149)

* Migrate Compiler Passes (mlc-ai#1150)

* Compile Model Preset without External `config.json` (mlc-ai#1151)

This PR adds support for compiling a preset of models without
having to provide a `config.json` on disk using the commands below:

```diff
python -m mlc_chat.cli.compile \
       --quantization q4f16_1 -o /tmp/1.so \
-       --config /models/Llama-2-7b-chat-hf
+       --config llama2_7b
```

This allows easier testing and binary distribution without having to
depend on an external model directory.

* Update attention layer (mlc-ai#1153)

Existing dlight optimization only works for NT matmul, but not NN. As a
result, the new `nn.Module`-based implementation, which uses NN matmul,
fails compilation at HEAD for now. This PR fixes this issue by tweaking
`k` to the preferred layout.

The following commands now work with the new compilation pipeline:

```bash
python -m mlc_chat.cli.compile --config llama2_7b  --quantization q4f16_1 -o /tmp/1.so
python -m mlc_chat.cli.compile --config llama2_13b --quantization q4f16_1 -o /tmp/1.so
python -m mlc_chat.cli.compile --config llama2_70b --quantization q4f16_1 -o /tmp/1.so
```

Note that the quantization algorithm per se, `q4f16_1`, has not been
implemented yet, meaning this code path is not yet ready for use.
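
For readers unfamiliar with the NT/NN terminology above, here is a standalone numpy sketch (illustrative only, not code from this repository): storing the weight as `[out_features, in_features]` and transposing it inside the matmul is the "NT" form that the dlight schedules handle, while multiplying against a `[in_features, out_features]` weight directly is the "NN" form.

```python
import numpy as np

# Hypothetical projection shapes; only the layout convention matters here.
x = np.random.rand(1, 4096).astype("float16")          # activations: [batch, in]
w_nt = np.random.rand(11008, 4096).astype("float16")   # weight stored as [out, in]
w_nn = np.ascontiguousarray(w_nt.T)                     # same weight stored as [in, out]

y_nt = x @ w_nt.T   # "NT" matmul: second operand transposed
y_nn = x @ w_nn     # "NN" matmul: no transpose
assert np.allclose(y_nt, y_nn, atol=1e-2)
```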

* Add batched Llama model definition using vLLM paged attention (mlc-ai#1134)

* Add batched Llama model with vllm paged attention

* update core.py

* doc

* minor

* add e2e test

* mv file

* clean

* Check if TVM has been built with USE_VLLM

* update BuildArgs docstring

* [Transform][Redo] Apply split_rotary optimization on prefill (mlc-ai#1125)

Prior to this commit, the `transform.fuse_split_rotary_embedding`
function was only applicable to the `decode` function of a Llama-type
model.  This was due to the sequence length being restricted to one,
both in the pattern-match rule and in the `split_rotary` function, and
the function being restricted to operate only on the `decode`
function.

This commit updates the `transform.fuse_split_rotary_embedding` pass
to be a `tvm.ir.transform.Pass`, operating on all applicable matches
in the `IRModule`.  The `split_rotary` function is now produced as a
fully-generic function, with static parameters substituted in
afterwards.  At this stage, the sequence length is retained as a
dynamic parameter, such that it can be used by the `prefill` function.

This commit reapplies the reverted commit
mlc-ai#1033.  The error in the
previous implementation was in the definition of
`rotary_embedding_offset`, which provided the `query_sequence_length`
instead of `kv_sequence_length`.  This was able to pass the validity
tests described
[here](mlc-ai#1058 (comment)),
as these two sequence lengths are identical for the first call.

* Apply rewrite for normal attention and MQA (mlc-ai#1138)

Fixes a bug introduced in mlc-ai#1052,
where use of the `--use-flash-attn-mqa` flag on a model that doesn't
use MQA would prevent the use of CUTLASS attention at all.

* [Rest] Fix emoji handling in Rest API. (mlc-ai#1142)

* [Utility] Check for isinstance(exc, Exception) before entering pdb (mlc-ai#1095)

This is a follow-up to mlc-ai#1017, which added a `--pdb` flag to enter a
debugger on error.  This commit checks the type of the raised
exception, and only enters the debugger if it is a subclass of
`Exception`.  This ensures that implementation details, such as a
thrown `SystemExit` or `KeyboardInterrupt`, do not cause an erroneous
entry to pdb.
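
A minimal sketch of the behaviour described above (the entry-point name is hypothetical): catching `Exception` rather than `BaseException` means `SystemExit` and `KeyboardInterrupt` bypass the post-mortem.

```python
import pdb
import sys
import traceback

def run_with_optional_pdb(main, use_pdb: bool) -> None:
    try:
        main()
    except Exception:   # excludes SystemExit / KeyboardInterrupt (BaseException only)
        if use_pdb:
            traceback.print_exc()
            pdb.post_mortem(sys.exc_info()[2])
        else:
            raise

# Hypothetical usage: run_with_optional_pdb(build_main, use_pdb="--pdb" in sys.argv)
```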

* [Utils] Remove conversion to numpy array in utils.save_params (mlc-ai#1083)

Prior to this commit, each parameter was converted to a numpy-owned
array as part of a total size computation.  This commit computes the
size directly, removing the conversion.

* [Fix][REST] Use lowered-cased "app" (mlc-ai#1159)

* [Rest] Document emoji handling (mlc-ai#1160)

Followup PR of mlc-ai#1142 to document the emoji handling.

* Enable group quant transform with nn.Module (mlc-ai#1154)

* Enable group quant transform with nn.Module

This PR completes the group quantization support for `nn.Module`-based models.

* remove deprecated tests

* Update

* wip

* remove deprecated test

* fix lint

* fix lint

* fix lint

---------

Co-authored-by: Junru Shao <junrushao@apache.org>

* Misc Cleanups of Compilation Pipeline (mlc-ai#1165)

* Support CUDA Multi-Arch Compilation (mlc-ai#1166)

* [Bugfix] Cannot find global function `mlc.llm_chat_create` (mlc-ai#1167)

* Fix RWKV Support (mlc-ai#1136)

I successfully ran the rwkv-world-3b fp16 model on my Xiaomi phone. This PR fixes a bug on the main branch where the rwkv model outputs only one word and then stops.

![image](https://github.com/mlc-ai/mlc-llm/assets/35585791/6514d6ef-c93c-4ad2-8e76-8ffa0663080f)

* Auto updated submodule references

* Fix Android app Permission denied error on Android 10  (mlc-ai#1175)

Use scoped storage instead of Downloads directory

Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

* [SLM] Fix group quantization (mlc-ai#1172)

This PR fixes the group quantization and adds related unit tests.

* [Fix] TIR block name of dequantization (mlc-ai#1177)

* [SLM][AutoLLM] Enable Command Line Weight Conversion (mlc-ai#1170)

This PR enables weight conversion from the command line.
Sample command: `python3 -m mlc_chat.cli.convert_weight --config dist/models/llama-2-13b-chat-hf/ --quantization "q4f16_1" --output dist/test/`

* [Fix][SLM] Update q4f16 quantization with the new mutator name rule (mlc-ai#1178)

[Fix] Update q4f16 quantization with the new mutator name rule

* [Model Support][SWA] Add support for sliding window attention for Mistral (mlc-ai#1087)

* mistral base

* Add sliding window mask making and its tests (an illustrative mask sketch follows this change list)

* Small changes for sliding window mask

* Clean up mask making

* Remove kv_seq_len

* Add prefill chunking, handle max window size in SWA

* Add interleave kv

* Temporary fix for kv seq len

* Pass in more shapes to SWA prefill and decode in runtime

* mistral var fix

* Small changes regarding shape passing

* Small fix on chunk size

* Add build args, fix mlc chat config dump

* mistral system prompt
---------

Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>
Co-authored-by: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
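
As referenced in the change list above, the following is an illustrative numpy sketch of a sliding-window causal mask of width `window` (not Mistral's actual TIR/Relax implementation): position `i` may attend to positions `max(0, i - window + 1)` through `i`.

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    i = np.arange(seq_len)[:, None]   # query position
    j = np.arange(seq_len)[None, :]   # key position
    return (j <= i) & (j > i - window)

print(sliding_window_mask(5, 3).astype(int))
# [[1 0 0 0 0]
#  [1 1 0 0 0]
#  [1 1 1 0 0]
#  [0 1 1 1 0]
#  [0 0 1 1 1]]
```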

* Add Python API for Weight Conversion (mlc-ai#1182)

This PR primarily does a major refactoring to introduce a Python API that
is consistent with the CLI API. Besides, it includes the following
fixes and enhancements:

- More info provided to `isort` for better formatting in `pyproject.toml`;
- Print out the default value of all arguments in argparse command line;
- Ensure `--device` is always available locally when doing weight
  conversion;
- Add argument echoing in weight conversion to be consistent with its
  counterpart in compilation;
- Add a consistency checker to make sure the shapes/dtypes of all
  tensors from weight conversion is consistent with compilation;
- Echo the total size of parameters;
- Better logging of each parameter's shape and dtype, and whether or not
  it's quantized;
- More structural robustification, renaming `parameter/` to `loader/` to
  be more explicit about its intention;
- Inline and remove `ParamQuantizer` into the loader to improve logging
  and the logic flow;
- Always add instructions "Use `--xxx` to override" for any options that
  are auto detected to be more informative to end users;
- Fix wrong shape calculation when quantizing `nn.Embedding`;
- Fix wrong dtype calculation in group quantization when the input dtype
  is different from model dtype (e.g. "float32" in torch, but the model
  dtype in quantization is fp16 in `q4f16_1`);
- Fix inconsistent param names in layers such as `GroupQuantizeLinear`;
- Fix dtype inconsistency when a parameter is not quantized;
- Fix existing unittests.

* Merge `llama_config.CONFIG` into `MODEL_PRESETS` (mlc-ai#1188)

* Merge llama_config.py into llama_model.py (mlc-ai#1189)

* Add CodeLlama as part of model presets (mlc-ai#1190)

* [Docs] Clarify zstd installation on Windows (mlc-ai#1191)

* [Docs] Clarify zstd installation on Windows (mlc-ai#1196)

Update zstd installation

* Support overriding `--max-sequence-length` in command line (mlc-ai#1197)

* [RestAPI] Added docs (mlc-ai#1193)

Add docs for RestAPI

Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

* [API] ```llm-vscode``` extension support (mlc-ai#1198)

This PR enables ```llm-vscode``` extension API support for copilot-like code completion, following [HF's LSP](https://github.com/huggingface/llm-ls). Fully compatible with ```CodeLlama``` and ```starcoder``` on mlc-llm. 

- huggingface/llm-vscode#103 enhances extension user experience when used with mlc-llm rest api.

Thanks @ pacman100, who came up with this on his latest blogpost: https://huggingface.co/blog/personal-copilot

* [Fix] Use `fabs` as floating point abs function in C++ (mlc-ai#1202)

* Integrating MLC runtime with the new compilation workflow (mlc-ai#1203)

* [Fix] Remove Redundant Warnings (mlc-ai#1204)

PR mlc-ai#1203 introduced some unnecessary and redundant logging messages.
This PR removes them.

* Try fix macOS build with picojson (mlc-ai#1206)

The error message below

```
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h: In member function 'std::string picojson::value::to_str() const':
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:494:37: error: expected ')' before 'PRId64'
  494 |       SNPRINTF(buf, sizeof(buf), "%" PRId64, u_.int64_);
      |               ~                     ^~~~~~~
      |                                     )
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:81:1: note: 'PRId64' is defined in header '<cinttypes>'; did you forget to '#include <cinttypes>'?
   80 | #include <errno.h>
  +++ |+#include <cinttypes>
   81 | #include <inttypes.h>

```

indicates that the `__STDC_FORMAT_MACROS` flag is not turned on for some
reason.

* Try fix macOS build with picojson again (mlc-ai#1207)

Try fix macOS build with picojson

* Auto updated submodule references

* [Fix] Keep update-to-date with upstream API change (mlc-ai#1209)

* Detect `mtriple` via LLVM (mlc-ai#1211)

* Fix Python3.8 compatibility breakage (mlc-ai#1210)

The breakage resulted from newer syntax being used for type
annotations, as part of mlc-ai#592.
So long as `mlc_chat.interface.openai_api` wasn't imported, the
breaking changes were not encountered.  In
mlc-ai#1107, the addition of `from
.interface.openai_api import ChatMessage` caused this module to be
imported, breaking compatibility of `mlc_chat.ChatModule` with
Python3.8.

This commit updates the type annotations to the supported syntax.
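
An illustrative example of the kind of annotation change involved (the function below is hypothetical; the real fix touches `mlc_chat.interface.openai_api`): PEP 604/585 syntax such as `list[ChatMessage] | None` is evaluated at function-definition time and fails on Python 3.8, so it is rewritten with `typing` equivalents.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChatMessage:
    role: str
    content: str

# Breaks at import time on Python 3.8:
#   def join_messages(messages: list[ChatMessage] | None) -> str: ...
def join_messages(messages: Optional[List[ChatMessage]]) -> str:   # 3.8-compatible
    return "\n".join(f"{m.role}: {m.content}" for m in (messages or []))

print(join_messages([ChatMessage("user", "hello")]))
```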

* [Slim-LM] Enable loading from AWQ pre-quantized weight. (mlc-ai#1114)

* [SLM] Enable loading from AWQ pre-quantized weight.

* remove awq_loader.py

* Update to the latest commit

* Delete llama_parameter.py

* update unittest

* fix lint

* upd

* add Llama-2-7B-AWQ

* [Bugfix] Fix Cannot import name '_LIB' from 'mlc_chat.base' (mlc-ai#1214)

Fix Python API doc

* [SLM] Support `q3f16_1` and `q4f32_1` (mlc-ai#1215)

This PR supports the int3 and float32 group quantization, and fixes some minor issues in the quantization implementation and tests.
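
For context, here is a hedged numpy sketch of symmetric group quantization in the spirit of these schemes; the bit width, group size, and layout below are illustrative and do not match the exact packing the SLM quantizer emits.

```python
import numpy as np

def group_quantize(w: np.ndarray, bits: int = 4, group_size: int = 32):
    groups = w.reshape(-1, group_size)
    max_int = 2 ** (bits - 1) - 1                                   # e.g. 7 for 4-bit
    scale = np.abs(groups).max(axis=1, keepdims=True) / max_int
    scale = np.maximum(scale, 1e-8)                                 # avoid divide-by-zero
    q = np.clip(np.round(groups / scale), -max_int - 1, max_int).astype(np.int8)
    return q, scale

def group_dequantize(q, scale, shape):
    return (q.astype(np.float32) * scale).reshape(shape)

w = np.random.randn(64, 64).astype(np.float32)
q, s = group_quantize(w)
print(np.abs(w - group_dequantize(q, s, w.shape)).max())   # small reconstruction error
```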

* Make the Compilation Working E2E (mlc-ai#1218)

* [Mistral][SWA] Add sliding window to metadata (mlc-ai#1217)

Add sliding window to metadata and make small changes to invariants in the runtime

* Support for `chatml` format conversation (for TinyLlama-1.1B-Chat-v0.2) (mlc-ai#956)

* added support for chatml format conversation

* added template to factory

* Add Rust Support for MLC-LLM (mlc-ai#1213)

This PR introduces Rust language support for the MLC-LLM project, specifically targeting supporting the `ChatModule` interface. It utilizes the existing C++ implementation of MLC-LLM and leverages both TVM's C API and its Rust bindings. The `rust/examples/mlc_chat.rs` gives an example of how to create a `chat_module` and serve user prompts in Rust.

The primary goal of this PR is to enrich the MLC-LLM ecosystem by offering a Rust interface that aligns with the current Python API. This enhancement will empower Rust developers to integrate MLC-LLM into their codebase and applications.

**Followup PRs**: 
- Extend the feature set to achieve parity with the C++/Python interface.
- Refine the Rust API, ensuring robustness.
- Set up Rust CI if needed.

* [Bugfix] Remove dependency on openai_api in chat module (mlc-ai#1222)

Remove dependency on openai_api

* Bake in RAM Usage in the Generated DSO (mlc-ai#1224)

With this PR, the metadata in a DSO file, accessed via `vm["_metadata"]()`, now
has information about the upper-bound RAM estimate for each function.
As an example, the JSON string now is:

```json
{
  "quantization": "q4f16_1",
  "model_type": "llama",
  "memory_usage": {
    "_initialize_effect": 0,
    "prefill": 136192,
    "softmax_with_temperature": 0,
    "decode": 218624
  },
  "params": [
    {"name": "model.embed_tokens.q_weight", "shape": [32000, 512], "dtype": "uint32"},
    {"name": "model.embed_tokens.q_scale", "shape": [32000, 128], "dtype": "float16"},
    ...
  ]
}
```

This helps the MLC runtime better determine whether a method is going to
OOM and plan ahead accordingly, e.g. pre-allocating the KVCache.

The idea originates from Ruihang's ancient PR that prints a memory usage
estimate as debugging information for demo purposes, and this PR further
promotes it to an IRModule-level attribute that can be used by the runtime.
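
A hedged sketch of how a runtime-side consumer might read this metadata (the library path and device are placeholders; it assumes the compiled library exposes `_metadata` as a JSON string, as shown above):

```python
import json

import tvm
from tvm import relax

lib = tvm.runtime.load_module("/path/to/model_lib.so")   # placeholder path
vm = relax.VirtualMachine(lib, tvm.cpu())
metadata = json.loads(vm["_metadata"]())
for func, nbytes in metadata["memory_usage"].items():
    print(f"{func}: upper-bound temporary memory {nbytes} bytes")
```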

* [Fix] ChatModule python messages and offset types (mlc-ai#1220)

small fixes

* [Fix] Variable Upperbound Should be Injected before Build Pipeline (mlc-ai#1225)

Now it shows a more reasonable upper bound for sequence length = 4096.

```json
{
  "_initialize_effect": 0,
  "prefill": 3479311360,
  "softmax_with_temperature": 0,
  "decode": 34531840
}
```

Thanks Ruihang for helping with the fix!

* [MultiGPU] Support pre-sharded model weights (mlc-ai#1096)

* [Bugfix] Correct input shape for shard info function

Prior to this commit, the sharding functions converted the sharded axis
from `orig_size * num_shards` to `orig_size // num_shards`.  This
commit updates the sharding functions to instead convert from
`orig_size` to `orig_size // num_shards` (see the sketch after this change list).

* [Bugfix] Include LegalizeOps in utils.convert_weights

Prior to this commit, `utils.convert_weights` assumed that the
parameter transformation module was already legalized, and used no
relax operations that require legalization.  This commit adds a call
to `relax.transform.LegalizeOps` to remove this assumption.

* [MultiGPU] Cleanup create_shard_info_func

- De-duplicate the `if param.shard_strategy == foo` if/else chain

- Return a `tvm.IRModule` instead of modifying an existing module

* Extract a ParamManager.optimize_transform_param_order method

* Extract ParamManager.create_parameter_transformation call from convert_weights

* Support writing of pre-sharded weights

* Support execution using pre-sharded weights

* Updating for review comments

* fix typo

* fix
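
An illustrative numpy sketch of the sharding convention clarified above (not code from the repository): a weight whose sharded axis has size `orig_size` is split into `num_shards` pieces of size `orig_size // num_shards` each.

```python
import numpy as np

def shard_along_axis(w: np.ndarray, num_shards: int, axis: int):
    assert w.shape[axis] % num_shards == 0, "sharded axis must divide evenly"
    return np.split(w, num_shards, axis=axis)

w = np.arange(32).reshape(8, 4)                  # pretend weight; shard axis 0
shards = shard_along_axis(w, num_shards=2, axis=0)
print([s.shape for s in shards])                 # [(4, 4), (4, 4)]
```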

---------

Co-authored-by: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
Co-authored-by: Roee Shenberg <shenberg@gmail.com>
Co-authored-by: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Co-authored-by: Yaxing Cai <caiyaxing666@gmail.com>
Co-authored-by: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Co-authored-by: Bohan Hou <bohanhou@andrew.cmu.edu>
Co-authored-by: yongjer <54315206+yongjer@users.noreply.github.com>
Co-authored-by: Jeethu Rao <jeethu@jeethurao.com>
Co-authored-by: Junru Shao <junrushao@apache.org>
Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>
Co-authored-by: Denise Kutnick <boca.denise@gmail.com>
Co-authored-by: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Co-authored-by: Junru Shao <junrushao1994@gmail.com>
Co-authored-by: Sunghyun Park <sunggg@umich.edu>
Co-authored-by: “Sunghyun <sunggg@umich.com>
Co-authored-by: Rick Zhou <riczhou@linkedin.com>
Co-authored-by: Varshith Bathini <varshith15@gmail.com>
Co-authored-by: Varshith <varshith.bathini@sprinklr.com>
Co-authored-by: Tianqi Chen <tqchen@users.noreply.github.com>
Co-authored-by: Git bot <bot@noreply.github.com>
Co-authored-by: SingLi <Sing-Li@users.noreply.github.com>
Co-authored-by: Kartik Khandelwal <kartikkhandelwal1998@gmail.com>
Co-authored-by: Goutham Tamilselvan <goutham2688@gmail.com>
Co-authored-by: S A G A R <110724849+tmsagarofficial@users.noreply.github.com>
Co-authored-by: Yuchen Jin <yuchenj@cs.washington.edu>
Co-authored-by: DavidSharma <68979667+David-Sharma@users.noreply.github.com>
Co-authored-by: fennecJ <hwahwa649@gmail.com>
Co-authored-by: Xiyou Zhou <xiyou.zhou@gmail.com>
Co-authored-by: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com>
Co-authored-by: Animesh Bohara <ani.bohara@gmail.com>
Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>
Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>
Co-authored-by: Antonio Calatrava <antonio@antoniocalatrava.com>
sunggg added a commit to octoml/mlc-llm that referenced this pull request Nov 22, 2023
* [API] Add GenerationConfig (mlc-ai#1024)

* Fix two bugs in kv-cache backtrack loop (mlc-ai#856)

Fix two bugs in kv-cache pop loop

Bug 1: old code would stop early because output_ids was shortened in-place during the loop

Bug 2: off-by-one in backoff size due to break

* [Build] Added --pdb flag to build.py, drop into pdb on error (mlc-ai#1017)

This commit adds an optional `--pdb` flag to the `build.py` script. If
passed, any exception raised that would otherwise terminate the script
will first enter a pdb post-mortem, allowing the error to be
inspected.

* [Android] Use `AlertDialog` instead of `Toast` (mlc-ai#1039)

* Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs (mlc-ai#1040)

Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs, build model

* [Android] Add Llama2 q4f16_0 (mlc-ai#1041)

llama2 q4f160

* [Docs] Model prebuilts tracking page revamp (mlc-ai#1000)

* Update compile_models.rst (mlc-ai#1038)

fix permission issue

* Support for the Stable LM 3B model (mlc-ai#1008)

Support for the stablelm-3b-4e1t model

* [Docs] Iterate model prebuilts docs (mlc-ai#1043)

* Iterate model prebuilts docs

* small fix

* Update README.md

* [CPP] Separate common utils out from llm_chat.cc (mlc-ai#1044)

This PR separates out the tokenizer creation function, the
random number generator out from `llm_chat.cc` as a preparation
step for batching inference support, since these functions/modules
are also used in the same way in batching inference.

* Update README.md (mlc-ai#1045)

Update README.md

* add verbose stats to mlc-chat REST API (mlc-ai#1049)

* add verbose stats to mlc-chat REST API

* update docs

* [Transform] Apply split_rotary optimization on prefill (mlc-ai#1033)

* [Transform] Apply split_rotary optimization on prefill

Prior to this commit, the `transform.fuse_split_rotary_embedding`
function was only applicable to the `decode` function of a Llama-type
model.  This was due to the sequence length being restricted to one,
both in the pattern-match rule and in the `split_rotary` function, and
the function being restricted to operate only on the `decode`
function.

This commit updates the `transform.fuse_split_rotary_embedding` pass
to be a `tvm.ir.transform.Pass`, operating on all applicable matched
in the `IRModule`.  The `split_rotary` function is now produced as a
fully-generic function, with static parameters substituted in
afterwards.  At this stage, the sequence length is retained as a
dynamic parameter, such that it can be used by the `prefill` function.

* Avoid multiple kernel launches for split_rotary

* [Docs] Add `mlc.ai/package` to `DEPENDENCY INSTALLATION` group (mlc-ai#1055)

Co-authored-by: Junru Shao <junrushao1994@gmail.com>

* Revert "[Transform] Apply split_rotary optimization on prefill (mlc-ai#1033)" (mlc-ai#1058)

This reverts commit b9179cf as elaborated here mlc-ai#1033 (comment)

* [BugFix] Set the right `max_sequence_length` for both Llama-1 and Llama-2 families (mlc-ai#1032)

* fix

* reflect feedback

---------

Co-authored-by: “Sunghyun <sunggg@umich.com>

* [Doc] Use -U instead of --force-reinstall (mlc-ai#1062)

`--force-reinstall` reinstalls all dependencies of a Python package,
which is unnecessary. `-U` is a better choice in this case.

* [Model] Initial batching support for Llama (mlc-ai#1048)

This PR introduces the initial batched input support for llama
models. To make the code manageable, we keep both the single-sequence
handling flow and the batching handling flow in the Llama modeling.

Now, with `--enable-batching` as a build argument, we build Llama
for the batched version.

NOTE: The paged attention kernel/TIR func are not included in this PR,
so currently the built library with batching enabled is not runnable.
We will follow up with the attention kernel in the future.

This PR guarantees that the existing single-sequence inference (Python
API, CLI, etc.) is not broken.

P.S. The batching flow is subject to bug fixes as we integrate with
the attention function and run the e2e flow in the future.

* Fix Stable LM 3B build (mlc-ai#1061)

* [stablelm 3b] Rename dynamic vocab size from "v" to "vocab_size"

* Add get_num_key_value_heads method to StableLM3bConfig

* [Core] Remove duplication in MODEL.get_model calls (mlc-ai#1054)

This commit removes the `if`/`elif` chain in `core.py`, where the body
of each conditional assigns the same `mod, param_manager, params,
model_config`, and is identical except for the choice of model being
built.

* [ParamManager] Cleanup creation of quantization IRModule (mlc-ai#1053)

This commit replaces the single-parameter
`relax_model.param_manager.create_quantize_func` function with a
method on the `ParamManager`, `create_parameter_transformation`.  This
avoids potential typos between `param_manager` as the imported Python
module `mlc_llm.relax_model.param_manager` and an instance of the
`ParamManager` class named `param_manager`, and makes the
functionality easier to find.

This function also takes an optional `optimize_parameter_order` flag,
defaulting to `True`, which applies the `ReorderTransformFunc` pass.
Since the `ReorderTransformFunc` is intended to be used with several
configuration objects owned by `ParamManager`, this simplifies the
common path of producing an optimally-ordered parameter transformation
module.

* Minor typo fix (mlc-ai#1064)

* Add links to Python API Reference (mlc-ai#1068)

* [Fix] ChatModule incorrect temperature buffer shape (mlc-ai#1070)

PR mlc-ai#1048 updated the signature of softmax in the built model library
and changed the temperature buffer shape in ChatModule. This causes
some existing demo unable to run since we did not do a round of model
library update.

This PR reverts the ChatModule change, and adds back the softmax
function in non-batching case. With this PR, the regression should
be fixed.

* [ParamManager] Added progress bar for get_item/set_item (mlc-ai#1063)

* [Python] Extract common device str parse function in ChatModule (mlc-ai#1074)

This PR lifts the device string parsing (just a few of lines)
to a standalone function, so that on the serving side the serving
can make use of this function as well.

Tested Python API and it does not seem to incur regression.
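
For illustration, a minimal sketch of the device-string parsing described above; the helper name and exact behavior here are assumptions, not the actual ChatModule code:

```python
# Hypothetical parser for device strings such as "cuda", "cuda:0", or "metal:1",
# returning a (device_name, device_id) pair with the id defaulting to 0.
from typing import Tuple


def parse_device_str(device: str) -> Tuple[str, int]:
    name, _, index = device.partition(":")
    return name, int(index) if index else 0


assert parse_device_str("cuda:1") == ("cuda", 1)
assert parse_device_str("metal") == ("metal", 0)
```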

* [Bugfix] Compilation Error in q4f32_1 (mlc-ai#1078)

The pass `fuse-split-rotary` assumes the compute dtype is fp16, which
it usually is, but in certain cases, e.g. `q0f32` and `q4f32_1`, the
compute is based on fp32 instead. This PR strengthens the check guard.

* Establish `mlc_chat.compiler` (mlc-ai#1082)

This PR establishes the compiler components in MLC-Chat Python API,
which currently includes two primary components: models and parameters.

The models are `nn.Module`-based definition of an LLM, which, as the
very first stab, contains only `LlamaForCasualLM`. It is decomposed into
three files:
- `llama_config.py`: common configurations for Llama, where we define
  relevant configurations of its architecture, as well as include
  standard config file for Llama2-7B/13B/70B for convenient testing;
- `llama.py`: the model architecture of Llama, based on the PyTorch-like
`nn.Module` API;
- `llama_parameter.py`: defines the mapping between MLC parameters and
  pytorch parameters.

The parameters contains the basic functionality of parameter mapping,
and the loaders that effectively convert parameters from PyTorch to MLC
according to the mapping specified. Currently, only `HFTorchLoader` is
implemented, but loaders like SafeTensor, GPTQ or AWQ should be quite
straightforward according to the existing design.

On top of this PR, on-the-fly quantization could be defined as a loading
time transformation on MLC parameters, while pre-quantized parameter
loading is effectively parameter loading after MLC's `nn.Module` is
quantized.

Two unit tests exemplify how the infrastructure works:
- `./tests/python/model/test_llama.py` shows how to create an `nn.Module`
using the new infra, and then convert it to TVM IRModule;
- `./tests/python/parameter/hf_torch_loader.py` shows how to load
parameters from HuggingFace PyTorch format.

Besides, `mlc_chat.support` is established for utility functions, which
now contains two utils:
- `config.py`, which supports reading configurations into dataclasses
from a JSON file or Python dict. On top of the Python dataclass, it throws
irrelevant fields into `cls.kwargs`, which is helpful when loading
HuggingFace configuration files (see the sketch after this list);
- `tqdm.py` which contains tqdm-related utilities, primarily redirecting
logging and printing to work nicely with tqdm.
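
For illustration, a hedged sketch of the `config.py` idea from the list above; the dataclass fields and the `from_dict` helper are hypothetical, not the actual `mlc_chat.support.config` code:

```python
# Load a dict into a dataclass, stashing unknown fields into `kwargs`
# instead of raising, similar in spirit to the behavior described above.
from dataclasses import dataclass, field, fields
from typing import Any, Dict


@dataclass
class ModelConfig:
    hidden_size: int
    num_hidden_layers: int
    kwargs: Dict[str, Any] = field(default_factory=dict)

    @classmethod
    def from_dict(cls, src: Dict[str, Any]) -> "ModelConfig":
        known = {f.name for f in fields(cls)} - {"kwargs"}
        picked = {k: v for k, v in src.items() if k in known}
        extra = {k: v for k, v in src.items() if k not in known}
        return cls(**picked, kwargs=extra)


cfg = ModelConfig.from_dict(
    {"hidden_size": 4096, "num_hidden_layers": 32, "rope_theta": 10000.0}
)
print(cfg.kwargs)  # {'rope_theta': 10000.0}
```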

* Update README.md for Multi-GPU (mlc-ai#1090)

* Support lib_path override in C++. Improvements on docs and error messages (mlc-ai#1086)

* Support lib_path option in C++ CLI. Disable ChatConfig.model_lib override in Python API. Improvements on helper messages and error messages

* Update docs

* Rename lib_path -> model_lib_path

* StreamIterator (mlc-ai#1057)

Co-authored-by: Varshith <varshith.bathini@sprinklr.com>

* Update `benchmark.py` according to mlc-ai#1086 (mlc-ai#1091)

Update `benchmark.py`

* Disable Disco for q4f16_ft and q8f16_ft quantization (mlc-ai#1094)

* [Format] Apply isort and black for `python/` (mlc-ai#1097)

[Format] Apply isort and black on `python/`

The commands I am using are:

```
isort --profile black python/
black python/
```

It is always recommended to format the code before submission, given we
don't have a linter CI yet.

* More formatting (mlc-ai#1099)

* Enable Python Linter (mlc-ai#1098)

This PR enables two Python formatters "black" and "isort" on the following directory:
- `./python/`
- `./tests/python/`

Enabling pylint and mypy is left for future work

* Add Basic Pylint and Mypy Tooling (mlc-ai#1100)

Add pylint/mypy tooling into pyproject.toml

This PR establishes the initial Python tooling infra with Pylint and
Mypy. Currently only the newest modules, i.e. `mlc_chat.support` and
`mlc_chat.compiler` are covered, and we expect to cover the entire
package, as being tracked in mlc-ai#1101.

* [CI] Add clang-format (mlc-ai#1103)

* [Slim-LM] Smart path finding for config and weight (mlc-ai#1088)

* [Transform] Provide IRModule transform for rewrite_attention (mlc-ai#1052)

Prior to this commit, `mlc_llm.transform.rewrite_attention` updated a
single function.  This commit modifies it to instead be a transform
operating on any pattern matches within an `IRModule`.

* [ParamManager] Use BundleModelParams for transform_dequantize (mlc-ai#1056)

* [ParamManager] Use BundleModelParams for transform_quantize

Prior to this commit, `ParamManager.transform_quantize` function took
as input functions with separate parameters for each weight tensor,
and produced output functions with a tuple parameter for all weights.
Because `LiftTransformParams` had the same convention, neither could
be applied as part of the same build flow.

This commit updates `ParamManager.transform_quantize` pass to produce
outputs with separate tensor parameters, using the `BundleModelParams`
transform to later combine them into a single tuple parameter.  The
analogous change was also performed for `LiftTransformParams` as part
of apache/tvm#15657.

In addition, prior to this commit, the
`ParamManager.transform_dequantize` function operated directly on a
`IRModule` object.  As a result, any debug instrumentation
(e.g. before/after printouts for each pass, before/after verification
with `relax.analysis.well_formed`, etc.) did not apply to this
`transform_dequantize`.  This commit updates
`ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

* Correct type annotation

* [Slim-LM] Introduce HFLoad for loading Pytorch and SafeTensor weights (mlc-ai#1113)

* [WINDOWS] reduce noise in windows build (mlc-ai#1115)

* Add CLI commands for compilation (mlc-ai#1109)

* Auto updated submodule references

* fix mismatched argument name (mlc-ai#1117)

fix error introduced by recent code changes

fixes mlc-ai#1116

* [Docs] Add doc for max and mean gen len, shift factor; and buildArgs (mlc-ai#1119)

* Add doc for max and mean gen len, shift factor

* Update python docs for BuildArgs

* Revert "[ParamManager] Use BundleModelParams for transform_dequantize" (mlc-ai#1120)

Revert "[ParamManager] Use BundleModelParams for transform_dequantize (mlc-ai#1056)"

This reverts commit e5927ce.

This causes a regression impacting all MLC LLM nightlies as it violates the existing calling convention in MLC Chat runtime. An example: mlc-ai#1060 (comment)

* Remove inaccurate warning message (mlc-ai#1121)

This PR removes an inaccurate warning from mlc-ai#1086, which warns about
`model_lib` overriding regardless of whether or not it's actually
overridden. With this commit, we only warn if its value is not None.

* [REST] OpenAI compatible Rest API (mlc-ai#1107)

* add presence and frequency penalty

* Added support for passing conversation history in /v1/chat/completions endpoint

* Added support for RestAPI parameters max_gen_len, n, and stop_str

* * add presence and frequency penalty to generation config
* refactor generation config

* Added documentation for parameters

* replace lib_path with model_lib_path in rest.py

* fixed black isort issues

* fix lib_path

* Add --opt flag parsing to CLI (mlc-ai#1123)

* [ParamManager][Redo] Use BundleModelParams for transform_dequantize (mlc-ai#1127)

Prior to this commit, `ParamManager.transform_quantize` function took
as input functions with separate parameters for each weight tensor,
and produced output functions with a tuple parameter for all weights.
Because `LiftTransformParams` had the same convention, neither could
be applied as part of the same build flow.

This commit updates `ParamManager.transform_quantize` pass to produce
outputs with separate tensor parameters, using the `BundleModelParams`
transform to later combine them into a single tuple parameter.  The
analogous change was also performed for `LiftTransformParams` as part
of apache/tvm#15657.

In addition, prior to this commit, the
`ParamManager.transform_dequantize` function operated directly on a
`IRModule` object.  As a result, any debug instrumentation
(e.g. before/after printouts for each pass, before/after verification
with `relax.analysis.well_formed`, etc.) did not apply to this
`transform_dequantize`.  This commit updates
`ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

This commit is a repeat of the reverted PR
mlc-ai#1056.  This PR resolves the bug
in the earlier implementation by removing the call to
`.without_attr("num_input")` in `ParamReplacer.rewrite_func`.  This
follows an analogous update in `LiftTransformParams`, preserving the
`"num_input"` attribute for use in `BundleModelParams`.

* added details to windows installation (mlc-ai#1133)

The 32-bit version of the zstd.dll library was causing issues, so the doc was updated to be more specific and point to the 64-bit version.

* Grammatical and Typographical improvements (mlc-ai#1139)

* Update faq.rst

* Update guideline.rst

* Update compile_models.rst

* Update distribute_compiled_models.rst

* Update get-vicuna-weight.rst

* Update python.rst

* Update android.rst

* Update cli.rst

* Update ios.rst

* Update javascript.rst

* Update python.rst

* Update rest.rst

* Minor enhancements to `ChatModule` (mlc-ai#1132)

Some minor enhancements to `ChatModule`, mainly handle the device parsing solely in `_parse_device_str` instead of handling it both in the member function and the `__init__` function to avoid redundancy; and some type annotation fix.

* Updating tvm install docs (mlc-ai#1143)

Updating the tvm install docs to assist a user in finding and copying zstd.dll to the correct folder.

* Make the help info consistent with program name (mlc-ai#1137)

When a user runs `mlc_chat_cli --help`, the output is
something like

Usage: mlc_chat [--help] ...

That's because the program name specified in `cli_main.cc` is "mlc_chat".
It is less confusing if the help output shows

Usage: mlc_chat_cli [--help] ...

* Support parameter packing (mlc-ai#1146)

* [Slim-LM] Enable Group Quant (mlc-ai#1129)

* Enable group quant via new interface.

* Minor fix.

* Linting.

* Fix isort.

* Fix mypy.

* TE compute working.

* Skip embed.

* Support cpu+gpu quantization.

* Add target option to tests.

* Linting.

* Enable Mypy and Pylint in mlc_chat Python Package (mlc-ai#1149)

* Migrate Compiler Passes (mlc-ai#1150)

* Compile Model Preset without External `config.json` (mlc-ai#1151)

This PR adds support for compiling a preset of models without
having to provide a `config.json` on disk using the commands below:

```diff
python -m mlc_chat.cli.compile \
       --quantization q4f16_1 -o /tmp/1.so \
-       --config /models/Llama-2-7b-chat-hf
+       --config llama2_7b
```

This allows easier testing and binary distribution without having to
depend on external model directory.

* Update attention layer (mlc-ai#1153)

Existing dlight optimization only works for NT matmul, but not NN. As a
result, the new `nn.Module`-based implementation, which uses NN matmul,
fails compilation at HEAD for now. This PR fixes this issue by tweaking
`k` to the preferred layout.

The following commands now work with the new compilation pipeline:

```bash
python -m mlc_chat.cli.compile --config llama2_7b  --quantization q4f16_1 -o /tmp/1.so
python -m mlc_chat.cli.compile --config llama2_13b --quantization q4f16_1 -o /tmp/1.so
python -m mlc_chat.cli.compile --config llama2_70b --quantization q4f16_1 -o /tmp/1.so
```

Note that the quantization algorithm per se, `q4f16_1`, has not been
implemented yet, meaning this code path is not yet ready for use so far.

* Add batched Llama model definition using vLLM paged attention (mlc-ai#1134)

* Add batched Llama model with vllm paged attention

* update core.py

* doc

* minor

* add e2e test

* mv file

* clean

* Check if TVM has been built with USE_VLLM

* update BuildArgs docstring

* [Transform][Redo] Apply split_rotary optimization on prefill (mlc-ai#1125)

Prior to this commit, the `transform.fuse_split_rotary_embedding`
function was only applicable to the `decode` function of a Llama-type
model.  This was due to the sequence length being restricted to one,
both in the pattern-match rule and in the `split_rotary` function, and
the function being restricted to operate only on the `decode`
function.

This commit updates the `transform.fuse_split_rotary_embedding` pass
to be a `tvm.ir.transform.Pass`, operating on all applicable matched
in the `IRModule`.  The `split_rotary` function is now produced as a
fully-generic function, with static parameters substituted in
afterwards.  At this stage, the sequence length is retained as a
dynamic parameter, such that it can be used by the `prefill` function.

This commit reapplies the reverted commit
mlc-ai#1033.  The error in the
previous implementation was in the definition of
`rotary_embedding_offset`, which provided the `query_sequence_length`
instead of `kv_sequence_length`.  This was able to pass the validity
tests described
[here](mlc-ai#1058 (comment)),
as these two sequence lengths are identical for the first call.

* Apply rewrite for normal attention and MQA (mlc-ai#1138)

Fixes a bug introduced in mlc-ai#1052,
where use of the `--use-flash-attn-mqa` flag on a model that doesn't
use MQA would prevent the use of CUTLASS attention at all.

* [Rest] Fix emoji handling in Rest API. (mlc-ai#1142)

* [Utility] Check for isinstance(exc, Exception) before entering pdb (mlc-ai#1095)

This is a follow-up to mlc-ai#1017, which added a `--pdb` flag to enter a
debugger on exit.  This commit checks the type of the raised
exception, and only enters the debugger if it is a subclass of
`Exception`.  This ensures that implementation-details, such as a
thrown `SystemExit` or `KeyboardInterrupt`, do not cause an erroneous
entry to pdb.
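
For illustration, a minimal sketch of the behavior described above; the wrapper is hypothetical and not the actual build.py code:

```python
# Enter pdb post-mortem only for genuine Exception subclasses, so that
# SystemExit and KeyboardInterrupt propagate as usual.
import pdb
import sys
import traceback


def run_with_pdb(func):
    try:
        return func()
    except Exception:  # excludes SystemExit and KeyboardInterrupt
        traceback.print_exc()
        pdb.post_mortem(sys.exc_info()[2])
        raise
```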

* [Utils] Remove conversion to numpy array in utils.save_params (mlc-ai#1083)

Prior to this commit, each parameter was converted to a numpy-owned
array as part of a total size computation.  This commit computes the
size directly, removing the conversion.
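
For illustration, a hedged sketch of computing the total size directly from shapes and dtype widths; the helper is hypothetical, not the actual utils.save_params code:

```python
# Sum parameter sizes in bytes without materializing numpy copies.
import math
from typing import Sequence, Tuple


def total_param_bytes(params: Sequence[Tuple[Tuple[int, ...], int]]) -> int:
    """Each entry is (shape, bits_per_element)."""
    return sum(math.prod(shape) * bits // 8 for shape, bits in params)


# e.g. four fp16 matrices of shape (4096, 4096)
print(total_param_bytes([((4096, 4096), 16)] * 4) / 1024**2, "MiB")  # 128.0 MiB
```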

* [Fix][REST] Use lowered-cased "app" (mlc-ai#1159)

* [Rest] Document emoji handling (mlc-ai#1160)

Followup PR of mlc-ai#1142 to document the emoji handling.

* Enable group quant transform with nn.Module (mlc-ai#1154)

* Enable group quant transform with nn.Module

This PR completes the group quantization support for `nn.Module` based model.

* remove deprecated tests

* Update

* wip

* remove deprecated test

* fix lint

* fix lint

* fix lint

---------

Co-authored-by: Junru Shao <junrushao@apache.org>

* Misc Cleanups of Compilation Pipeline (mlc-ai#1165)

* Support CUDA Multi-Arch Compilation (mlc-ai#1166)

* [Bugfix] Cannot find global function `mlc.llm_chat_create` (mlc-ai#1167)

* Fix RWKV Support (mlc-ai#1136)

I successfully ran the rwkv-world-3b fp16 model on my Xiaomi phone. This PR fixes a bug on the main branch where the rwkv model outputs only one word and then stops.

![image](https://github.com/mlc-ai/mlc-llm/assets/35585791/6514d6ef-c93c-4ad2-8e76-8ffa0663080f)

* Auto updated submodule references

* Fix Android app Permission denied error on Android 10  (mlc-ai#1175)

Use scoped storage instead of Downloads directory

Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

* [SLM] Fix group quantization (mlc-ai#1172)

This PR fixes the group quantization and add related unit tests.

* [Fix] TIR block name of dequantization (mlc-ai#1177)

* [SLM][AutoLLM] Enable Command Line Weight Conversion (mlc-ai#1170)

This PR enables weight conversion in command line.
Sample command: `python3 -m mlc_chat.cli.convert_weight --config dist/models/llama-2-13b-chat-hf/ --quantization "q4f16_1" --output dist/test/`

* [Fix][SLM] Update q4f16 quantization with the new mutator name rule (mlc-ai#1178)

[Fix] Update q4f16 quantization with the new mutator name rule

* [Model Support][SWA] Add support for sliding window attention for Mistral (mlc-ai#1087)

* mistral base

* Add sliding window mask making and its tests

* Small changes for sliding window mask

* Clean up mask making

* Remove kv_seq_len

* Add prefill chunking, handle max window size in SWA

* Add interleave kv

* Temporary fix for kv seq len

* Pass in more shapes to SWA prefill and decode in runtime

* mistral var fix

* Small changes regarding shape passing

* Small fix on chunk size

* Add build args, fix mlc chat config dump

* mistral system prompt
---------

Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>
Co-authored-by: David Pissarra <61968959+davidpissarra@users.noreply.github.com>

* Add Python API for Weight Conversion (mlc-ai#1182)

This PR primarily does a major refactoring to introduce Python API that
is consistent with the CLI API. Besides, it includes the following
fixes and enhancements:

- More info provided to `isort` for better formatting in `pyproject.toml`;
- Print out the default value of all arguments in argparse command line;
- Ensure `--device` is always available locally when doing weight
  conversion;
- Add argument echoing in weight conversion to be consistent with its
  counterpart in compilation;
- Add a consistency checker to make sure the shapes/dtypes of all
  tensors from weight conversion is consistent with compilation;
- Echo the total size of parameters;
- Better logging of each parameter's shape and dtype, and whether or not
  it is quantized;
- More structure robustification, renaming `parameter/` to `loader/` to
  be more explicit about its intention;
- Inline and remove `ParamQuantizer` into the loader to improve logging
  and the logic flow;
- Always add instructions "Use `--xxx` to override" for any options that
  are auto detected to be more informative to end users;
- Fix wrong shape calculation when quantizing `nn.Embedding`;
- Fix wrong dtype calculation in group quantization when the input dtype
  is different from model dtype (e.g. "float32" in torch, but the model
  dtype in quantization is fp16 in `q4f16_1`);
- Fix inconsistent param names in layers such as `GroupQuantizeLinear`;
- Fix dtype inconsistency when a parameter is not quantized;
- Fix existing unittests.

* Merge `llama_config.CONFIG` into `MODEL_PRESETS` (mlc-ai#1188)

* Merge llama_config.py into llama_model.py (mlc-ai#1189)

* Add CodeLlama as part of model presets (mlc-ai#1190)

* [Docs] Clarify zstd installation on Windows (mlc-ai#1191)

* [Docs] Clarify zstd installation on Windows (mlc-ai#1196)

Update zstd installation

* Support overriding `--max-sequence-length` in command line (mlc-ai#1197)

* [RestAPI] Added docs (mlc-ai#1193)

Add docs for RestAPI

Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

* [API] ```llm-vscode``` extension support (mlc-ai#1198)

This PR enables ```llm-vscode``` extension API support for copilot-like code completion, following [HF's LSP](https://github.com/huggingface/llm-ls). Fully compatible with ```CodeLlama``` and ```starcoder``` on mlc-llm. 

- huggingface/llm-vscode#103 enhances extension user experience when used with mlc-llm rest api.

Thanks @ pacman100, who came up with this on his latest blogpost: https://huggingface.co/blog/personal-copilot

* [Fix] Use `fabs` as floating point abs function in C++ (mlc-ai#1202)

* Integrating MLC runtime with the new compilation workflow (mlc-ai#1203)

* [Fix] Remove Redundant Warnings (mlc-ai#1204)

PR mlc-ai#1203 introduces some unnecessary and redundant logging messages.
This PR gets them removed.

* Try fix macOS build with picojson (mlc-ai#1206)

The error message below

```
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h: In member function 'std::string picojson::value::to_str() const':
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:494:37: error: expected ')' before 'PRId64'
  494 |       SNPRINTF(buf, sizeof(buf), "%" PRId64, u_.int64_);
      |               ~                     ^~~~~~~
      |                                     )
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:81:1: note: 'PRId64' is defined in header '<cinttypes>'; did you forget to '#include <cinttypes>'?
   80 | #include <errno.h>
  +++ |+#include <cinttypes>
   81 | #include <inttypes.h>

```

indicates that the `__STDC_FORMAT_MACROS` flag is not turned on for some
reason.

* Try fix macOS build with picojson again (mlc-ai#1207)

Try fix macOS build with picojson

* Auto updated submodule references

* [Fix] Keep update-to-date with upstream API change (mlc-ai#1209)

* Detect `mtriple` via LLVM (mlc-ai#1211)

* Fix Python3.8 compatibility breakage (mlc-ai#1210)

The breakage resulted from newer syntax being used for type
annotations, as part of mlc-ai#592.
So long as `mlc_chat.interface.openai_api` wasn't imported, the
breaking changes were not encountered.  In
mlc-ai#1107, the addition of `from
.interface.openai_api import ChatMessage` caused this module to be
imported, breaking compatibility of `mlc_chat.ChatModule` with
Python3.8.

This commit updates the type annotations to the supported syntax.
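
For illustration only (the function below is hypothetical): Python 3.8 lacks the PEP 585/604 annotation syntax, so annotations such as `list[str]` or `str | None` must be written with their `typing` equivalents:

```python
from typing import List, Optional

# Python 3.9+/3.10+ only:
#   def complete(prompts: list[str], stop: str | None = None) -> list[str]: ...

# Python 3.8-compatible equivalent:
def complete(prompts: List[str], stop: Optional[str] = None) -> List[str]:
    return [p + (stop or "") for p in prompts]
```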

* [Slim-LM] Enable loading from AWQ pre-quantized weight. (mlc-ai#1114)

* [SLM] Enable loading from AWQ pre-quantized weight.

* remove awq_loader.py

* Update to the latest commit

* Delete llama_parameter.py

* update unittest

* fix lint

* upd

* add Llama-2-7B-AWQ

* [Bugfix] Fix Cannot import name '_LIB' from 'mlc_chat.base' (mlc-ai#1214)

Fix Python API doc

* [SLM] Support `q3f16_1` and `q4f32_1` (mlc-ai#1215)

This PR supports the int3 and float32 group quantization, and fixes some minor issue in quantization impl and tests.

* Make the Compilation Working E2E (mlc-ai#1218)

* [Mistral][SWA] Add sliding window to metadata (mlc-ai#1217)

Add sliding window to metadata, make smalle changes to invariants in runtime

* Support for `chatml` format conversation (for TinyLlama-1.1B-Chat-v0.2) (mlc-ai#956)

* added support for chatml format conversation

* added template to factory

* Add Rust Support for MLC-LLM (mlc-ai#1213)

This PR introduces Rust language support for the MLC-LLM project, specifically targeting support for the `ChatModule` interface. It utilizes the existing C++ implementation of MLC-LLM and leverages both TVM's C API and its Rust bindings. The `rust/examples/mlc_chat.rs` gives an example of how to create a `chat_module` and serve user prompts in Rust.

The primary goal of this PR is to enrich the MLC-LLM ecosystem by offering a Rust interface that aligns with the current Python API. This enhancement will empower Rust developers to integrate MLC-LLM into their codebase and applications.

**Followup PRs**: 
- Extend the feature set to achieve parity with the C++/Python interface.
- Refine the Rust API, ensuring robustness.
- Set up Rust CI if needed.

* [Bugfix] Remove dependency on openai_api in chat module (mlc-ai#1222)

Remove dependency on openai_api

* Bake in RAM Usage in the Generated DSO (mlc-ai#1224)

With this PR, the metadata in a DSO file, accessed via `vm["_metadata"]()`, now
has information about the upper-bound RAM estimate for each function.
As an example, the JSON string now is:

```json
{
  "quantization": "q4f16_1",
  "model_type": "llama",
  "memory_usage": {
    "_initialize_effect": 0,
    "prefill": 136192,
    "softmax_with_temperature": 0,
    "decode": 218624
  },
  "params": [
    {"name": "model.embed_tokens.q_weight", "shape": [32000, 512], "dtype": "uint32"},
    {"name": "model.embed_tokens.q_scale", "shape": [32000, 128], "dtype": "float16"},
    ...
  ]
}
```

This helps the MLC runtime to better determine if a method is going to
OOM and plan ahead, e.g. plan pre-allocated KVCache, accordingly.

The idea originates from Ruihang's ancient PR that prints memory usage
estimate as debugging information for demo purposes, and this PR further
enhances it into an IRModule-level attribute that can be used by the runtime.
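
For illustration, a hedged sketch of how a runtime might consume this metadata, assuming `vm` is a relax VirtualMachine whose module exposes the `_metadata` function returning the JSON string shown above:

```python
import json


def fits_in_budget(vm, func_name: str, budget_bytes: int) -> bool:
    """Check the baked-in upper-bound RAM estimate before calling a function."""
    metadata = json.loads(vm["_metadata"]())
    usage = metadata["memory_usage"].get(func_name, 0)
    return usage <= budget_bytes


# e.g. fits_in_budget(vm, "prefill", 4 * 1024**3)
```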

* [Fix] ChatModule python messages and offset types (mlc-ai#1220)

small fixes

* [Fix] Variable Upperbound Should be Injected before Build Pipeline (mlc-ai#1225)

Now it shows a more reasonable upper bound for sequence length = 4096.

```json
{
  "_initialize_effect": 0,
  "prefill": 3479311360,
  "softmax_with_temperature": 0,
  "decode": 34531840
}
```

Thanks Ruihang for helping with the fix!

* [MultiGPU] Support pre-sharded model weights (mlc-ai#1096)

* [Bugfix] Correct input shape for shard info function

Prior to this commit, the sharding functions converted the sharded axis
from `orig_size * num_shards` to `orig_size // num_shards`.  This
commit updates the sharding functions to instead convert from
`orig_size` to `orig_size // num_shards`.

* [Bugfix] Include LegalizeOps in utils.convert_weights

Prior to this commit, `utils.convert_weights` assumes that the
parameter transformation module is already legalized, and uses no
relax operations that require legalization.  This commit adds a call
to `relax.transform.LegalizeOps` to remove this assumption.

* [MultiGPU] Cleanup create_shard_info_func

- De-duplicate the `if param.shard_strategy == foo` if/else chain

- Return a `tvm.IRModule` instead of modifying an existing module

* Extract a ParamManager.optimize_transform_param_order method

* Extract ParamManager.create_parameter_transformation call from convert_weights

* Support writing of pre-sharded weights

* Support execution using pre-sharded weights

* Updating for review comments

* fix typo

* [AWQ] e2e awq-quantized model (mlc-ai#1229)

* [SLM] Support `q0f16` and `q0f32` (mlc-ai#1228)

This PR adds the support of `q0f16` and `q0f32`, and change `RMSNorm` to `nn.RMSNorm`.

* [Core][Llama] Argument `max_vocab_size` and `max_batch_size` (mlc-ai#1076)

This PR introduces the `max_vocab_size` and `max_batch_size`
as two new compile arguments. The purpose is for better
memory planning.

Besides, this PR updates llama to make use of the two arguments.
Other models are not changed yet.

The default value for `max_vocab_size` is set to 40000, which
I think is larger than the vocabulary sizes of most models. The default
value for `max_batch_size` is currently 256; we may revisit this
default in the future to pick a better number.

* [Llama] Support batched prefill (mlc-ai#1233)

This PR supports the Llama modeling with batched prefill, which
can bring higher throughput for the overall prefill process in
serving.

Besides, the PR splits the attention function used in batching
settings into two separate ones, so that we do not dispatch to
the prefill/decode attention functions at runtime.

* [Core] Skip PrimExpr index int32 downcasting for batching (mlc-ai#1234)

This PR makes the ForceNarrowIndexToInt32 pass skip application
when batching is enabled.

The reason is that the flattened index of the KV cache append
function may exceed the range of int32 when the cache is large.
For example, in Llama-7b, when a KV cache supports more than
8192 tokens, the total cache size will be at least
```
8192 * 2 (K/V) * 32 (layers) * 4096 = 2147483648,
```
which exceeds the maximum signed int32 value of 2,147,483,647.
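
For illustration, a quick check of the bound above:

```python
# 8192 tokens * 2 (K/V) * 32 layers * 4096 hidden size, counted in elements.
flattened_index_upper_bound = 8192 * 2 * 32 * 4096
INT32_MAX = 2**31 - 1
print(flattened_index_upper_bound)              # 2147483648
print(flattened_index_upper_bound > INT32_MAX)  # True
```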

* Auto updated submodule references

* Update index.rst (mlc-ai#1236)

Fixed typo on tab:: Android

* Update android.rst (mlc-ai#1237)

On Linux, the TVM_NDK_CC environment variable should contain linux-x86_64

* Correct typo in cuda device name for rust chat model (mlc-ai#1241)

* Generating mlc-chat-config.json (mlc-ai#1238)

This PR finishes the last piece of new compilation pipeline, i.e.
generation of `mlc-chat-config.json` and other configuration files.

* Rename `--config` to `--model` and Consolidate CLI Messages (mlc-ai#1244)

* Specify argument "dest" in argparse (mlc-ai#1245)

* Add more stats during quantization (mlc-ai#1246)

* ensure that max_gen_len is set properly in mlc_chat_config (mlc-ai#1249)

Currently, `max_gen_len` defaults to 512 in `dump_mlc_chat_config`. However, the instantiations of `dump_mlc_chat_config` within `mlc-llm.build` currently omit the `max_gen_len` argument (even when it's specified in the HF config), so the default of 512 gets set for every `mlc-chat-config.json` that is created by `mlc-llm.build`. This PR fixes the issue.

* [Fix] Memory usage statistics (mlc-ai#1252)

* Introduce mlc_chat subcommands (mlc-ai#1251)

This PR makes it possible to use subcommands of `mlc_chat` package to
control quantization and compilation. Example:

```bash
python -m mlc_chat convert_weight \
    --model /models/Llama-2/hf/Llama-2-7b-chat-hf \
    --quantization q4f16_1 \
    -o ./dist/new-llama/

python -m mlc_chat gen_mlc_chat_config \
    --model ./dist/models/Llama-2-7b-hf \
    --quantization q4f16_1 \
    --max-sequence-length 4096 \
    --conv-template LM \
    -o ./dist/new-llama \

python -m mlc_chat compile \
    --model ./dist/models/Llama-2-7b-hf \
    --quantization q4f16_1 \
    --max-sequence-length 4096 \
    -o ./dist/new-llama/llama.so
```

It slightly simplifies the workflow.

* Update mlc-chat-config.json (mlc-ai#1254)

This PR updates two fields:
* `tokenizer_files`, from which non-existent files are now removed
  (see the sketch after this list);
* `model_preset_tag` added to `model_config`, which helps the system to
  conveniently identify if a model configuration is already part of the
  system's built-in model preset.
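
For illustration, a hedged sketch of the `tokenizer_files` filtering mentioned in the list above; the file names and directory are placeholders:

```python
# Keep only tokenizer files that actually exist in the output directory.
from pathlib import Path

candidate_files = ["tokenizer.json", "tokenizer.model", "added_tokens.json"]
model_dir = Path("./dist/new-llama")
tokenizer_files = [f for f in candidate_files if (model_dir / f).exists()]
```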

* [Rust] Support multiple prompts (mlc-ai#1253)

This PR introduces `Prompt` and `ChatMessage` structures, and enhances the `ChatModule` to generate tokens using either a single string (via `Prompt::String`) or a vector of `ChatMessage` (via `Prompt::MessageList`). An example is provided in [rust/examples/mlc_chat.rs](https://github.com/mlc-ai/mlc-llm/compare/main...YuchenJin:mlc-llm:multi-prompts?expand=1#diff-4ffa9349207c1df6ceeebe06a9afc8f2015000e031b39d677bbbe7e85ae2819b).

Here is a snippet demonstrating the interface:
```rust
let message1 = ChatMessage {
    role: "user".to_owned(),
    content: "suppose we already have projects llama, alpaca and vicuna, what do you think would be a great name for the next project?".to_string(),
};
let message2 = ChatMessage {
    role: "assistant".to_owned(),
    content: "based on the previous projects, a possible name for the next project could be \"cervidae\" which is the scientific name for deer family. this name reflects the collaboration and teamwork involved in the development of the project, and also nods to the previous projects that have been developed by the team.".to_string(),
};
let message3 = ChatMessage {
    role: "user".to_owned(),
    content: "Summarize our conversations".to_string(),
};
let messages = vec![message1, message2, message3];
let prompt = Prompt::MessageList(messages);
let output = cm.generate(&prompt, None).unwrap();
```

* [UI] Correct "convert_weight_only" to "convert_weights_only" (mlc-ai#1227)

* [UI] Correct "convert_weight_only" to "convert_weights_only"

This is a frequent typo among multiple developers, as "weights" is
typically plural.  This commit updates the command-line-argument from
`--convert-weight-only` to `--convert-weights-only`.  For backwards
compatibility, the original spelling is kept as an equivalent usage.

* Update all use of "convert_weight_only" to "convert_weights_only"

* Add a downloader from HuggingFace (mlc-ai#1258)

This PR allows programmatically downloading from HuggingFace to MLC's cache
directory, which is located in `$HOME/.cache/mlc_chat/model_weights/` by
default.

This PR relies on Git to clone the metadata, and Python's requests
library to fetch concrete weights as large files instead of the less
reliable Git LFS.

The example demonstrates downloading the 4-bit quantized Llama2-7B
model:

```python
from mlc_chat.support.download import download_mlc_weights

download_mlc_weights("HF://mlc-ai/mlc-chat-Llama-2-7b-chat-hf-q4f16_1")
```

Screenshot:

<img width="1913" alt="image" src="https://github.com/mlc-ai/mlc-llm/assets/22515877/3ac50594-4971-4216-bb17-47710b4af1dd">

* [Fix] Add prefix_tokens to `ConvConfig` in Python to match C++ implementation (mlc-ai#1256)

During my Rust implementation of the project, I noticed an inconsistency between the Python and C++ implementations of `ConvConfig`. Specifically, the Python version lacks the `prefix_tokens` field, which is present in the C++ version.: https://github.com/mlc-ai/mlc-llm/blob/5e02cacd8ebba2e0206d5a447225a137de2dac0d/cpp/conversation.h#L69-L70.

This can cause the [`_load_json_override`](https://github.com/mlc-ai/mlc-llm/blob/5e02cacd8ebba2e0206d5a447225a137de2dac0d/python/mlc_chat/chat_module.py#L1062C26-L1062C26) to fail in the `_prefill` function.

I think a simple unit test would help, I'd like to add a regression test if the CI has been set up.

* [nn.Module] Mistral implementation (mlc-ai#1230)

* Add mistral high level structure

* Small config change

* Now work with compile, mistral inference logic left

* Add masking, cache_len, kv_seq_len; only attention forward left

* fix mistral override naming

* `interleave_kv` implementation

* lint fix

* move rolling buffer cache impl to mlc-llm

* isort fix

* nn.module implementation - reorganize structure

* Update python/mlc_chat/cli/compile.py

Co-authored-by: Junru Shao <junrushao1994@gmail.com>

* lint fix

---------

Co-authored-by: Charlie Ruan <charlieruan2001130@gmail.com>
Co-authored-by: Junru Shao <junrushao1994@gmail.com>

* Add `mlc_chat.__main__` as command line entrypoint (mlc-ai#1263)

This PR makes it possible to invoke mlc_chat subcommands directly.

Previously one has to use `python -m` as the prefix to invoke
`mlc_chat`:

```bash
python -m mlc_chat compile \
  --model /models/Llama-2-7b-chat-hf \
  --quantization q4f16_1 \
  --max-sequence-length 4096 \
  -o ./llama.so
```

This PR makes is possible to use it without the `python -m` prefix:

```bash
mlc_chat compile \
  --model /models/Llama-2-7b-chat-hf \
  --quantization q4f16_1 \
  --max-sequence-length 4096 \
  -o ./llama.so
```

* [Rust] Improve ergonomics of `generate` function in `ChatModule`  (mlc-ai#1262)

Following PR mlc-ai#1253, I think ergonomics of the `generate` function of `ChatModule` can be improved (given it's an important public-facing API). 

This PR simplifies the function's usage by implementing the `From` trait for the `Prompt` enum. Also updated the example code.

Now the interface changes to:

```rust
/// Single prompt case:
cm.generate("what is the meaning of life?", None)

/// Multiple prompt case:
let messages: Vec<ChatMessage> = vec![message1, message2, message3];
let output = cm.generate(messages, None).unwrap();
```

* [Fix] mistral `max_gen_len` (mlc-ai#1264)

* Rename `max-sequence-length` to `context-window-size` (mlc-ai#1265)

"Context window" is a terminology better aligned with LLM world.
Whenever a new model is trained, it is one of the most important metrics
that people care about. Therefore, I'd love to switch it over sooner
than later, before "mlc_chat compile" becomes mature and documented.

* Auto updated submodule references

* Fix group quantization shape infer (mlc-ai#1273)

This PR fixes the shape infer for group quantization.

* Continuous Model Delivery (mlc-ai#1272)

This PR provides a script that automatically quantizes models from
HuggingFace using various quantization formats as specified.

Example: When being provided the following JSON file:

```json
{
  "destination": "{username}/{model_id}-{quantization}", # Name of HF repo
  "default_quantization": ["q0f16", "q0f32", "q3f16_1", "q4f16_1", "q4f32_1"],
  "tasks": [
    {
      "model_id": "Llama-2-7b-hf",
      "model": "/models/Llama-2-7b-hf", # Can be HF URL or a local path
      "context_window_size": 4096,
      "conv_template": "LM",
      "quantization": [
        {
          "format": "q4f16_awq",
          "model": "https://huggingface.co/TheBloke/Llama-2-7B-AWQ", # Overriding default `source`
          "source_format": "awq"
        }
      ]
    }
  ]
}
```

The script will automatically run quantization and upload them to the
following repos:
- https://huggingface.co/junrushao/Llama-2-7b-hf-q0f16
- https://huggingface.co/junrushao/Llama-2-7b-hf-q0f32
- https://huggingface.co/junrushao/Llama-2-7b-hf-q3f16_1
- https://huggingface.co/junrushao/Llama-2-7b-hf-q4f16_1
- https://huggingface.co/junrushao/Llama-2-7b-hf-q4f32_1
- https://huggingface.co/junrushao/Llama-2-7b-hf-q4f16_awq

* Auto updated submodule references

* Enhance Model Delivery (mlc-ai#1283)

This PR introduces a few enhancements:
- Allow to override temporary path via environment variable `MLC_TEMP_DIR`;
- Add a 10-time retry when uploading the quantized weights to
  HuggingFace Hub. It could fail at times;
- Echo the commands being used to quantize the models in `logs.txt`;
- Fix a compatibility issue when pulling individual weights down from
  HuggingFace Hub in Git LFS.

* add python, rest api test (mlc-ai#1278)

* add python, rest api test

* remove mistral, fix pylint

* fix pylint requests import error

* Enable Jenkins CI (mlc-ai#1292)

* fix

* Update android.rst (mlc-ai#1289)

This fix enables the default models in app-config.json to be shown as "downloaded" in the model list when they are pushed via the adb push method.

* more fix

* Consolidate Logics for GPU Detection (mlc-ai#1297)

This PR unifies automatic device detection logic by using
`mlc_chat.support.auto_device`, which comes with detailed logging and
fallback mechanisms.

* [CI] Fix lint concurrent clone issue (mlc-ai#1299)

This PR fixes the broken CI due to different tasks sharing the same
workspace.

* Auto updated submodule references

* [Feature] Prefill chunking for non-SWA models (mlc-ai#1280)

* generalize `prefill-chunk-size`

* renaming `cache_len` to `rolling_cache_len`

* [nn.Module] generalize `prefill_chunk_size`

* quick fix

* lint fix

* check sw with chunking

* fix `_attach_variable_bounds`

* update config from lib metadata

* cleanup

cleanup

* metadata fix

* Compatible with chatglm  (mlc-ai#979)

compatible for chatglm

* Add q4/q8_ft_group quantization mode (mlc-ai#1284)

* Add q4/q8_ft_group quantization mode

* Update submodule

* fix

* restore multi gpu support for FT quant

---------

Co-authored-by: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
Co-authored-by: Roee Shenberg <shenberg@gmail.com>
Co-authored-by: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Co-authored-by: Yaxing Cai <caiyaxing666@gmail.com>
Co-authored-by: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Co-authored-by: Bohan Hou <bohanhou@andrew.cmu.edu>
Co-authored-by: yongjer <54315206+yongjer@users.noreply.github.com>
Co-authored-by: Jeethu Rao <jeethu@jeethurao.com>
Co-authored-by: Junru Shao <junrushao@apache.org>
Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>
Co-authored-by: Denise Kutnick <boca.denise@gmail.com>
Co-authored-by: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Co-authored-by: Junru Shao <junrushao1994@gmail.com>
Co-authored-by: Sunghyun Park <sunggg@umich.edu>
Co-authored-by: “Sunghyun <sunggg@umich.com>
Co-authored-by: Rick Zhou <riczhou@linkedin.com>
Co-authored-by: Varshith Bathini <varshith15@gmail.com>
Co-authored-by: Varshith <varshith.bathini@sprinklr.com>
Co-authored-by: Tianqi Chen <tqchen@users.noreply.github.com>
Co-authored-by: Git bot <bot@noreply.github.com>
Co-authored-by: SingLi <Sing-Li@users.noreply.github.com>
Co-authored-by: Kartik Khandelwal <kartikkhandelwal1998@gmail.com>
Co-authored-by: Goutham Tamilselvan <goutham2688@gmail.com>
Co-authored-by: S A G A R <110724849+tmsagarofficial@users.noreply.github.com>
Co-authored-by: Yuchen Jin <yuchenj@cs.washington.edu>
Co-authored-by: DavidSharma <68979667+David-Sharma@users.noreply.github.com>
Co-authored-by: fennecJ <hwahwa649@gmail.com>
Co-authored-by: Xiyou Zhou <xiyou.zhou@gmail.com>
Co-authored-by: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com>
Co-authored-by: Animesh Bohara <ani.bohara@gmail.com>
Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>
Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>
Co-authored-by: Antonio Calatrava <antonio@antoniocalatrava.com>
Co-authored-by: Aman Kushwaha <aman.kushwaha.inbox@gmail.com>
Co-authored-by: Malcolm Ramsay <m@malramsay.com>
Co-authored-by: Denise Kutnick <denise@octoml.ai>
Co-authored-by: Charlie Ruan <charlieruan2001130@gmail.com>
Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Co-authored-by: Masahiro Masuda <maasahi129@gmail.com>
Co-authored-by: ChaoQin <qc903113684@163.com>
Co-authored-by: Wuwei Lin <wuwei@apache.org>
masahi added a commit to octoml/mlc-llm that referenced this pull request Dec 1, 2023
* [API] Add GenerationConfig (#1024)

* Fix two bugs in kv-cache backtrack loop (#856)

Fix two bugs in kv-cache pop loop

Bug 1: old code would stop early because output_ids was shortened in-place during the loop

Bug 2: off-by-one in backoff size due to break

* [Build] Added --pdb flag to build.py, drop into pdb on error (#1017)

This commit adds an optional `--pdb` flag to the `build.py` script. If
passed, any exception raised that would otherwise terminate the script
will first enter a pdb post-mortem, allowing the error to be
inspected.

* [Android] Use `AlertDialog` instead of `Toast` (#1039)

* Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs (#1040)

Add doc for ChatConfig, ConvConfig, GenerationConfig, BuildArgs, build model

* [Android] Add Llama2 q4f16_0 (#1041)

llama2 q4f160

* [Docs] Model prebuilts tracking page revamp (#1000)

* Update compile_models.rst (#1038)

fix permission issue

* Support for the Stable LM 3B model (#1008)

Support for the stablelm-3b-4e1t model

* [Docs] Iterate model prebuilts docs (#1043)

* Iterate model prebuilts docs

* small fix

* Update README.md

* [CPP] Separate common utils out from llm_chat.cc (#1044)

This PR separates out the tokenizer creation function, the
random number generator out from `llm_chat.cc` as a preparation
step for batching inference support, since these functions/modules
are also used in the same way in batching inference.

* Update README.md (#1045)

Update README.md

* add verbose stats to mlc-chat REST API (#1049)

* add verbose stats to mlc-chat REST API

* update docs

* [Transform] Apply split_rotary optimization on prefill (#1033)

* [Transform] Apply split_rotary optimization on prefill

Prior to this commit, the `transform.fuse_split_rotary_embedding`
function was only applicable to the `decode` function of a Llama-type
model.  This was due to the sequence length being restricted to one,
both in the pattern-match rule and in the `split_rotary` function, and
the function being restricted to operate only on the `decode`
function.

This commit updates the `transform.fuse_split_rotary_embedding` pass
to be a `tvm.ir.transform.Pass`, operating on all applicable matched
in the `IRModule`.  The `split_rotary` function is now produced as a
fully-generic function, with static parameters substituted in
afterwards.  At this stage, the sequence length is retained as a
dynamic parameter, such that it can be used by the `prefill` function.

* Avoid multiple kernel launches for split_rotary

* [Docs] Add `mlc.ai/package` to `DEPENDENCY INSTALLATION` group (#1055)

Co-authored-by: Junru Shao <junrushao1994@gmail.com>

* Revert "[Transform] Apply split_rotary optimization on prefill (#1033)" (#1058)

This reverts commit b9179cfdf02e041be15871b36e74400ab9001921 as elaborated here https://github.com/mlc-ai/mlc-llm/pull/1033#issuecomment-1760386712

* [BugFix] Set the right `max_sequence_length` for both Llama-1 and Llama-2 families (#1032)

* fix

* reflect feedback

---------

Co-authored-by: “Sunghyun <sunggg@umich.com>

* [Doc] Use -U instead of --force-reinstall (#1062)

`--force-reinstall` reinstalls all dependencies of a Python package,
which is unnecessary. `-U` is a better choice in this case.

* [Model] Initial batching support for Llama (#1048)

This PR introduces the initial batched input support for llama
models. To make the code manageable, we keep both the single-sequence
handling flow and the batching handling flow in the Llama modeling.

Now, with `--enable-batching` as a build argument, we build Llama
for the batched version.

NOTE: The paged attention kernel/TIR func are not included in this PR,
so currently the built library with batching enabled is not runnable.
We will follow up with the attention kernel in the future.

This PR guarantees that the existing single-sequence inference (Python
API, CLI, etc.) is not broken.

P.S. The batching flow is subject to bug fixes as we integrate with
the attention function and run the e2e flow in the future.

* Fix Stable LM 3B build (#1061)

* [stablelm 3b] Rename dynamic vocab size from "v" to "vocab_size"

* Add get_num_key_value_heads method to StableLM3bConfig

* [Core] Remove duplication in MODEL.get_model calls (#1054)

This commit removes the `if`/`elif` chain in `core.py`, where the body
of each conditional assigns the same `mod, param_manager, params,
model_config`, and is identical except for the choice of model being
built.

* [ParamManager] Cleanup creation of quantization IRModule (#1053)

This commit replaces the single-parameter
`relax_model.param_manager.create_quantize_func` function with a
method on the `ParamManager`, `create_parameter_transformation`.  This
avoids potential typos between `param_manager` as the imported Python
module `mlc_llm.relax_model.param_manager` and an instance of the
`ParamManager` class named `param_manager`, and makes the
functionality easier to find.

This function also takes an optional `optimize_parameter_order` flag,
defaulting to `True`, which applies the `ReorderTransformFunc` pass.
Since the `ReorderTransformFunc` is intended to be used with several
configuration objects owned by `ParamManager`, this simplifies the
common path of producing an optimally-ordered parameter transformation
module.

* Minor typo fix (#1064)

* Add links to Python API Reference (#1068)

* [Fix] ChatModule incorrect temperature buffer shape (#1070)

PR #1048 updated the signature of softmax in the built model library
and changed the temperature buffer shape in ChatModule. This causes
some existing demo unable to run since we did not do a round of model
library update.

This PR reverts the ChatModule change, and adds back the softmax
function in non-batching case. With this PR, the regression should
be fixed.

* [ParamManager] Added progress bar for get_item/set_item (#1063)

* [Python] Extract common device str parse function in ChatModule (#1074)

This PR lifts the device string parsing (just a few of lines)
to a standalone function, so that on the serving side the serving
can make use of this function as well.

Tested Python API and it does not seem to incur regression.

* [Bugfix] Compilation Error in q4f32_1 (#1078)

The pass `fuse-split-rotary` assumes the compute dtype is fp16, which
it usually is, but in certain cases, e.g. `q0f32` and `q4f32_1`, the
compute is based on fp32 instead. This PR strengthens the check guard.

* Establish `mlc_chat.compiler` (#1082)

This PR establishes the compiler components in MLC-Chat Python API,
which currently includes two primary components: models and parameters.

The models are `nn.Module`-based definition of an LLM, which, as the
very first stab, contains only `LlamaForCasualLM`. It is decomposed into
three files:
- `llama_config.py`: common configurations for Llama, where we define
  relevant configurations of its architecture, as well as include
  standard config file for Llama2-7B/13B/70B for convenient testing;
- `llama.py`: the model architecture of Llama, based on the PyTorch-like
`nn.Module` API;
- `llama_parameter.py`: defines the mapping between MLC parameters and
  pytorch parameters.

The parameters contains the basic functionality of parameter mapping,
and the loaders that effectively convert parameters from PyTorch to MLC
according to the mapping specified. Currently, only `HFTorchLoader` is
implemented, but loaders like SafeTensor, GPTQ or AWQ should be quite
straightforward according to the existing design.

On top of this PR, on-the-fly quantization could be defined as a loading
time transformation on MLC parameters, while pre-quantized parameter
loading is effectively parameter loading after MLC's `nn.Module` is
quantized.

Two unit tests exemplify how the infrastructure works:
- `./tests/python/model/test_llama.py` shows how to create an `nn.Module`
using the new infra, and then convert it to TVM IRModule;
- `./tests/python/parameter/hf_torch_loader.py` shows how to load
parameters from HuggingFace PyTorch format.

Besides, `mlc_chat.support` is established for utility functions, which
now contains two utils:
- `config.py` which supports reading configurations into dataclasses
from JSON file or Python dict. On top of Python dataclass, it throws
irrelevant fields into `cls.kwargs`, which is helpful when loading
HuggingFace configuration file;
- `tqdm.py` which contains tqdm-related utilities, primarily redirecting
logging and printing to work nicely with tqdm.

* Update README.md for Multi-GPU (#1090)

* Support lib_path override in C++. Improvements on docs and error messages (#1086)

* Support lib_path option in C++ CLI. Disable ChatConfig.model_lib override in Python API. Improvements on helper messages and error messages

* Update docs

* Rename lib_path -> model_lib_path

* StreamIterator (#1057)

Co-authored-by: Varshith <varshith.bathini@sprinklr.com>

* Update `benchmark.py` according to #1086 (#1091)

Update `benchmark.py`

* Disable Disco for q4f16_ft and q8f16_ft quantization (#1094)

* [Format] Apply isort and black for `python/` (#1097)

[Format] Apply isort and black on `python/`

The commands I am using are:

```
isort --profile black python/
black python/
```

It is always recommended to format the code before submission, given we
don't have a linter CI yet.

* More formatting (#1099)

* Enable Python Linter (#1098)

This PR enables two Python formatters "black" and "isort" on the following directory:
- `./python/`
- `./tests/python/`

Enabling pylint and mypy is left for future work

* Add Basic Pylint and Mypy Tooling (#1100)

Add pylint/mypy tooling into pyproject.toml

This PR establishes the initial Python tooling infra with Pylint and
Mypy. Currently only the newest modules, i.e. `mlc_chat.support` and
`mlc_chat.compiler` are covered, and we expect to cover the entire
package, as being tracked in #1101.

* [CI] Add clang-format (#1103)

* [Slim-LM] Smart path finding for config and weight (#1088)

* [Transform] Provide IRModule transform for rewrite_attention (#1052)

Prior to this commit, `mlc_llm.transform.rewrite_attention` updated a
single function.  This commit modifies it to instead be a transform
operating on any pattern matches within an `IRModule`.

* [ParamManager] Use BundleModelParams for transform_dequantize (#1056)

* [ParamManager] Use BundleModelParams for transform_quantize

Prior to this commit, `ParamManager.transform_quantize` function took
as input functions with separate parameters for each weight tensor,
and produced output functions with a tuple parameter for all weights.
Because `LiftTransformParams` had the same convention, neither could
be applied as part of the same build flow.

This commit updates `ParamManager.transform_quantize` pass to produce
outputs with separate tensor parameters, using the `BundleModelParams`
transform to later combine them into a single tuple parameter.  The
analogous change was also performed for `LiftTransformParams` as part
of https://github.com/apache/tvm/pull/15657.

In addition, prior to this commit, the
`ParamManager.transform_dequantize` function operated directly on an
`IRModule` object.  As a result, any debug instrumentation
(e.g. before/after printouts for each pass, before/after verification
with `relax.analysis.well_formed`, etc.) did not apply to this
`transform_dequantize`.  This commit updates
`ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

* Correct type annotation
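
As a hedged sketch of the `ir.transform.Pass` packaging described above (and not the actual `ParamManager` code), a module-level rewrite can be exposed as a TVM module pass so that pass instrumentation such as before/after printouts and well-formed checks applies to it:

```python
import tvm
from tvm import IRModule


@tvm.ir.transform.module_pass(opt_level=0, name="SketchTransformDequantize")
class SketchTransformDequantize:
    """Illustrative no-op pass standing in for the real dequantize rewrite."""

    def transform_module(self, mod: IRModule, ctx: tvm.transform.PassContext) -> IRModule:
        # A real implementation would rewrite dequantization here.
        return mod


# Applying it like any other pass keeps it visible to PassContext instrumentation:
# new_mod = SketchTransformDequantize()(mod)
```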

* [Slim-LM] Introduce HFLoad for loading Pytorch and SafeTensor weights (#1113)

* [WINDOWS] reduce noise in windows build (#1115)

* Add CLI commands for compilation (#1109)

* Auto updated submodule references

* fix mismatched argument name (#1117)

fix error introduced by recent code changes

fixes #1116

* [Docs] Add doc for max and mean gen len, shift factor; and buildArgs (#1119)

* Add doc for max and mean gen len, shift factor

* Update python docs for BuildArgs

* Revert "[ParamManager] Use BundleModelParams for transform_dequantize" (#1120)

Revert "[ParamManager] Use BundleModelParams for transform_dequantize (#1056)"

This reverts commit e5927cee3b932b6e3116b43778008a3aa11ef0a3.

This causes a regression impacting all MLC LLM nightlies as it violates the existing calling convention in MLC Chat runtime. An example: https://github.com/mlc-ai/mlc-llm/issues/1060#issuecomment-1776761032

* Remove inaccurate warning message (#1121)

This PR removes an inaccurate warning from #1086, which warns about
`model_lib` overriding regardless of whether or not it's actually
overridden. With this commit, we only warn if its value is not None.

* [REST] OpenAI compatible Rest API (#1107)

* add presence and frequency penalty

* Added support for passing conversation history in the /v1/chat/completions endpoint (see the usage sketch after this list)

* Added support for RestAPI parameters max_gen_len, n, and stop_str

* * add presence and frequency penalty to generation config
* refactor generation config

* Added documentation for parameters

* replace lib_path with model_lib_path in rest.py

* fixed black isort issues

* fix lib_path
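
As a usage sketch only (the host, port, and payload fields are assumptions and may not match the server's exact schema), the `/v1/chat/completions` endpoint mentioned above can be exercised roughly like this once the REST server is running locally:

```python
import requests

payload = {
    "messages": [{"role": "user", "content": "What is MLC LLM?"}],
    "stream": False,
}
# Assumes the REST server is listening on localhost:8000.
resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions", json=payload, timeout=60
)
print(resp.json())
```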

* Add --opt flag parsing to CLI (#1123)

* [ParamManager][Redo] Use BundleModelParams for transform_dequantize (#1127)

Prior to this commit, `ParamManager.transform_quantize` function took
as input functions with separate parameters for each weight tensor,
and produced output functions with a tuple parameter for all weights.
Because `LiftTransformParams` had the same convention, neither could
be applied as part of the same build flow.

This commit updates `ParamManager.transform_quantize` pass to produce
outputs with separate tensor parameters, using the `BundleModelParams`
transform to later combine them into a single tuple parameter.  The
analogous change was also performed for `LiftTransformParams` as part
of https://github.com/apache/tvm/pull/15657.

In addition, prior to this commit, the
`ParamManager.transform_dequantize` function operated directly on an
`IRModule` object.  As a result, any debug instrumentation
(e.g. before/after printouts for each pass, before/after verification
with `relax.analysis.well_formed`, etc.) did not apply to this
`transform_dequantize`.  This commit updates
`ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

This commit is a repeat of the reverted PR
https://github.com/mlc-ai/mlc-llm/pull/1056.  This PR resolves the bug
in the earlier implementation by removing the call to
`.without_attr("num_input")` in `ParamReplacer.rewrite_func`.  This
follows an analogous update in `LiftTransformParams`, preserving the
`"num_input"` attribute for use in `BundleModelParams`.

* added details to windows installation (#1133)

The 32-bit version of the zstd.dll library was causing issues, so the doc was updated to be more specific and to point to the 64-bit version.

* Grammatical and Typographical improvements (#1139)

* Update faq.rst

* Update guideline.rst

* Update compile_models.rst

* Update distribute_compiled_models.rst

* Update get-vicuna-weight.rst

* Update python.rst

* Update android.rst

* Update cli.rst

* Update ios.rst

* Update javascript.rst

* Update python.rst

* Update rest.rst

* Minor enhancements to `ChatModule` (#1132)

Some minor enhancements to `ChatModule`, mainly handling device parsing solely in `_parse_device_str` instead of in both that member function and `__init__` to avoid redundancy, plus some type annotation fixes.

* Updating tvm install docs (#1143)

Updating the tvm install docs to assist a user in finding and copying zstd.dll to the correct folder.

* Make the help info consistent with program name (#1137)

When a user runs the command `mlc_chat_cli --help`, the output will be
something like

Usage: mlc_chat [--help] ...

That's because the program name specified in `cli_main.cc` is "mlc_chat".
It will be less confusing if the help output shows

Usage: mlc_chat_cli [--help] ...

* Support parameter packing (#1146)

* [Slim-LM] Enable Group Quant (#1129)

* Enable group quant via new interface.

* Minor fix.

* Linting.

* Fix isort.

* Fix mypy.

* TE compute working.

* Skip embed.

* Support cpu+gpu quantization.

* Add target option to tests.

* Linting.

* Enable Mypy and Pylint in mlc_chat Python Package (#1149)

* Migrate Compiler Passes (#1150)

* Compile Model Preset without External `config.json` (#1151)

This PR adds support for compiling a preset of models without
having to provide a `config.json` on disk using the commands below:

```diff
python -m mlc_chat.cli.compile \
       --quantization q4f16_1 -o /tmp/1.so \
-       --config /models/Llama-2-7b-chat-hf
+       --config llama2_7b
```

This allows easier testing and binary distribution without having to
depend on external model directory.

* Update attention layer (#1153)

Existing dlight optimization only works for NT matmul, but not NN. As a
result, the new `nn.Module`-based implementation, which uses NN matmul,
fails compilation at HEAD for now. This PR fixes this issue by tweaking
`k` to the preferred layout.

The following commands now work with the new compilation pipeline:

```bash
python -m mlc_chat.cli.compile --config llama2_7b  --quantization q4f16_1 -o /tmp/1.so
python -m mlc_chat.cli.compile --config llama2_13b --quantization q4f16_1 -o /tmp/1.so
python -m mlc_chat.cli.compile --config llama2_70b --quantization q4f16_1 -o /tmp/1.so
```

Note that the quantization algorithm per se, `q4f16_1`, has not been
implemented yet, meaning this code path is not yet ready for use.

* Add batched Llama model definition using vLLM paged attention (#1134)

* Add batched Llama model with vllm paged attention

* update core.py

* doc

* minor

* add e2e test

* mv file

* clean

* Check if TVM has been built with USE_VLLM

* update BuildArgs docstring

* [Transform][Redo] Apply split_rotary optimization on prefill (#1125)

Prior to this commit, the `transform.fuse_split_rotary_embedding`
function was only applicable to the `decode` function of a Llama-type
model.  This was due to the sequence length being restricted to one,
both in the pattern-match rule and in the `split_rotary` function, and
the function being restricted to operate only on the `decode`
function.

This commit updates the `transform.fuse_split_rotary_embedding` pass
to be a `tvm.ir.transform.Pass`, operating on all applicable matches
in the `IRModule`.  The `split_rotary` function is now produced as a
fully-generic function, with static parameters substituted in
afterwards.  At this stage, the sequence length is retained as a
dynamic parameter, such that it can be used by the `prefill` function.

This commit reapplies the reverted commit
https://github.com/mlc-ai/mlc-llm/pull/1033.  The error in the
previous implementation was in the definition of
`rotary_embedding_offset`, which provided the `query_sequence_length`
instead of `kv_sequence_length`.  This was able to pass the validity
tests described
[here](https://github.com/mlc-ai/mlc-llm/pull/1058#issuecomment-1761622534),
as these two sequence lengths are identical for the first call.

* Apply rewrite for normal attention and MQA (#1138)

Fixes a bug introduced in https://github.com/mlc-ai/mlc-llm/pull/1052,
where use of the `--use-flash-attn-mqa` flag on a model that doesn't
use MQA would prevent the use of CUTLASS attention at all.

* [Rest] Fix emoji handling in Rest API. (#1142)

* [Utility] Check for isinstance(exc, Exception) before entering pdb (#1095)

This is a follow-up to #1017, which added a `--pdb` flag to enter a
debugger on exit.  This commit checks the type of the raised
exception, and only enters the debugger if it is a subclass of
`Exception`.  This ensures that implementation-details, such as a
thrown `SystemExit` or `KeyboardInterrupt`, do not cause an erroneous
entry to pdb.
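
A minimal sketch of the check described above (not the actual `--pdb` flag implementation): only genuine `Exception`s open the post-mortem debugger, while `SystemExit` and `KeyboardInterrupt` propagate untouched.

```python
import pdb
import sys


def run_with_post_mortem(func) -> None:
    try:
        func()
    except BaseException as exc:
        # Only subclasses of Exception enter pdb; SystemExit/KeyboardInterrupt
        # are re-raised without dropping into the debugger.
        if isinstance(exc, Exception):
            pdb.post_mortem(sys.exc_info()[2])
        raise
```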

* [Utils] Remove conversion to numpy array in utils.save_params (#1083)

Prior to this commit, each parameter was converted to a numpy-owned
array as part of a total size computation.  This commit computes the
size directly, removing the conversion.
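
A hedged sketch of the idea (illustrative, not the exact `utils.save_params` code): the byte size of a parameter can be computed directly from its shape and dtype, with no NumPy round-trip.

```python
import math

import tvm


def param_nbytes(param: tvm.nd.NDArray) -> int:
    """Bytes occupied by `param`, derived from shape and dtype only."""
    dtype = tvm.DataType(param.dtype)
    num_elements = math.prod(int(dim) for dim in param.shape)
    return num_elements * dtype.bits * dtype.lanes // 8
```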

* [Fix][REST] Use lowered-cased "app" (#1159)

* [Rest] Document emoji handling (#1160)

Followup PR of #1142 to document the emoji handling.

* Enable group quant transform with nn.Module (#1154)

* Enable group quant transform with nn.Module

This PR completes the group quantization support for `nn.Module` based model.

* remove deprecated tests

* Update

* wip

* remove deprecated test

* fix lint

* fix lint

* fix lint

---------

Co-authored-by: Junru Shao <junrushao@apache.org>

* Misc Cleanups of Compilation Pipeline (#1165)

* Support CUDA Multi-Arch Compilation (#1166)

* [Bugfix] Cannot find global function `mlc.llm_chat_create` (#1167)

* Fix RWKV Support (#1136)

I successfully ran the rwkv-world-3b fp16 model on my Xiaomi phone. This PR is to fix a bug on the main branch where the rwkv model outputs only one word and then stops.

![image](https://github.com/mlc-ai/mlc-llm/assets/35585791/6514d6ef-c93c-4ad2-8e76-8ffa0663080f)

* Auto updated submodule references

* Fix Android app Permission denied error on Android 10  (#1175)

Use scoped storage instead of Downloads directory

Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

* [SLM] Fix group quantization (#1172)

This PR fixes the group quantization and adds related unit tests.

* [Fix] TIR block name of dequantization (#1177)

* [SLM][AutoLLM] Enable Command Line Weight Conversion (#1170)

This PR enables weight conversion in command line.
Sample command: `python3 -m mlc_chat.cli.convert_weight --config dist/models/llama-2-13b-chat-hf/ --quantization "q4f16_1" --output dist/test/`

* [Fix][SLM] Update q4f16 quantization with the new mutator name rule (#1178)

[Fix] Update q4f16 quantization with the new mutator name rule

* [Model Support][SWA] Add support for sliding window attention for Mistral (#1087)

* mistral base

* Add sliding window mask making and its tests (a mask sketch follows this list)

* Small changes for sliding window mask

* Clean up mask making

* Remove kv_seq_len

* Add prefill chunking, handle max window size in SWA

* Add interleave kv

* Temporary fix for kv seq len

* Pass in more shapes to SWA prefill and decode in runtime

* mistral var fix

* Small changes regarding shape passing

* Small fix on chunk size

* Add build args, fix mlc chat config dump

* mistral system prompt
---------

Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>
Co-authored-by: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
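
For reference, a minimal NumPy sketch of a sliding-window causal mask of the kind listed above (the window size and sequence length are arbitrary; this is not the model's actual TIR/Relax implementation): position `i` may attend to positions `j` with `i - window_size < j <= i`.

```python
import numpy as np


def sliding_window_mask(seq_len: int, window_size: int) -> np.ndarray:
    """Boolean [seq_len, seq_len] mask: causal, limited to the last `window_size` tokens."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window_size)


mask = sliding_window_mask(seq_len=6, window_size=3)
```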

* Add Python API for Weight Conversion (#1182)

This PR primarily does a major refactoring to introduce Python API that
is consistent with the CLI API. Besides, it includes the following
fixes and enhancements:

- More info provided to `isort` for better formatting in `pyproject.toml`;
- Print out the default value of all arguments in argparse command line;
- Ensure `--device` is always available locally when doing weight
  conversion;
- Add argument echoing in weight conversion to be consistent with its
  counterpart in compilation;
- Add a consistency checker to make sure the shapes/dtypes of all
  tensors from weight conversion are consistent with compilation;
- Echo the total size of parameters;
- Better logging of each parameter's shape and dtype, and whether or not
  it is quantized;
- More structure robustification, renaming `parameter/` to `loader/` to
  be more explicit about its intention;
- Inline and remove `ParamQuantizer` into the loader to improve logging
  and the logic flow;
- Always add instructions "Use `--xxx` to override" for any options that
  are auto detected to be more informative to end users;
- Fix wrong shape calculation when quantizing `nn.Embedding`;
- Fix wrong dtype calculation in group quantization when the input dtype
  is different from model dtype (e.g. "float32" in torch, but the model
  dtype in quantization is fp16 in `q4f16_1`);
- Fix inconsistent param names in layers such as `GroupQuantizeLinear`;
- Fix dtype inconsistency when a parameter is not quantized;
- Fix existing unittests.

* Merge `llama_config.CONFIG` into `MODEL_PRESETS` (#1188)

* Merge llama_config.py into llama_model.py (#1189)

* Add CodeLlama as part of model presets (#1190)

* [Docs] Clarify zstd installation on Windows (#1191)

* [Docs] Clarify zstd installation on Windows (#1196)

Update zstd installation

* Support overriding `--max-sequence-length` in command line (#1197)

* [RestAPI] Added docs (#1193)

Add docs for RestAPI

Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

* [API] ```llm-vscode``` extension support (#1198)

This PR enables ```llm-vscode``` extension API support for copilot-like code completion, following [HF's LSP](https://github.com/huggingface/llm-ls). Fully compatible with ```CodeLlama``` and ```starcoder``` on mlc-llm. 

- https://github.com/huggingface/llm-vscode/pull/103 enhances extension user experience when used with mlc-llm rest api.

Thanks @ pacman100, who came up with this in his latest blog post: https://huggingface.co/blog/personal-copilot

* [Fix] Use `fabs` as floating point abs function in C++ (#1202)

* Integrating MLC runtime with the new compilation workflow (#1203)

* [Fix] Remove Redundant Warnings (#1204)

PR #1203 introduced some unnecessary and redundant logging messages.
This PR removes them.

* Try fix macOS build with picojson (#1206)

The error message below

```
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h: In member function 'std::string picojson::value::to_str() const':
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:494:37: error: expected ')' before 'PRId64'
  494 |       SNPRINTF(buf, sizeof(buf), "%" PRId64, u_.int64_);
      |               ~                     ^~~~~~~
      |                                     )
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:81:1: note: 'PRId64' is defined in header '<cinttypes>'; did you forget to '#include <cinttypes>'?
   80 | #include <errno.h>
  +++ |+#include <cinttypes>
   81 | #include <inttypes.h>

```

indicates that the `__STDC_FORMAT_MACROS` flag is not turned on for some
reason.

* Try fix macOS build with picojson again (#1207)

Try fix macOS build with picojson

* Auto updated submodule references

* [Fix] Keep update-to-date with upstream API change (#1209)

* Detect `mtriple` via LLVM (#1211)

* Fix Python3.8 compatibility breakage (#1210)

The breakage was resulting from newer syntax being used for type
annotations, as part of https://github.com/mlc-ai/mlc-llm/pull/592.
So long as `mlc_chat.interface.openai_api` wasn't imported, the
breaking changes were not encountered.  In
https://github.com/mlc-ai/mlc-llm/pull/1107, the addition of `from
.interface.openai_api import ChatMessage` caused this module to be
imported, breaking compatibility of `mlc_chat.ChatModule` with
Python3.8.

This commit updates the type annotations to the supported syntax.

* [Slim-LM] Enable loading from AWQ pre-quantized weight. (#1114)

* [SLM] Enable loading from AWQ pre-quantized weight.

* remove awq_loader.py

* Update to the latest commit

* Delete llama_parameter.py

* update unittest

* fix lint

* upd

* add Llama-2-7B-AWQ

* [Bugfix] Fix Cannot import name '_LIB' from 'mlc_chat.base' (#1214)

Fix Python API doc

* [SLM] Support `q3f16_1` and `q4f32_1` (#1215)

This PR supports the int3 and float32 group quantization, and fixes some minor issue in quantization impl and tests.

* Make the Compilation Working E2E (#1218)

* [Mistral][SWA] Add sliding window to metadata (#1217)

Add sliding window to metadata, make small changes to invariants in runtime

* Support for `chatml` format conversation (for TinyLlama-1.1B-Chat-v0.2) (#956)

* added support for chatml format conversation

* added template to factory

* Add Rust Support for MLC-LLM (#1213)

This PR introduces Rust language support for the MLC-LLM project, specifically targeting supporting the `ChatModule` interface. It utilizes the existing C++ implementation of MLC-LLM and leverages both TVM's C API and its Rust bindings. The `rust/examples/mlc_chat.rs` gives an example of how to create a `chat_module` and serve user prompts in Rust.

The primary goal of this PR is to enrich the MLC-LLM ecosystem by offering a Rust interface that aligns with the current Python API. This enhancement will empower Rust developers to integrate MLC-LLM into their codebase and applications.

**Followup PRs**: 
- Extend the feature set to achieve parity with the C++/Python interface.
- Refine the Rust API, ensuring robustness.
- Set up Rust CI if needed.

* [Bugfix] Remove dependency on openai_api in chat module (#1222)

Remove dependency on openai_api

* Bake in RAM Usage in the Generated DSO (#1224)

With this PR, the metadata in a DSO file using `vm["_metadata"]()` now
has information about the upper-bound RAM estimate for each function.
As an example, the JSON string now is:

```json
{
  "quantization": "q4f16_1",
  "model_type": "llama",
  "memory_usage": {
    "_initialize_effect": 0,
    "prefill": 136192,
    "softmax_with_temperature": 0,
    "decode": 218624
  },
  "params": [
    {"name": "model.embed_tokens.q_weight", "shape": [32000, 512], "dtype": "uint32"},
    {"name": "model.embed_tokens.q_scale", "shape": [32000, 128], "dtype": "float16"},
    ...
  ]
}
```

This helps the MLC runtime to better determine if a method is going to
OOM and plan ahead, e.g. plan pre-allocated KVCache, accordingly.

The idea originates from Ruihang's ancient PR that printed a memory usage
estimate as debugging information for demo purposes, and this PR further
enhances it into an IRModule-level attribute that can be used by the runtime.
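
A hedged usage sketch of reading the baked-in metadata back at runtime, based on the `vm["_metadata"]()` call mentioned above; the library path and device are placeholders and the surrounding runtime setup may differ.

```python
import json

import tvm
from tvm import relax

lib = tvm.runtime.load_module("dist/llama-q4f16_1.so")  # hypothetical path
vm = relax.VirtualMachine(lib, tvm.device("cuda", 0))
metadata = json.loads(vm["_metadata"]())  # JSON string as shown above
print(metadata["memory_usage"])
```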

* [Fix] ChatModule python messages and offset types (#1220)

small fixes

* [Fix] Variable Upperbound Should be Injected before Build Pipeline (#1225)

Now it shows a more reasonable upper bound for sequence length = 4096.

```json
{
  "_initialize_effect": 0,
  "prefill": 3479311360,
  "softmax_with_temperature": 0,
  "decode": 34531840
}
```

Thanks Ruihang for helping with the fix!

* [MultiGPU] Support pre-sharded model weights (#1096)

* [Bugfix] Correct input shape for shard info function

Prior to this commit, the sharding functions converted the sharded axis
from `orig_size * num_shards` to `orig_size // num_shards`.  This
commit updates the sharding functions to instead convert from
`orig_size` to `orig_size // num_shards` (see the sketch after this list).

* [Bugfix] Include LegalizeOps in utils.convert_weights

Prior to this commit, `utils.convert_weights` assumes that the
parameter transformation module is already legalized, and uses no
relax operations that require legalization.  This commit adds a call
to `relax.transform.LegalizeOps` to remove this assumption.

* [MultiGPU] Cleanup create_shard_info_func

- De-duplicate the `if param.shard_strategy == foo` if/else chain

- Return a `tvm.IRModule` instead of modifying an existing module

* Extract a ParamManager.optimize_transform_param_order method

* Extract ParamManager.create_parameter_transformation call from convert_weights

* Support writing of pre-sharded weights

* Support execution using pre-sharded weights

* Updating for review comments

* fix typo
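
A minimal NumPy sketch of the corrected convention described in the first item of this list: a shard function maps an axis of size `orig_size` to `orig_size // num_shards`. Names and shapes are illustrative, not the actual shard-info functions.

```python
import numpy as np


def shard_along_axis(weight: np.ndarray, axis: int, num_shards: int) -> list:
    assert weight.shape[axis] % num_shards == 0
    return list(np.split(weight, num_shards, axis=axis))


w = np.zeros((4096, 11008), dtype="float16")
shards = shard_along_axis(w, axis=1, num_shards=2)
assert shards[0].shape == (4096, 5504)  # orig_size // num_shards
```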

* [AWQ] e2e awq-quantized model (#1229)

* [SLM] Support `q0f16` and `q0f32` (#1228)

This PR adds the support of `q0f16` and `q0f32`, and change `RMSNorm` to `nn.RMSNorm`.

* [Core][Llama] Argument `max_vocab_size` and `max_batch_size` (#1076)

This PR introduces the `max_vocab_size` and `max_batch_size`
as two new compile arguments. The purpose is for better
memory planning.

Besides, this PR updates llama to make use of the two arguments.
Other models are not changed yet.

The default value for `max_vocab_size` is set to 40000, which
I think is larger than the values of most models. The default
value for `max_batch_size` is currently set as 256. It is possible
that we update this value in the future to have a good default
number.

* [Llama] Support batched prefill (#1233)

This PR supports the Llama modeling with batched prefill, which
can bring higher throughput for the overall prefill process in
serving.

Besides, the PR splits the attention function used in batching
settings into two separate ones, so that we do not dispatch to
the prefill/decode attention functions at runtime.

* [Core] Skip PrimExpr index int32 downcasting for batching (#1234)

This PR makes ForceNarrowIndexToInt32 skip application
when batching is enabled.

The reason is that the flattened index of the KV cache append
function may exceed the range of int32 when the cache is large.
For example, in Llama-7b, when a KV cache supports more than
8192 tokens, the total cache size will be at least
```
8192 * 2 (K/V) * 32 (layers) * 4096 = 2147483648,
```
which exceeds the maximum signed int32 value (2147483647).
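
A quick check of the arithmetic quoted above:

```python
# 8192 tokens * 2 (K/V) * 32 layers * 4096, as in the example above
total = 8192 * 2 * 32 * 4096
assert total == 2_147_483_648
assert total > 2**31 - 1  # larger than the maximum signed int32 (2147483647)
```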

* Auto updated submodule references

* Update index.rst (#1236)

Fixed typo on tab:: Android

* Update android.rst (#1237)

On Linux, the TVM_NDK_CC environment variable should contain linux-x86_64

* Correct typo in cuda device name for rust chat model (#1241)

* Generating mlc-chat-config.json (#1238)

This PR finishes the last piece of the new compilation pipeline, i.e.
generation of `mlc-chat-config.json` and other configuration files.

* Rename `--config` to `--model` and Consolidate CLI Messages (#1244)

* Specify argument "dest" in argparse (#1245)

* Add more stats during quantization (#1246)

* ensure that max_gen_len is set properly in mlc_chat_config (#1249)

Currently, `max_gen_len` defaults to 512 in `dump_mlc_chat_config`. However, the instantiations of `dump_mlc_chat_config` within `mlc-llm.build` omit the `max_gen_len` argument (even when it's specified in the HF config), so the default of 512 gets set for every `mlc-chat-config.json` that is created by `mlc-llm.build`. This PR fixes the issue.

* [Fix] Memory usage statistics (#1252)

* Introduce mlc_chat subcommands (#1251)

This PR makes it possible to use subcommands of `mlc_chat` package to
control quantization and compilation. Example:

```bash
python -m mlc_chat convert_weight \
    --model /models/Llama-2/hf/Llama-2-7b-chat-hf \
    --quantization q4f16_1 \
    -o ./dist/new-llama/

python -m mlc_chat gen_mlc_chat_config \
    --model ./dist/models/Llama-2-7b-hf \
    --quantization q4f16_1 \
    --max-sequence-length 4096 \
    --conv-template LM \
    -o ./dist/new-llama \

python -m mlc_chat compile \
    --model ./dist/models/Llama-2-7b-hf \
    --quantization q4f16_1 \
    --max-sequence-length 4096 \
    -o ./dist/new-llama/llama.so
```

It slightly simplifies the workflow.

* Update mlc-chat-config.json (#1254)

This PR updates two fields:
* `tokenizer_files`: non-existent files are now removed from this
  list;
* `model_preset_tag` added to `model_config`, which helps the system to
  conveniently identify if a model configuration is already part of the
  system's built-in model preset.

* [Rust] Support multiple prompts (#1253)

This PR introduces `Prompt` and `ChatMessage` structures, and enhances the `ChatModule` to generate tokens using either a single string (via `Prompt::String`) or a vector of `ChatMessage` (via `Prompt::MessageList`). An example is provided in [rust/examples/mlc_chat.rs](https://github.com/mlc-ai/mlc-llm/compare/main...YuchenJin:mlc-llm:multi-prompts?expand=1#diff-4ffa9349207c1df6ceeebe06a9afc8f2015000e031b39d677bbbe7e85ae2819b).

Here is a snippet demonstrating the interface:
```rust
let message1 = ChatMessage {
    role: "user".to_owned(),
    content: "suppose we already have projects llama, alpaca and vicuna, what do you think would be a great name for the next project?".to_string(),
};
let message2 = ChatMessage {
    role: "assistant".to_owned(),
    content: "based on the previous projects, a possible name for the next project could be \"cervidae\" which is the scientific name for deer family. this name reflects the collaboration and teamwork involved in the development of the project, and also nods to the previous projects that have been developed by the team.".to_string(),
};
let message3 = ChatMessage {
    role: "user".to_owned(),
    content: "Summarize our conversations".to_string(),
};
let messages = vec![message1, message2, message3];
let prompt = Prompt::MessageList(messages);
let output = cm.generate(&prompt, None).unwrap();
```

* [UI] Correct "convert_weight_only" to "convert_weights_only" (#1227)

* [UI] Correct "convert_weight_only" to "convert_weights_only"

This is a frequent typo among multiple developers, as "weights" is
typically plural.  This commit updates the command-line-argument from
`--convert-weight-only` to `--convert-weights-only`.  For backwards
compatibility, the original spelling is kept as an equivalent usage.

* Update all use of "convert_weight_only" to "convert_weights_only"

* Add a downloader from HuggingFace (#1258)

This PR allows programmatically downloading from HuggingFace to MLC's cache
directory, which is located in `$HOME/.cache/mlc_chat/model_weights/` by
default.

This PR relies on Git to clone the metadata, and Python's requests
library to fetch concrete weights as large files instead of the less
reliable Git LFS.

The example demonstrates downloading the 4-bit quantized Llama2-7B
model:

```python
from mlc_chat.support.download import download_mlc_weights

download_mlc_weights("HF://mlc-ai/mlc-chat-Llama-2-7b-chat-hf-q4f16_1")
```

Screenshot:

<img width="1913" alt="image" src="https://github.com/mlc-ai/mlc-llm/assets/22515877/3ac50594-4971-4216-bb17-47710b4af1dd">

* [Fix] Add prefix_tokens to `ConvConfig` in Python to match C++ implementation (#1256)

During my Rust implementation of the project, I noticed an inconsistency between the Python and C++ implementations of `ConvConfig`. Specifically, the Python version lacks the `prefix_tokens` field, which is present in the C++ version: https://github.com/mlc-ai/mlc-llm/blob/5e02cacd8ebba2e0206d5a447225a137de2dac0d/cpp/conversation.h#L69-L70.

This can cause the [`_load_json_override`](https://github.com/mlc-ai/mlc-llm/blob/5e02cacd8ebba2e0206d5a447225a137de2dac0d/python/mlc_chat/chat_module.py#L1062C26-L1062C26) to fail in the `_prefill` function.

I think a simple unit test would help, I'd like to add a regression test if the CI has been set up.

* [nn.Module] Mistral implementation (#1230)

* Add mistral high level structure

* Small config change

* Now work with compile, mistral inference logic left

* Add masking, cache_len, kv_seq_len; only attention forward left

* fix mistral override naming

* `interleave_kv` implementation

* lint fix

* move rolling buffer cache impl to mlc-llm

* isort fix

* nn.module implementation - reorganize structure

* Update python/mlc_chat/cli/compile.py

Co-authored-by: Junru Shao <junrushao1994@gmail.com>

* lint fix

---------

Co-authored-by: Charlie Ruan <charlieruan2001130@gmail.com>
Co-authored-by: Junru Shao <junrushao1994@gmail.com>

* Add `mlc_chat.__main__` as command line entrypoint (#1263)

This PR makes it possible to invoke mlc_chat subcommands directly.

Previously one had to use `python -m` as the prefix to invoke
`mlc_chat`:

```bash
python -m mlc_chat compile \
  --model /models/Llama-2-7b-chat-hf \
  --quantization q4f16_1 \
  --max-sequence-length 4096 \
  -o ./llama.so
```

This PR makes it possible to use it without the `python -m` prefix:

```bash
mlc_chat compile \
  --model /models/Llama-2-7b-chat-hf \
  --quantization q4f16_1 \
  --max-sequence-length 4096 \
  -o ./llama.so
```

* [Rust] Improve ergonomics of `generate` function in `ChatModule`  (#1262)

Following PR #1253, I think ergonomics of the `generate` function of `ChatModule` can be improved (given it's an important public-facing API). 

This PR simplifies the function's usage by implementing the `From` trait for the `Prompt` enum. Also updated the example code.

Now the interface changes to:

```rust
/// Single prompt case:
cm.generate("what is the meaning of life?", None)

/// Multiple prompt case:
let messages: Vec<ChatMessage> = vec![message1, message2, message3];
let output = cm.generate(messages, None).unwrap();
```

* [Fix] mistral `max_gen_len` (#1264)

* Rename `max-sequence-length` to `context-window-size` (#1265)

"Context window" is a terminology better aligned with LLM world.
Whenever a new model is trained, it is one of the most important metrics
that people care about. Therefore, I'd love to switch it over sooner
than later, before "mlc_chat compile" becomes mature and documented.

* Auto updated submodule references

* Fix group quantization shape infer (#1273)

This PR fixes the shape inference for group quantization.

* Continuous Model Delivery (#1272)

This PR provides a script that automatically quantizes models from
HuggingFace using various quantization formats as specified.

Example: When being provided the following JSON file:

```json
{
  "destination": "{username}/{model_id}-{quantization}", # Name of HF repo
  "default_quantization": ["q0f16", "q0f32", "q3f16_1", "q4f16_1", "q4f32_1"],
  "tasks": [
    {
      "model_id": "Llama-2-7b-hf",
      "model": "/models/Llama-2-7b-hf", # Can be HF URL or a local path
      "context_window_size": 4096,
      "conv_template": "LM",
      "quantization": [
        {
          "format": "q4f16_awq",
          "model": "https://huggingface.co/TheBloke/Llama-2-7B-AWQ", # Overriding default `source`
          "source_format": "awq"
        }
      ]
    }
  ]
}
```

The script will automatically run quantization and upload them to the
following repos:
- https://huggingface.co/junrushao/Llama-2-7b-hf-q0f16
- https://huggingface.co/junrushao/Llama-2-7b-hf-q0f32
- https://huggingface.co/junrushao/Llama-2-7b-hf-q3f16_1
- https://huggingface.co/junrushao/Llama-2-7b-hf-q4f16_1
- https://huggingface.co/junrushao/Llama-2-7b-hf-q4f32_1
- https://huggingface.co/junrushao/Llama-2-7b-hf-q4f16_awq

* Auto updated submodule references

* Enhance Model Delivery (#1283)

This PR introduces a few enhancements:
- Allow overriding the temporary path via the environment variable `MLC_TEMP_DIR`;
- Add a 10-time retry when uploading the quantized weights to
  HuggingFace Hub, which can fail at times (see the retry sketch after this list);
- Echo the commands being used to quantize the models in `logs.txt`;
- Fix a compatibility issue when pulling individual weights down from
  HuggingFace Hub in Git LFS.
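
A minimal sketch of the retry idea in the list above; the function name and arguments are illustrative, not the delivery script's actual API.

```python
import time


def upload_with_retry(upload_fn, attempts: int = 10, wait_seconds: float = 5.0) -> None:
    for attempt in range(attempts):
        try:
            upload_fn()
            return
        except Exception:
            # Hub uploads "could fail at times"; retry up to `attempts` times.
            if attempt == attempts - 1:
                raise
            time.sleep(wait_seconds)
```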

* add python, rest api test (#1278)

* add python, rest api test

* remove mistral, fix pylint

* fix pylint requests import error

* Enable Jenkins CI (#1292)

* Update android.rst (#1289)

This fix enables the default models in app-config.json to be shown as "downloaded" in the model list when they are pushed via adb.

* Consolidate Logics for GPU Detection (#1297)

This PR unifies automatic device detection logic by using
`mlc_chat.support.auto_device`, which comes with detailed logging and
fallback mechanisms.

* [CI] Fix lint concurrent clone issue (#1299)

This PR fixes the broken CI due to different tasks sharing the same
workspace.

* Auto updated submodule references

* [Feature] Prefill chunking for non-SWA models (#1280)

* generalize `prefill-chunk-size`

* renaming `cache_len` to `rolling_cache_len`

* [nn.Module] generalize `prefill_chunk_size`

* quick fix

* lint fix

* check sw with chunking

* fix `_attach_variable_bounds`

* update config from lib metadata

* cleanup

cleanup

* metadata fix

* Compatible with chatglm  (#979)

compatible for chatglm

* Add q4/q8_ft_group quantization mode (#1284)

* Add q4/q8_ft_group quantization mode

* Update submodule

* [CI] Clean workspace before build (#1304)

Clean the workspace before building, so that a previously
corrupted directory will not affect the current CI build.

Used the `cleanWS` from here https://www.jenkins.io/doc/pipeline/steps/ws-cleanup/

* [Python] Detect Driver/Device in a Separate Process (#1311)

This PR moves device detection into separate
subprocesses.

The change is because device detection will set up the
driver, which consumes some GPU VRAM (for example,
`tvm.device("opencl", 0).exist` consumes 390MB of VRAM on an
RTX 4090). Consider the case where we detect whether CUDA, Vulkan and
OpenCL are available. When they are all available, each
detection holds some VRAM, more than 430MB altogether.

If the device detection is in the same process as the main
process, the VRAM consumed by device detection will never
be released. This means that in the example above, we
detect CUDA, Vulkan and OpenCL, while in the end we
prioritize the CUDA device. Consequently, the memory held by
Vulkan and OpenCL detection will never be released.

Motivated by this issue, we separate the detection into
subprocesses, so that the held VRAM can be successfully
released after detection.
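
A hedged sketch of the idea (function names are illustrative, not the actual `mlc_chat.support.auto_device` code): probing a device in a child process means any VRAM held by driver initialization is released when that process exits.

```python
import multiprocessing as mp


def _probe(device_str: str, queue) -> None:
    import tvm  # imported in the child so the driver is only set up there

    queue.put(tvm.device(device_str, 0).exist)


def device_exists(device_str: str) -> bool:
    queue = mp.Queue()
    proc = mp.Process(target=_probe, args=(device_str, queue))
    proc.start()
    proc.join()
    return (not queue.empty()) and bool(queue.get())


if __name__ == "__main__":
    for dev in ("cuda", "vulkan", "opencl"):
        print(dev, device_exists(dev))
```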

* add chatglm3 support (#1313)

* feat: add chatglm3 support

* fix: remove duplicate code

---------

Co-authored-by: Max Lin <jason.max.lin@outlook.com>

* [SLIM] Skip None param when loading rather than failing (#1308)

* Auto updated submodule references

* [nn.Module] Implement GPT-2 Model Support (#1314)

Co-authored-by: Charlie Ruan <charlieruan2001130@gmail.com>

* remove ndk reference from mali build target (#1312)

* remove ndk reference from mali target

This removes the ndk reference for mali targets when building

* adding check for NDK environment variable

---------

Co-authored-by: x <x@xs-MacBook-Pro.local>

* [Rust] A few enhancements (#1310)

a few enhancements.

* [iOS] Mistral support (#1320)

* Add terminator for streaming REST API (#1325)

* Add terminator for streaming REST API

Add compatibility with OpenAI's streaming API.

fixes #1300

* make black happy

* read CUDA_ARCH_LIST to set CUDA capability versions for nvcc (#1326)

read {TORCH_,}CUDA_ARCH_LIST from environment and use it to set compute arch versions
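
An illustrative sketch of that environment-variable parsing; the exact parsing and variable precedence in #1326 may differ.

```python
import os


def cuda_arch_list() -> list:
    """Turn e.g. "8.0;8.6" or "80 86" into ["80", "86"]."""
    raw = os.environ.get("TORCH_CUDA_ARCH_LIST") or os.environ.get("CUDA_ARCH_LIST", "")
    return [arch.replace(".", "") for arch in raw.replace(";", " ").split()]
```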

* Update emcc.rst

* [AUTO-DEVICE] In process early exit device detection (#1333)

This PR updates the device auto-detection to use in-process
early exit. This hopefully will address some of the Windows
issues in the multi-process based approach, while still preserving memory
by not allocating on every device during auto mode.

* [RestAPI] Update parameters for /v1/completions and add tests (#1335)

Add n and stop for /v1/completions endpoint

Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

* fix broken REST examples due to recent compatibility change (#1345)

fix broken restapi examples due to recent OpenAI API compatibility changes

* [Bugfix] Ignore exit code in device detection (#1350)

TVM's Vulkan runtime emits a non-zero exit code on certain Windows
drivers on DLL offloading. While there is definitely a way to fix this,
for now, we quickly get around this by not checking the exit code in
device detection.

This PR also improves clarity when multiple GPUs are present by emitting
logging messages for all available GPUs, rather than only GPU 0.

* [OpenHermes] Add conversation template for OpenHermes Mistral (#1354)

Add conversation template for open hermes mistral

Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>

* [Tokenizer] Prioritize huggingface tokenizer.json, generate one if not included (#1352)

* Prioritize tokenizer json, generate one if not included

* [Rust] Prepare for publishing (#1342)

Prepare for publishing.

* Fix gen_mlc_chat_config for mistral  (#1353)

* fix gen config for mistral

* fix pylint

* Allow override of sw and chunk size in config gen

* lint fix

---------

Co-authored-by: Charlie Ruan <charlieruan2001130@gmail.com>
Co-authored-by: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>

* Fix ft quantization scale computation (#1321)

* fix

* fix

---------

Co-authored-by: David Pissarra <61968959+davidpissarra@users.noreply.github.com>
Co-authored-by: Roee Shenberg <shenberg@gmail.com>
Co-authored-by: Eric Lunderberg <Lunderberg@users.noreply.github.com>
Co-authored-by: Yaxing Cai <caiyaxing666@gmail.com>
Co-authored-by: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>
Co-authored-by: Bohan Hou <bohanhou@andrew.cmu.edu>
Co-authored-by: yongjer <54315206+yongjer@users.noreply.github.com>
Co-authored-by: Jeethu Rao <jeethu@jeethurao.com>
Co-authored-by: Junru Shao <junrushao@apache.org>
Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>
Co-authored-by: Denise Kutnick <boca.denise@gmail.com>
Co-authored-by: Lesheng Jin <34279105+LeshengJin@users.noreply.github.com>
Co-authored-by: Junru Shao <junrushao1994@gmail.com>
Co-authored-by: Sunghyun Park <sunggg@umich.edu>
Co-authored-by: “Sunghyun <sunggg@umich.com>
Co-authored-by: Rick Zhou <riczhou@linkedin.com>
Co-authored-by: Varshith Bathini <varshith15@gmail.com>
Co-authored-by: Varshith <varshith.bathini@sprinklr.com>
Co-authored-by: Tianqi Chen <tqchen@users.noreply.github.com>
Co-authored-by: Git bot <bot@noreply.github.com>
Co-authored-by: SingLi <Sing-Li@users.noreply.github.com>
Co-authored-by: Kartik Khandelwal <kartikkhandelwal1998@gmail.com>
Co-authored-by: Goutham Tamilselvan <goutham2688@gmail.com>
Co-authored-by: S A G A R <110724849+tmsagarofficial@users.noreply.github.com>
Co-authored-by: Yuchen Jin <yuchenj@cs.washington.edu>
Co-authored-by: DavidSharma <68979667+David-Sharma@users.noreply.github.com>
Co-authored-by: fennecJ <hwahwa649@gmail.com>
Co-authored-by: Xiyou Zhou <xiyou.zhou@gmail.com>
Co-authored-by: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com>
Co-authored-by: Animesh Bohara <ani.bohara@gmail.com>
Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>
Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>
Co-authored-by: Antonio Calatrava <antonio@antoniocalatrava.com>
Co-authored-by: Aman Kushwaha <aman.kushwaha.inbox@gmail.com>
Co-authored-by: Malcolm Ramsay <m@malramsay.com>
Co-authored-by: Denise Kutnick <denise@octoml.ai>
Co-authored-by: Charlie Ruan <charlieruan2001130@gmail.com>
Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Co-authored-by: ChaoQin <qc903113684@163.com>
Co-authored-by: Wuwei Lin <wuwei@apache.org>
Co-authored-by: Jason Lin <2627866800@qq.com>
Co-authored-by: Max Lin <jason.max.lin@outlook.com>
Co-authored-by: Rick Zhou <rickzhoucmu@gmail.com>
Co-authored-by: Shankar Ambady <shanbady@gmail.com>
Co-authored-by: x <x@xs-MacBook-Pro.local>
Co-authored-by: technillogue <wisepoison@gmail.com>
Co-authored-by: Hongyi Jin <jinhongyi02@gmail.com>
masahi added a commit to octoml/mlc-llm that referenced this pull request Jan 9, 2024
* [Fix] ChatModule incorrect temperature buffer shape (#1070)

PR #1048 updated the signature of softmax in the built model library
and changed the temperature buffer shape in ChatModule. This causes
some existing demos to be unable to run since we did not do a round of model
library updates.

This PR reverts the ChatModule change, and adds back the softmax
function in non-batching case. With this PR, the regression should
be fixed.

* [ParamManager] Added progress bar for get_item/set_item (#1063)

* [Python] Extract common device str parse function in ChatModule (#1074)

This PR lifts the device string parsing (just a few lines)
to a standalone function, so that the serving side
can make use of this function as well.

Tested the Python API and it does not seem to incur a regression.

* [Bugfix] Compilation Error in q4f32_1 (#1078)

The pass `fuse-split-rotary` assumes the compute dtype is fp16, which
it usually is, but in certain cases, e.g. `q0f32` and `q4f32_1`, the
compute is based on fp32 instead. This PR strengthens the check guard.

* Establish `mlc_chat.compiler` (#1082)

This PR establishes the compiler components in MLC-Chat Python API,
which currently includes two primary components: models and parameters.

The models are `nn.Module`-based definition of an LLM, which, as the
very first stab, contains only `LlamaForCasualLM`. It is decomposed into
three files:
- `llama_config.py`: common configurations for Llama, where we define
  relevant configurations of its architecture, as well as include
  standard config file for Llama2-7B/13B/70B for convenient testing;
- `llama.py`: the model architecture of Llama, based on the PyTorch-like
`nn.Module` API;
- `llama_parameter.py`: defines the mapping between MLC parameters and
  pytorch parameters.

The parameters contains the basic functionality of parameter mapping,
and the loaders that effectively convert parameters from PyTorch to MLC
according to the mapping specified. Currently, only `HFTorchLoader` is
implemented, but loaders like SafeTensor, GPTQ or AWQ should be quite
straightforward according to the existing design.

On top of this PR, on-the-fly quantization could be defined as a loading
time transformation on MLC parameters, while pre-quantized parameter
loading is effectively parameter loading after MLC's `nn.Module` is
quantized.

Two unittests examplify how the infrastructure works:
- `./tests/python/model/test_llama.py` shows how to create an `nn.Module`
using the new infra, and then convert it to TVM IRModule;
- `./tests/python/parameter/hf_torch_loader.py` shows how to load
parameters from HuggingFace PyTorch format.

Besides, `mlc_chat.support` is established for utility functions, which
now contains two utils:
- `config.py` which supports reading configurations into dataclasses
from JSON file or Python dict. On top of Python dataclass, it throws
irrelevant fields into `cls.kwargs`, which is helpful when loading
HuggingFace configuration file;
- `tqdm.py` which contains tqdm-related utilities, primarily redirecting
logging and printing to work nicely with tqdm.

* Update README.md for Multi-GPU (#1090)

* Support lib_path override in C++. Improvements on docs and error messages (#1086)

* Support lib_path option in C++ CLI. Disable ChatConfig.model_lib override in Python API. Improvements on helper messages and error messages

* Update docs

* Rename lib_path -> model_lib_path

* StreamIterator (#1057)

Co-authored-by: Varshith <varshith.bathini@sprinklr.com>

* Update `benchmark.py` according to #1086 (#1091)

Update `benchmark.py`

* Disable Disco for q4f16_ft and q8f16_ft quantization (#1094)

* [Format] Apply isort and black for `python/` (#1097)

[Format] Apply isort and black on `python/`

The commands I am using are:

```
isort --profile black python/
black python/
```

It is always recommended to format the code before submission, given we
don't have a linter CI yet.

* More formatting (#1099)

* Enable Python Linter (#1098)

This PR enables two Python formatters "black" and "isort" on the following directory:
- `./python/`
- `./tests/python/`

Enabling pylint and mypy is left for future work

* Add Basic Pylint and Mypy Tooling (#1100)

Add pylint/mypy tooling into pyproject.toml

This PR establishes the initial Python tooling infra with Pylint and
Mypy. Currently only the newest modules, i.e. `mlc_chat.support` and
`mlc_chat.compiler` are covered, and we expect to cover the entire
package, as being tracked in #1101.

* [CI] Add clang-format (#1103)

* [Slim-LM] Smart path finding for config and weight (#1088)

* [Transform] Provide IRModule transform for rewrite_attention (#1052)

Prior to this commit, `mlc_llm.transform.rewrite_attention` updated a
single function.  This commit modifies it to instead be a transform
operating on any pattern matches within an `IRModule`.

* [ParamManager] Use BundleModelParams for transform_dequantize (#1056)

* [ParamManager] Use BundleModelParams for transform_quantize

Prior to this commit, `ParamManager.transform_quantize` function took
as input functions with separate parameters for each weight tensor,
and produced output functions with a tuple parameter for all weights.
Because `LiftTransformParams` had the same convention, neither could
be applied as part of the same build flow.

This commit updates `ParamManager.transform_quantize` pass to produce
outputs with separate tensor parameters, using the `BundleModelParams`
transform to later combine them into a single tuple parameter.  The
analogous change was also performed for `LiftTransformParams` as part
of https://github.com/apache/tvm/pull/15657.

In addition, prior to this commit, the
`ParamManager.transform_dequantize` function operated directly on a
`IRModule` object.  As a result, any debug instrumentation
(e.g. before/after printouts for each pass, before/after verification
with `relax.analysis.well_formed`, etc.) did not apply to this
`transform_dequantize`.  This commit updates
`ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

* Correct type annotation

* [Slim-LM] Introduce HFLoad for loading Pytorch and SafeTensor weights (#1113)

* [WINDOWS] reduce noise in windows build (#1115)

* Add CLI commands for compilation (#1109)

* Auto updated submodule references

* fix mismatched argument name (#1117)

fix error introduced by recent code changes

fixes #1116

* [Docs] Add doc for max and mean gen len, shift factor; and buildArgs (#1119)

* Add doc for max and mean gen len, shift factor

* Update python docs for BuildArgs

* Revert "[ParamManager] Use BundleModelParams for transform_dequantize" (#1120)

Revert "[ParamManager] Use BundleModelParams for transform_dequantize (#1056)"

This reverts commit e5927cee3b932b6e3116b43778008a3aa11ef0a3.

This causes a regression impacting all MLC LLM nightlies as it violates the existing calling convention in MLC Chat runtime. An example: https://github.com/mlc-ai/mlc-llm/issues/1060#issuecomment-1776761032

* Remove inaccurate warning message (#1121)

This PR removes an inaccurate warning from #1086, which warns about
`model_lib` overriding regardless of whether or not it's actually
overridden. With this commit, we only warn if its value is not None.

* [REST] OpenAI compatible Rest API (#1107)

* add presence and frequency penalty

* Added support for passing conversation history in /v1/chat/completions endpoint

* Added support for RestAPI parameters max_gen_len, n, and stop_str

* * add presence and frequency penalty to generation config
* refactor generation config

* Added documentation for parameters

* replace lib_path with model_lib_path in rest.py

* fixed black isort issues

* fix lib_path

* Add --opt flag parsing to CLI (#1123)

* [ParamManager][Redo] Use BundleModelParams for transform_dequantize (#1127)

Prior to this commit, `ParamManager.transform_quantize` function took
as input functions with separate parameters for each weight tensor,
and produced output functions with a tuple parameter for all weights.
Because `LiftTransformParams` had the same convention, neither could
be applied as part of the same build flow.

This commit updates `ParamManager.transform_quantize` pass to produce
outputs with separate tensor parameters, using the `BundleModelParams`
transform to later combine them into a single tuple parameter.  The
analogous change was also performed for `LiftTransformParams` as part
of https://github.com/apache/tvm/pull/15657.

In addition, prior to this commit, the
`ParamManager.transform_dequantize` function operated directly on a
`IRModule` object.  As a result, any debug instrumentation
(e.g. before/after printouts for each pass, before/after verification
with `relax.analysis.well_formed`, etc.) did not apply to this
`transform_dequantize`.  This commit updates
`ParamManager.transform_dequantize` to return a `ir.transform.Pass`.

This commit is a repeat of the reverted PR
https://github.com/mlc-ai/mlc-llm/pull/1056.  This PR resolves the bug
in the earlier implementation by removing the call to
`.without_attr("num_input")` in `ParamReplacer.rewrite_func`.  This
follows an analogous update in `LiftTransformParams`, preserving the
`"num_input"` attribute for use in `BundleModelParams`.

* added details to windows installation (#1133)

32bit version of the zstd.dll library was causing issues, so updated the doc to be more specific and download the 64bit version.

* Grammatical and Typographical improvements (#1139)

* Update faq.rst

* Update guideline.rst

* Update compile_models.rst

* Update distribute_compiled_models.rst

* Update get-vicuna-weight.rst

* Update python.rst

* Update android.rst

* Update cli.rst

* Update ios.rst

* Update javascript.rst

* Update python.rst

* Update rest.rst

* Minor enhancements to `ChatModule` (#1132)

Some minor enhancements to `ChatModule`, mainly handle the device parsing solely in `_parse_device_str` instead of handling it both in the member function and the `__init__` function to avoid redundancy; and some type annotation fix.

* Updating tvm install docs (#1143)

Updating the tvm install docs to assist a user in finding and copying zstd.dll to the correct folder.

* Make the help info consistent with program name (#1137)

When user use command `mlc_chat_cli --help`, the output will be
something like

Usage: mlc_chat [--help] ...

That's because the program name specified in `cli_main.cc` is "mlc_chat".
It will be less confusing if the output of help info shows

Usage: mlc_chat_cli [--help] ...

* Support parameter packing (#1146)

* [Slim-LM] Enable Group Quant (#1129)

* Enable group quant via new interface.

* Minor fix.

* Linting.

* Fix isort.

* Fix mypy.

* TE compute working.

* Skip embed.

* Support cpu+gpu quantization.

* Add target option to tests.

* Linting.

* Enable Mypy and Pylint in mlc_chat Python Package (#1149)

* Migrate Compiler Passes (#1150)

* Compile Model Preset without External `config.json` (#1151)

This PR adds support for compiling a preset of models without
having to provide a `config.json` on disk using the commands below:

```diff
python -m mlc_chat.cli.compile \
       --quantization q4f16_1 -o /tmp/1.so \
-       --config /models/Llama-2-7b-chat-hf
+       --config llama2_7b
```

This allows easier testing and binary distribution without having to
depend on external model directory.

* Update attention layer (#1153)

Existing dlight optimization only works for NT matmul, but not NN. As a
result, the new `nn.Module`-based implementation, which uses NN matmul,
fails compilation at HEAD for now. This PR fixes this issue by tweaking
`k` to the preferred layout.

The following commands now work with the new compilation pipeline:

```bash
python -m mlc_chat.cli.compile --config llama2_7b  --quantization q4f16_1 -o /tmp/1.so
python -m mlc_chat.cli.compile --config llama2_13b --quantization q4f16_1 -o /tmp/1.so
python -m mlc_chat.cli.compile --config llama2_70b --quantization q4f16_1 -o /tmp/1.so
```

Note that the quantization algorithm per se, `q4f16_1`, has not been
implemented yet, meaning this code path is not yet ready for use so far.

* Add batched Llama model definition using vLLM paged attention (#1134)

* Add batched Llama model with vllm paged attention

* update core.py

* doc

* minor

* add e2e test

* mv file

* clean

* Check if TVM has been built with USE_VLLM

* update BuildArgs docstring

* [Transform][Redo] Apply split_rotary optimization on prefill (#1125)

Prior to this commit, the `transform.fuse_split_rotary_embedding`
function was only applicable to the `decode` function of a Llama-type
model.  This was due to the sequence length being restricted to one,
both in the pattern-match rule and in the `split_rotary` function, and
the function being restricted to operate only on the `decode`
function.

This commit updates the `transform.fuse_split_rotary_embedding` pass
to be a `tvm.ir.transform.Pass`, operating on all applicable matched
in the `IRModule`.  The `split_rotary` function is now produced as a
fully-generic function, with static parameters substituted in
afterwards.  At this stage, the sequence length is retained as a
dynamic parameter, such that it can be used by the `prefill` function.

This commit reapplies the reverted commit
https://github.com/mlc-ai/mlc-llm/pull/1033.  The error in the
previous implementation was in the definition of
`rotary_embedding_offset`, which provided the `query_sequence_length`
instead of `kv_sequence_length`.  This was able to pass the validity
tests described
[here](https://github.com/mlc-ai/mlc-llm/pull/1058#issuecomment-1761622534),
as these two sequence lengths are identical for the first call.

* Apply rewrite for normal attention and MQA (#1138)

Fixes a bug introduced in https://github.com/mlc-ai/mlc-llm/pull/1052,
where use of the `--use-flash-attn-mqa` flag on a model that doesn't
use MQA would prevent the use of CUTLASS attention at all.

* [Rest] Fix emoji handling in Rest API. (#1142)

* [Utility] Check for isinstance(exc, Exception) before entering pdb (#1095)

This is a follow-up to #1017, which added a `--pdb` flag to enter a
debugger on exit.  This commit checks the type of the raised
exception, and only enters the debugger if it is a subclass of
`Exception`.  This ensures that implementation-details, such as a
thrown `SystemExit` or `KeyboardInterrupt`, do not cause an erroneous
entry to pdb.

* [Utils] Remove conversion to numpy array in utils.save_params (#1083)

Prior to this commit, each parameter was converted to a numpy-owned
array as part of a total size computation.  This commit computes the
size directly, removing the conversion.

* [Fix][REST] Use lowered-cased "app" (#1159)

* [Rest] Document emoji handling (#1160)

Followup PR of #1142 to document the emoji handling.

* Enable group quant transform with nn.Module (#1154)

* Enable group quant transform with nn.Module

This PR completes the group quantization support for `nn.Module` based model.

* remove deprecated tests

* Update

* wip

* remove deprecated test

* fix lint

* fix lint

* fix lint

---------

Co-authored-by: Junru Shao <junrushao@apache.org>

* Misc Cleanups of Compilation Pipeline (#1165)

* Support CUDA Multi-Arch Compilation (#1166)

* [Bugfix] Cannot find global function `mlc.llm_chat_create` (#1167)

* Fix RWKV Support (#1136)

I successfully ran the rwkv-world-3b fp16 model on my Xiaomi phone. This PR is to fix a bug on the main branch where the rwkv model outputs only one word and then stop.

![image](https://github.com/mlc-ai/mlc-llm/assets/35585791/6514d6ef-c93c-4ad2-8e76-8ffa0663080f)

* Auto updated submodule references

* Fix Android app Permission denied error on Android 10  (#1175)

Use scoped storage instead of Downloads directory

Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

* [SLM] Fix group quantization (#1172)

This PR fixes the group quantization and add related unit tests.

* [Fix] TIR block name of dequantization (#1177)

* [SLM][AutoLLM] Enable Command Line Weight Conversion (#1170)

This PR enables weight conversion in command line.
Sample command: `python3 -m mlc_chat.cli.convert_weight --config dist/models/llama-2-13b-chat-hf/ --quantization "q4f16_1" --output dist/test/`

* [Fix][SLM] Update q4f16 quantization with the new mutator name rule (#1178)

[Fix] Update q4f16 quantization with the new mutator name rule

* [Model Support][SWA] Add support for sliding window attention for Mistral (#1087)

* mistral base

* Add sliding window mask making and its tests

* Small changes for sliding window mask

* Clean up mask making

* Remove kv_seq_len

* Add prefill chunking, handle max window size in SWA

* Add interleave kv

* Temporary fix for kv seq len

* Pass in more shapes to SWA prefill and decode in runtime

* mistral var fix

* Small changes regarding shape passing

* Small fix on chunk size

* Add build args, fix mlc chat config dump

* mistral system prompt
---------

Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>
Co-authored-by: David Pissarra <61968959+davidpissarra@users.noreply.github.com>

* Add Python API for Weight Conversion (#1182)

This PR primarily does a major refactoring to introduce Python API that
is consistent with the CLI API. Besides, it includes the following
fixes and enhancements:

- More info provided to `isort` for better formatting in `pyproject.toml`;
- Print out the default value of all arguments in argparse command line;
- Ensure `--device` is always available locally when doing weight
  conversion;
- Add argument echoing in weight conversion to be consistent with its
  counterpart in compilation;
- Add a consistency checker to make sure the shapes/dtypes of all
  tensors from weight conversion are consistent with compilation (see the
  sketch after this list);
- Echo the total size of parameters;
- Better logging of each parameter's shape and dtype, and whether or not
  it is quantized;
- More robust structure, renaming `parameter/` to `loader/` to
  be more explicit about its intention;
- Inline and remove `ParamQuantizer` into the loader to improve logging
  and the logic flow;
- Always add instructions "Use `--xxx` to override" for any options that
  are auto detected to be more informative to end users;
- Fix wrong shape calculation when quantizing `nn.Embedding`;
- Fix wrong dtype calculation in group quantization when the input dtype
  is different from model dtype (e.g. "float32" in torch, but the model
  dtype in quantization is fp16 in `q4f16_1`);
- Fix inconsistent param names in layers such as `GroupQuantizeLinear`;
- Fix dtype inconsistency when a parameter is not quantized;
- Fix existing unittests.
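
A hypothetical sketch of the consistency checker mentioned in the list above; the data structures and the example parameter name are illustrative, not mlc_chat's actual internals.

```python
from typing import Dict, Tuple

Spec = Tuple[Tuple[int, ...], str]  # (shape, dtype)


def check_consistency(expected: Dict[str, Spec], converted: Dict[str, Spec]) -> None:
    """Fail loudly if a converted tensor disagrees with what compilation expects."""
    for name, (shape, dtype) in expected.items():
        if name not in converted:
            raise ValueError(f"Missing parameter after conversion: {name}")
        got_shape, got_dtype = converted[name]
        if (got_shape, got_dtype) != (shape, dtype):
            raise ValueError(
                f"{name}: expected {shape}/{dtype}, got {got_shape}/{got_dtype}"
            )


check_consistency(
    {"model.embed_tokens.q_weight": ((32000, 512), "uint32")},
    {"model.embed_tokens.q_weight": ((32000, 512), "uint32")},
)
```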

* Merge `llama_config.CONFIG` into `MODEL_PRESETS` (#1188)

* Merge llama_config.py into llama_model.py (#1189)

* Add CodeLlama as part of model presets (#1190)

* [Docs] Clarify zstd installation on Windows (#1191)

* [Docs] Clarify zstd installation on Windows (#1196)

Update zstd installation

* Support overriding `--max-sequence-length` in command line (#1197)

* [RestAPI] Added docs (#1193)

Add docs for RestAPI

Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

* [API] ```llm-vscode``` extension support (#1198)

This PR enables ```llm-vscode``` extension API support for copilot-like code completion, following [HF's LSP](https://github.com/huggingface/llm-ls). Fully compatible with ```CodeLlama``` and ```starcoder``` on mlc-llm. 

- https://github.com/huggingface/llm-vscode/pull/103 enhances extension user experience when used with mlc-llm rest api.

Thanks @ pacman100, who came up with this in his latest blog post: https://huggingface.co/blog/personal-copilot

* [Fix] Use `fabs` as floating point abs function in C++ (#1202)

* Integrating MLC runtime with the new compilation workflow (#1203)

* [Fix] Remove Redundant Warnings (#1204)

PR #1203 introduced some unnecessary and redundant logging messages.
This PR removes them.

* Try fix macOS build with picojson (#1206)

The error message below

```
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h: In member function 'std::string picojson::value::to_str() const':
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:494:37: error: expected ')' before 'PRId64'
  494 |       SNPRINTF(buf, sizeof(buf), "%" PRId64, u_.int64_);
      |               ~                     ^~~~~~~
      |                                     )
/usr/share/miniconda/envs/mlc-llm-build/conda-bld/mlc-chat-cli-nightly-package_1699286394016/work/3rdparty/tvm/3rdparty/picojson/picojson.h:81:1: note: 'PRId64' is defined in header '<cinttypes>'; did you forget to '#include <cinttypes>'?
   80 | #include <errno.h>
  +++ |+#include <cinttypes>
   81 | #include <inttypes.h>

```

indicates that the `__STDC_FORMAT_MACROS` flag is not turned on for some
reason.

* Try fix macOS build with picojson again (#1207)

Try fix macOS build with picojson

* Auto updated submodule references

* [Fix] Keep update-to-date with upstream API change (#1209)

* Detect `mtriple` via LLVM (#1211)

* Fix Python3.8 compatibility breakage (#1210)

The breakage was resulting from newer syntax being used for type
annotations, as part of https://github.com/mlc-ai/mlc-llm/pull/592.
So long as `mlc_chat.interface.openai_api` wasn't imported, the
breaking changes were not encountered.  In
https://github.com/mlc-ai/mlc-llm/pull/1107, the addition of `from
.interface.openai_api import ChatMessage` caused this module to be
imported, breaking compatibility of `mlc_chat.ChatModule` with
Python3.8.

This commit updates the type annotations to the supported syntax.
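
For illustration, the difference between the two spellings; the function name and fields below are hypothetical, only the annotation syntax matters:

```python
from typing import List, Optional

# Newer syntax (PEP 585/604), which breaks on Python 3.8 at import time:
#     def render(messages: list[str] | None = None) -> str: ...

# Python 3.8-compatible spelling using the typing module:
def render(messages: Optional[List[str]] = None) -> str:
    return "\n".join(messages or [])
```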

* [Slim-LM] Enable loading from AWQ pre-quantized weight. (#1114)

* [SLM] Enable loading from AWQ pre-quantized weight.

* remove awq_loader.py

* Update to the latest commit

* Delete llama_parameter.py

* update unittest

* fix lint

* upd

* add Llama-2-7B-AWQ

* [Bugfix] Fix Cannot import name '_LIB' from 'mlc_chat.base' (#1214)

Fix Python API doc

* [SLM] Support `q3f16_1` and `q4f32_1` (#1215)

This PR supports the int3 and float32 group quantization, and fixes some minor issues in the quantization implementation and tests.

* Make the Compilation Working E2E (#1218)

* [Mistral][SWA] Add sliding window to metadata (#1217)

Add sliding window to metadata, make small changes to invariants in runtime

* Support for `chatml` format conversation (for TinyLlama-1.1B-Chat-v0.2) (#956)

* added support for chatml format conversation

* added template to factory

* Add Rust Support for MLC-LLM (#1213)

This PR introduces Rust language support for the MLC-LLM project, specifically targeting supporting the `ChatModule` interface. It utilizes the existing C++ implementation of MLC-LLM and leverages both TVM's C API and its Rust bindings. The `rust/examples/mlc_chat.rs` gives an example of how to create a `chat_module` and serve user prompts in Rust.

The primary goal of this PR is to enrich the MLC-LLM ecosystem by offering a Rust interface that aligns with the current Python API. This enhancement will empower Rust developers to integrate MLC-LLM into their codebase and applications.

**Followup PRs**: 
- Extend the feature set to achieve parity with the C++/Python interface.
- Refine the Rust API, ensuring robustness.
- Set up Rust CI if needed.

* [Bugfix] Remove dependency on openai_api in chat module (#1222)

Remove dependency on openai_api

* Bake in RAM Usage in the Generated DSO (#1224)

With this PR, the metadata in a DSO file, obtained via `vm["_metadata"]()`, now
has information about the upper-bound RAM estimate for each function.
As an example, the JSON string now is:

```json
{
  "quantization": "q4f16_1",
  "model_type": "llama",
  "memory_usage": {
    "_initialize_effect": 0,
    "prefill": 136192,
    "softmax_with_temperature": 0,
    "decode": 218624
  },
  "params": [
    {"name": "model.embed_tokens.q_weight", "shape": [32000, 512], "dtype": "uint32"},
    {"name": "model.embed_tokens.q_scale", "shape": [32000, 128], "dtype": "float16"},
    ...
  ]
}
```

This helps the MLC runtime better determine if a method is going to
OOM and plan ahead accordingly, e.g. pre-allocate the KVCache.

The idea originates from Ruihang's ancient PR that prints memory usage
estimates as debugging information for demo purposes, and this PR further
promotes it to an IRModule-level attribute that can be used by the runtime.
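
A minimal sketch of how a runtime could consume this, assuming `metadata_json` is the string returned by `vm["_metadata"]()` in the format shown above:

```python
import json


def will_fit(metadata_json: str, func: str, budget_bytes: int) -> bool:
    """Check a function's upper-bound RAM estimate against a memory budget."""
    metadata = json.loads(metadata_json)
    return metadata["memory_usage"].get(func, 0) <= budget_bytes


# e.g. will_fit(vm["_metadata"](), "decode", 8 * 1024**3)
```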

* [Fix] ChatModule python messages and offset types (#1220)

small fixes

* [Fix] Variable Upperbound Should be Injected before Build Pipeline (#1225)

Now it shows a more reasonable upper bound for sequence length = 4096.

```json
{
  "_initialize_effect": 0,
  "prefill": 3479311360,
  "softmax_with_temperature": 0,
  "decode": 34531840
}
```

Thanks Ruihang for helping with the fix!

* [MultiGPU] Support pre-sharded model weights (#1096)

* [Bugfix] Correct input shape for shard info function

Prior to this commit, the sharding functions converted the sharded axis
from `orig_size * num_shards` to `orig_size // num_shards`.  This
commit updates the sharding functions to instead convert from
`orig_size` to `orig_size // num_shards`.

* [Bugfix] Include LegalizeOps in utils.convert_weights

Prior to this commit, `utils.convert_weights` assumes that the
parameter transformation module is already legalized, and uses no
relax operations that require legalization.  This commit adds a call
to `relax.transform.LegalizeOps` to remove this assumption.

* [MultiGPU] Cleanup create_shard_info_func

- De-duplicate the `if param.shard_strategy == foo` if/else chain

- Return a `tvm.IRModule` instead of modifying an existing module

* Extract a ParamManager.optimize_transform_param_order method

* Extract ParamManager.create_parameter_transformation call from convert_weights

* Support writing of pre-sharded weights

* Support execution using pre-sharded weights

* Updating for review comments

* fix typo

* [AWQ] e2e awq-quantized model (#1229)

* [SLM] Support `q0f16` and `q0f32` (#1228)

This PR adds support for `q0f16` and `q0f32`, and changes `RMSNorm` to `nn.RMSNorm`.

* [Core][Llama] Argument `max_vocab_size` and `max_batch_size` (#1076)

This PR introduces the `max_vocab_size` and `max_batch_size`
as two new compile arguments. The purpose is for better
memory planning.

Besides, this PR updates llama to make use of the two arguments.
Other models are not changed yet.

The default value for `max_vocab_size` is set to 40000, which
I think is larger than the values of most models. The default
value for `max_batch_size` is currently set to 256. We may
update this value in the future to a better default.

* [Llama] Support batched prefill (#1233)

This PR supports the Llama modeling with batched prefill, which
can bring higher throughput for the overall prefill process in
serving.

Besides, the PR splits the attention function used in batching
settings into two separate ones, so that we do not dispatch to
the prefill/decode attention functions at runtime.

* [Core] Skip PrimExpr index int32 downcasting for batching (#1234)

This PR makes ForceNarrowIndexToInt32 skip application
when batching is enabled.

The reason is that the flattened index of the KV cache append
function may exceed the range of int32 when the cache is large.
For example, in Llama-7b, when a KV cache supports more than
8192 tokens, the total cache size will be at least
```
8192 * 2 (K/V) * 32 (layers) * 4096 = 2147483648,
```
which exceeds the maximum value representable by int32.
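
For reference, a quick check of the arithmetic above against the signed 32-bit range:

```python
INT32_MAX = 2**31 - 1
flattened = 8192 * 2 * 32 * 4096  # tokens * (K/V) * layers * per-token hidden size
assert flattened == 2_147_483_648 and flattened > INT32_MAX
```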

* Auto updated submodule references

* Update index.rst (#1236)

Fixed typo on tab:: Android

* Update android.rst (#1237)

On linux, TVM_NDK_CC environment variable should contain linux-x86_64

* Correct typo in cuda device name for rust chat model (#1241)

* Generating mlc-chat-config.json (#1238)

This PR finishes the last piece of new compilation pipeline, i.e.
generation of `mlc-chat-config.json` and other configuration files.

* Rename `--config` to `--model` and Consolidate CLI Messages (#1244)

* Specify argument "dest" in argparse (#1245)

* Add more stats during quantization (#1246)

* ensure that max_gen_len is set properly in mlc_chat_config (#1249)

Currently, `max_gen_len` defaults to 512 in `dump_mlc_chat_config`. However, the instantiations of `dump_mlc_chat_config` within `mlc-llm.build` currently omit the `max_gen_len` argument (even when it's specified in the HF config), so the default of 512 gets set for every `mlc-chat-config.json` that is created by `mlc-llm.build`. This PR fixes the issue.

* [Fix] Memory usage statistics (#1252)

* Introduce mlc_chat subcommands (#1251)

This PR makes it possible to use subcommands of `mlc_chat` package to
control quantization and compilation. Example:

```bash
python -m mlc_chat convert_weight \
    --model /models/Llama-2/hf/Llama-2-7b-chat-hf \
    --quantization q4f16_1 \
    -o ./dist/new-llama/

python -m mlc_chat gen_mlc_chat_config \
    --model ./dist/models/Llama-2-7b-hf \
    --quantization q4f16_1 \
    --max-sequence-length 4096 \
    --conv-template LM \
    -o ./dist/new-llama

python -m mlc_chat compile \
    --model ./dist/models/Llama-2-7b-hf \
    --quantization q4f16_1 \
    --max-sequence-length 4096 \
    -o ./dist/new-llama/llama.so
```

It slightly simplifies the workflow.

* Update mlc-chat-config.json (#1254)

This PR updates two fields:
* `tokenizer_files`, from which non-existent files are now removed;
* `model_preset_tag`, added to `model_config`, which helps the system
  conveniently identify whether a model configuration is already part of
  the system's built-in model presets.

* [Rust] Support multiple prompts (#1253)

This PR introduces `Prompt` and `ChatMessage` structures, and enhances the `ChatModule` to generate tokens using either a single string (via `Prompt::String`) or a vector of `ChatMessage` (via `Prompt::MessageList`). An example is provided in [rust/examples/mlc_chat.rs](https://github.com/mlc-ai/mlc-llm/compare/main...YuchenJin:mlc-llm:multi-prompts?expand=1#diff-4ffa9349207c1df6ceeebe06a9afc8f2015000e031b39d677bbbe7e85ae2819b).

Here is a snippet demonstrating the interface:
```rust
let message1 = ChatMessage {
    role: "user".to_owned(),
    content: "suppose we already have projects llama, alpaca and vicuna, what do you think would be a great name for the next project?".to_string(),
};
let message2 = ChatMessage {
    role: "assistant".to_owned(),
    content: "based on the previous projects, a possible name for the next project could be \"cervidae\" which is the scientific name for deer family. this name reflects the collaboration and teamwork involved in the development of the project, and also nods to the previous projects that have been developed by the team.".to_string(),
};
let message3 = ChatMessage {
    role: "user".to_owned(),
    content: "Summarize our conversations".to_string(),
};
let messages = vec![message1, message2, message3];
let prompt = Prompt::MessageList(messages);
let output = cm.generate(&prompt, None).unwrap();
```

* [UI] Correct "convert_weight_only" to "convert_weights_only" (#1227)

* [UI] Correct "convert_weight_only" to "convert_weights_only"

This is a frequent typo among multiple developers, as "weights" is
typically plural.  This commit updates the command-line argument from
`--convert-weight-only` to `--convert-weights-only`.  For backwards
compatibility, the original spelling is kept as an equivalent usage (see
the argparse sketch below).

* Update all use of "convert_weight_only" to "convert_weights_only"
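
A minimal argparse sketch of keeping both spellings, assuming the flag is a simple boolean; the actual mlc-llm wiring may differ:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--convert-weights-only",
    "--convert-weight-only",  # original spelling kept for backwards compatibility
    dest="convert_weights_only",
    action="store_true",
    help="Convert the model weights only, skipping model-lib compilation.",
)

args = parser.parse_args(["--convert-weight-only"])
assert args.convert_weights_only
```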

* Add a downloader from HuggingFace (#1258)

This PR allows programmatically downloading from HuggingFace to MLC's cache
directory, which is located at `$HOME/.cache/mlc_chat/model_weights/` by
default.

This PR relies on Git to clone the metadata, and Python's requests
library to fetch concrete weights as large files instead of the less
reliable Git LFS.

The example demonstrates downloading the 4-bit quantized Llama2-7B
model:

```python
from mlc_chat.support.download import download_mlc_weights

download_mlc_weights("HF://mlc-ai/mlc-chat-Llama-2-7b-chat-hf-q4f16_1")
```

Screenshot:

<img width="1913" alt="image" src="https://github.com/mlc-ai/mlc-llm/assets/22515877/3ac50594-4971-4216-bb17-47710b4af1dd">

* [Fix] Add prefix_tokens to `ConvConfig` in Python to match C++ implementation (#1256)

During my Rust implementation of the project, I noticed an inconsistency between the Python and C++ implementations of `ConvConfig`. Specifically, the Python version lacks the `prefix_tokens` field, which is present in the C++ version: https://github.com/mlc-ai/mlc-llm/blob/5e02cacd8ebba2e0206d5a447225a137de2dac0d/cpp/conversation.h#L69-L70.

This can cause [`_load_json_override`](https://github.com/mlc-ai/mlc-llm/blob/5e02cacd8ebba2e0206d5a447225a137de2dac0d/python/mlc_chat/chat_module.py#L1062C26-L1062C26) to fail in the `_prefill` function.

I think a simple unit test would help; I'd like to add a regression test once the CI has been set up.
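
A hypothetical sketch of keeping the two definitions in sync; only `prefix_tokens` is taken from the C++ header linked above, the remaining field is a placeholder:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ConvConfig:
    # Mirrors the C++ `prefix_tokens`: token ids prepended before the conversation.
    prefix_tokens: Optional[List[int]] = None
    system: Optional[str] = None  # placeholder for the other existing fields
```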

* [nn.Module] Mistral implementation (#1230)

* Add mistral high level structure

* Small config change

* Now work with compile, mistral inference logic left

* Add masking, cache_len, kv_seq_len; only attention forward left

* fix mistral override naming

* `interleave_kv` implementation

* lint fix

* move rolling buffer cache impl to mlc-llm

* isort fix

* nn.module implementation - reorganize structure

* Update python/mlc_chat/cli/compile.py

Co-authored-by: Junru Shao <junrushao1994@gmail.com>

* lint fix

---------

Co-authored-by: Charlie Ruan <charlieruan2001130@gmail.com>
Co-authored-by: Junru Shao <junrushao1994@gmail.com>

* Add `mlc_chat.__main__` as command line entrypoint (#1263)

This PR makes it possible to invoke mlc_chat subcommands directly.

Previously one has to use `python -m` as the prefix to invoke
`mlc_chat`:

```bash
python -m mlc_chat compile \
  --model /models/Llama-2-7b-chat-hf \
  --quantization q4f16_1 \
  --max-sequence-length 4096 \
  -o ./llama.so
```

This PR makes it possible to use it without the `python -m` prefix:

```bash
mlc_chat compile \
  --model /models/Llama-2-7b-chat-hf \
  --quantization q4f16_1 \
  --max-sequence-length 4096 \
  -o ./llama.so
```

* [Rust] Improve ergonomics of `generate` function in `ChatModule`  (#1262)

Following PR #1253, I think the ergonomics of the `generate` function of `ChatModule` can be improved (given it's an important public-facing API).

This PR simplifies the function's usage by implementing the `From` trait for the `Prompt` enum. Also updated the example code.

Now the interface changes to:

```rust
/// Single prompt case:
cm.generate("what is the meaning of life?", None)

/// Multiple prompt case:
let messages: Vec<ChatMessage> = vec![message1, message2, message3];
let output = cm.generate(messages, None).unwrap();
```

* [Fix] mistral `max_gen_len` (#1264)

* Rename `max-sequence-length` to `context-window-size` (#1265)

"Context window" is a terminology better aligned with LLM world.
Whenever a new model is trained, it is one of the most important metrics
that people care about. Therefore, I'd love to switch it over sooner
than later, before "mlc_chat compile" becomes mature and documented.

* Auto updated submodule references

* Fix group quantization shape infer (#1273)

This PR fixes the shape infer for group quantization.

* Continuous Model Delivery (#1272)

This PR provides a script that automatically quantizes models from
HuggingFace using various quantization formats as specified.

Example: When provided with the following JSON file:

```json
{
  "destination": "{username}/{model_id}-{quantization}", # Name of HF repo
  "default_quantization": ["q0f16", "q0f32", "q3f16_1", "q4f16_1", "q4f32_1"],
  "tasks": [
    {
      "model_id": "Llama-2-7b-hf",
      "model": "/models/Llama-2-7b-hf", # Can be HF URL or a local path
      "context_window_size": 4096,
      "conv_template": "LM",
      "quantization": [
        {
          "format": "q4f16_awq",
          "model": "https://huggingface.co/TheBloke/Llama-2-7B-AWQ", # Overriding default `source`
          "source_format": "awq"
        }
      ]
    }
  ]
}
```

The script will automatically run quantization and upload them to the
following repos:
- https://huggingface.co/junrushao/Llama-2-7b-hf-q0f16
- https://huggingface.co/junrushao/Llama-2-7b-hf-q0f32
- https://huggingface.co/junrushao/Llama-2-7b-hf-q3f16_1
- https://huggingface.co/junrushao/Llama-2-7b-hf-q4f16_1
- https://huggingface.co/junrushao/Llama-2-7b-hf-q4f32_1
- https://huggingface.co/junrushao/Llama-2-7b-hf-q4f16_awq
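
A rough sketch of how such a spec could be expanded into per-quantization jobs, assuming a comment-free copy of the JSON above saved as `delivery.json` and an illustrative `username`; the real delivery script does considerably more (quantization, retries, uploads):

```python
import json

with open("delivery.json", encoding="utf-8") as f:
    spec = json.load(f)

for task in spec["tasks"]:
    formats = list(spec["default_quantization"])
    formats += [q["format"] for q in task.get("quantization", [])]
    for quant in formats:
        repo = spec["destination"].format(
            username="junrushao", model_id=task["model_id"], quantization=quant
        )
        print(f"quantize {task['model']} as {quant} -> upload to {repo}")
```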

* Auto updated submodule references

* Enhance Model Delivery (#1283)

This PR introduces a few enhancements:
- Allow overriding the temporary path via environment variable `MLC_TEMP_DIR`;
- Add a 10-time retry when uploading the quantized weights to
  HuggingFace Hub. It could fail at times;
- Echo the commands being used to quantize the models in `logs.txt`;
- Fix a compatibility issue when pulling individual weights down from
  HuggingFace Hub in Git LFS.

* add python, rest api test (#1278)

* add python, rest api test

* remove mistral, fix pylint

* fix pylint requests import error

* Enable Jenkins CI (#1292)

* Update android.rst (#1289)

This fix enables the default models in app-config.json to be shown as "downloaded" in the model list when they are pushed via the adb push method

* Consolidate Logics for GPU Detection (#1297)

This PR unifies automatic device detection logic by using
`mlc_chat.support.auto_device`, which comes with detailed logging and
fallback mechanisms.

* [CI] Fix lint concurrent clone issue (#1299)

This PR fixes the broken CI due to different tasks sharing the same
workspace.

* Auto updated submodule references

* [Feature] Prefill chunking for non-SWA models (#1280)

* generalize `prefill-chunk-size`

* renaming `cache_len` to `rolling_cache_len`

* [nn.Module] generalize `prefill_chunk_size`

* quick fix

* lint fix

* check sw with chunking

* fix `_attach_variable_bounds`

* update config from lib metadata

* cleanup

* metadata fix

* Compatible with chatglm  (#979)

compatible with chatglm

* Add q4/q8_ft_group quantization mode (#1284)

* Add q4/q8_ft_group quantization mode

* Update submodule

* [CI] Clean workspace before build (#1304)

Clean the workspace before building, so that a previously
corrupted directory will not affect the current CI build.

Used the `cleanWS` from here https://www.jenkins.io/doc/pipeline/steps/ws-cleanup/

* [Python] Detect Driver/Device in a Separate Process (#1311)

This PR moves device detection into separate
subprocesses.

This change is needed because device detection sets up the
driver, which consumes some GPU VRAM (for example,
`tvm.device("opencl", 0).exist` consumes 390MB of VRAM on an
RTX 4090). Consider the case where we detect whether CUDA, Vulkan and
OpenCL are available. When they are all available, each
detection holds some VRAM, more than 430MB altogether.

If the device detection is in the same process as the main
process, the VRAM consumed by device detection will never
be released. This means that in the example above, we
detect CUDA, Vulkan and OpenCL, while in the end we
prioritize the CUDA device. Consequently, the memory held by
Vulkan and OpenCL detection will never be released.

Motivated by this issue, we move the detection into
subprocesses, so that the held VRAM can be successfully
released after detection.
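
A minimal sketch of the approach, assuming TVM is installed; the actual logic lives in `mlc_chat.support.auto_device`. Probing in a child process means the VRAM held by the driver setup is released when the child exits:

```python
import multiprocessing as mp


def _probe(device_type, queue):
    import tvm  # imported in the child so the driver is only set up there

    queue.put(tvm.device(device_type, 0).exist)


def device_exists(device_type):
    queue = mp.Queue()
    proc = mp.Process(target=_probe, args=(device_type, queue))
    proc.start()
    proc.join()
    return (not queue.empty()) and queue.get()


if __name__ == "__main__":
    for dev in ("cuda", "vulkan", "opencl"):
        print(dev, device_exists(dev))
```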

* add chatglm3 support (#1313)

* feat: add chatglm3 support

* fix: remove duplicate code

---------

Co-authored-by: Max Lin <jason.max.lin@outlook.com>

* [SLIM] Skip None param when loading rather than failing (#1308)

* Auto updated submodule references

* [nn.Module] Implement GPT-2 Model Support (#1314)

Co-authored-by: Charlie Ruan <charlieruan2001130@gmail.com>

* remove ndk reference from mali build target (#1312)

* remove ndk reference from mali target

This removes the ndk reference for mali targets when building

* adding check for NDK environment variable

---------

Co-authored-by: x <x@xs-MacBook-Pro.local>

* [Rust] A few enhancements (#1310)

a few enhancements.

* [iOS] Mistral support (#1320)

* Add terminator for streaming REST API (#1325)

* Add terminator for streaming REST API

Add compatibility with OpenAI's streaming API.

fixes #1300

* make black happy

* read CUDA_ARCH_LIST to set CUDA capability versions for nvcc (#1326)

read {TORCH_,}CUDA_ARCH_LIST from environment and use it to set compute arch versions

* Update emcc.rst

* [AUTO-DEVICE] In process early exit device detection (#1333)

This PR updates the device auto detection to use in-process
early exit. This hopefully addresses some of the Windows
issues seen with the multi-process based approach, while still avoiding
allocating memory on every device during auto mode.

* [RestAPI] Update parameters for /v1/completions and add tests (#1335)

Add n and stop for /v1/completions endpoint

Co-authored-by: Animesh Bohara <abohara@cs.cmu.edu>

* fix broken REST examples due to recent compatibility change (#1345)

fix broken restapi examples due to recent OpenAI API compatibility changes

* [Bugfix] Ignore exit code in device detection (#1350)

TVM's Vulkan runtime emits a non-zero exit code on certain Windows
drivers during DLL offloading. While there is definitely a way to fix this,
for now, we quickly get around this by not checking the exit code in
device detection.

This PR also improves clarity when multiple GPUs are present by emitting
logging messages for all available GPUs, rather than only GPU 0.

* [OpenHermes] Add conversation template for OpenHermes Mistral (#1354)

Add conversation template for open hermes mistral

Co-authored-by: David Pissarra <david.pissarra@tecnico.ulisboa.pt>

* [Tokenizer] Prioritize huggingface tokenizer.json, generate one if not included (#1352)

* Prioritize tokenizer json, generate one if not included

* [Rust] Prepare for publishing (#1342)

Prepare for publishing.

* Fix gen_mlc_chat_config for mistral  (#1353)

* fix gen config for mistral

* fix pylint

* Allow override of sw and chunk size in config gen

* lint fix

---------

Co-authored-by: Charlie Ruan <charlieruan2001130@gmail.com>
Co-authored-by: Charlie Ruan <53290280+CharlieFRuan@users.noreply.github.com>

* Fix ft quantization scale computation (#1321)

* [nn.Module] rename `--prefix-symbols` to `--system-lib-prefix` so it matches tvm naming (#1363)

rename `--prefix-symbols` to `--system-lib-prefix`

* [SLIM] Add info to metadata in SLIM and make compatible with llm_chat.cc (#1366)

* llm_chat.cc compatible with SLIM

* Treat model config as base

* [SLM] Support fuse split rotary in SLM flow (#1365)

* squashed commit

* fix lint

* format

* [SLM] UX `mlc_chat compile` improvement (#1371)

* `config.json` to `mlc-chat-config.json`

* remove overlap args

* remove quantization arg overlap

* add `--overrides` arg

* add `--overrides` help

* fix lint

* [SLM] fix pep 604 (#1372)

slm fix pep 604

* Rename subcommand `gen_mlc_chat_config` to `gen_config` (#1375)

This PR makes a few changes:

- Renames the sub-command `gen_mlc_chat_config` to `gen_config`;
- Introduce `mlc_chat.support.logging` in favor of a future switch from
  Python's native logging to structlog;
- Feeds only `model_config` field in `mlc-chat-config.json` to the
  compile interface to make sure the compilation is always reproducible
  depending on this single field plus "quantization";
- Makes sure the generated JSON file always contains exactly the
  same fields, e.g. the "sliding_window" field should never be deleted. See
  the "principle" below for more details.
- Refactored the overriding logics to make it more readable.
- Now only fields relevant to model architecture in `config.json` will
  be exported to `mlc-chat-config.json`, and irrelevant ones, such as
  version of `transformers` library, are stripped.
- Prefer `logging.exception` when used to log exceptions.
- Fixes a minor bug in the runtime where `max_window_size` is no
  longer available, likely introduced by #1371;

**Principle.** This PR strives to set a precedent to structuralize the
generated metadata/configuration files. Under the same "version" string,
we aim to achieve that the field names are always the same, making it
easier to parse and understand.

* docs(compile_models.rst)  (#1377)

* Auto updated submodule references

* Update compile_models.rst (#1381)

style error

* Add OpenHermes to list of conv templates (#1385)

* [iOS] OpenHermes-2.5-Mistral-7B support (#1388)

[iOS] hermes mistral support

* Add scale_attn_by_inverse_layer_idx to GPT-2 model (#1389)

* Check config.json before applying system defaults in gen_config (#1391)

* Refactor android project into a library and application module (#1248)

* [android] refactor android project into a library and application module.

* add library module

* [Model Support] Add Neural Hermes conversation template (#1402)

Add conversation template for neural hermes

* [REST] OpenAI Rest API compatible function calling (#1378)

* add function calling to rest api

* fix lint error

* remove debug print

* [Fix] conv templates (#1405)

fix conv templates

* [SLM] Integration with Disco sharding. (#1212)

* [SLIM] Support out dtype for quantizeLinear (#1412)

* Support out dtype for quantizeLinear

* Fix lint

* [SLIM] Red-Pajama(GPT-Neox class) (#1408)

* [SLIM] RedPajama

* Add GPTNeoX test

---------

Co-authored-by: Charlie Ruan <charlieruan2001130@gmail.com>

* [SLIM][Fix] Quick fix on sharding config for non supported models (#1414)

Quick fix on sharding config for non supported models

* [SLM] UX improvement: HF download & presets (#1374)

* Make the generate call optionally stateless (#1413)

* Auto updated submodule references

* Establish CUDA CI (#1419)

* [SLM] gpt_bigcode migration (#1418)

* [SLM] rename `sliding_window` to `sliding_window_size` (#1425)

rename `sliding_window` to `sliding_window_size`

* [SLM] auto system lib prefix (#1427)

* Update index.md

* [Doc][SLM] Update documentation to rollout SLM (#1421)

* Update web llm doc

* Reorganize build from source tutorial

* Update compile models via MLC

* Deprecate build Python API

* Update mlc chat config doc

* Fix quantization nit

* Make New Model Architectures page a WIP, provide initial pointers

* Update distribute compiled models accordingly

* Use SizeVar instead of Var for model definition (#1432)

Struct info inference can benefit from the implicit >= 0 bound of
SizeVar.
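
A tiny sketch of the difference using TVM's `tir` module; only the variable name is illustrative:

```python
from tvm import tir

seq_len_plain = tir.Var("seq_len", "int64")      # no sign information attached
seq_len_sized = tir.SizeVar("seq_len", "int64")  # carries an implicit seq_len >= 0
```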

* [Feature] Attention sinks kvcache on mistral (#1435)

* sinks llm_chat impl

* sinks mistral

* rename to `attention_sink_size`

* [SLM] sinks mistral

* [SLM] mistral rope fix

* [3rdparty] Add back TVM in 3rdparty (#1438)

PR #1435 removed tvm from 3rdparty accidentally. This commit
adds it back.

* [SLM][CLI] cli naming so it matches slm `sliding_window_size` (#1437)

* sw cli naming so it matches slm

* fix lint

* [iOS] Update mistral v2 support (#1439)

* [iOS] Update mistral v2 support

Also drops the support for adding new models, as it is fragile at the moment.
Adds the extended virtual memory option so we can go beyond 4G for q4.

* prep for release

* [SLM] Streamlined Compilation Pipeline (#1436)

Previously, to conform with TVM's `relax.build` API, we had to
guarantee that the passes run in the `mlc_llm` pipeline produce a
consistent IRModule for the subsequent `default_build` pipeline in
`relax.build` to work on.

However, this is not usually the case, and some passes have to be
rerun or have their outcome discarded. `EstimateMemoryUsage`, as a
notable example, runs half of the `default_build` pipeline to lower the
Relax IR to the right stage, collects an integer representing the number
of bytes allocated in each Relax function, then discards the half-way
lowering only to give back this func-name-to-integer dictionary.
Another issue is that it makes certain IR manipulation more challenging
as part of the pass infra, resulting in boilerplate logic in
`compile.py` such as `_attach_metadata`.

Following https://github.com/apache/tvm/pull/16246, it's now possible to
fold all IR manipulation into the `mlc_llm` pipeline and remove the
dependency on `default_build`, as `relax.build` accepts any passes, or
none at all, as its `pipeline` parameter. This PR makes such refactoring
happen.

WARNING: DO NOT MERGE until https://github.com/apache/tvm/pull/16246 is
synced to https://github.com/mlc-ai/relax repo.

* Integrate FlashInfer into llama model (#1415)

Co-authored-by: Junru Shao <junrushao@apache.org>

* [Benchmark] Support loading prompt from file (#1445)

* Fix MQA and RoPE Application (#1446)

* [Fix][MultiGPU] Compatibility with old flow (#1447)

* Simplify Multi-GPU Loader Logics (#1449)

* Auto updated submodule references

* [Delivery] Update delivery script (#1451)

Update `delivery.py` to reflect new SLM changes such as `mlc_chat_gen_config` --> `gen_config`.

Include all `gen_config` arguments  (e.g. `sliding_window_size`, etc.); make all of them optional, since that is the case when using `gen_config` directly.

* Add Sanity Check in Multi-GPU Loader (#1453)

* Define `__STDC_FORMAT_MACROS` by default (#1454)

Should fix recent breaks of MLC nightly packaging on x86 mac, e.g.
https://github.com/mlc-ai/package/actions/runs/7243831194/job/19731257538.

* [SLIM][AWQ] AWQ GEMM support (#1362)

Previously, in #1229, we only supported loading INT4-GEMV AWQ weights quantized by [the original repo](https://github.com/mit-han-lab/llm-awq#usage). This PR supports loading INT4-GEMM format weights quantized by [AutoAWQ](https://github.com/casper-hansen/AutoAWQ). Here is an example that loads from [TheBloke/Llama-2-7b-Chat-AWQ](https://huggingface.co/TheBloke/Llama-2-7B-Chat-AWQ) and runs the benchmark.

```bash
MODEL_PATH=/opt/models/llama-2/llama-2-7b-chat-hf/
OUTPUT_PATH=./dist/new-llama-awq/
QUANT=q4f16_autoawq

python -m mlc_chat gen_config $MODEL_PATH/config.json --quantization $QUANT -o $OUTPUT_PATH --conv-template llama-2

python -m mlc_chat compile $OUTPUT_PATH -o $OUTPUT_PATH/llama.so

python -m mlc_chat convert_weight $MODEL_PATH --quantization $QUANT -o $OUTPUT_PATH --source-format awq --source ../Llama-2-7B-Chat-AWQ/model.safetensors

python -m mlc_chat.cli.benchmark --model $OUTPUT_PATH --model-lib $OUTPUT_PATH/llama.so --device "cuda:0" --prompt "What is the meaning of life?" --generate-length 256
```

Note:
Q: What is difference between INT4-GEMV and INT4-GEMM?
A: https://github.com/casper-hansen/AutoAWQ#int4-gemm-vs-int4-gemv-vs-fp16

* [Smallfix] Fix wizard math template (#1463)

Fix wizard math template

* [Docs][SLM] Split compile models into convert weight and compile model library (#1464)

Split compile models into convert weight and compile model library

Co-authored-by: Charlie Ruan <charlieruan@Charlies-MacBook-Pro.local>

* [Sink] Add sink size to metadata (#1459)

* Add sink size to metadata

* Parse attention sink in runtime metadata

* Fix lint

* [Docs][SLM] Add end-to-end tutorial for webLLM in SLM (#1476)

* Create privacy.md

* [Docs][SLM] Change folder struct in doc, update webllm (#1479)

Change folder struct in doc, update webllm

* [Docs][Fix] Change webllm model lib link (#1484)

* Update index.rst (#1482)

Remove 'iOS' on Android page

* [Docs][Prebuilt] Add new prebuilt page (#1490)

* [Fix] Skip FuseSplitRotaryEmbedding via attrs (#1472)

Skip FuseSplitRotaryEmbedding via attrs

* Auto updated submodule references

* Prefer Extern Operators to `nn.SourceModule` (#1488)

* [Docs][SLM] Update runtime tutorials (#1491)

* [SLM] Store fp32 weight as fp16 on disk (#1496)

This PR fixes the weight storage of `q0f32` so that the weights are stored as fp16 on disk.

* Introduce a Fused RoPE Kernel in TIR (#1493)

* Fix CMake for Android to avoid unlinked android log symbol (#1501)

Fix CMake for Android to avoid unlinked android log symbol in tokenizers.

* Reorganize folder structure (#1502)

This PR reorganizes the existing `python/` folder structure for better clarity.

- `mlc_chat/model` <- `mlc_chat/compiler/model`
- `mlc_chat/quantization` <- `mlc_chat/compiler/quantization`
- `mlc_chat/loader` <- `mlc_chat/compiler/loader`
- `mlc_chat/operator` <- `mlc_chat/compiler/*_op.py`
- `mlc_chat/compiler_pass` <- `mlc_chat/compiler/compiler_pass.py`
- `mlc_chat/interface` <- `mlc_chat/compiler/{compile/gen_config/convert_weight}.py`

* [SLM] Enable Debug Dump (#1499)

This PR enables the debug dump feature. The command would be something
like

```
mlc_chat compile ./dist/Llama-2-7b-chat-hf-q4f16_1-MLC/mlc-chat-config.json --device cuda -o dist/libs/Llama-2-7b-chat-hf-q4f16_1-cuda.so --debug-dump debug/
```

And it would dump 6 files in the `debug/` folder:

```
debug-phase0.py
debug-phase1.py
debug-phase2.py
debug-phase3.py
debug-phase4.py
debug-final.py
```

* [SLIM] Introduce microsoft/Phi-2 (#1469)

https://huggingface.co/microsoft/phi-2

Instructions to run the model:

```bash
QUANT=q0f16
MODEL_PATH=/PATH/TO/PHI-2
OUTPUT_PATH=./dist/phi-2-${QUANT}/

mkdir -p $OUTPUT_PATH

python -m mlc_chat gen_config $MODEL_PATH/config.json --quantization $QUANT --conv-template phi-2 -o $OUTPUT_PATH/params

python -m mlc_chat compile $OUTPUT_PATH/params -o $OUTPUT_PATH/model.so

python -m mlc_chat convert_weight $MODEL_PATH --quantization $QUANT -o $OUTPUT_PATH/params

CUDA_VISIBLE_DEVICES=6,7 python -m mlc_chat.cli.benchmark --model $OUTPUT_PATH/params --model-lib $OUTPUT_PATH/model.so --device "cuda:0" --prompt "What is the meaning of life?" --generate-length 256 
```

Note:
There are two [conv_template](https://huggingface.co/microsoft/phi-2#qa-format) for Phi-2. I don't know which one is better.
QA: "Instruct: <prompt>\nOutput:"
Chat: "Alice: <prompt>\nBob:"

* [SLM] Android refactor for SLM (#1494)

* refactor android for new flow

* fix url, remove -MLC from lib

* add android model lib url prebuild_models page

* Improvements to the formatting of Android deploy instructions under the Compile Android-capable models section (#1504)

Fix code block formatting in Android deploy docs

* [Docs] Add phi1_5 and phi2 prebuilt doc, also gpt2 (#1506)

Add phi1_5 and phi2 prebuilt doc, also gpt2

* [SLM] Red-pajama Disco support (#1507)

Test commands:

```bash
QUANT=q4f16_1
MODEL_PATH=/PATH/TO/RedPajama-INCITE-Chat-3B-v1
OUTPUT_PATH=./dist/new-red-pajama-${QUANT}/

mkdir -p $OUTPUT_PATH

python -m mlc_chat gen_config $MODEL_PATH/config.json --quantization $QUANT \
        --conv-template redpajama_chat -o $OUTPUT_PATH/params --tensor-parallel-shards 2

python -m mlc_chat compile $OUTPUT_PATH/params -o $OUTPUT_PATH/red-pajama.so

python -m mlc_chat convert_weight $MODEL_PATH --quantization $QUANT -o $OUTPUT_PATH/params

python -m mlc_chat.cli.benchmark --model $OUTPUT_PATH/params --model-lib $OUTPUT_PATH/red-pajama.so \
        --device "cuda:0" --prompt "What is the meaning of life?" --generate-length 256
```

* Auto-downloading and Just-in-Time Compilation in ChatModule (#1508)

This PR introduces support for auto-downloading quantized models from
HuggingFace, and optionally JIT-compiling and generating the corresponding
model lib if it is missing.

**Auto-downloading.** When creating a ChatModule, if `model` is prefixed
with `HF://`, MLC will search its model cache and automatically download
it from HuggingFace if missing. This way, users won't have to download
the model on their own using Git LFS.

**JIT compilation.** When creating a ChatModule and `model_lib_path` is
neither supplied nor found using the existing path finding protocol, MLC
will generate a model lib DSO and cache it on disk.

**Caching.** Model weights are in `$MLC_CACHE_DIR/mlc_chat/model_weights`
once downloaded from HuggingFace, and the model libs are cached under
`$MLC_CACHE_DIR/mlc_chat/model_lib`. By default, `$MLC_CACHE_DIR` is
`$HOME/.cache` on Linux/macOS and `%LOCALAPPDATA%` on windows.
Right now only the very basic caching strategy is used, i.e. no eviction
and no file system locks.

Example:

```python
import logging
from mlc_chat import ChatModule
from mlc_chat.callback import StreamToStdout

logging.basicConfig(
    level=logging.INFO,
    style="{",
    datefmt="%Y-%m-%d %H:%M:%S",
    format="[{asctime}] {levelname} {filename}:{lineno}: {message}",
)

cm = ChatModule("HF://junrushao/Llama-2-7b-hf-q4f16_1-MLC", device="cuda")
cm.generate(
    prompt="What is the meaning of life?",
    progress_callback=StreamToStdout(callback_interval=2),
)
```

* [Fix][Delivery] Use sys executable in delivery (#1510)

Use sys executable in delivery

* [Doc] Update define new models page (#1511)

* [SLM] Add Prebuilt lib url for Mistral in Docs (#1514)

add mistral android lib url

* [Fix] tp bigcode (#1515)

fix tp bigcode

* [SLM] cublas dispatch  (#1380)

* cublas

* fix

* [SLM] Fused Rope Integration (#1518)

Integrate fused rope into model gpt_neox and phi. 

Add an optional parameter `rotary_dim` to `llama_rope`. `rotary_dim` indicates the number of dimensions in the embedding that RoPE is applied to. By default, `rotary_dim` is the same as `head_dim`. In the `Phi` model, `rotary_dim` is set to a different number based on the config.
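
As a generic illustration of what `rotary_dim` means (NeoX-style split-half RoPE in plain numpy, not the fused TIR kernel):

```python
import numpy as np


def partial_rope(x, positions, rotary_dim, base=10000.0):
    """Rotate the first `rotary_dim` dims of each head; pass the rest through."""
    rot, rest = x[..., :rotary_dim], x[..., rotary_dim:]
    half = rotary_dim // 2
    inv_freq = base ** (-2.0 * np.arange(half) / rotary_dim)  # (half,)
    angles = positions[:, None] * inv_freq[None, :]           # (seq_len, half)
    cos, sin = np.cos(angles)[:, None, :], np.sin(angles)[:, None, :]
    x1, x2 = rot[..., :half], rot[..., half:]
    rotated = np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
    return np.concatenate([rotated, rest], axis=-1)


# x: (seq_len, num_heads, head_dim); apply RoPE to only 32 of the 80 dims
x = np.random.randn(5, 8, 80).astype("float32")
out = partial_rope(x, np.arange(5), rotary_dim=32)
```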

* Rename subpackage operator => op (#1521)

This PR addresses a package name conflict issue introduced by #1502,
where `mlc_chat.operator` collides with Python's built-in `operator`
library.

More details:
https://github.com/mlc-ai/mlc-llm/pull/1502#issuecomment-1872662522.

* [Doc] Minor Fix Android Doc (#1524)

A minor path fix in the Android Doc, as the file `prepare_libs.sh` is
under `library` folder.

* Introduce MLC_JIT_POLICY and Dissolve JITOptions into ChatConfig (#1522)

This PR introduces an environment variable `MLC_JIT_POLICY` as a
follow-up item to PR [#1508](https://github.com/mlc-ai/mlc-llm/pull/1508#issuecomment-1871611605).
It allows enabling or disabling the JIT behavior via:
- `OFF`: never JIT, and will throw an error if `model_lib`
is missing;
- `ON` (default): JIT whenever the model lib is missing and there's
a cache miss;
- `REDO`: whenever the model lib is missing, always do JIT
compilation even if cache hits;
- `READONLY`: never do JIT compilation but look up the JIT cache
whenever the model lib is missing.

It also dissolves the newly-introduced `JITOption` into `ChatConfig` so
that it can be used more seamlessly with exactly the existing APIs.
By doing so, users can simply specify `context_window_size`,
`prefill_chunk_size` to control the VRAM used in each model without
having to recompile the model lib themselves.

Example: If one focuses on developing the compiler/runtime rather than
quantization, one could simply run

```bash
MLC_JIT_POLICY=REDO python main.py
```

to test if the compiler/runtime work smoothly together, where `main.py`
is:

```python
from mlc_chat import ChatConfig, ChatModule, callback
from mlc_chat.support import logging

logging.enable_logging()
MODEL = "HF://junrushao/Llama-2-7b-chat-hf-q4f16_1-MLC"

cm = ChatModule(
    MODEL,
    device="cuda",
    chat_config=ChatConfig(
        context_window_size=1024,
        prefill_chunk_size=1024,
    ),
)
cm.generate(
    "What is the meaning of life?",
    progress_callback=callback.StreamToStdout(callback_interval=2),
)
```

* Add support for loading weights from a safetensor file (#1523)

* Add support for loading weights from a safetensor file

* Set pylint to ignore the import error

* Move pylint-disable line

Co-authored-by: Junru Shao <junrushao1994@gmail.com>

---------

Co-authored-by: Junru Shao <junrushao1994@gmail.com>
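
For reference, a minimal standalone sketch of reading such a file with the `safetensors` package's numpy interface (the file path is illustrative):

```python
from safetensors.numpy import load_file  # pip install safetensors

weights = load_file("model.safetensors")  # dict of tensor name -> numpy array
for name, array in weights.items():
    print(name, array.shape, array.dtype)
```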

* Report upper-bound memory usage based on SLM model lib (#1525)

This PR introduces a command that reports the estimated upper-bound
memory usage based on the metadata section of an SLM-compiled model.

Example:

```bash
>> python -m mlc_chat.cli.model_metadata /path/to/model_lib.so --memory-only
[2023-12-31 18:40:43] INFO model_metadata.py:49: Parameter size: 3885.14 MB
[2023-12-31 18:40:43] INFO model_metadata.py:58: Temporary buffer size: 7184.15 MB
[2023-12-31 18:40:43] INFO model_metadata.py:71: KVCache size when context/sliding window size is 4096: 512.00 MB
[2023-12-31 18:40:43] INFO model_metadata.py:79: Total memory usage: 11581.29 MB
[2023-12-31 18:40:43] INFO model_metadata.py:84: Tweaking `prefill_chunk_size`, `context_window_size` and `sliding_window_size` to reduce memory usage
```

Addresses both B1 and B2 in https://github.com/mlc-ai/mlc-llm/pull/1516#issuecomment-1872878113.

Another demo using Python API:

```python
from mlc_chat import ChatConfig, ChatModule, callback
from mlc_chat.support import logging

logging.enable_logging()

MODEL="HF://junrushao/NeuralHermes-2.5-Mistral-7B-q4f16_1-MLC"

cm = ChatModule(
    MODEL,
    device="cuda",
    chat_config=ChatConfig(
        sliding_window_size=4096,
        prefill_chunk_size=1024,
        opt="O2",
    ),
)
cm.generate(
    "What is the meaning of life?",
    progress_callback=callback.StreamToStdout(callback_interval=2),
)
```

```bash
>>> MLC_JIT_POLICY=REDO python main.py
```

<img width="958" alt="image" src="https://github.com/mlc-ai/mlc-llm/assets/22515877/8fcf1fb2-53b3-4768-91b4-89f90712dea8">

* Refactor tensor parallel (#1509)

1. support n-dimensional tensor sharding
2. remove unnecessary `row`, `col` and `group` fields

* Enable FlashInfer in O2 mode (#1527)

This PR turns on FlashInfer in O2 mode given it has been relatively
stable over the past few weeks.

This commit also brings a few misc improvements:
- Pass in scratch memory managed by RelaxVM's memory pool - this change
  depends on TVM's [PR #16327](https://github.com/apache/tvm/pull/16327)
  and FlashInfer's [PR #43](https://github.com/flashinfer-ai/flashinfer/pull/43)
- Enable FlashInfer for group size = 4, which is a setting used in
  Mistral models;
- Slightly shorten and clarify the log message on memory usage on model
  lib loading.
- Integrate FlashInfer into GPT-BigCode models.

With this PR, FlashInfer is integrated into Mistral, Llama, GPT-NeoX,
GPT-BigCode, Phi. The only left out is GPT2, which has a special flag
`scale_attn_by_inverse_layer_idx` which applies an elementwise
normalization term `1.0 / layer_id` to attn scores before masked
softmax.

* Auto updated submodule references

* [SLM] Enable FasterTransformer quantization (#1480)

This PR enables the FasterTransformer quantization of `q4f16_ft`.

* Support TinyLlama in SLM flow (#1531)

This PR includes two minor fixes to support TinyLlama:

- Fix BF16 loading via SafeTensor - it was broken because numpy does not
  support bf16, which leads to an exception in safetensor internally.
- FlashInfer doesn't support `head_dim == 64`, which we skipped in this
  PR.

After this PR, the following snippet runs TinyLlama pretty conveniently:

```python
from mlc_chat import ChatConfig, ChatModule, callback
from mlc_chat.support import logging

logging.enable_logging()

MODEL = "HF://junrushao/TinyLlama-1.1B-Chat-v1.0-q4f16_1-MLC"

def main():
    cm = ChatModule(
        MODEL,
        device="metal",
        chat_config=ChatConfig(context_window_size=1024),
    )
    cm.generate(
        "What is the meaning of life?",
        progress_callback=callback.StreamToStdout(callback_interval=2),
    )

if __name__ == "__main__":
    main()
```

* [SLM] Mistral Multi-GPU support (#1532)

```
MODEL = "HF://junrushao/Mistral-7B-Instruct-v0.2-q4f16_1-MLC"
TP_SHARDS = 2

from mlc_chat import ChatConfig, ChatModule, callback
from mlc_chat.support import logging

logging.enable_logging()

cm = ChatModule(
    MODEL,
    device="cuda",
    chat_config=ChatConfig(
        context_window_size=1024,
        prefill_chunk_size=1024,
        …
```