
v1.7.0

@Jintao-Huang Jintao-Huang released this 09 Mar 07:54
· 396 commits to main since this release

New Features:

  1. Added support for `swift export`, enabling AWQ int4 quantization and GPTQ int2/3/4/8 quantization. Quantized models can be pushed to the ModelScope Hub. You can view the documentation here.
  2. Enabled fine-tuning of AWQ-quantized models.
  3. Enabled fine-tuning of AQLM-quantized models.
  4. Added support for deploying LLMs with `infer_backend='pt'`.
  5. Added a web UI with task management and visualization of training loss, evaluation loss, etc. Inference is accelerated using vLLM.
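
A minimal command-line sketch of the export and deployment features above. The model name, hub ID, and flag spellings are illustrative assumptions, not verbatim from this release; consult the linked documentation for the exact options.

```shell
# Hypothetical sketch: quantize a model with swift export (AWQ int4)
# and push it to the ModelScope Hub. Flag names are assumptions based
# on the feature list above.
swift export \
    --model_type qwen1half-7b-chat \
    --quant_method awq \
    --quant_bits 4 \
    --push_to_hub true \
    --hub_model_id my-org/qwen1half-7b-chat-awq-int4

# Deploy the same model using the PyTorch inference backend.
swift deploy \
    --model_type qwen1half-7b-chat \
    --infer_backend pt
```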

New Tuners:

  1. LoRA+.
  2. LLaMA-Pro.

New Models:

  1. qwen1.5 awq series.
  2. gemma series.
  3. yi-9b.
  4. deepseek-math series.
  5. internlm2-1_8b series.
  6. openbuddy-mixtral-moe-7b-chat.
  7. llama2 aqlm series.

New Datasets:

  1. ms-bench-mini.
  2. hh-rlhf-cn series.
  3. disc-law-sft-zh, disc-med-sft-zh.
  4. pileval.

What's Changed

Full Changelog: v1.6.0...v1.7.0