
Conversation


@tastelikefeet tastelikefeet commented Sep 5, 2023

New algorithms:

  1. Add bnb 4-bit & 8-bit and AutoGPTQ LoRA support
  2. Add LoRA support for torch.nn.Embedding
  3. Add side tuner
  4. Add ResTuner-bypass
  5. Fix some bugs
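The LoRA-for-Embedding item above can be illustrated with a minimal, framework-free sketch (this is only the underlying math, not the swift implementation; `lora_embedding_lookup` and all names here are illustrative). The frozen table `E` (vocab x dim) is augmented by a low-rank delta `A @ B`, where `A` is (vocab x r) and `B` is (r x dim), so a lookup of token `idx` returns `E[idx] + scaling * (A[idx] @ B)` and only `A` and `B` receive gradients:

```python
def lora_embedding_lookup(E, A, B, scaling, idx):
    """Return the LoRA-adapted embedding row for token `idx`.

    E: frozen embedding table, list of rows (vocab x dim)
    A: low-rank factor, list of rows (vocab x r)
    B: low-rank factor, list of rows (r x dim)
    scaling: LoRA scaling constant (alpha / r in the usual convention)
    """
    r = len(B)       # LoRA rank
    dim = len(E[0])  # embedding dimension
    base = E[idx]
    # delta = A[idx] @ B, computed element by element
    delta = [sum(A[idx][k] * B[k][j] for k in range(r)) for j in range(dim)]
    return [base[j] + scaling * delta[j] for j in range(dim)]
```

With one factor zero-initialized (the usual LoRA convention), the delta vanishes and the lookup is exactly the frozen embedding at the start of training.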

New features:

  1. llm_sft supports cross-validation with model.generate
  2. llm_sft supports perf recording
  3. All tuners support activation and deactivation
  4. Add more unit tests
  5. Fix some bugs
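The activate/deactivate behavior the tuners gain can be sketched as follows (a toy illustration, assuming an additive tuner; `ToyAdapter` and its methods are hypothetical names, not the swift API): when deactivated, the layer's forward must reduce to the frozen base computation.

```python
class ToyAdapter:
    """Additive tuner with an on/off switch, mimicking activate/deactivate."""

    def __init__(self, delta):
        self.delta = delta   # the tuner's additive contribution
        self.active = True   # adapters start activated

    def activate(self):
        self.active = True

    def deactivate(self):
        self.active = False

    def forward(self, base_out):
        # Add the tuner's contribution only while active; otherwise
        # pass the frozen base output through unchanged.
        if self.active:
            return [b + d for b, d in zip(base_out, self.delta)]
        return list(base_out)
```

The key invariant is that toggling the adapter never mutates the base weights, so deactivating and reactivating round-trips to the same output.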

* feat/replace_lora:
  add tests
  add restuner
…lace_lora

* commit 'f925f4297268bbc6a14a12157cbb23a06a225cfb':
  Add internlm (modelscope#59)
  Add bloom (modelscope#55)
  Add baichuan2 (modelscope#40)
  add feat: only save model (modelscope#49)
  Add openbuddy llama2 (modelscope#47)
  fix ddp bug (modelscope#45)
  fix template bug2 (modelscope#44)
* feat/replace_lora:
  Add internlm (modelscope#59)
  Add bloom (modelscope#55)
  Add baichuan2 (modelscope#40)
  add feat: only save model (modelscope#49)
  Add openbuddy llama2 (modelscope#47)
  fix ddp bug (modelscope#45)
  fix template bug2 (modelscope#44)

# Conflicts:
#	examples/pytorch/llm/src/llm_sft.py
#	examples/pytorch/llm/src/utils/preprocess.py
* feat/replace_lora:
  support activate/deactivate adapter

# Conflicts:
#	swift/tuners/adapter.py
#	swift/tuners/lora.py
#	swift/tuners/prompt.py
@wenmengzhou wenmengzhou merged commit ca955ba into modelscope:main Sep 15, 2023
hjh0119 pushed a commit to hjh0119/swift that referenced this pull request Jul 22, 2024