[Feature] Support ChatGLM3-6B-Base#177

Merged
pppppM merged 4 commits into InternLM:main from LZHgrla:lzh/chatglm3
Oct 27, 2023
Conversation

Contributor

@LZHgrla LZHgrla commented Oct 27, 2023

Because of the injection-attack prevention in the ChatGLM3 tokenizer, the default prompt_template is currently used to fine-tune ChatGLM3-6B-Base.

TODO (in other PRs, future)

  • ChatGLM3 prompt_template fine-tuning
  • Support ChatGLM3-6B (a chat model)

Quick Start

  1. Fine-tune
     xtuner train ${CONFIG}
  2. Convert
     xtuner convert pth_to_hf ${CONFIG} ${PTH} ${SAVE_PATH}
  3. Chat
     xtuner chat THUDM/chatglm3-6b-base --adapter ${ADAPTER_PATH} --prompt-template default --system-template ${SYSTEM_TEMPLATE}
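The three steps above can be sketched as one script. The config name and paths below are assumptions for illustration (the PR's "add chatglm3_6b_base cfgs" commit adds configs, but their exact names are not shown here); the script prints the commands rather than executing them, since a real run needs xtuner installed and a GPU:

```shell
#!/bin/sh
# Hypothetical end-to-end workflow; CONFIG, PTH, and SAVE_PATH are
# placeholder assumptions, not values taken from this PR.
CONFIG=chatglm3_6b_base_qlora_alpaca_e3     # assumed xtuner config name
PTH=./work_dirs/${CONFIG}/epoch_3.pth       # checkpoint written by training
SAVE_PATH=./hf_adapter                      # output dir for the HF-format adapter

# Dry run: echo each command instead of executing it.
echo "xtuner train ${CONFIG}"
echo "xtuner convert pth_to_hf ${CONFIG} ${PTH} ${SAVE_PATH}"
echo "xtuner chat THUDM/chatglm3-6b-base --adapter ${SAVE_PATH} --prompt-template default"
```

Note that chatting against the fine-tuned base model passes `--prompt-template default`, matching the workaround described above for the ChatGLM3 tokenizer.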

@LZHgrla LZHgrla marked this pull request as draft October 27, 2023 08:57
@LZHgrla LZHgrla marked this pull request as ready for review October 27, 2023 09:38
@pppppM pppppM merged commit dfa8d2c into InternLM:main Oct 27, 2023
llkn-2 pushed a commit to llkn-2/xtuner that referenced this pull request Jul 31, 2024
* modify chatglm template name

* rename

* add chatglm3_6b_base cfgs

* update README
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
