
[Fix] Use token_id instead of token for encode_fn & Set eval mode before generate#107

Merged
pppppM merged 2 commits into InternLM:main from LZHgrla:lzh/fix_bugs on Sep 7, 2023

Conversation

LZHgrla (Contributor) commented Sep 7, 2023

Issue #105

@pppppM pppppM requested a review from HIT-cwh September 7, 2023 06:12
@pppppM pppppM merged commit abd9de1 into InternLM:main Sep 7, 2023
@LZHgrla LZHgrla linked an issue Sep 11, 2023 that may be closed by this pull request
llkn-2 pushed a commit to llkn-2/xtuner that referenced this pull request Jul 31, 2024
[Fix] Use token_id instead of token for encode_fn & Set eval mode before generate (InternLM#107)

* set eval mode before generate

* use token_id instead of token
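The second commit's rationale can be sketched with a toy model (a hypothetical `ToyDropoutModel`, not xtuner's actual API): if the model is left in train mode, dropout randomly zeroes activations during `generate`, so outputs are nondeterministic and degraded. Calling `eval()` first disables dropout.

```python
import random

class ToyDropoutModel:
    """Minimal stand-in with train/eval modes, mimicking torch's nn.Module.
    Hypothetical illustration only -- not xtuner's model class."""
    def __init__(self, p=0.5):
        self.p = p
        self.training = True  # modules start in train mode by default

    def eval(self):
        self.training = False
        return self

    def forward(self, xs):
        if self.training:
            # train mode: dropout randomly zeroes activations
            return [0.0 if random.random() < self.p else x for x in xs]
        # eval mode: dropout is a no-op, so output is deterministic
        return list(xs)

model = ToyDropoutModel()
model.eval()  # the fix: switch to eval mode before calling generate()
out1 = model.forward([1.0, 2.0, 3.0])
out2 = model.forward([1.0, 2.0, 3.0])
# out1 == out2 == [1.0, 2.0, 3.0]
```

With a real HuggingFace model the same fix is simply `model.eval()` before `model.generate(...)`.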


Development

Successfully merging this pull request may close these issues.

[Bug] ChatGLM2 tokenizer dismatch on eos_token_id and eos_token
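The mismatch behind this issue can be illustrated with a toy tokenizer (hypothetical names, not ChatGLM2's real vocabulary): when the eos *string* is re-encoded as ordinary text, the resulting ids do not contain the special eos *id*, which is why encode_fn should append `eos_token_id` directly.

```python
class ToyTokenizer:
    """Hypothetical tokenizer where encoding the eos *string* does not
    produce the special eos *id* -- a sketch of the mismatch, not
    ChatGLM2's actual tokenizer."""
    def __init__(self):
        self.eos_token = "</s>"
        self.eos_token_id = 2  # special id, never emitted by encode()
        self.vocab = {"<": 10, "/": 11, "s": 12, ">": 13, "h": 20, "i": 21}

    def encode(self, text):
        # Naive character-level encoding; special tokens are not
        # recognized, so "</s>" is split into ordinary characters.
        return [self.vocab[ch] for ch in text if ch in self.vocab]

tok = ToyTokenizer()
# Buggy encode_fn: append eos by re-encoding the string
buggy = tok.encode("hi") + tok.encode(tok.eos_token)  # [20, 21, 10, 11, 12, 13]
# Fixed encode_fn: append the integer eos_token_id directly
fixed = tok.encode("hi") + [tok.eos_token_id]         # [20, 21, 2]
```

The buggy sequence never contains id 2, so generation would not stop at eos; the fixed sequence ends with the true eos id.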

2 participants