
Conversation

@Spycsh Spycsh commented Dec 20, 2023

Type of Change

bug fix

Description

A simple example reproduces this issue: the second output contains prompt-template text instead of a correct answer. I think we should clear the previous prompt template by default unless we know for sure that the model can handle multi-round conversation.

from intel_extension_for_transformers.neural_chat import build_chatbot, PipelineConfig
config = PipelineConfig()
chatbot = build_chatbot(config)
# first round works as expected
out = chatbot.predict("Tell me about Intel Xeon Scalable Processors.")
print(out)
# second round: the output contains prompt-template text instead of a real answer
out = chatbot.predict("Tell me about Intel")
print(out)

This happens because the new query is simply appended to the previous prompt template, and the default model cannot handle the resulting input.
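To illustrate the idea behind the fix, here is a minimal sketch (not the library's actual API; names such as supports_multi_round, history, and _build_prompt are illustrative): the stored conversation/template is reset before each new query unless the model is known to support multi-round prediction.

class SimpleChatbot:
    def __init__(self, supports_multi_round=False):
        self.supports_multi_round = supports_multi_round
        self.history = []  # previous (query, response) turns

    def predict(self, query):
        # clear the previous prompt template by default, as proposed in this PR
        if not self.supports_multi_round:
            self.history.clear()
        prompt = self._build_prompt(query)
        response = self._generate(prompt)
        self.history.append((query, response))
        return response

    def _build_prompt(self, query):
        turns = [f"User: {q}\nAssistant: {r}" for q, r in self.history]
        turns.append(f"User: {query}\nAssistant:")
        return "\n".join(turns)

    def _generate(self, prompt):
        # placeholder for the actual model call
        return "..."

Without the history.clear() call, the second query would be appended to the first round's prompt, which is what produced the template text in the output above.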

Expected Behavior & Potential Risk

Bug fix: the second-round prediction should return a normal answer instead of echoing the prompt template.

How has this PR been tested?

Verified with the reproduction example above.

Dependency Change?

None

@Spycsh Spycsh changed the title fix template output of multi-round predictiton fix wrong output of multi-round predictiton Dec 20, 2023
@Spycsh Spycsh changed the title fix wrong output of multi-round predictiton fix wrong output of multi-round prediction Dec 20, 2023
@Spycsh Spycsh changed the title fix wrong output of multi-round prediction [NeuralChat] fix wrong output of multi-round prediction Dec 20, 2023
@hshen14 hshen14 merged commit 6fe5a9f into main Dec 21, 2023
@hshen14 hshen14 deleted the spycsh/fix_rompt_template branch December 21, 2023 13:15