BlenderbotSmall incorrect usage of start and end tokens #22301

Closed
xenova opened this issue Mar 21, 2023 · 5 comments
xenova commented Mar 21, 2023

System Info

  • transformers version: 4.27.2
  • Platform: Windows-10-10.0.19041-SP0
  • Python version: 3.8.3
  • Huggingface_hub version: 0.12.0
  • PyTorch version (GPU?): 1.13.0+cu117 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: no
  • Using distributed or parallel set-up in script?: no

Who can help?

@ArthurZucker @younesbelkada @Narsil

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

As stated in the documentation (https://huggingface.co/docs/transformers/model_doc/blenderbot-small#transformers.BlenderbotSmallForConditionalGeneration.forward.example), the model should use </s> and <s> to separate the user input and the response:

from transformers import AutoTokenizer, BlenderbotSmallForConditionalGeneration

mname = "facebook/blenderbot_small-90M"
model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)
UTTERANCE = "My friends are cool but they eat too many carbs."
print("Human: ", UTTERANCE)

inputs = tokenizer([UTTERANCE], return_tensors="pt")
reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])

REPLY = "I'm not sure"
print("Human: ", REPLY)

NEXT_UTTERANCE = (
    "My friends are cool but they eat too many carbs.</s> <s>what kind of carbs do they eat? "
    "i don't know much about carbs</s> "
    "<s> I'm not sure."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
next_reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])

However, these tokens are not present in the vocabulary or in the list of special tokens.

I assume they should be replaced with __start__ and __end__?
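
For reference, a quick way to check this (a minimal sketch; the printed values are what I would expect given the tokenizer config, not verified output):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")

print(tokenizer.special_tokens_map)          # reports the bos/eos tokens the tokenizer actually uses
print("<s>" in tokenizer.get_vocab())        # expected: False
print("__start__" in tokenizer.get_vocab())  # expected: True
print(tokenizer.encode("<s>"))               # expected: split into several sub-token ids
print(tokenizer.encode("__start__"))         # expected: a single id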


I have also tried using the ConversationalPipeline, following the steps outlined here, but I always get nonsensical results.

Even when trying the hosted inference API for the model (https://huggingface.co/facebook/blenderbot_small-90M), it either repeats itself or doesn't follow the conversation.

Expected behavior

The tokens should be correct, and the chatbot should engage in more realistic conversation.

@ArthurZucker
Collaborator

Hey! Thanks for reporting, will investigate!

@ArthurZucker
Collaborator

Hey! When I use the Conversational pipeline I get the same outputs as you.
Regarding the content of the special tokens, it does not really matter as long as the mapping is correct: if the model's bos_id is 1, then as long as <s> maps to 1 the generation will make sense.
And indeed we have:

In [35]: tokenizer.encode("<s>")
Out[35]: [3, 330, 1360]

In [36]: tokenizer.encode("__start__")
Out[36]: [1]

Either the doc example or the tokenizer should be updated.
Nice catch (however, this does not seem to really change the output for this example).
Also, I am not entirely sure how these eos and bos tokens should be used in the context of BlenderBot. They should mark the start and end of a conversation when training the model on different conversations, while \n is used to separate the different prompts (i.e. from the user and the bot).
I could not find anything online, gonna take a while to check with the messy original codebase.
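
For what it's worth, here is a rough sketch of that convention (an assumption based on the description above, not a verified BlenderbotSmall input format), with the turns separated by "\n" instead of special tokens:

from transformers import AutoTokenizer, BlenderbotSmallForConditionalGeneration

# Hypothetical input format following the description above: turns separated by "\n".
# This is an assumption, not a documented convention for BlenderbotSmall.
mname = "facebook/blenderbot_small-90M"
model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)

NEXT_UTTERANCE = (
    "My friends are cool but they eat too many carbs.\n"
    "what kind of carbs do they eat? i don't know much about carbs\n"
    "I'm not sure."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
next_reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])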

ArthurZucker self-assigned this on Apr 11, 2023
xenova commented May 6, 2023

Just bumping this again (in response to being marked as stale)

huggingface deleted a comment from the github-actions bot on May 26, 2023
@ArthurZucker
Collaborator

When I checked the original PR that added BlenderBot (could not really find anything on the original repo ...), it seems like the doc example should be updated to use __end__ and __start__. See #4803.
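
For illustration, the second example from the docs would then read roughly as follows (a sketch that simply substitutes __start__/__end__ for <s>/</s>; not necessarily the exact wording of the eventual fix):

from transformers import AutoTokenizer, BlenderbotSmallForConditionalGeneration

# Sketch: the doc example with __start__ / __end__ substituted for <s> / </s>.
mname = "facebook/blenderbot_small-90M"
model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)

NEXT_UTTERANCE = (
    "My friends are cool but they eat too many carbs.__end__ __start__what kind of carbs do they eat? "
    "i don't know much about carbs__end__ "
    "__start__ I'm not sure."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
next_reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])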

@ArthurZucker
Collaborator

Closed in #24092
