🐛 Bug Report
When trying to use chat templates in input_text (like "[INST] hello. [/INST]"), I get the following error:
rich.errors.MarkupError: closing tag '[/INST]' at position 17 doesn't match any open tag
Traceback:
Traceback (most recent call last):
  File "inseq_test.py", line 31, in <module>
    out = model.attribute(
          ^^^^^^^^^^^^^^^^
  File "lib/python3.11/site-packages/inseq/models/attribution_model.py", line 445, in attribute
    attribution_outputs = attribution_method.prepare_and_attribute(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "lib/python3.11/site-packages/inseq/attr/attribution_decorators.py", line 71, in batched_wrapper
    out = f(self, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "lib/python3.11/site-packages/inseq/attr/feat/feature_attribution.py", line 237, in prepare_and_attribute
    attribution_output = self.attribute(
                         ^^^^^^^^^^^^^^^
  File "lib/python3.11/site-packages/inseq/attr/feat/feature_attribution.py", line 414, in attribute
    pbar = get_progress_bar(
           ^^^^^^^^^^^^^^^^^
  ...
  File "lib/python3.11/site-packages/rich/progress.py", line 528, in __call__
    renderable = self.render(task)
                 ^^^^^^^^^^^^^^^^^
  File "lib/python3.11/site-packages/rich/progress.py", line 626, in render
    text = Text.from_markup(_text, style=self.style, justify=self.justify)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "lib/python3.11/site-packages/rich/markup.py", line 167, in render
    raise MarkupError(
rich.errors.MarkupError: closing tag '[/INST]' at position 17 doesn't match any open tag
🔬 How To Reproduce
Steps to reproduce the behavior:
Run the following code.
Code sample
import inseq
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig

model_name = "meta-llama/Llama-2-7b-chat-hf"
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=False)
model = inseq.load_model(model_4bit, "attention", tokenizer=model_name, max_new_tokens=2)

input_text1 = "[INST] hello."
input_text2 = "[INST] hello. [/INST]"  # [/INST] causes the error

for text in [input_text1, input_text2]:
    out = model.attribute(
        input_texts=text,
        method="attention",
        generation_args={"max_new_tokens": 3},
    ).show()
Environment
OS: Linux
Python version: Python 3.11.8
Inseq version: 0.5.0
Expected behavior
The attributions should be shown without raising an error.
Additional context
The chat template is essential: without it, the model completes the text instead of answering the prompt.
It can be applied using the tokenizer, for example as follows (a minimal sketch using the standard transformers apply_chat_template API; the message content here is just a placeholder):
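from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
# A single-turn conversation; the message content is illustrative.
messages = [{"role": "user", "content": "hello."}]
# Returns the prompt wrapped in the model's chat markup,
# e.g. "<s>[INST] hello. [/INST]" for Llama-2 chat models.
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)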
Hey @mohsenfayyaz, thanks for reporting this, good catch!
It appears the [...] [/...] format often used in chat templates clashes with the rich console markup, which can lead to such errors if left unchecked. I added sanitization in #256, and the toy example is now working for me. Can you confirm it works on your side too when checking out the fix-rich-markup branch?
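For reference, here is a minimal sketch of the clash and of the kind of escaping involved, using rich's public markup.escape helper (the actual patch in #256 may differ in detail):

from rich.markup import escape
from rich.text import Text

raw = "hello. [/INST]"
# Text.from_markup(raw) raises rich.errors.MarkupError, because the
# closing tag '[/INST]' has no matching open tag in the markup.
safe = Text.from_markup(escape(raw))  # "[" is backslash-escaped, so it renders literally
print(safe)  # prints: hello. [/INST]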