
Metrics for sequence tagging tasks dependent on the max_seq_length parameter #90

Open
codedecde opened this issue Jun 19, 2022 · 1 comment


codedecde commented Jun 19, 2022

Hi all,
It seems that for sequence tagging tasks like WikiANN, the metrics are computed on truncated sequences (up to max_seq_length). A consequence is that, for the exact same model, the metrics change with max_seq_len in a way that may not be indicative of model quality (e.g., changing max_seq_len to 256 for the same model can yield different results).

One potential fix would be for test evaluation to always use the maximum sequence length supported by the model (e.g., 512 for mBERT / XLM-RoBERTa); for documents longer than that, predictions for all remaining tokens could be treated as "O" (or a windowed prediction mechanism could be used, but that might be too involved). A sketch of the padding idea follows below.
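
For illustration, here is a minimal sketch of that padding approach, assuming seqeval-style label lists; the helper name pad_predictions and the toy data are hypothetical, not part of the repo:

from seqeval.metrics import f1_score


def pad_predictions(gold_labels, pred_labels, pad_label="O"):
    """Pad each predicted sequence with pad_label so that truncated
    tokens still count as (incorrect) predictions during evaluation."""
    padded = []
    for gold, pred in zip(gold_labels, pred_labels):
        pred = pred[: len(gold)]  # guard against over-length predictions
        padded.append(pred + [pad_label] * (len(gold) - len(pred)))
    return padded


# Toy example: the second sentence was truncated to 2 tokens at prediction time.
gold = [["B-PER", "I-PER", "O"], ["B-LOC", "O", "O", "B-ORG"]]
pred = [["B-PER", "I-PER", "O"], ["B-LOC", "O"]]

print(f1_score(gold, pad_predictions(gold, pred)))

This way the score is penalized for entities that fall beyond the truncation point, instead of silently ignoring them, so results stay comparable across max_seq_len settings.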

@stefan-it
Contributor

Hi @codedecde,

I hope this is not the case, because RemBERT has a maximum length of 256.

But as far as I can tell from the pre-processing code, longer sequences are split into multiple sentences:

if (subword_len_counter + current_subwords_len) > max_seq_len:
    # Overflow: the leading "\n" closes the current block, so the remaining
    # tokens are written as a new sentence and the subword counter is reset.
    fout.write(f"\n{token}\t{label}\n")
    fidx.write(f"\n{idx}\n")
    subword_len_counter = current_subwords_len
else:
    fout.write(f"{token}\t{label}\n")
    fidx.write(f"{idx}\n")
    subword_len_counter += current_subwords_len

(note the extra newline written before the token) 🤔
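
For context, a self-contained sketch of how that splitting behaves; the toy tokens and subword counts below stand in for real tokenizer output and are purely hypothetical:

max_seq_len = 5

# (token, label, number of subwords) triples standing in for tokenizer output.
tokens = [("John", "B-PER", 1), ("lives", "O", 1), ("in", "O", 1),
          ("New", "B-LOC", 2), ("York", "I-LOC", 2)]

subword_len_counter = 0
lines = []
for idx, (token, label, current_subwords_len) in enumerate(tokens):
    if (subword_len_counter + current_subwords_len) > max_seq_len:
        # Blank line starts a new sentence block in the CoNLL-style file.
        lines.append(f"\n{token}\t{label}\n")
        subword_len_counter = current_subwords_len
    else:
        lines.append(f"{token}\t{label}\n")
        subword_len_counter += current_subwords_len

print("".join(lines))
# "John", "lives", "in", "New" fit in the first block (1+1+1+2 = 5 subwords);
# "York" would overflow, so it is written as the start of a second block.

So, if this splitting is applied consistently, long sentences should be evaluated in pieces rather than silently truncated.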
