Chunking for the 384 words limit #82
Hi, I think that gliner-spacy (https://github.com/theirstory/gliner-spacy?ref=bramadams.dev) integrates a chunking function.
Hi all. Yes, Gliner spaCy handles the chunking for you. I kept it as an argument so that as the GliNER model improves (and can handle larger inputs), the package won't need to be updated.
Thank you
On that note, is it possible to use GLiNER SpaCy's chunking for finetuning GLiNER? Specifically, the
I believe there are a few of us working on gliner finetuning packages. I have one that's not ready yet, but I believe @urchade has made progress and has a few notebooks in this repository to get you started. In all these cases, you could use gliner spacy to help with the annotation process in something like Prodigy, from ExplosionAI. It's primarily what I use for annotating textual data because it works so easily with spaCy. You would then need to modify the output to align with the gliner finetuning approach. This is actually exactly what we did for the Placing the Holocaust project. You can see our GliNER finetuned model here: https://huggingface.co/placingholocaust/gliner_small-v2.1-holocaust
What is the best way to chunk longer texts so each chunk fits under the 384-word (or 512-subtoken) limit? My articles average around 1200 tokens (roughly 5000 characters).
Thank you
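For anyone looking for a do-it-yourself approach, here is a minimal sketch of one way to chunk a long text under a word budget: split on sentence boundaries and greedily pack sentences into chunks of at most 384 words, hard-splitting any single sentence that exceeds the budget. This is plain stdlib Python, not part of GLiNER or gliner-spacy (which does its own chunking internally); a real pipeline would likely use spaCy's sentencizer and count subtokens with the model's tokenizer rather than whitespace words.

```python
import re

def chunk_text(text: str, max_words: int = 384) -> list[str]:
    """Greedily pack sentences into chunks of at most max_words words.

    Sentence splitting here is a naive regex on ., !, ? followed by
    whitespace; a sentence longer than max_words is hard-split on
    word boundaries so no chunk ever exceeds the budget.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks: list[str] = []
    current: list[str] = []  # words accumulated for the chunk in progress
    count = 0

    for sent in sentences:
        words = sent.split()
        if not words:
            continue
        if len(words) > max_words:
            # Flush the chunk in progress, then hard-split the long sentence.
            if current:
                chunks.append(" ".join(current))
                current, count = [], 0
            for i in range(0, len(words), max_words):
                chunks.append(" ".join(words[i:i + max_words]))
            continue
        if count + len(words) > max_words and current:
            # Adding this sentence would overflow: start a new chunk.
            chunks.append(" ".join(current))
            current, count = [], 0
        current.extend(words)
        count += len(words)

    if current:
        chunks.append(" ".join(current))
    return chunks
```

A ~1200-token article would come out as roughly 3-4 chunks at this budget; you then run the model on each chunk and merge the entity spans (offsetting them by each chunk's start position in the original text).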