failed to load bart model: unexpected EOF #24
Yes, I launch the binary and it starts downloading the model. The config.json is present at models/vblagoje/bart_lfqa/config.json and "max_length" is 142. My VM has 8 GB, but maybe it needs more. {"level":"debug","time":"2023-06-28T10:22:39+02:00","message":"1.51 GiB of 1.51 GiB (99%) downloaded"} |
I increased the RAM to 16 GB and the spaGO serialization succeeded, but then I got this error message: {"level":"trace","parameter":"model.decoder.embed_positions.weight","time":"2023-06-28T19:11:39+02:00","message":"parameter not mapped"} panic: input sequence too long: 145 > 0 goroutine 1 [running]: If I launch it again, I get the same error message: [root@minix8 cybertron@v0.1.2]# ./main goroutine 1 [running]: |
Yes, that's a bug due to |
I'm using this model: ./models/vblagoje/bart_lfqa/config.json and config.json contains max_length=142 |
I know, but the code doesn't look at |
Ok, so what do I need to do to make this example work? Do you know which model to use for the Q&A example? |
I downloaded models/deepset/bert-base-cased-squad2/ and now the Q&A example works, but I don't understand the output. I assumed it would be able to answer my question based on the paragraph content. ./main
|
The |
Ok, so I don't understand whether this project is still maintained. What should I do to test the abstractive Q&A example over a text? :) |
Try: https://github.com/nlpodyssey/verbaflow. But I don't think you have enough memory.
Enough memory for spaGO serialization? Is it possible to do buffered serialization instead of loading everything into memory?
|
Sure, if you have time you can contribute the changes yourself; it's open source, after all. Work is also being done on a new serialization format: https://github.com/nlpodyssey/safetensors |
BART currently has no Question Answering support |
It would be very interesting to work on these projects, but first I have to understand how they work, and at the moment I still haven't figured out how to use the abstractive Q&A example... :( I remember that some years ago I was able to get the example working from Matteo's repository, but today that model is no longer available and I was told to go to https://github.com/nlpodyssey/cybertron. Unfortunately, the very example I was interested in investigating doesn't seem to work yet, and I don't understand why. Is it a cybertron bug?
|
Only BERT supports Q&A, so you will have to find (or train?) a model based on BERT. |
Some time ago, I did a test with the code at
https://github.com/matteo-grella/gophercon-eu-2021/blob/main/examples/question_answering/main.go
If you open it, you can find "github.com/nlpodyssey/spago/pkg/nlp/transformers/bert".
At that time it worked: I used the text of a PDF manual as input and asked some questions about its content.
Now this example does not work because Matteo created a new project, so BERT should exist in your repository.
|
Sorry guys, I've been a bit removed from the project due to other pressing priorities. I'll be back ASAP and provide support. |
Many thanks, Matteo. I'd like to use the features made available by the project, downloading some models from https://huggingface.co/openai-gpt, but right now I'm having some difficulties getting the examples to work.
Regards
|
Dear all, Q&A now works for me. Instead of using an external paragraph file as in the gopher example, I put the content into a variable: [root@minix8 cybertron@v0.1.2]# sh exa1.sh
|
Dear Matteo,
I tried to build the abstractive Q&A example, but I got this error:
...
...
{"level":"debug","time":"2023-06-27T19:58:00+02:00","message":"1.45 GiB of 1.51 GiB (95%) downloaded"}
{"level":"debug","time":"2023-06-27T19:58:03+02:00","message":"1.46 GiB of 1.51 GiB (96%) downloaded"}
{"level":"debug","time":"2023-06-27T19:58:06+02:00","message":"1.47 GiB of 1.51 GiB (97%) downloaded"}
{"level":"debug","time":"2023-06-27T19:58:09+02:00","message":"1.48 GiB of 1.51 GiB (97%) downloaded"}
{"level":"debug","time":"2023-06-27T19:58:12+02:00","message":"1.49 GiB of 1.51 GiB (98%) downloaded"}
{"level":"debug","time":"2023-06-27T19:58:15+02:00","message":"1.50 GiB of 1.51 GiB (99%) downloaded"}
{"level":"debug","time":"2023-06-27T19:58:18+02:00","message":"1.51 GiB of 1.51 GiB (99%) downloaded"}
{"level":"debug","time":"2023-06-27T19:58:18+02:00","message":"1.51 GiB (100%) downloaded"}
{"level":"debug","url":"https://huggingface.co/vblagoje/bart_lfqa/resolve/main/vocab.json","destination":"models/vblagoje/bart_lfqa/vocab.json","time":"2023-06-27T19:58:18+02:00","message":"downloading"}
{"level":"debug","time":"2023-06-27T19:58:19+02:00","message":"877.76 KiB (100%) downloaded"}
{"level":"debug","url":"https://huggingface.co/vblagoje/bart_lfqa/resolve/main/merges.txt","destination":"models/vblagoje/bart_lfqa/merges.txt","time":"2023-06-27T19:58:19+02:00","message":"downloading"}
{"level":"debug","time":"2023-06-27T19:58:20+02:00","message":"445.62 KiB (100%) downloaded"}
{"level":"trace","time":"2023-06-27T19:58:32+02:00","message":"Reporting possible conversion mapping anomalies"}
{"level":"trace","parameter":"model.decoder.layer_norm.bias","time":"2023-06-27T19:58:32+02:00","message":"parameter not initialized"}
{"level":"trace","parameter":"model.decoder.layer_norm.weight","time":"2023-06-27T19:58:32+02:00","message":"parameter not initialized"}
{"level":"trace","parameter":"classification_head.dense.bias","time":"2023-06-27T19:58:32+02:00","message":"parameter not initialized"}
{"level":"trace","parameter":"classification_head.out_proj.weight","time":"2023-06-27T19:58:32+02:00","message":"parameter not initialized"}
{"level":"trace","parameter":"classification_head.dense.weight","time":"2023-06-27T19:58:32+02:00","message":"parameter not initialized"}
{"level":"trace","parameter":"model.encoder.layer_norm.weight","time":"2023-06-27T19:58:32+02:00","message":"parameter not initialized"}
{"level":"trace","parameter":"classification_head.out_proj.bias","time":"2023-06-27T19:58:32+02:00","message":"parameter not initialized"}
{"level":"trace","parameter":"model.encoder.layer_norm.bias","time":"2023-06-27T19:58:32+02:00","message":"parameter not initialized"}
{"level":"trace","parameter":"model.encoder.embed_positions.weight","time":"2023-06-27T19:58:32+02:00","message":"parameter not mapped"}
{"level":"trace","parameter":"lm_head.weight","time":"2023-06-27T19:58:32+02:00","message":"parameter not mapped"}
{"level":"trace","parameter":"model.encoder.embed_tokens.weight","time":"2023-06-27T19:58:32+02:00","message":"parameter not mapped"}
{"level":"trace","parameter":"model.decoder.embed_tokens.weight","time":"2023-06-27T19:58:32+02:00","message":"parameter not mapped"}
{"level":"trace","parameter":"model.decoder.embed_positions.weight","time":"2023-06-27T19:58:32+02:00","message":"parameter not mapped"}
Serializing model to "models/vblagoje/bart_lfqa/spago_model.bin"... signal: killed
I tried again, and I got:
[root@minix8 cybertron@v0.1.2]# GOARCH=amd64 CYBERTRON_MODEL=Helsinki-NLP/opus-mt-en-it CYBERTRON_MODELS_DIR=models go run ./examples/abstractivequestionasnwering/main.go
{"level":"debug","file":"models/vblagoje/bart_lfqa/config.json","time":"2023-06-27T19:59:17+02:00","message":"model file already exists, skipping download"}
{"level":"debug","file":"models/vblagoje/bart_lfqa/pytorch_model.bin","time":"2023-06-27T19:59:17+02:00","message":"model file already exists, skipping download"}
{"level":"debug","file":"models/vblagoje/bart_lfqa/vocab.json","time":"2023-06-27T19:59:17+02:00","message":"model file already exists, skipping download"}
{"level":"debug","file":"models/vblagoje/bart_lfqa/merges.txt","time":"2023-06-27T19:59:17+02:00","message":"model file already exists, skipping download"}
{"level":"info","model":"models/vblagoje/bart_lfqa/spago_model.bin","time":"2023-06-27T19:59:17+02:00","message":"model file already exists, skipping conversion"}
{"level":"fatal","error":"failed to load bart model: unexpected EOF","time":"2023-06-27T19:59:17+02:00"}
exit status 1
Any ideas?
Thank you.