
Error: invalid input 'encoder_attention_mask' #23

Closed · spencekim opened this issue Mar 18, 2023 · 8 comments

@spencekim

I'm trying to run a slightly larger model (https://huggingface.co/facebook/bart-large-cnn). It's a BART model, and I converted it to .onnx successfully using your script; the size comes out to ~1 GB.

I get this error when running the summarization and text2text generation pipelines:

Error: invalid input 'encoder_attention_mask'
    at eval (ort-web.min.js?d8cf:6:446899)
    at Array.forEach (<anonymous>)
    at e.OnnxruntimeWebAssemblySessionHandler.run (ort-web.min.js?d8cf:6:446819)
    at InferenceSession.run (inference-session-impl.js?f23d:91:1)
    at sessionRun (models.js?a626:34:1)
    at seq2seq_forward (models.js?a626:111:1)
    at async Function.forward (models.js?a626:971:1)
    at async seq2seqRunBeam (models.js?a626:168:1)
    at async Function.runBeam (models.js?a626:964:1)
    at async Function.generate (models.js?a626:562:1)
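
For context, the call that triggers this path presumably looks something like the following (a sketch only; the exact call site isn't shown above, and the package name and pipeline API are assumptions based on the project's published usage):

    // Hypothetical reproduction - run the summarization pipeline on the
    // converted BART model. The model ID and input text are illustrative.
    import { pipeline } from '@xenova/transformers';

    const summarizer = await pipeline('summarization', 'facebook/bart-large-cnn');
    const output = await summarizer('Some long article text ...');
    console.log(output); // fails in sessionRun with the error above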
@xenova (Owner) commented Mar 18, 2023

Could you provide the command you used to convert it? Which task did you use? It may be because you are not exporting with -with-past (but that's just a guess).

The command I would use is:

    python ./scripts/convert.py --model_id facebook/bart-large-cnn --from_hub --quantize --task seq2seq-lm-with-past

@spencekim (Author)

Yep, that's the exact command I used. If you'd like, I could upload the files to your Hugging Face repo (https://huggingface.co/Xenova/transformers.js/tree/main) so you can make sure they look OK.

@xenova (Owner) commented Mar 18, 2023

No worries - I'll convert it on my side; it's most likely something I'd need to fix anyway. The strange thing is that other models of the same type (i.e., "bart") do need the encoder attention mask, whereas the one you're showing doesn't.

@xenova (Owner) commented Mar 18, 2023

Okay, yep - I was able to reproduce it on my side. As I said above, it's very strange that some BART models need it while others don't... I'll make it work either way :)

EDIT: Okay, I got it working (basically, I just made it only add the mask if the decoder explicitly requires it):
[screenshot of the code change]
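
In spirit, the change amounts to something like the following (a minimal sketch, not the actual commit; the variable names are illustrative, and it assumes the session exposes its expected input names via inputNames, as onnxruntime-web's InferenceSession does):

    // Only pass encoder_attention_mask when the decoder session actually
    // declares it as an input; otherwise ORT rejects it as an invalid input.
    const feeds = {
        input_ids: decoderInputIds,            // illustrative variable names
        encoder_hidden_states: encoderOutputs,
    };
    if (session.inputNames.includes('encoder_attention_mask')) {
        feeds.encoder_attention_mask = attentionMask;
    }
    const result = await session.run(feeds);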

However, the output I got was a little strange:
[{"summary_text":"TheTower is 324 metres ( \"). It is the tallest in The Eiffel Tower is the tallest. The tower is the second tallest free-standing structure in France."}]

I assume it has a different type of tokenization that isn't yet supported. I'm off to bed now, but I'll look into it tomorrow. Once that's fixed, I'll push the changes :)

@xenova (Owner) commented Mar 18, 2023

Got it working :)
[{"summary_text":"The Eiffel Tower is the tallest structure in Paris. It is 324 metres (1,063 ft) tall, about the same height as an 81-storey building."}]

Will push changes soon.

xenova closed this as completed in 5025621 on Mar 18, 2023
@xenova (Owner) commented Mar 18, 2023

Looks like the commit automatically closed the issue. Whoops.

Can you confirm the changes work (I think you can install an npm package directly from GitHub)? If so, I'll push this in version 1.2.6.
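
For reference, installing a package straight from a GitHub repository is standard npm behavior, along these lines:

    # Install directly from the repo's default branch
    npm install xenova/transformers.js
    # or, equivalently, with an explicit git URL:
    npm install git+https://github.com/xenova/transformers.js.git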

@spencekim (Author)

Sorry for the late response. It works great!

Side note, any plans to set up TypeScript types and/or support onnxruntime-node? Both would be super useful. I'd be open to helping implement them as well.

@xenova (Owner) commented Mar 18, 2023

> Sorry for the late response. It works great!

Great!

> Side note, any plans to set up TypeScript types and/or support onnxruntime-node? Both would be super useful. I'd be open to helping implement them as well.

Absolutely! We currently have a couple of people working on the TypeScript side of things (#26 and #28).

For onnxruntime-node, I suppose it would be a question of how to support both web and node versions of onnxruntime without duplicating code. If you have any ideas, or know how to do that, feel free to open up a PR!
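
One common pattern for that (a sketch of one possible approach, not necessarily how this library will do it) is to pick the backend at load time based on the runtime environment, since onnxruntime-node and onnxruntime-web expose the same InferenceSession API:

    // Sketch: select the ONNX Runtime backend for the current environment,
    // so the rest of the code can stay backend-agnostic.
    let ort;
    if (typeof process !== 'undefined' && process.versions?.node) {
        ort = await import('onnxruntime-node'); // Node.js
    } else {
        ort = await import('onnxruntime-web'); // browser
    }
    const session = await ort.InferenceSession.create('model.onnx');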
