Reformer for Squad and other QA #47

Closed

hugokitano opened this issue Feb 28, 2020 · 4 comments

Comments

@hugokitano

Hi, thanks for this codebase. I'm looking to adapt Reformer for SQuAD, and I'm not sure exactly how. Do you think it will look something like this, from the pytorch-transformers github? https://github.com/huggingface/transformers/blob/master/examples/run_squad.py

I'm having trouble seeing how to pass two inputs (context and query) into the model's forward() function instead of one. Thanks!

@lucidrains
Owner

@hugokitano Hi! I am mainly working with non-language use cases, but for SQuAD, I think you should be able to encode the context and query as a single sequence, separated by a delimiter token, and pass it all into the Reformer.
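A minimal sketch of that suggestion, assuming the ReformerLM interface shown in this repo's README (num_tokens / dim / depth / max_seq_len / return_embeddings); the vocabulary size, SEP_ID, and the span-prediction head below are illustrative placeholders, not part of the repo:

```python
import torch
import torch.nn as nn
from reformer_pytorch import ReformerLM

MAX_SEQ_LEN = 4096
SEP_ID = 102                  # placeholder delimiter id between query and context

encoder = ReformerLM(
    num_tokens = 30522,       # vocab size of whatever tokenizer you use
    dim = 512,
    depth = 6,
    max_seq_len = MAX_SEQ_LEN,
    causal = False,           # bidirectional, BERT-style encoding
    return_embeddings = True  # per-token embeddings instead of vocab logits
)

# SQuAD-style extractive head: start/end logits over the concatenated sequence
qa_head = nn.Linear(512, 2)

def forward_squad(query_ids, context_ids):
    # one input sequence: [query] [SEP] [context], padded/truncated to MAX_SEQ_LEN
    ids = query_ids + [SEP_ID] + context_ids
    ids = ids[:MAX_SEQ_LEN] + [0] * (MAX_SEQ_LEN - len(ids))
    x = torch.tensor([ids])                          # (1, MAX_SEQ_LEN)
    hidden = encoder(x)                              # (1, MAX_SEQ_LEN, dim)
    start_logits, end_logits = qa_head(hidden).unbind(dim = -1)
    return start_logits, end_logits                  # each (1, MAX_SEQ_LEN)
```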

@hugokitano
Author

Interesting, kind of like the "full Reformer sequence → sequence" example in the README? And that means I will have to pass the outputs of the Reformer through a decoder to get the final answer?

@lucidrains
Owner

@hugokitano I think you can frame it many ways, and with enough parameters it probably won't matter. T5 treated almost all language tasks as sequence-to-sequence problems and got SOTA largely through sheer scale. Otherwise, people have previously gotten QA to work with a single stack of attention, BERT-style.
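For reference, a rough sketch of the sequence-to-sequence framing, following the encoder/decoder example in the README, where the decoder attends over the encoder output passed via the keys argument; the dimensions, vocabulary size, and dummy token ids are placeholders:

```python
import torch
from reformer_pytorch import ReformerLM

encoder = ReformerLM(num_tokens = 30522, dim = 512, depth = 6,
                     max_seq_len = 4096, return_embeddings = True)
decoder = ReformerLM(num_tokens = 30522, dim = 512, depth = 6,
                     max_seq_len = 128, causal = True)

src = torch.randint(0, 30522, (1, 4096))   # [query] [SEP] [context] token ids
tgt = torch.randint(0, 30522, (1, 128))    # answer token ids (teacher forcing)

enc_out = encoder(src)                     # (1, 4096, 512)
logits = decoder(tgt, keys = enc_out)      # (1, 128, 30522); train with cross-entropy
```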

@lucidrains
Owner

Closing due to inactivity.
