LEX inference support and checkpoint #45
What's your extrapolation setting? Is it identical to the one in our paper? Maybe you can first try window attention, which is much easier to implement, to see the performance.
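For reference, the window attention suggested above amounts to restricting the causal mask so each query only sees the most recent `window` keys. A minimal sketch (function name and shape conventions are my own, not from the repository):

```python
import torch

def sliding_window_causal_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask of shape (seq_len, seq_len); True where attention is allowed.

    Query position i may attend to key position j only when
    i - window < j <= i, i.e. causal attention over a local window.
    """
    idx = torch.arange(seq_len)
    rel = idx[None, :] - idx[:, None]  # rel[i, j] = j - i
    return (rel <= 0) & (rel > -window)

# e.g. with window=3, position 5 sees positions 3, 4, 5 only
mask = sliding_window_causal_mask(seq_len=6, window=3)
```

Such a mask can be passed (after converting disallowed entries to `-inf`) into any standard attention implementation.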
If you use it in the LongEval setting, I don't think it will work for retrieving very long topics. Local techniques only maintain local modeling, where ppl is more stable.
Thanks for the reply @sunyt32! I was actually using the rotary embedding as implemented in the LLaMA HF code. I only implemented BCA to help it extrapolate to a longer context. I did some very simple tests for debugging: for example, I set the window size to be … Thanks a lot for your time!
I see, the reason here is similar: window attention doesn't actually give the model the ability to handle longer context. However, using BCA or window attention should not cause gibberish; a reasonable generation should at least be coherent. I have to admit that long-context evaluation is much more reasonable nowadays... it's a mistake to concentrate only on ppl. Let's set aside window-attention styles; NTK extrapolation is a good technique for these tasks. But xPos still has its value: our experiments show that xPos+NTK performs more stably than RoPE, in both ppl and retrieval.
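The NTK extrapolation mentioned above is usually applied by enlarging the RoPE base so that high-frequency dimensions stay close to their trained behaviour while low-frequency dimensions stretch to cover the longer context. A hedged sketch of the commonly used base-scaling formula (the function name and exact exponent convention are assumptions, not taken from this repository):

```python
import torch

def ntk_rope_inv_freq(dim: int, base: float = 10000.0, scale: float = 1.0) -> torch.Tensor:
    """Inverse RoPE frequencies with NTK-aware base scaling.

    `scale` is roughly target_len / trained_len. The base is enlarged as
    base * scale^(dim / (dim - 2)), a widely used NTK-aware variant, so that
    low frequencies are stretched while high frequencies barely change.
    """
    ntk_base = base * scale ** (dim / (dim - 2))
    return 1.0 / (ntk_base ** (torch.arange(0, dim, 2).float() / dim))

# scale=1.0 recovers the standard RoPE frequencies
standard = ntk_rope_inv_freq(64, scale=1.0)
stretched = ntk_rope_inv_freq(64, scale=4.0)
```

The returned inverse frequencies would then replace the ones used to build the rotary sin/cos caches.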
Gotcha! Thanks for the nice advice! I'll try the other way you suggested! |
Hello, thanks for your great work!
I was trying to benchmark your work on LEX, which I found in this fork.
However, that fork repo doesn't have the Issues feature enabled, so I'm posting my questions here.
I tried to test your BCA technique with the LLaMA models, so I implemented BCA according to the commit I pasted above. However, my model failed to extrapolate when going beyond the block size. I am wondering if you could provide a checkpoint of your LEX model so that I could test and compare against my code to see where the bug is.
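For concreteness, my understanding of blockwise causal attention (BCA) is that the sequence is split into fixed-size blocks, and each query attends causally within its own block plus to the entire previous block. A minimal mask sketch under that reading (this is my own reconstruction for debugging, not the authors' implementation):

```python
import torch

def blockwise_causal_mask(seq_len: int, block: int) -> torch.Tensor:
    """Boolean mask of shape (seq_len, seq_len); True where attention is allowed.

    Under this reading of BCA, a query attends causally inside its own block
    and additionally to every position of the immediately preceding block.
    """
    idx = torch.arange(seq_len)
    qb = idx[:, None] // block  # block index of each query
    kb = idx[None, :] // block  # block index of each key
    causal = idx[None, :] <= idx[:, None]
    return (causal & (kb == qb)) | (kb == qb - 1)

# e.g. with block=2, position 4 sees {2, 3} (previous block) and {4} (own block)
mask = blockwise_causal_mask(seq_len=6, block=2)
```

If extrapolation fails beyond the block size, one thing worth checking is whether the position indices fed to the rotary embedding are also kept within the trained range for each block, rather than growing with the absolute position.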
Thanks!