
Why is the process so slow? #21

Closed
lihongxiacream opened this issue Jul 24, 2024 · 5 comments

Comments

@lihongxiacream

It takes 100 hours to select from 50,000 multi-turn samples.

@MingLiiii
Collaborator

Firstly, thank you for your interest in our work. Calculating IFD scores requires running inference with an LLM, so it is naturally time-consuming. However, we also proposed Superfiltering (ACL'24), which uses small language models such as GPT-2, rather than LLMs, to select the data; it tremendously lowers the time and cost of the data selection process. If efficiency is important to you, please try it.
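
For reference, here is a minimal sketch of the IFD computation in the Superfiltering spirit, using the Hugging Face transformers library with GPT-2 as the small scoring model. The function names and the plain concatenation of question and answer are illustrative assumptions, not this repository's actual API or prompt template:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def answer_loss(context: str, answer: str) -> float:
    """Average cross-entropy over the answer tokens, optionally conditioned on context."""
    prefix = context if context else tokenizer.bos_token  # BOS stands in when unconditioned
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, answer_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prefix_ids.shape[1]] = -100  # score only the answer tokens
    return model(input_ids, labels=labels).loss.item()

def ifd_score(question: str, answer: str) -> float:
    """IFD = loss(answer | question) / loss(answer). A ratio near or above 1
    means the question barely helps the model predict the answer."""
    return answer_loss(question, answer) / answer_loss("", answer)

print(ifd_score("What is the capital of France?", " The capital of France is Paris."))
```

Note that each sample costs two forward passes here (the conditioned and unconditioned answer loss), which is why sample count, turn count, and sequence length all multiply into the total runtime.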

Secondly, you did not provide enough information about your setup:

  1. Since this method was originally designed for single-turn data, how did you implement it for multi-turn samples? Do you calculate the IFD score once per turn? Do you include the whole preceding conversation in the calculation, or only the question at each turn?
  2. How large are your 50k multi-turn samples? 50k is not a small number; even on the 50k simple Alpaca data it takes several hours. If the questions/answers in your samples are long and each sample contains many turns, it will certainly take many more hours. You may want to first estimate the token count and the number of inference passes (see the sketch after this list).
  3. What base LLM did you use?
  4. What GPU did you use?
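
As one rough way to make that estimate, the sketch below counts tokens and forward passes, assuming two passes per turn (the answer scored with and without its question) and a hypothetical conversations/question/answer schema; adapt the field names to your data:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # substitute your base model's tokenizer

def estimate_workload(samples):
    """Count tokens and forward passes: two passes per turn, per the IFD
    definition (answer scored with and without its question)."""
    total_tokens = total_passes = 0
    for sample in samples:
        for turn in sample["conversations"]:  # hypothetical schema
            n_q = len(tokenizer(turn["question"]).input_ids)
            n_a = len(tokenizer(turn["answer"]).input_ids)
            total_tokens += (n_q + n_a) + n_a  # conditioned pass + answer-only pass
            total_passes += 2
    return total_tokens, total_passes

# tokens, passes = estimate_workload(data)
# print(f"~{tokens:,} tokens over {passes:,} forward passes")
```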

Again, thank you for your interest! We highly recommend trying Superfiltering (ACL'24) if efficiency is important to you!

@lihongxiacream
Author

Thank you for your answer!
The dataset is indeed very large: 458 MB. I use only the question and answer at each turn, without the conversation history, with the Qwen1.5-7B-Chat model on a single A800 GPU. I calculate the loss once per turn during data analysis. Do you have any ideas for accelerating inference?
Thank you again; I will also try the Superfiltering method.
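
For concreteness, here is a minimal sketch of the per-turn scoring just described (question and answer only, no history, Qwen1.5-7B-Chat). As one generic speed-up, not something from this repository, it batches several turns into each forward pass; the padding and masking details are illustrative:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen1.5-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
tokenizer.padding_side = "right"  # keep question tokens at the front of each row
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="cuda"
).eval()

@torch.no_grad()
def batched_turn_losses(pairs, batch_size=8):
    """Average answer loss per (question, answer) turn, scored without
    conversation history, several turns per forward pass."""
    losses = []
    for i in range(0, len(pairs), batch_size):
        chunk = pairs[i : i + batch_size]
        enc = tokenizer([q + a for q, a in chunk],
                        return_tensors="pt", padding=True).to(model.device)
        labels = enc.input_ids.clone()
        labels[enc.attention_mask == 0] = -100     # ignore padding tokens
        for row, (q, _) in enumerate(chunk):
            n_q = len(tokenizer(q).input_ids)      # approximate BPE boundary
            labels[row, :n_q] = -100               # ignore question tokens
        logits = model(**enc).logits
        tok_loss = F.cross_entropy(                # per-token, per-row loss
            logits[:, :-1].transpose(1, 2), labels[:, 1:],
            reduction="none", ignore_index=-100,
        )
        n_ans = (labels[:, 1:] != -100).sum(dim=1).clamp(min=1)
        losses.extend((tok_loss.sum(dim=1) / n_ans).tolist())
    return losses

# pairs = [(t["question"], t["answer"]) for s in data for t in s["conversations"]]
```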

@lihongxiacream
Author

Also, does this project support data selection on Chinese datasets?

@MingLiiii
Collaborator

Thank you for your interest!

Based on your data, I think it is quite reasonable that it takes many hours. Although there are only 50k samples, the dataset is almost 20 times the size of the Alpaca data. Unfortunately, I am no expert in accelerating inference, sorry about that.

As for whether this method supports Chinese datasets, I think the answer is yes. Our method is language-agnostic: it computes and compares the losses/perplexities produced by base models. So if the base model itself supports another language, our method should work for it as well.
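
As a quick illustration of the language-agnostic point, the snippet below scores a Chinese and an English sentence with exactly the same perplexity machinery, assuming a base model with Chinese coverage (Qwen1.5-7B as an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")
lm = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B").eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    """Perplexity = exp(average cross-entropy over all tokens)."""
    ids = tok(text, return_tensors="pt").input_ids
    return torch.exp(lm(ids, labels=ids).loss).item()

# The same call works regardless of language:
print(perplexity("请用一句话解释什么是指令微调。"))  # "Explain instruction tuning in one sentence."
print(perplexity("Explain instruction tuning in one sentence."))
```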

@MingLiiii
Collaborator

If you are interested in our method or have further questions, we can also connect on WeChat for easier communication.
Please send me an email if you are interested!

Thank you!
