Question about sampler. It takes too much time #249
I noticed that the sampler stage launches lots of repeated CUDA kernels. It seems you do the sampling in a for loop, launching a separate kernel for each sequence? Why is this?
BTW, did you compare the performance with FasterTransformer? I didn't see anything about this.
Thank you!
Below is my code:
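(The reporter's actual script was not preserved in this copy of the thread. The following is a minimal sketch of what such a benchmark might look like, assuming random token IDs with the reported shape, batch = 128 and seq_len = 32, a placeholder model name, and the early `LLM.generate(prompt_token_ids=...)` API.)

```python
import time
import torch
from vllm import LLM, SamplingParams

BATCH = 128
SEQ_LEN = 32

# Random token IDs standing in for the real inputs (hypothetical reconstruction).
prompt_token_ids = torch.randint(0, 32000, (BATCH, SEQ_LEN)).tolist()

llm = LLM(model="facebook/opt-13b")  # placeholder model, not confirmed by the thread
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

start = time.time()
outputs = llm.generate(prompt_token_ids=prompt_token_ids,
                       sampling_params=sampling_params)
print(f"generated {len(outputs)} sequences in {time.time() - start:.2f}s")
```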
Comments
@sleepwalker2017 Thanks for trying out vLLM and reporting the performance issue! Yes, our sampler is indeed not optimized well yet. In particular, vLLM performs sampling for one request at a time, because each request can have different sampling parameters. For example, request A may use top-p sampling while request B in the same batch may use beam search with beam width 6. In such a case, it's not possible to process the sampling operations for the two requests simultaneously; instead, vLLM processes one request at a time. This can incur non-negligible latency overhead when you run small models. That being said, your profiling result is very weird. Could you provide more information about the inputs you used?
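As a rough illustration of why heterogeneous sampling parameters force sequential processing, consider this simplified sketch (not vLLM's actual sampler; the names and request structure are invented for illustration):

```python
import torch

def sample_batch(logits: torch.Tensor, requests: list) -> list:
    """Simplified sketch: one sampling call per request, since each
    request in the batch may use a different strategy."""
    next_tokens = []
    for i, req in enumerate(requests):  # one request at a time -> many small kernel launches
        row = logits[i]
        if req["strategy"] == "greedy":
            next_tokens.append(int(torch.argmax(row)))
        elif req["strategy"] == "top_p":
            probs = torch.softmax(row, dim=-1)
            sorted_probs, sorted_ids = torch.sort(probs, descending=True)
            keep = torch.cumsum(sorted_probs, dim=-1) <= req["top_p"]
            keep[0] = True  # always keep the most likely token
            filtered = sorted_probs * keep
            choice = torch.multinomial(filtered / filtered.sum(), 1)
            next_tokens.append(int(sorted_ids[choice]))
        # a beam-search request would branch differently again
    return next_tokens
```

Because each loop iteration launches its own small CUDA kernels (sort, cumsum, multinomial, ...), a batch of 128 requests produces 128 repeated kernel sequences, which is consistent with the repeated kernels observed in the profile.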
Please refer to #264 for the comparison with FasterTransformer.
Of course, I can provide the input_ids. Actually, they're nothing special: I use batch = 128, seq_len = 32.
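For anyone trying to reproduce the observation, one way to see the repeated sampler kernels on such a workload is PyTorch's built-in profiler; a minimal sketch, reusing the names from the hypothetical benchmark above:

```python
from torch.profiler import profile, ProfilerActivity

# Profile a single generate() call and list the hottest CUDA kernels.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    llm.generate(prompt_token_ids=prompt_token_ids,
                 sampling_params=sampling_params)

# Per-request sampling shows up as many launches of the same small kernels.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```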
Closing this issue as stale as there has been no discussion in the past 3 months. If you are still experiencing the issue you describe, feel free to re-open this issue.