Enhance performance of batch inferencing tutorial with vLLM, running on L40S and H100 GPUs #247
The following changes have been performed:
As a result, this tutorial can now support high-throughput batch inferencing use cases with any LLM.
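For context, here is a minimal sketch of the kind of vLLM offline batch inference the tutorial runs. The model name, `tensor_parallel_size`, and sampling settings below are illustrative assumptions, not the tutorial's exact configuration:

```python
# Minimal vLLM offline batch-inference sketch.
# Assumptions: model name, tensor_parallel_size, and sampling settings
# are placeholders, not the tutorial's actual configuration.
from vllm import LLM, SamplingParams

# Load the model once; tensor_parallel_size=1 fits a single L40S or H100.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", tensor_parallel_size=1)

sampling = SamplingParams(temperature=0.7, max_tokens=256)

# vLLM batches the whole prompt list internally (continuous batching),
# which is where the high throughput comes from.
prompts = [f"Summarize recipe #{i}." for i in range(8)]
outputs = llm.generate(prompts, sampling)

for out in outputs:
    print(out.outputs[0].text)
```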
Rendered version:
https://github.com/IBM/CodeEngine/blob/4a58342856c3828b7477f67af9aa2c8338d41a47/serverless-fleets/tutorials/inferencing/README.md
Since I added 8,000 recipes, please review on a per-commit basis.