Add cnn_dailymail processing for summarization, and offline/online run config yamls#67
Conversation
Signed-off-by: attafosu <thomas.atta-fosu@intel.com>
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
Summary of Changes (Gemini Code Assist): This pull request enhances the examples by integrating Llama3.1-8B for summarization tasks, leveraging a vLLM server, and provides a streamlined process for preparing the necessary dataset.
Pull request overview
This PR adds support for benchmarking Llama 3.1-8B on the CNN/DailyMail summarization dataset. It provides configuration files for both offline and online benchmark modes, along with a script to download and preprocess the dataset, and documentation for running the benchmarks.
- Introduces a Python script to download and format the CNN/DailyMail dataset for summarization tasks
- Adds YAML configuration files for offline and online benchmark scenarios
- Provides comprehensive documentation on setting up and running the benchmarks
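The dataset-preparation step described above might look roughly like the following. This is only a hedged sketch, not the PR's actual `download_cnndm.py` — the prompt wording, dataset split, and output filename are assumptions.

```python
# Illustrative sketch of CNN/DailyMail preprocessing for summarization.
# The prompt template, split, and output path are assumptions, not the PR's code.
import json

PROMPT_TEMPLATE = "Summarize the following news article:\n\n{article}\n\nSummary:"

def format_example(example):
    """Wrap one CNN/DailyMail record in a summarization prompt."""
    return {
        "prompt": PROMPT_TEMPLATE.format(article=example["article"]),
        "reference": example["highlights"],  # ground-truth summary
    }

def main(out_path="cnn_dailymail_val.json"):
    # Requires the Hugging Face `datasets` package and network access.
    from datasets import load_dataset
    ds = load_dataset("cnn_dailymail", "3.0.0", split="validation")
    records = [format_example(ex) for ex in ds]
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
```

The `format_example` helper keeps the reference summary (`highlights`) alongside each prompt so benchmark output can later be scored against it.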
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| examples/05_Llama3.1-8B_Example/download_cnndm.py | Script to download CNN/DailyMail dataset and format it with summarization prompts |
| examples/05_Llama3.1-8B_Example/offline_llama3_8b_cnn.yaml | Configuration for offline throughput benchmarking |
| examples/05_Llama3.1-8B_Example/online_llama3_8b_cnn.yaml | Configuration for online latency benchmarking with Poisson load pattern |
| examples/05_Llama3.1-8B_Example/README.md | Documentation for setting up and running the Llama 3.1-8B benchmarks |
Code Review
This pull request adds a new example for running Llama3.1-8B on the cnn_dailymail dataset for summarization tasks. It includes a data processing script and configuration files for both offline and online benchmarking scenarios. The implementation is well-structured and provides a good starting point. My review focuses on improving the clarity and reproducibility of the documentation, increasing the robustness of the data download script, and providing better guidance within the configuration files to help users achieve optimal benchmark results.
Signed-off-by: attafosu <thomas.atta-fosu@intel.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
Signed-off-by: attafosu <thomas.atta-fosu@intel.com>
Signed-off-by: attafosu <thomas.atta-fosu@intel.com>
Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.
arekay-nv left a comment:
Thanks. Please fix the readme and the parameters to the official versions.
Do you plan to do performance measurements in this PR or a followup?
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.
Got it. Fixed.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.
Signed-off-by: attafosu <thomas.atta-fosu@intel.com>
What does this PR do?
Adds example for running endpoints with Llama3.1-8B on vllm server.
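As a rough illustration of the online scenario, a client for a vLLM OpenAI-compatible completions endpoint could be sketched as below. The URL, model name, and sampling parameters are assumptions for illustration, not values taken from this PR's configs.

```python
# Hypothetical client sketch for a vLLM OpenAI-compatible endpoint.
# URL, model name, and sampling parameters below are assumptions.
import json
import urllib.request

def build_request(prompt,
                  model="meta-llama/Llama-3.1-8B-Instruct",
                  url="http://localhost:8000/v1/completions"):
    """Build an HTTP request carrying a completion payload."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": 128,
        "temperature": 0.0,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# Sending the request (requires a running vLLM server):
# with urllib.request.urlopen(build_request("Summarize: ...")) as resp:
#     text = json.load(resp)["choices"][0]["text"]
```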
Type of change
Related issues
Addresses #53
Testing
Checklist