Conversation
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.
Force-pushed from f5bb1ae to 8f00ea5.
Codecov Report

✅ All modified and coverable lines are covered by tests.

@@           Coverage Diff           @@
##             main     #564   +/-   ##
=======================================
  Coverage   74.45%   74.45%
=======================================
  Files         182      182
  Lines       18250    18250
=======================================
  Hits        13588    13588
  Misses       4662     4662
# See the License for the specific language governing permissions and
# limitations under the License.

# MIT License
Is this code borrowed from OSS? If so, you will need to follow the OSS code procedure to file a ticket and get approval.
According to the figures, there is a large accuracy/loss gap between the SGL trainer and our current trainer. We need to figure this out before this can be merged. Also, since this PR may fundamentally change our training accuracy, we need to train models larger than Llama 1B.
Signed-off-by: h-guo18 <67671475+h-guo18@users.noreply.github.com>
Force-pushed from 6ea8d57 to a22a948.
What does this PR do?
Type of change: new feature
Overview:
New trainer with different base model backends available for online training.
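For illustration, a minimal sketch of what a backend-swappable online trainer can look like; all class and method names below are hypothetical and not the actual API introduced in this PR:

```python
# Hypothetical sketch only: one trainer driving the draft model against
# interchangeable base-model backends (HF in-process vs. an SGLang-style
# serving engine). Names are illustrative, not this PR's API.
from abc import ABC, abstractmethod

import torch


class BaseModelBackend(ABC):
    """Produces base-model hidden states for the draft model to train against."""

    @abstractmethod
    def hidden_states(self, input_ids: torch.Tensor) -> torch.Tensor: ...


class HFBackend(BaseModelBackend):
    """Runs a Hugging Face model in-process."""

    def __init__(self, model):
        self.model = model

    def hidden_states(self, input_ids):
        with torch.no_grad():
            out = self.model(input_ids, output_hidden_states=True)
        return out.hidden_states[-1]


class SGLBackend(BaseModelBackend):
    """Queries a serving engine instead of running the base model locally."""

    def __init__(self, client):
        self.client = client  # hypothetical handle to an SGLang server

    def hidden_states(self, input_ids):
        return self.client.get_hidden_states(input_ids)  # hypothetical call


class OnlineTrainer:
    """Trains the draft model against whichever backend is plugged in."""

    def __init__(self, draft_model, backend: BaseModelBackend, optimizer):
        self.draft_model = draft_model
        self.backend = backend
        self.optimizer = optimizer

    def step(self, batch):
        hidden = self.backend.hidden_states(batch["input_ids"])
        # Hypothetical draft-model signature: returns loss and train accuracy.
        loss, train_acc = self.draft_model(batch["input_ids"], hidden)
        loss.backward()
        self.optimizer.step()
        self.optimizer.zero_grad()
        return loss, train_acc
```

The point of the split is that the training loop is identical regardless of backend, so HF and SGL runs can be compared under the same trainer.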
Other improvements:
Moved train_acc.item() out of the eagle forward pass to avoid a CUDA graph break during torch compile.
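As a hedged illustration of that fix (not the actual eagle forward code): returning the accuracy as a tensor and calling `.item()` only outside the compiled region keeps the graph intact.

```python
import torch


# Illustrative pattern only, not the eagle module from this PR.
@torch.compile
def draft_forward(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Returning a tensor keeps the whole function capturable by torch.compile.
    # Calling train_acc.item() here instead would force a device-to-host sync
    # and break the compiled graph.
    train_acc = (logits.argmax(dim=-1) == labels).float().mean()
    return train_acc


logits = torch.randn(8, 1024, 32000)
labels = torch.randint(0, 32000, (8, 1024))
train_acc = draft_forward(logits, labels)
print(train_acc.item())  # convert to a Python float only outside the compiled region
```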
Usage

Testing
Parallelism
Training Quality Test
Compared the previous HF trainer, the new trainer with HF backend, and the new trainer with SGL backend.
Setting: Llama3.2-1B, magpie, bs=8, lr=1e-4, seqlen=1k.
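For reference, the comparison setting above could be expressed as a config like the following; the key names are hypothetical, not the actual script arguments:

```python
# Hypothetical config mirroring the stated comparison setting; keys are
# illustrative only.
comparison_setting = {
    "base_model": "Llama3.2-1B",
    "dataset": "magpie",
    "batch_size": 8,
    "learning_rate": 1e-4,
    "max_seq_len": 1024,
    "trainers": [
        "previous HF trainer",
        "new trainer (HF backend)",
        "new trainer (SGL backend)",
    ],
}
```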
Before your PR is "Ready for review"
Additional Information