[huggingface_pytorch] Training - update for Transformers to 4.41.2 PyTorch 2.2 #3869
Conversation
I temporarily set fp16=False to pass the SMDP GPU training test, since otherwise we get the following error:
That error hints that the CUDA environment is not set up correctly. The HF DLC is based on the PT training DLC, in which CUDA should already be set up, so I wonder whether half precision works in the base DLC, and whether I need to do something in our DLC to activate the CUDA environment.
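As a starting point for debugging, a minimal sanity check like the one below can help confirm whether CUDA and fp16 work at all inside the base container. This is a generic sketch using only standard PyTorch calls; it is not part of this PR or the DLC test suite:

```python
# Minimal sketch: verify that CUDA is visible and that an fp16 matmul runs
# inside the container. Generic PyTorch APIs only; nothing DLC-specific.
import torch

assert torch.cuda.is_available(), "CUDA is not visible to PyTorch"
print("device:", torch.cuda.get_device_name(0))
print("compute capability:", torch.cuda.get_device_capability(0))

# A small half-precision matmul; if the CUDA toolchain in the image is
# broken, this typically surfaces the same kind of kernel/runtime error
# seen in the failing training test.
a = torch.randn(64, 64, device="cuda", dtype=torch.float16)
b = torch.randn(64, 64, device="cuda", dtype=torch.float16)
c = a @ b
torch.cuda.synchronize()
print("fp16 matmul ok:", c.dtype, tuple(c.shape))
```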
This PR has been marked stale as a result of being open for 30 days without activity or updates. Please remove the stale label or comment in order to keep this open, otherwise the PR will be closed in 5 days.
This PR has had no activity or updates in the last 5 days since being marked stale. Closing this PR as a result.
GitHub Issue #3870:
This PR updates Hugging Face's PyTorch DLC for inference. Here are the corresponding updated dependency versions:
transformers: 4.41.2
datasets: 2.19.0
evaluate: 0.4.2
accelerate: 0.31.0
torch: 2.2.0
diffusers: 0.28.2
trl: 0.9.4
peft: 0.11.1
flash-attn: 2.5.8
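For anyone reproducing this dependency set outside the image, the pins above translate to a requirements file along these lines (an illustration only; the DLC's actual Dockerfile may install these differently):

```text
transformers==4.41.2
datasets==2.19.0
evaluate==0.4.2
accelerate==0.31.0
torch==2.2.0
diffusers==0.28.2
trl==0.9.4
peft==0.11.1
flash-attn==2.5.8
```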
Note:
If merging this PR should also close the associated Issue, please also add that Issue # to the Linked Issues section on the right.
All PRs are checked weekly for staleness. This PR will be closed if not updated in 30 days.
Description
Tests run
NOTE: By default, docker builds are disabled. In order to build your container, please update dlc_developer_config.toml and specify the framework to build in "build_frameworks"
NOTE: If you are creating a PR for a new framework version, please ensure success of the standard, rc, and efa sagemaker remote tests by updating the dlc_developer_config.toml file:
sagemaker_remote_tests = true
sagemaker_efa_tests = true
sagemaker_rc_tests = true
Additionally, please run the sagemaker local tests in at least one revision:
sagemaker_local_tests = true
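Put together, the toggles above look roughly like this in dlc_developer_config.toml (a sketch; it assumes these keys live under the `[test]` table as in the repository's template, so adjust to match the actual file):

```toml
# Sketch of the test toggles referenced above; table placement assumed.
[test]
sagemaker_remote_tests = true
sagemaker_efa_tests = true
sagemaker_rc_tests = true
sagemaker_local_tests = true
```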
Formatting
I have run `black -l 100` on my code (formatting tool: https://black.readthedocs.io/en/stable/getting_started.html)
DLC image/dockerfile
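For reference, the formatting step boils down to something like the following, run from the repository root (assuming black is installed; `-l 100` sets the 100-character line length the repo standardizes on):

```bash
# Install black and reformat the tree with a 100-character line length.
pip install black
black -l 100 .
```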
Builds to Execute
Click the checkbox to enable a build to execute upon merge.
Note: By default, pipelines are set to "latest". Replace with major.minor framework version if you do not want "latest".
Additional context
PR Checklist
NEURON/GRAVITON Testing Checklist
I have modified dlc_developer_config.toml in my PR branch by setting `neuron_mode = true` or `graviton_mode = true`
Benchmark Testing Checklist
I have modified dlc_developer_config.toml in my PR branch by setting `ec2_benchmark_tests = true` or `sagemaker_benchmark_tests = true`
Pytest Marker Checklist
I have added `@pytest.mark.model("<model-type>")` to the new tests which I have added, to specify the Deep Learning model that is used in the test (use `"N/A"` if the test doesn't use a model)
I have added `@pytest.mark.integration("<feature-being-tested>")` to the new tests which I have added, to specify the feature that will be tested
I have added `@pytest.mark.multinode(<integer-num-nodes>)` to the new tests which I have added, to specify the number of nodes used on a multi-node test
I have added `@pytest.mark.processor(<"cpu"/"gpu"/"eia"/"neuron">)` to the new tests which I have added, if a test is specifically applicable to only one processor type
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
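As an illustration of the markers above, a new GPU test might be decorated like this (the test name, body, and marker arguments are hypothetical placeholders, not tests added by this PR):

```python
import pytest

# Hypothetical test showing the checklist's markers in use. The marker
# names follow the checklist; register them in pytest.ini/conftest.py
# to avoid unknown-marker warnings.
@pytest.mark.model("bert")
@pytest.mark.integration("smdataparallel")
@pytest.mark.multinode(2)
@pytest.mark.processor("gpu")
def test_smdp_training_gpu():
    # ... launch the training job and assert on its completion status ...
    pass
```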