[Feature] Add FlexibleRunner and Strategies #1183
Merged: zhouzaida merged 55 commits into open-mmlab:flexible-runner from zhouzaida:flexible-runner on Jun 27, 2023.
Conversation
HAOCHENYE reviewed on Jun 26, 2023:
```python
map_location: Union[str, Callable] = 'cpu',
strict: bool = False,
revise_keys: list = [(r'^module.', '')],
callback: Optional[Callable] = None,
```
`callback` is not used in `DeepSpeedStrategy`, is that expected?
Background
In the process of supporting FSDP, DeepSpeed, and ColossalAI, the Runner's scalability has run into challenges, mainly in the following three aspects (see the sketch after this list for a concrete illustration of the first two):

1. Incompatibility between the existing fixed training process and new training methods such as ZeRO.

   MMEngine unifies the single-GPU and DDP (Distributed Data Parallel) training processes, and this unified, fixed process is hard-coded in the Runner. However, the ZeRO family of methods (FSDP, ColossalAI ZeroDDP, DeepSpeed ZeRO) is incompatible with this fixed process and requires the order of steps to be adjusted. For example, after `model = FSDP(model)` is applied, the parameters and buffers of the model are sharded across GPUs, so no single GPU holds a complete copy; any operation that directly modifies the model (such as `init_weights` in MMEngine) will then fail. Furthermore, even within the ZeRO family the implementations differ across frameworks, so the order of steps must be adjusted and dispatched per framework. For example, `load_checkpoint` in FSDP must be called before `model = FSDP(model)`, while DeepSpeed and ColossalAI require it to be called after `model = initialize(model)`.

2. Coupling between training components in other frameworks (DeepSpeed, ColossalAI).

   In MMEngine, the model wrapper and optim wrapper are independent, while in DeepSpeed and ColossalAI the model and optimizer are coupled and need to access each other to accomplish certain tasks. This can be observed in the `colossalai.initialize` and `deepspeed.initialize` interfaces.

3. The unified save/load checkpoint functions cannot meet the requirements of different models and frameworks.

   The current `save_checkpoint` and `load_checkpoint` are standalone functions in the Runner, with no association to the model or the framework, which is counterintuitive. For example, FSDP training requires gathering model parameters and optimizer states onto GPU 0 before saving, while ColossalAI and DeepSpeed have their own complex logic for saving and loading weights.
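To make the ordering and coupling issues above concrete, here is a minimal sketch (not code from this PR; it assumes `model`, `optimizer`, and `ds_config` are already defined, and the checkpoint paths are placeholders) contrasting where checkpoint loading must happen in FSDP versus DeepSpeed:

```python
import deepspeed
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# FSDP: load the full state dict *before* wrapping; afterwards each rank
# only holds a shard of the parameters, so a plain load would fail.
model.load_state_dict(torch.load('checkpoint.pth'))
model = FSDP(model)

# DeepSpeed: model and optimizer are coupled into a single engine, and
# checkpoints are loaded *after* initialization, through the engine itself.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, optimizer=optimizer, config=ds_config)
engine.load_checkpoint('checkpoint_dir')  # shard-aware, engine-managed load
```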
Design

To avoid impacting the existing Runner, this PR re-implements a FlexibleRunner and introduces a new abstraction, Strategy.
The Strategy is primarily responsible for:

- building and wrapping the model, optimizer, and other training components in the order each framework requires;
- dispatching framework-specific initialization, such as `deepspeed.initialize` and `colossalai.initialize`;
- saving and loading checkpoints in a framework-aware way.
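To illustrate the abstraction (the class name and method signatures below are hypothetical, sketched from the responsibilities above rather than taken from this diff):

```python
from abc import ABC, abstractmethod


class BaseStrategy(ABC):
    """Hypothetical sketch: owns framework-specific setup and checkpointing."""

    @abstractmethod
    def prepare(self, model, optim_wrapper=None, param_scheduler=None):
        """Build and wrap components in the order the framework requires."""

    @abstractmethod
    def load_checkpoint(self, filename, **kwargs):
        """Framework-aware loading (e.g. before or after wrapping the model)."""

    @abstractmethod
    def save_checkpoint(self, filename, **kwargs):
        """Framework-aware saving (e.g. gather shards to rank 0 first)."""
```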
This PR will support three types of strategies:

- `SingleDeviceStrategy`
- `DDPStrategy`
- `DeepSpeedStrategy`
Note: This is an experimental feature, and the interface is subject to change.
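As a usage sketch (the exact constructor arguments and strategy config keys are assumptions based on the description above, not verified against this diff), training with DeepSpeed ZeRO-1 and fp16 might look like:

```python
from mmengine.runner import FlexibleRunner

# Assumed strategy config: the keys mirror DeepSpeed's own config
# (fp16, zero_optimization) but are placeholders here.
strategy = dict(
    type='DeepSpeedStrategy',
    fp16=dict(enabled=True),
    zero_optimization=dict(stage=1),
)

# Assumes `model` and `train_dataloader` are already built.
runner = FlexibleRunner(
    model=model,
    work_dir='./work_dir',
    strategy=strategy,
    train_dataloader=train_dataloader,
    optim_wrapper=dict(optimizer=dict(type='AdamW', lr=1e-3)),
    train_cfg=dict(by_epoch=True, max_epochs=50),
)
runner.train()
```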
Environment
Validation
Experiment
MMPreTrain

`vit-huge-p14_8xb128-coslr-50e_in1k.py`

- DDP: out of memory
- DDP + fp16: 58G per GPU
- DeepSpeed ZeRO1 + fp16: 44G per GPU
- DeepSpeed ZeRO3 + fp16:

`vit-large-p16_8xb128-coslr-50e_in1k.py`

- DeepSpeed ZeRO1 + fp16: 21G per GPU, accuracy: 85.6040
- DeepSpeed ZeRO3:

MMDet