
Fix param.ds_id not found bug when calling 'deepspeed.runtime.swap_tensor.optimizer_utils.OptimizerSwapper._create_param_swap_info' #5193

Closed
wants to merge 5 commits into from

Conversation

getinglxf

Fix the param.ds_id not found bug when calling the 'deepspeed.runtime.swap_tensor.optimizer_utils.OptimizerSwapper._create_param_swap_info' method under the condition that 'offload_optimizer' and 'offload_param' are set to an NVMe path in zero_config.
In the new version of DeepSpeed (0.13.3), deepspeed.runtime.swap_tensor.optimizer_utils.OptimizerSwapper added a 'parameter_id(param)' method to get the param id, while in older versions such as 0.12.4 the original method was 'id(param)'. This change introduces a new bug: when the deepspeed.runtime.zero.stage3.DeepSpeedZeroOptimizer_Stage3._create_fp32_partitions method is called during initialization, the 'self.optimizer_swapper.initialize_parameters(parameters=[self.fp32_partitioned_groups_flat[i]], src_tensors=[unpinned_fp32_buffer])' call is made with a tensor that has no ds_id under the optimizer offload condition, which throws an error: "AttributeError: 'Tensor' object has no attribute 'ds_id'"
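To make the failure mode concrete, here is a minimal, self-contained sketch (not DeepSpeed code; 'parameter_id' below is a simplified stand-in for the 0.13.x lookup, and the ds_id value is hypothetical):

```python
import torch

# Simplified stand-in for the 0.13.x lookup, which reads param.ds_id
# instead of the old id(param). Sketch only, not the real OptimizerSwapper.
def parameter_id(param):
    return param.ds_id  # raises AttributeError if ds_id was never set

fp32_partition = torch.empty(8, dtype=torch.float)  # flat fp32 partition buffer

# Old behavior (0.12.4): any tensor works, because id() always exists.
old_key = id(fp32_partition)

# New behavior (0.13.3): the tensor must carry a ds_id attribute.
try:
    parameter_id(fp32_partition)
except AttributeError as e:
    print(e)  # 'Tensor' object has no attribute 'ds_id'

# The fix proposed in this PR: stamp the attribute onto the flat partition first.
fp32_partition.ds_id = "0_12"  # hypothetical "<begin>_<end>" id string
assert parameter_id(fp32_partition) == "0_12"
```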
ds_id error

…r.optimizer_utils.OptimizerSwapper._create_param_swap_info' method under optimizer offload condition. Detail desc: In the new version of DeepSpeed (0.13.3), deepspeed.runtime.swap_tensor.optimizer_utils.OptimizerSwapper added a 'parameter_id(param)' method to get the param id, while in older versions such as 0.12.4 the original method was 'id(param)'. However, in the deepspeed.runtime.zero.stage3.DeepSpeedZeroOptimizer_Stage3._create_fp32_partitions method, the 'self.optimizer_swapper.initialize_parameters(parameters=[self.fp32_partitioned_groups_flat[i]], src_tensors=[unpinned_fp32_buffer])' call is made with a tensor that has no ds_id under the optimizer offload condition
@tjruwase tjruwase requested review from jomayeri and removed request for mrwyattii June 19, 2024 12:15
@@ -840,6 +843,7 @@ def _create_fp32_partitions(self):
                 else:
                     unpinned_fp32_buffer = torch.empty(num_elements, device=self.device, dtype=torch.float)
                     self._swap_in_sub_group_to_flat_buffer(unpinned_fp32_buffer, i)
+                    self.fp32_partitioned_groups_flat[i].ds_id = ds_id
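Below is a hedged, self-contained sketch of why stamping ds_id onto the flat fp32 partition before the swapper call avoids the AttributeError. MockOptimizerSwapper is a toy stand-in, not the real DeepSpeed class; only the keyword names mirror the call quoted in the description, and the ds_id value is hypothetical:

```python
import torch

class MockOptimizerSwapper:
    """Toy stand-in mimicking the 0.13.x swapper, which keys swap info by param.ds_id."""

    def __init__(self):
        self.swap_params_info = {}

    def parameter_id(self, param):
        # 0.13.x-style lookup; fails on tensors that never got a ds_id
        return param.ds_id

    def initialize_parameters(self, parameters, src_tensors):
        for param, src in zip(parameters, src_tensors):
            # roughly the role of _create_param_swap_info: register under the param id
            self.swap_params_info[self.parameter_id(param)] = src.numel()

swapper = MockOptimizerSwapper()
flat_fp32_partition = torch.empty(16, dtype=torch.float)
unpinned_fp32_buffer = torch.empty(16, dtype=torch.float)

# The line added in the diff above: give the flat partition a ds_id before the call.
flat_fp32_partition.ds_id = "3_7"  # hypothetical "<begin>_<end>" id string

swapper.initialize_parameters(parameters=[flat_fp32_partition],
                              src_tensors=[unpinned_fp32_buffer])
print(swapper.swap_params_info)  # {'3_7': 16}
```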
Contributor


There is currently redundancy between L846 and L872. To fix this, how about moving L846 to after L831 for the if-branch, and moving L846 into the else-branch?

@tjruwase tjruwase mentioned this pull request Aug 3, 2024
github-merge-queue bot pushed a commit that referenced this pull request Aug 14, 2024
Fix #5495 - Fix missing ds_id bug by copying solution from #5193 (credit to @getinglxf)

Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
@loadams
Contributor

loadams commented Aug 14, 2024

Closing due to fix being applied in #5824

@loadams loadams closed this Aug 14, 2024