
Enhance recipe compatibility #1724

Merged
ksivaman merged 22 commits into NVIDIA:main from negvet:et/recipe_compat
May 19, 2025

Conversation

@negvet
Collaborator

@negvet negvet commented Apr 28, 2025

Description

Enables updating the recipe on the fly. A user warning is emitted when this happens.

Enables loading of TE 1.x checkpoints.

Implements a check that catches a user error where the recipes passed to fp8_model_init and fp8_autocast do not match.
Example case to check: the recipe is DelayedScaling (DelayedScaling is set in fp8_autocast()), but the weight tensor is an MXFP8Tensor (MXFP8BlockScaling was set in fp8_model_init()).
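A minimal sketch of that check (all classes below are simplified stand-ins for the real TE recipe and tensor classes, and the tensor-to-recipe mapping is an assumption for illustration only):

```python
# Illustrative sketch of the described mismatch check; not the TE implementation.

class Recipe: ...
class DelayedScaling(Recipe): ...
class MXFP8BlockScaling(Recipe): ...

class QuantizedTensor: ...

class Float8Tensor(QuantizedTensor):
    compatible_recipes = (DelayedScaling,)

class MXFP8Tensor(QuantizedTensor):
    compatible_recipes = (MXFP8BlockScaling,)

def check_tensor_recipe_compat(tensor, recipe):
    """Raise if a quantized weight (created under fp8_model_init) does not
    match the recipe active in fp8_autocast."""
    if isinstance(tensor, QuantizedTensor) and not isinstance(
        recipe, tensor.compatible_recipes
    ):
        raise RuntimeError(
            f"Weight tensor {type(tensor).__name__} is incompatible with "
            f"recipe {type(recipe).__name__}"
        )

# Matching pair passes silently; the example case above would raise.
check_tensor_recipe_compat(MXFP8Tensor(), MXFP8BlockScaling())
```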

Type of change

  • Documentation change (change only to the documentation, either a fix or a new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Added check in the base module class
  • Added test

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
@negvet
Collaborator Author

negvet commented Apr 28, 2025

/te-ci L0

Member

Maybe add a field in the recipe that has this expected tensor class? Also, I guess for completeness we should deal not only with QuantizedTensor, but also with the *Base classes (e.g. Float8TensorBase) — I know that they are not used for weights in fp8_model_init.

Collaborator Author

@negvet negvet Apr 29, 2025

Maybe add a field in the recipe that has this expected tensor class?

That is the option I prefer as well. I did not do it initially because of a circular import.
I have now fixed it with a local import, if that is acceptable.
But we probably need some redesign to fix this circular-import issue properly (in another PR).

In the future, I would also propose adding a field in the recipe for the quantizer class, etc.
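The local-import workaround can be sketched as follows (hypothetical names; the "tensor module" is simulated inline so the snippet is self-contained — in TE it would be a real module that itself imports the recipe module, creating the cycle):

```python
import sys
import types
from dataclasses import dataclass, field

# Simulate a separate tensor module registered under its own name.
fake_tensor = types.ModuleType("fake_tensor")

class Float8Tensor: ...

fake_tensor.Float8Tensor = Float8Tensor
sys.modules["fake_tensor"] = fake_tensor

@dataclass
class DelayedScaling:
    margin: int = 0
    expected_tensor_class: type = field(default=None, init=False)

    def __post_init__(self):
        # Local (deferred) import: resolved only when a recipe instance is
        # created, after all modules have finished loading, so no import
        # cycle can occur at module-load time.
        from fake_tensor import Float8Tensor
        self.expected_tensor_class = Float8Tensor

recipe = DelayedScaling()
```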

Collaborator Author

@negvet negvet Apr 29, 2025

Also, I guess for completeness we should not only deal with QuantizedTensor, but also with the *Base (e.g. Float8TensorBase) classes (I know that they are not used for weights in fp8_model_init).

Fixed

Collaborator Author

See 98799f7

Member

@timmoon10 I vaguely recall that getattr from nn.Module was slow and that you created a faster function for it at some point; do you remember the details?
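For context on that remark, a generic illustration in plain Python (not TE's actual helper): torch.nn.Module resolves parameters, buffers, and submodules in `__getattr__`, which Python only calls after normal attribute lookup fails, so every access pays for a failed lookup plus dict probes. A direct dict read skips that path:

```python
class ModuleLike:
    """Minimal mimic of how nn.Module stores parameters; illustrative only."""

    def __init__(self):
        self._parameters = {}

    def register_parameter(self, name, value):
        self._parameters[name] = value

    def __getattr__(self, name):
        # Python invokes this only after normal attribute lookup fails,
        # which is what makes hot-loop access through it relatively slow.
        params = self.__dict__.get("_parameters", {})
        if name in params:
            return params[name]
        raise AttributeError(name)

def fast_getattr(module, name):
    # Bypass the __getattr__ machinery with a direct dict probe.
    return module.__dict__["_parameters"][name]

m = ModuleLike()
m.register_parameter("weight", 42)
assert m.weight == fast_getattr(m, "weight") == 42
```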

negvet added 3 commits April 29, 2025 08:11
Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
@negvet negvet force-pushed the et/recipe_compat branch from ecb8b96 to e64188b on April 29, 2025 10:02
@negvet
Collaborator Author

negvet commented Apr 29, 2025

@ksivaman please take a look at e64188b (recipe change warning)

@negvet negvet marked this pull request as ready for review April 29, 2025 10:14
@negvet
Collaborator Author

negvet commented Apr 29, 2025

/te-ci L0

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
@negvet
Collaborator Author

negvet commented Apr 29, 2025

@ksivaman, 674a3c9 enables recipe change on the fly.

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
@negvet
Collaborator Author

negvet commented Apr 29, 2025

TE 1.x did not save the recipe in the state.
Based on that, the assumption in 5023a32 is: if there is no recipe in the state, it is a TE 1.x checkpoint trained with DelayedScaling.
Does that sound like a safe enough assumption?

With this fix, a TE 1.x checkpoint can be loaded in ToT TE 2.x (the same ToT Megatron-LM was used for both TE versions).
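The assumption can be sketched as a small helper (hypothetical name and state layout, not TE's actual loading code):

```python
class DelayedScaling:
    """Stand-in for the real recipe class."""

def recipe_from_checkpoint_state(state: dict):
    """Return the recipe stored in the checkpoint's fp8 state.

    TE 1.x checkpoints predate recipe serialization, so — per the assumption
    described above — a missing entry is treated as a TE 1.x checkpoint
    trained with DelayedScaling.
    """
    recipe = state.get("recipe")
    if recipe is None:
        recipe = DelayedScaling()
    return recipe
```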

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
@negvet
Collaborator Author

negvet commented Apr 29, 2025

/te-ci L0

@negvet negvet changed the title Check tensor-recipe compatibility Enhance recipe compatibility Apr 29, 2025
@negvet negvet requested a review from ptrendx April 29, 2025 16:35
negvet and others added 2 commits April 30, 2025 09:23
Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
@negvet
Collaborator Author

negvet commented Apr 30, 2025

/te-ci L0

ptrendx and others added 2 commits May 1, 2025 13:26
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
@ksivaman
Member

ksivaman commented May 6, 2025

/te-ci L0

_NUM_MAX_UB_STREAMS = 3
_MIN_STREAM_PRIORITY, _MAX_STREAM_PRIORITY = None, None
layers_atomic_ring_exchange = []
_QUANTIZED_WEIGHT_TENSOR_TYPES = (
Member

We recently merged the change that introduces the QuantizedTensorBase class, so you should be able to remove this.

Collaborator Author

Fixed in 7c2f5eb

negvet added 3 commits May 9, 2025 12:23
Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
@negvet
Collaborator Author

negvet commented May 9, 2025

/te-ci L0

@negvet negvet requested a review from ptrendx May 9, 2025 15:54
Comment on lines +13 to +16
from transformer_engine.pytorch.tensor.float8_blockwise_tensor import Float8BlockwiseQTensor
from transformer_engine.pytorch.tensor.float8_tensor import Float8Tensor
from transformer_engine.pytorch.tensor.mxfp8_tensor import MXFP8Tensor
from transformer_engine.pytorch.tensor.quantized_tensor import QuantizedTensor
Member

Actually, you can't do it that way: this recipe class is in common, so it is used in the JAX version as well. You will need to do the import and the assignment in the __post_init__ method (inside a try/except block).
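A sketch of that suggestion (hypothetical placement; the import path is the real TE module from the diff, but whether it resolves depends on which build is installed): the recipe still constructs when the PyTorch tensor module is unavailable, e.g. in a JAX-only installation.

```python
from dataclasses import dataclass

@dataclass
class DelayedScaling:
    """Sketch of a common-side recipe with a framework-guarded import."""

    def __post_init__(self):
        self.expected_tensor_class = None
        try:
            # Present only when the PyTorch build of TE is installed.
            from transformer_engine.pytorch.tensor.float8_tensor import Float8Tensor
            self.expected_tensor_class = Float8Tensor
        except ImportError:
            # JAX-only (or no-TE) environment: the tensor-class check is skipped.
            pass

recipe = DelayedScaling()
```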

Collaborator Author

Right, fixed in d6ea981

negvet and others added 2 commits May 13, 2025 08:09
Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
@negvet
Collaborator Author

negvet commented May 13, 2025

/te-ci L0

def __post_init__(self) -> None:
    assert self.fp8_format != Format.E5M2, "Pure E5M2 training is not supported."
    try:
        from transformer_engine.pytorch.tensor.float8_tensor import Float8Tensor
Member

@ksivaman ksivaman May 15, 2025

Not a fan of adding framework-specific logic in the core lib at all, even if it is guarded with try/except. So far we haven't had to do this, and we probably shouldn't introduce it here unless there is a strong reason to do so.

How about we reverse the logic: instead of including tensor info in the recipe, we include some recipe info in the tensor? That should achieve the same goal of being able to match the two together, right?

Comment on lines +1390 to +1392
if isinstance(tensor, QuantizedTensorBase) and not isinstance(
    tensor, recipe.expected_tensor_class
):
Member

@ksivaman ksivaman May 15, 2025

Suggested change
- if isinstance(tensor, QuantizedTensorBase) and not isinstance(
-     tensor, recipe.expected_tensor_class
- ):
+ if isinstance(tensor, QuantizedTensorBase) and not tensor.recipe_name() == recipe.__class__.__name__:

Member

@ksivaman ksivaman May 15, 2025

Something like this to address my comment above, maybe? It would require some hardcoding, i.e. each tensor type would be associated with a fixed string for the recipe class name, but I think that should be OK. In my example, MXFP8Tensor.recipe_name() would return "MXFP8BlockScaling". We could think of better ideas.

@negvet @ptrendx

Collaborator Author

@negvet negvet May 16, 2025

This is a good approach, although string-based comparison is not that robust (what if a name is changed? inheritance is not naturally supported, etc.).

What about the following update of your proposal:

# In each tensor class (e.g., Float8Tensor)
@classmethod
def get_compatible_recipe_types(cls):
    return [DelayedScaling, Float8CurrentScaling]

and then we just check isinstance(recipe, tuple(tensor.__class__.get_compatible_recipe_types()))

@ptrendx

Collaborator Author

@negvet negvet May 16, 2025

But this is still a workaround fitted to this particular case.
Another option could be a static recipe_to_tensor_map defined in the pytorch module.
If there is stronger demand for framework-specific logic in the recipe, we might consider defining abstract base recipe classes in the common module and moving the framework-specific implementations to their respective modules (class DelayedScaling(BaseDelayedScaling)). But I am not sure we need that.
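The abstract-base-recipe alternative could look roughly like this (hypothetical layout; the comments mark where each piece would live):

```python
# --- would live in transformer_engine/common/recipe.py ---
class BaseDelayedScaling:
    """Framework-agnostic recipe definition shared by all frameworks."""
    fp8_format = "HYBRID"

# --- would live in transformer_engine/pytorch/recipe.py ---
class DelayedScaling(BaseDelayedScaling):
    """PyTorch-specific subclass carrying framework details."""
    expected_tensor_class_name = "Float8Tensor"  # hypothetical attribute

recipe = DelayedScaling()
```

Framework code would then type-check against the common base while each framework attaches its own details in the subclass.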

Member

The framework-specific logic lives in the quantizers; the recipe itself makes sense in common.

I like the get_compatible_recipe_types idea. Some early nitpicks to help the review later:

@classmethod
def get_compatible_recipes(cls):
    return (DelayedScaling, Float8CurrentScaling)

and we can check isinstance(recipe, tensor.get_compatible_recipes())


negvet and others added 4 commits May 19, 2025 17:33
Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
@negvet
Collaborator Author

negvet commented May 19, 2025

As discussed with @ptrendx, the quantizer now knows about the recipe, rather than the tensor.
See a2910d7
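That direction can be sketched as follows (illustrative names, not TE's exact API): the framework-side quantizer, rather than the common recipe or the tensor, knows which recipes it is compatible with, and the module checks that against the recipe active in fp8_autocast().

```python
class DelayedScaling: ...
class MXFP8BlockScaling: ...

class Float8Quantizer:
    compatible_recipes = (DelayedScaling,)  # hypothetical attribute

class MXFP8Quantizer:
    compatible_recipes = (MXFP8BlockScaling,)

def check_quantizer_recipe(quantizer, active_recipe):
    """Raise if the quantizer was created for a different recipe family."""
    if not isinstance(active_recipe, quantizer.compatible_recipes):
        raise RuntimeError(
            f"{type(quantizer).__name__} does not support recipe "
            f"{type(active_recipe).__name__}"
        )

check_quantizer_recipe(Float8Quantizer(), DelayedScaling())  # matching pair: no error
```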

@negvet
Collaborator Author

negvet commented May 19, 2025

/te-ci L0

Member

@ksivaman ksivaman left a comment

LGTM!

@ksivaman ksivaman merged commit 730fd11 into NVIDIA:main May 19, 2025
27 of 31 checks passed
KshitijLakhani pushed a commit that referenced this pull request May 21, 2025
* Check tensor-recipe compatibility

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* Tensor class in recipe, checking for *Base

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* Extend recipe __repr__ with recipe_type

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* Warn about recipe change

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Enable dynamic recipe change: clear fp8 workspace

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* TE 1.x checkpoint compatibility

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* Disable warning for recipe wrappers

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* Test recipe change

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Use QuantizedTensorBase

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* Fix circular import

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* Revert previous circular import fix

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* Fix pytorch imports in common

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Let quantizer know about the recipe

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix imports

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>

---------

Signed-off-by: Evgeny Tsykunov <etsykunov@nvidia.com>
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Co-authored-by: Przemyslaw Tredak <ptredak@nvidia.com>
Co-authored-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>