add back grad_norm metric#25

Merged
yzhangcs merged 4 commits into fla-org:main from kashif:grad_norm-metric
Apr 8, 2025

Conversation

kashif (Contributor) commented Apr 8, 2025

Add back the grad_norm and skipped_step metrics via the new log API in torchtitan.

Summary by CodeRabbit

This release delivers improvements to system stability and training process monitoring:

  • Chores

    • Upgraded the vendored torchtitan subproject to a newer commit to improve overall performance and reliability.
  • New Features

    • Enhanced training logging to capture additional diagnostic metrics, including gradient norms, the last learning rate, and an estimated time to completion (ETA), providing more comprehensive insight during training sessions.

coderabbitai bot commented Apr 8, 2025

Walkthrough

This pull request updates a third-party subproject commit in the 3rdparty/torchtitan directory to a new version. Additionally, the training logging in flame/train.py is enhanced by including an extra parameter to log the gradient norm. Correspondingly, the MetricLogger.log method in flame/components/metrics.py is updated to accept this new parameter. These changes improve the visibility of training metrics without altering the overall control flow.
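A minimal sketch of what the updated signature could look like (only MetricLogger.log, the extra_metrics parameter, and the loss_metrics/grad_norm key come from this PR; every other name here is an assumption, not the actual code in flame/components/metrics.py):

```python
from typing import Any, Dict, Optional


class MetricLogger:
    """Illustrative sketch only; the real class lives in flame/components/metrics.py."""

    def log(
        self,
        step: int,
        avg_loss: float,
        max_loss: float,
        extra_metrics: Optional[Dict[str, Any]] = None,
    ) -> None:
        # Start from the core loss metrics, then fold in any extra diagnostics,
        # e.g. {"loss_metrics/grad_norm": 1.2345}.
        metrics = {
            "loss_metrics/global_avg_loss": avg_loss,
            "loss_metrics/global_max_loss": max_loss,
        }
        if extra_metrics:
            metrics.update(extra_metrics)
        self._write_to_backend(step, metrics)  # assumed helper; wandb/TensorBoard write

    def _write_to_backend(self, step: int, metrics: Dict[str, Any]) -> None:
        ...  # backend-specific; omitted in this sketch
```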

Changes

File(s) | Change Summary
3rdparty/torchtitan | Updated the subproject commit from d8fc8aac...6784 to 5e2033c7...c0d3 to reference a new version of torchtitan.
flame/train.py, flame/components/metrics.py | Enhanced training logging: added an extra_metrics parameter to include the gradient norm in the log output; updated the MetricLogger.log signature accordingly.

Sequence Diagram(s)

sequenceDiagram
    participant Trainer as flame/train.py
    participant Logger as flame/components/metrics.py
    Trainer->>Logger: log(step, avg_loss, max_loss, {gradient_norm: value})
    Logger-->>Trainer: Acknowledge logging with extra metrics

Possibly related PRs

  • Update torchtitan and train.py #21: that PR also updates the commit reference for the 3rdparty/torchtitan subproject, so the two changes are directly connected at the code level.

Poem

I’m a rabbit on a coding spree,
With commits and logs hopping merrily.
New metrics bounce in every line,
Gradient norms now join the design.
I celebrate these changes with a twitch and a smile! 🐰


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7ef2985 and 0cb2b92.

📒 Files selected for processing (1)
  • flame/train.py (1 hunks)
🔇 Additional comments (3)
flame/train.py (3)

732-737: LGTM: Learning rate extraction and ETA calculation.

The code correctly extracts the last learning rate from the first scheduler and calculates the estimated time of arrival (ETA) based on elapsed time and training progress.
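As a rough illustration of the logic being reviewed (all names and values below are stand-ins; the actual code is in flame/train.py):

```python
import time

import torch

# Toy stand-ins for the assumed training context.
model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
lr_schedulers = [torch.optim.lr_scheduler.LambdaLR(optimizer, lambda s: 1.0)]
time_start = time.perf_counter()
current_step, total_steps = 250, 1000  # stand-ins for train_state.step / total steps

# Last learning rate of the first scheduler (first param group).
last_lr = lr_schedulers[0].get_last_lr()[0]

# Extrapolate remaining time from the average time per completed step.
elapsed = time.perf_counter() - time_start
eta_seconds = elapsed / max(current_step, 1) * (total_steps - current_step)
```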


738-747: Well-structured metrics logging with gradient norm.

This implementation successfully adds the requested grad_norm metric to the logging system, along with learning rate and skipped step counts, using a clean dictionary structure.
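A plausible shape for that call (loss_metrics/grad_norm matches the diff shown further down; the other keys and attribute names are assumptions, and metric_logger, train_state, grad_norm, and last_lr are assumed to come from the surrounding training loop):

```python
metric_logger.log(
    train_state.step,
    global_avg_loss,
    global_max_loss,
    extra_metrics={
        "loss_metrics/grad_norm": grad_norm.item(),           # global gradient norm
        "optimizer/last_lr": last_lr,                         # assumed key name
        "optimizer/n_skipped_steps": train_state.skipped_step,  # assumed counter name
    },
)
```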


749-752: Improved console output with training metrics.

The console output now includes learning rate, gradient norm, and a helpful elapsed/ETA time display, improving monitoring capabilities for users.
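The console line might then be assembled along these lines (the format string is illustrative, not the repository's actual output):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("flame")

# Stand-in values for the assumed training-loop context.
step, avg_loss, last_lr = 250, 2.3456, 3.0e-4
grad_norm, elapsed, eta_seconds = 1.2345, 900.0, 2700.0

logger.info(
    f"step {step:>6}  loss {avg_loss:7.4f}  lr {last_lr:.2e}  "
    f"grad_norm {grad_norm:6.4f}  "
    f"elapsed {elapsed / 60:5.1f}m  eta {eta_seconds / 60:5.1f}m"
)
```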


coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
3rdparty/torchtitan (1)

1-1289: Address Pipeline Lint Issues in Third-party Code

The pipeline log reports multiple lint issues in this file (e.g., E501 for lines that are too long, several E203 whitespace problems, as well as F841/F401 warnings for unused variables and imports). Since these issues are within a third-party subproject, you might consider one of the following:

  • Exclude the 3rdparty directory from linting checks if upstream compliance isn’t feasible (see the sketch after this comment).
  • Coordinate with the upstream repository to resolve these style issues if they impact integration.

Please verify that the lint configuration aligns with your policy regarding third-party code.
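For the first option, a minimal sketch of such an exclusion, assuming flake8 is the linter (the E/F error codes suggest it, but this is an assumption; adjust to the repository's actual lint setup):

```ini
# .flake8 (or the [flake8] section of setup.cfg); hypothetical configuration
[flake8]
# Skip vendored third-party code that is not held to this repo's style rules.
extend-exclude = 3rdparty
```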

🧰 Tools
🪛 GitHub Actions: lint

  • [error] 65-65: E501 line too long (131 > 127 characters)
  • [error] 132-132: E203 whitespace before ':'
  • [error] 146-146: F821 undefined name 'n_microbatches'
  • [error] 71-71: E203 whitespace before ':'
  • [error] 39-39: E741 ambiguous variable name 'l'
  • [error] 59-59: E731 do not assign a lambda expression, use a def
  • [error] 92-92: E203 whitespace before ':'
  • [error] 285-285: E203 whitespace before ':'
  • [error] 948-948: E203 whitespace before ':'
  • [error] 952-952: E203 whitespace before ':'
  • [error] 1268-1268: E203 whitespace before ':'
  • [error] 1289-1289: E203 whitespace before ':'
  • [error] 205-205: E203 whitespace before ':'
  • [error] 158-158: E203 whitespace before ':'
  • [error] 82-82: F841 local variable 'token_indices' is assigned to but never used
  • [error] 262-262: E203 whitespace before ':'
  • [error] 306-306: E501 line too long (131 > 127 characters)
  • [error] 388-388: F841 local variable 'mean_prob' is assigned to but never used
  • [error] 11-11: F401 'functools' imported but unused
  • [error] 15-15: F401 'typing.Any' imported but unused
  • [error] 15-15: F401 'typing.Dict' imported but unused
  • [error] 15-15: F401 'typing.Optional' imported but unused
  • [error] 20-20: F401 'triton.Config as TConfig' imported but unused
  • [error] 21-21: F401 'triton.runtime.driver' imported but unused
  • [error] 25-25: F401 'tma_autotuning.ALIGN_SIZE_M' imported but unused
  • [error] 25-25: E402 module level import not at top of file
  • [error] 316-316: F841 local variable 'c_desc_ptr' is assigned to but never used
  • [error] 347-347: F841 local variable 'm_offset' is assigned to but never used
  • [error] 348-348: F841 local variable 'n_offset' is assigned to but never used
  • [error] 851-851: F841 local variable 'G' is assigned to but never used
  • [error] 885-885: F541 f-string is missing placeholders
  • [error] 902-902: F541 f-string is missing placeholders
  • [error] 1071-1071: F841 local variable 'has_tma_support' is assigned to but never used
  • [error] 11-11: F401 'functools' imported but unused
  • [error] 14-14: F401 'typing.Any' imported but unused
  • [error] 14-14: F401 'typing.Optional' imported but unused
  • [error] 14-14: F401 'typing.Tuple' imported but unused
  • [error] 19-19: F401 'triton.Config as TConfig' imported but unused
  • [error] 8-8: F401 'logging' imported but unused
  • [error] 13-13: F401 'torch.nn' imported but unused
  • [error] 45-45: E203 whitespace before ':'
  • [error] 347-347: E203 whitespace before ':'
  • [error] 296-296: E203 whitespace before ':'
  • [error] 338-338: E501 line too long (129 > 127 characters)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7f8789e and a0b1517.

📒 Files selected for processing (2)
  • 3rdparty/torchtitan (1 hunks)
  • flame/train.py (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: lint
3rdparty/torchtitan: the same lint findings as listed in the nitpick comment above.

🔇 Additional comments (2)
3rdparty/torchtitan (1)

1-1: Subproject Commit Update Verified

The subproject commit has been updated to 5e2033c75c3c6e82882f87631942b942fde2c0d3, which appears to incorporate the necessary changes to support the reintroduced grad_norm metric through the new logging API. This enables improved tracking of gradient norms during model training, as intended by the PR objectives.

flame/train.py (1)

732-737: LGTM! The grad_norm metric is successfully added back.

This change enhances the logging functionality by including the gradient norm as an extra metric, which is valuable for monitoring training dynamics and diagnosing potential gradient issues during model training.

flame/train.py Outdated
    train_state.step,
    global_avg_loss,
    global_max_loss,
    extra_metrics={"loss_metrics/grad_norm": grad_norm.item()},
Member

How about adding some rounding for grad_norm, e.g., .4f?

Contributor Author

ah sure!

Contributor Author

actually, this is not used for printing but for logging via e.g. wandb, and there the formatting is not an issue
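To make that point concrete, a minimal self-contained sketch of how such a backend consumes raw floats (assuming wandb, as the comment suggests; the metric key mirrors the diff above, and the stand-in values replace the training-loop variables):

```python
import wandb

run = wandb.init(project="flame", mode="offline")  # offline demo run

grad_norm_value = 1.23456789  # stand-in for grad_norm.item()
step = 100                    # stand-in for train_state.step

# wandb stores the raw float; rounding is a display concern handled in the UI,
# so truncating with .4f before logging would only discard precision.
wandb.log({"loss_metrics/grad_norm": grad_norm_value}, step=step)
run.finish()
```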

yzhangcs (Member) commented Apr 8, 2025

local running results:
[screenshot omitted]

yzhangcs (Member) commented Apr 8, 2025

The logging looks good enough to me for now.
Maybe we need a one-line log display per step, which could be achieved by making PRs to torchtitan.

yzhangcs merged commit fa5e448 into fla-org:main on Apr 8, 2025
0 of 2 checks passed