
chore(pt): mv the input stat update to model_change_out_bias #5266

wanghan-iapcm wants to merge 2 commits into deepmodeling:master from wanghan-iapcm:chore-explicit-fitting-input-stat

Conversation

@wanghan-iapcm (Collaborator) commented Feb 25, 2026

Clean up the logic of change_out_bias: it now only changes the output bias and does not update the fitting input stat.

Summary by CodeRabbit

  • Refactor

    • Adjusted how bias-adjustment triggers recomputation of fitting/input statistics during model training, streamlining the behavior and adding a conditional recompute path for certain model types.
  • Tests

    • Added cross-checking tests to ensure the new and previous bias/statistic update paths produce consistent fitting statistics.

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: f599b6a696


@coderabbitai bot commented Feb 25, 2026

📝 Walkthrough

Removed the statistic computation from model-layer change_out_bias and added a conditional call to the fitting network's compute_input_stats inside model_change_out_bias for DP models when bias_adjust_mode == "set-by-statistic". Tests were added to verify consistency.
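The relocated flow can be sketched as follows. This is an illustrative stub built around the names used in this walkthrough (DPModelCommon, model_change_out_bias, compute_input_stats), not the actual deepmd source:

```python
class FittingNet:
    """Stub fitting network that records whether input stats were recomputed."""

    def __init__(self):
        self.stats_computed = False
        self.samples = None

    def compute_input_stats(self, sample_func):
        # In deepmd this averages fitting inputs over sampled frames;
        # here we only record that the call happened and with what data.
        self.samples = sample_func()
        self.stats_computed = True


class DPModelCommon:
    """Stub standing in for deepmd's DP model base class."""

    def __init__(self):
        self._fitting = FittingNet()
        self.bias_changed = False

    def get_fitting_net(self):
        return self._fitting

    def change_out_bias(self, sample_func, bias_adjust_mode):
        # After this PR, the model-level method only changes the output bias.
        self.bias_changed = True


def model_change_out_bias(_model, _sample_func, _bias_adjust_mode):
    # change_out_bias no longer touches fitting input stats...
    _model.change_out_bias(_sample_func, bias_adjust_mode=_bias_adjust_mode)
    # ...the trainer recomputes them here, only for DP models and only
    # in the "set-by-statistic" mode.
    if isinstance(_model, DPModelCommon) and _bias_adjust_mode == "set-by-statistic":
        _model.get_fitting_net().compute_input_stats(_sample_func)


model = DPModelCommon()
model_change_out_bias(model, lambda: ["frame0", "frame1"], "set-by-statistic")
print(model.bias_changed, model.get_fitting_net().stats_computed)  # True True
```

The point of the move is single responsibility: change_out_bias adjusts only the bias, while the trainer-level model_change_out_bias decides whether input statistics must also be refreshed.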

Changes

PyTorch model & training (deepmd/pt/model/model/make_model.py, deepmd/pt/train/training.py):
Deleted the compute_fitting_input_stat(...) call from change_out_bias and introduced a conditional get_fitting_net().compute_input_stats(_sample_func) invocation inside model_change_out_bias for DPModelCommon when bias_adjust_mode == "set-by-statistic".

Paddle model & training (deepmd/pd/model/model/make_model.py, deepmd/pd/train/training.py):
Same change as PyTorch: removed the compute_fitting_input_stat call from the model-level change_out_bias and added conditional fitting-net stats computation in model_change_out_bias for DP models under the statistic mode.

Tests, PyTorch (source/tests/pt/test_training.py):
Added TestModelChangeOutBiasFittingStat.test_fitting_stat_consistency() to assert equality of out_bias, fparam_avg, and fparam_inv_std between the old and new code paths.

Tests, Paddle (source/tests/pd/test_training.py):
Added TestModelChangeOutBiasFittingStat.test_fitting_stat_consistency() (duplicated block present) to assert consistency of out_bias and fitting statistics between the old and new code paths.
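The cross-check these tests perform can be sketched like this. ModelStats and stats_consistent are hypothetical stand-ins; the real tests compare tensors on models trained through each code path:

```python
import math


class ModelStats:
    """Minimal holder for the three statistics the tests compare."""

    def __init__(self, out_bias, fparam_avg, fparam_inv_std):
        self.out_bias = out_bias
        self.fparam_avg = fparam_avg
        self.fparam_inv_std = fparam_inv_std


def stats_consistent(old, new, rtol=1e-10):
    """Return True when both code paths produced the same statistics."""
    return all(
        math.isclose(getattr(old, name), getattr(new, name), rel_tol=rtol)
        for name in ("out_bias", "fparam_avg", "fparam_inv_std")
    )


old_path = ModelStats(out_bias=0.5, fparam_avg=1.25, fparam_inv_std=0.8)
new_path = ModelStats(out_bias=0.5, fparam_avg=1.25, fparam_inv_std=0.8)
print(stats_consistent(old_path, new_path))  # True
```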

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • iProzd
  • njzjz
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
✅ Passed checks (2 passed)
Description Check: ✅ Passed (check skipped because CodeRabbit's high-level summary is enabled).
Title Check: ✅ Passed. The title 'chore(pt): mv the input stat update to model_change_out_bias' accurately describes the main change: moving the input stat update logic from change_out_bias to model_change_out_bias, a refactoring of the bias adjustment workflow.


@coderabbitai bot left a comment

🧹 Nitpick comments (2)
source/tests/pt/test_training.py (1)

699-699: Nit: replace lambda assignment with a named function to avoid noqa suppression.

♻️ Proposed refactor
-        sample_func = lambda: merged  # noqa: E731
+        def sample_func() -> list:
+            return merged
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/pt/test_training.py` at line 699, Replace the lambda assigned to
sample_func with a proper named function to avoid the noqa suppression: change
the assignment "sample_func = lambda: merged" to a function definition like "def
sample_func(): return merged" (or equivalent) and remove the "# noqa: E731"
comment; update any references to sample_func to use the new function as before.
deepmd/pt/train/training.py (1)

1758-1763: Hoist the import to improve code organization.

The import path is correct and PT models (EnergyModel, PolarModel, PropertyModel, etc.) properly inherit from DPModelCommon in deepmd/pt/model/model/dp_model.py, so the isinstance check works as intended. There is no circular import concern: deepmd/pt/train/training.py imports from deepmd.pt.model.model, but the model package does not import from training.

The deferred import sits awkwardly mid-function between unrelated statements. Moving it to the module level or to the top of model_change_out_bias() would improve readability without risk.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pt/train/training.py` around lines 1758 - 1763, The local import of
DPModelCommon should be hoisted out of the middle of the function: move the
"from deepmd.pt.model.model.dp_model import DPModelCommon" to the module top (or
at the start of model_change_out_bias()) and remove the inline import; keep the
existing isinstance(_model, DPModelCommon) and the conditional block that calls
_model.get_fitting_net().compute_input_stats(_sample_func) unchanged so the
logic using _bias_adjust_mode, _model, compute_input_stats, and _sample_func
still runs the same after the import is relocated.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 65eea4b and f599b6a.

📒 Files selected for processing (3)
  • deepmd/pt/model/model/make_model.py
  • deepmd/pt/train/training.py
  • source/tests/pt/test_training.py
💤 Files with no reviewable changes (1)
  • deepmd/pt/model/model/make_model.py

@coderabbitai bot left a comment

🧹 Nitpick comments (1)
deepmd/pd/train/training.py (1)

1351-1356: Move the deferred import inside the if guard.

The import of DPModelCommon is currently executed unconditionally on every call to model_change_out_bias, even when _bias_adjust_mode != "set-by-statistic". Placing it inside the conditional keeps the import tied to the only code path that actually needs it and matches the intent of a lazy/deferred import.

♻️ Proposed refactor
-    from deepmd.pd.model.model.dp_model import (
-        DPModelCommon,
-    )
-
-    if isinstance(_model, DPModelCommon) and _bias_adjust_mode == "set-by-statistic":
+    if _bias_adjust_mode == "set-by-statistic":
+        from deepmd.pd.model.model.dp_model import (
+            DPModelCommon,
+        )
+    if _bias_adjust_mode == "set-by-statistic" and isinstance(_model, DPModelCommon):
         _model.get_fitting_net().compute_input_stats(_sample_func)

Or more concisely:

-    from deepmd.pd.model.model.dp_model import (
-        DPModelCommon,
-    )
-
-    if isinstance(_model, DPModelCommon) and _bias_adjust_mode == "set-by-statistic":
-        _model.get_fitting_net().compute_input_stats(_sample_func)
+    if _bias_adjust_mode == "set-by-statistic":
+        from deepmd.pd.model.model.dp_model import DPModelCommon
+        if isinstance(_model, DPModelCommon):
+            _model.get_fitting_net().compute_input_stats(_sample_func)
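Generically, the guarded lazy-import pattern this comment recommends looks like the following stdlib-only sketch; maybe_process and the statistics module are stand-ins for the real model code and the heavyweight DPModelCommon import:

```python
def maybe_process(data, mode):
    # Defer the import to the only branch that needs it, mirroring the
    # suggestion to import DPModelCommon inside the "set-by-statistic" path.
    if mode == "set-by-statistic":
        import statistics  # stands in for the heavyweight module

        return statistics.mean(data)
    # Any other bias-adjust mode skips both the import and the computation.
    return None


print(maybe_process([1.0, 2.0, 3.0], "set-by-statistic"))  # 2.0
print(maybe_process([1.0], "change-by-statistic"))  # None
```

Python caches modules in sys.modules, so repeated calls pay the import cost only once; the win is that call paths that never take the guarded branch avoid it entirely.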
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pd/train/training.py` around lines 1351 - 1356, Move the deferred
import of DPModelCommon into the conditional so it only happens when needed:
inside the model_change_out_bias flow, replace the current top-level import and
unconditional isinstance check with an if _bias_adjust_mode ==
"set-by-statistic": from deepmd.pd.model.model.dp_model import DPModelCommon; if
isinstance(_model, DPModelCommon):
_model.get_fitting_net().compute_input_stats(_sample_func). This ensures the
DPModelCommon import occurs only when the "set-by-statistic" path executes and
preserves the existing call to
get_fitting_net().compute_input_stats(_sample_func).

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f599b6a and 3b82392.

📒 Files selected for processing (3)
  • deepmd/pd/model/model/make_model.py
  • deepmd/pd/train/training.py
  • source/tests/pd/test_training.py
💤 Files with no reviewable changes (1)
  • deepmd/pd/model/model/make_model.py

@codecov bot commented Feb 25, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.00%. Comparing base (4ddc37d) to head (3b82392).
⚠️ Report is 8 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5266      +/-   ##
==========================================
- Coverage   82.12%   82.00%   -0.13%     
==========================================
  Files         740      750      +10     
  Lines       74473    75083     +610     
  Branches     3616     3615       -1     
==========================================
+ Hits        61162    61571     +409     
- Misses      12149    12347     +198     
- Partials     1162     1165       +3     


@Chengqian-Zhang (Collaborator) left a comment

See below

@iProzd iProzd enabled auto-merge February 25, 2026 12:33
@iProzd iProzd added this pull request to the merge queue Feb 26, 2026