update NvNMD training code #4800
base: devel
Conversation
update nvnmd training code
Signed-off-by: jiongwalai <89775787+jiongwalai@users.noreply.github.com>
for more information, see https://pre-commit.ci
📝 Walkthrough
The updates introduce new configuration options, device-dependent logic, and enhanced descriptor processing for the NVNMD pipeline. Key changes include support for frozen-model initialization, more robust configuration copying, new descriptor quantization and recovery mechanisms, device-aware parameter selection, and updated mapping logic with additional components. Test expectations and configuration data are revised to align with these functional updates.
Sequence Diagram(s)
sequenceDiagram
participant User
participant CLI
participant Config
participant Trainer
participant Model
User->>CLI: Invoke train-nvnmd with --init-frz-model
CLI->>Config: Parse arguments (including device, init_frz_model)
Config->>Config: Apply device-dependent logic (vu9p/vu13p)
CLI->>Trainer: Call train_nvnmd(INPUT, init_model, restart, init_frz_model, ...)
Trainer->>Trainer: Use deep copies for job configs
Trainer->>Model: Initialize model (possibly from frozen model)
Model->>Model: If quantized descriptor recovery enabled, use build_recovered
Model-->>Trainer: Model ready for training
Trainer-->>CLI: Training proceeds
sequenceDiagram
participant Model
participant Descriptor
participant MapTable
Model->>Descriptor: Pass input for descriptor computation
Descriptor->>Descriptor: If quantized, call build_recovered
Descriptor->>MapTable: Use updated s, h, k components for mapping
MapTable->>MapTable: Build/lookup mapping for s, h, k (higher resolution, new mode)
MapTable-->>Descriptor: Return mapped values
Descriptor-->>Model: Return recovered/processed descriptor
sequenceDiagram
participant Config
participant Data
participant Trainer
Config->>Data: Generate config dicts (deepcopy, add device/seed/M3)
Data-->>Config: Return updated config
Config->>Trainer: Pass config for training/initialization
Trainer-->>Config: Use config for model setup and training
Actionable comments posted: 5
🔭 Outside diff range comments (1)
deepmd/main.py (1)
243-248
: 🛠️ Refactor suggestion
CLI breaking change: `-f` short option repurposed
`-f` used to map to `--finetune`; it now maps to `--init-frz-model`, while `--finetune` is moved to `-t`. Existing user scripts will break silently. Consider either:
- keeping the original mapping and adding a new short flag (e.g. `-F`) for `init-frz-model`, or
- providing both aliases (`-f` and `-t`) for `--finetune` and issuing a deprecation warning for `-f`.
This guards backwards compatibility.
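A minimal sketch of the second option, assuming a plain argparse setup rather than the actual deepmd/main.py parser; the action class name and messages are illustrative.

```python
# Illustrative only: keep "-f" working for --finetune while steering users to "-t".
import argparse
import warnings


class WarnDeprecatedShortFlag(argparse.Action):
    """Store the value, but warn when the deprecated "-f" spelling is used."""

    def __call__(self, parser, namespace, values, option_string=None):
        if option_string == "-f":
            warnings.warn(
                "'-f' for --finetune is deprecated; use '-t' or '--finetune'",
                FutureWarning,
            )
        setattr(namespace, self.dest, values)


parser = argparse.ArgumentParser()
parser.add_argument("-t", "-f", "--finetune", action=WarnDeprecatedShortFlag)
parser.add_argument("--init-frz-model")  # no short flag, avoiding the clash

args = parser.parse_args(["-f", "pretrained.pb"])  # warns but still works
print(args.finetune)  # -> pretrained.pb
```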
🧹 Nitpick comments (7)
deepmd/main.py (1)
781-787
: Duplicate option definition – risk of diverging behaviour
The same `--init-frz-model / -f` flag is re-defined for the `train-nvnmd` sub-parser. Keeping two independent definitions invites accidental drift (e.g. help text, default). Refactor into a small helper that registers the shared mutually-exclusive group for both sub-parsers, ensuring consistency in one place.
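A possible shape for that helper, as a sketch only; the flag set and help strings are assumptions, not the actual deepmd/main.py definitions.

```python
# Hypothetical helper so both sub-parsers share one definition of the group.
import argparse


def add_init_source_group(parser: argparse.ArgumentParser) -> None:
    """Register the mutually exclusive model-initialization flags."""
    group = parser.add_mutually_exclusive_group()
    group.add_argument("-i", "--init-model", help="initialize from a checkpoint")
    group.add_argument("-r", "--restart", help="restart from a checkpoint")
    group.add_argument("-f", "--init-frz-model", help="initialize from a frozen model")


root = argparse.ArgumentParser()
subparsers = root.add_subparsers(dest="command")
for name in ("train", "train-nvnmd"):
    add_init_source_group(subparsers.add_parser(name))
```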
deepmd/tf/nvnmd/entrypoints/train.py (1)
70-75
: Seed injection is duplicated – centralise to avoid divergence
`seed` is picked from `jdata_nvnmd_` twice. Extract once into a local variable and reuse:
- "descriptor": {
- "seed": jdata_nvnmd_.get("seed", 1),
+ seed = jdata_nvnmd_.get("seed", 1)
+ "descriptor": {
+ "seed": seed,
...
- "fitting_net": {"seed": jdata_nvnmd_.get("seed", 1)},
+ "fitting_net": {"seed": seed},
Reduces chance of the two defaults diverging in future edits.
deepmd/tf/nvnmd/entrypoints/wrap.py (3)
140-141
: Consider keeping f-string formatting for better readability.
The change from f-strings to % formatting is a step backward in terms of Python best practices. F-strings are more readable and performant.
- log.info("%s: %d x % d bit" % (k, h, w * 4))
- FioTxt().save("nvnmd/wrap/h%s.txt" % (k), d)
+ log.info(f"{k}: {h} x {w * 4} bit")
+ FioTxt().save(f"nvnmd/wrap/h{k}.txt", d)
469-473
: Specify explicit dtype for numpy arrays.
For clarity and consistency, specify the dtype explicitly when creating numpy arrays.
- nrs = np.zeros(nr)
- ncs = np.zeros(nc)
- wrs = np.zeros([nr, nc])
- wcs = np.zeros([nr, nc])
+ nrs = np.zeros(nr, dtype=GLOBAL_NP_FLOAT_PRECISION)
+ ncs = np.zeros(nc, dtype=GLOBAL_NP_FLOAT_PRECISION)
+ wrs = np.zeros([nr, nc], dtype=GLOBAL_NP_FLOAT_PRECISION)
+ wcs = np.zeros([nr, nc], dtype=GLOBAL_NP_FLOAT_PRECISION)
548-549
: Remove commented debug code.
Commented debug code should be removed to keep the codebase clean.
- # k = 2**23
- # print(dsws[0][42] * k)
deepmd/tf/nvnmd/entrypoints/mapt.py (1)
455-455
: Fix typo in comment.
-# N+1 ranther than N for calculating defference
+# N+1 rather than N for calculating difference
deepmd/tf/nvnmd/utils/config.py (1)
170-177
: Remove redundant logging statements.
There are three consecutive log statements for `self.config["dscp"]`, which seems excessive. Consider keeping only one or making them conditional on debug level.
-log.info(self.config["dscp"])
 dp_in = {"type_map": fioObj.get(jdata, "type_map", [])}
 self.config["dpin"] = fioObj.update(dp_in, self.config["dpin"])
 #
-log.info(self.config["dscp"])
 self.init_net_size()
 self.init_value()
 log.info(self.config["dscp"])
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- deepmd/main.py (1 hunks)
- deepmd/tf/descriptor/se_atten.py (4 hunks)
- deepmd/tf/nvnmd/data/data.py (13 hunks)
- deepmd/tf/nvnmd/descriptor/se_atten.py (4 hunks)
- deepmd/tf/nvnmd/entrypoints/mapt.py (11 hunks)
- deepmd/tf/nvnmd/entrypoints/train.py (5 hunks)
- deepmd/tf/nvnmd/entrypoints/wrap.py (12 hunks)
- deepmd/tf/nvnmd/utils/config.py (18 hunks)
- deepmd/tf/utils/type_embed.py (1 hunks)
- deepmd/utils/argcheck_nvnmd.py (2 hunks)
- source/tests/tf/test_nvnmd_entrypoints.py (9 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
deepmd/tf/descriptor/se_atten.py (1)
- deepmd/tf/nvnmd/descriptor/se_atten.py (1): build_recovered (79-123)
deepmd/tf/nvnmd/data/data.py (1)
- data/raw/copy_raw.py (1): copy (11-71)
deepmd/tf/nvnmd/entrypoints/mapt.py (4)
- deepmd/tf/nvnmd/utils/config.py (1): init_from_jdata (55-81)
- deepmd/tf/utils/sess.py (1): run_sess (12-50)
- deepmd/tf/infer/deep_eval.py (2): sess (311-314), sess (1223-1226)
- deepmd/utils/data.py (1): avg (379-394)
🪛 Ruff (0.11.9)
deepmd/tf/nvnmd/descriptor/se_atten.py
202-202: Local variable `M3` is assigned to but never used. Remove assignment to unused variable `M3` (F841)
deepmd/tf/nvnmd/utils/config.py
55-55: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
🔇 Additional comments (21)
deepmd/tf/nvnmd/entrypoints/train.py (2)
156-160
: Deep-copying template dicts – good, but templates should be immutable
Using `copy.deepcopy` avoids side-effects – nice! Consider making `jdata_cmd_train` / `jdata_cmd_freeze` constants (UPPER_CASE) and never mutating them; deep-copy whenever you need a writable version.
Also applies to: 193-197
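One way the immutable-template idea could look; the constant name and dictionary keys are illustrative, not the real train.py data.

```python
# Sketch: module-level template is treated as read-only; callers always
# work on a deep copy.
import copy

JDATA_CMD_TRAIN = {
    "INPUT": "train.json",
    "init_model": None,
    "restart": None,
    "init_frz_model": None,
}


def make_train_jdata(**overrides):
    jdata = copy.deepcopy(JDATA_CMD_TRAIN)  # never mutate the template itself
    jdata.update(overrides)
    return jdata


jdata = make_train_jdata(init_frz_model="frozen_model.pb")
```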
147-151
: ✅ Verification successful
Passing `init_frz_model` to generic `train()` may raise `TypeError`.
`jdata_cmd_train` now contains `init_frz_model`, and the dict is forwarded verbatim: `train(**jdata)`. If `deepmd.tf.entrypoints.train.train()` has not been updated to accept this kw-arg, runtime will fail. Expected: parameter list includes `init_frz_model` or `**kwargs`. If absent, either update the signature or remove the key before the call.
🏁 Script executed:
#!/bin/bash
# Verify `train()` signature contains the new parameter
rg -n --context 2 $'def +train\\(' deepmd/tf/entrypoints | head
Length of output: 269
🏁 Script executed:
#!/bin/bash
# Show the full signature of train() to verify accepted parameters
sed -n '50,100p' deepmd/tf/entrypoints/train.py
Length of output: 1516
No action needed: `train()` already accepts `init_frz_model`.
The `train()` signature in deepmd/tf/entrypoints/train.py (lines 55–68) includes both `init_frz_model` and `**kwargs`, so forwarding `init_frz_model` via `train(**jdata)` will not raise a `TypeError`.
source/tests/tf/test_nvnmd_entrypoints.py (1)
475-489
: Updated golden numbers only – ensure underlying logic is unchanged
Large blocks of expected values were replaced without explanation in the test docstrings. Verify that they stem from intentional algorithmic changes and not from an unnoticed regression or random seed drift.
Also applies to: 791-805, 576-704
deepmd/tf/nvnmd/entrypoints/wrap.py (2)
519-546
: Good implementation of k component handling.
The addition of the k component to the mapping tables is implemented correctly, and the use of `copy.deepcopy` prevents reference issues when appending to lists. The `kkk` factor appropriately handles type boundaries.
566-566
: Dimension update correctly reflects the addition of the k component.
The change from 2 to 3 dimensions properly accounts for the new k component alongside the existing s and h components.
deepmd/tf/descriptor/se_atten.py (1)
719-731
: Proper integration of the new descriptor recovery mechanism.
The conditional block correctly checks for NVNMD quantization before calling `build_recovered`, and the returned `recovered_switch` is appropriately stored for later use in filtering.
deepmd/tf/nvnmd/descriptor/se_atten.py (2)
79-124
: Well-implemented descriptor recovery function.
The `build_recovered` function properly normalizes the s and h components, applies filtering operations, and returns both the normalized descriptor and the k component as a smoothing switch. The use of `tf.ensure_shape` provides good runtime validation.
241-252
: Verify the formula change is intentional.
The computation has changed from `G = Gs * Gt` to `G = Gs * Gt + Gs`. This adds a residual connection which may be intentional, but please confirm this mathematical change is correct for the NVNMD algorithm. The modulation of `two_embd` by `recovered_switch` is correctly implemented for smoothing.
deepmd/tf/nvnmd/data/data.py (6)
1-1
: Good use of deep copy for configuration data isolation.
The change from shallow copy to deep copy ensures that nested dictionaries are properly isolated, preventing unintended mutations when configurations are modified. This is particularly important for configuration data that contains nested structures.
Also applies to: 125-127, 261-263, 335-336, 404-405
15-15
: Seed parameters correctly added for reproducibility.
The addition of `"seed": 1` to descriptor and fitting network configurations ensures reproducible training runs, which aligns with the PR objectives.
Also applies to: 45-45, 149-149, 179-179
158-158
: Verify the device-specific parameter adjustments.
The following changes were made:
- Added the `"M3": 2` parameter to v1 configurations
- Incremented `SUB_VERSION` from 1 to 2
- Reduced `NSTDM` from 128 to 64 and `NSTDM_M2` from 4 to 2 in v1_ni256
- Added `"NBIT_NSTEP": 8`
Please confirm these parameter adjustments are correct for the vu9p device configuration and document the rationale for these specific values.
Also applies to: 210-210, 266-268, 273-273, 253-253
294-294
: Device field correctly added for FPGA support.
The addition of `"device": "vu9p"` supports the PR objective of enabling different FPGA devices (vu9p and vu13p).
Also applies to: 363-363
344-346
: Descriptor configuration properly updated with granular controls.
The replacement of `"tebd_input_mode": "strip"` with three boolean flags (`"stripped_type_embedding": True`, `"smooth_type_embdding": True`, `"set_davg_zero": False`) provides more precise control over the embedding behavior, as specified in the PR objectives.
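For reference, a minimal before/after sketch of the descriptor settings named above, written as Python dicts with all other keys omitted.

```python
# Old form (single string switch) vs. new form (three explicit booleans).
descriptor_old = {"tebd_input_mode": "strip"}

descriptor_new = {
    "stripped_type_embedding": True,
    "smooth_type_embdding": True,  # spelling kept exactly as in the config
    "set_davg_zero": False,
}
```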
331-331
: Training data configuration correctly extended.
The addition of `"set_prefix": "set"` to the training data configuration is appropriate for dataset organization.
Also applies to: 400-400
deepmd/tf/nvnmd/entrypoints/mapt.py (3)
94-104
: New mode 2 implementation for recovered switch handling.
The change from `Gs_Gt_mode = 1` to `Gs_Gt_mode = 2` introduces a new calculation mode that uses `xyz_scatter * two_embd * recovered_switch + xyz_scatter` with zero shifts for both Gs and Gt. This appears to integrate with the new `k` tensor (recovered switch) functionality.
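A small numpy illustration of the mode-2 combination quoted above; the shapes and random values are placeholders, not taken from mapt.py.

```python
import numpy as np

rng = np.random.default_rng(0)
xyz_scatter = rng.random((4, 8))        # Gs-like two-body embedding
two_embd = rng.random((4, 8))           # Gt-like type-embedding factor
recovered_switch = rng.random((4, 1))   # k component in [0, 1]

# Mode 2: the product is modulated by the switch, plus a residual Gs path,
# so the output falls back to xyz_scatter when the switch goes to zero.
out = xyz_scatter * two_embd * recovered_switch + xyz_scatter
```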
407-414
: New k tensor (recovered switch) correctly implemented.
The implementation adds a new mapping component `k`, calculated as `k = -kk^3 + 1` where `kk = 1 - rmin * s`, clipped to [0, 1]. This recovered switch function and its gradients are properly integrated into the mapping table structure alongside the existing `s` and `h` components.
Also applies to: 139-147
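A numpy sketch of that switch, under the reading that `kk` is the quantity clipped to [0, 1]; `rmin` and the sample `s` values are placeholders.

```python
import numpy as np


def recovered_switch(s, rmin):
    """k = -kk**3 + 1 with kk = clip(1 - rmin * s, 0, 1)."""
    kk = np.clip(1.0 - rmin * s, 0.0, 1.0)
    return -kk**3 + 1.0


s = np.linspace(0.0, 2.0, 5)
print(recovered_switch(s, rmin=1.0))  # rises smoothly from 0 toward 1
```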
550-551
: Mapping resolution appropriately increased for finer control.
The s2g mapping resolution has been doubled (N: 4096→8192, N2: 16→32) and the warning threshold increased from 16.0 to 32.0. These changes provide finer granularity in the mapping table, which should improve accuracy for the new device configurations.
Also applies to: 153-153, 554-555
deepmd/tf/nvnmd/utils/config.py (4)
47-47
: Device-specific configuration properly implemented.
The code correctly handles device-specific parameters for "vu9p" and "vu13p" devices:
- NSTEP values are set based on sel thresholds (128, 160, 192, 224, 256)
- NSTDM parameters differ between devices (vu13p: 32, vu9p: 64)
Please verify these device-specific values are correct for the respective FPGA hardware.
Also applies to: 61-61, 219-247, 286-295
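An illustrative sketch of the device-dependent selection described in this comment; the dictionary layout and error message are assumptions, not the actual NvnmdConfig code.

```python
NSTDM_BY_DEVICE = {"vu9p": 64, "vu13p": 32}


def nstdm_for(device: str) -> int:
    if device not in NSTDM_BY_DEVICE:
        raise ValueError(f"unsupported device {device!r}; expected 'vu9p' or 'vu13p'")
    return NSTDM_BY_DEVICE[device]


print(nstdm_for("vu9p"))  # -> 64
```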
366-381
: Sel parameter validation and rounding correctly implemented.
The code properly validates and rounds the `sel` parameter to hardware-supported values (128, 160, 192, 224, 256) for version 1 configurations. This ensures compatibility with FPGA implementations.
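A minimal sketch of rounding `sel` up to the nearest supported value; the helper name and error text are illustrative, not the config.py implementation.

```python
SUPPORTED_SEL = (128, 160, 192, 224, 256)


def round_sel(sel: int) -> int:
    """Round sel up to the nearest hardware-supported value."""
    for cap in SUPPORTED_SEL:
        if sel <= cap:
            return cap
    raise ValueError(f"sel={sel} exceeds the maximum supported value {SUPPORTED_SEL[-1]}")


print(round_sel(130))  # -> 160
```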
2-2
: Deep copy usage ensures proper configuration isolation.
The consistent use of `copy.deepcopy()` for configuration dictionaries prevents unintended mutations when configurations are modified, maintaining data integrity across different configuration variants.
Also applies to: 50-50, 140-144, 149-153, 438-438
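A generic illustration of why the deep copy matters for nested config dicts (not deepmd data structures).

```python
import copy

template = {"descriptor": {"seed": 1}}

shallow = copy.copy(template)
shallow["descriptor"]["seed"] = 42     # nested dict is shared with the template
print(template["descriptor"]["seed"])  # 42

template = {"descriptor": {"seed": 1}}
deep = copy.deepcopy(template)
deep["descriptor"]["seed"] = 42        # template stays untouched
print(template["descriptor"]["seed"])  # 1
```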
355-356
: Warning threshold correctly updated to match mapping changes.
The increase from 16.0 to 32.0 is consistent with the expanded mapping resolution in `mapt.py` and allows for a wider valid range of switch function values.
if nvnmd_cfg.enable:
    self.use_tebd_bias = True
🛠️ Refactor suggestion
Implicitly overriding `use_tebd_bias` may break caller expectations
`self.use_tebd_bias` is force-set to `True` whenever `nvnmd_cfg.enable` is True, regardless of the value passed in the constructor. This silently discards a user-supplied `use_tebd_bias=False`, which is surprising and makes the public parameter meaningless as soon as NVNMD is on.
-if nvnmd_cfg.enable:
- self.use_tebd_bias = True
+if nvnmd_cfg.enable and not self.use_tebd_bias:
+ log.debug("nvnmd enabled → enabling type-embedding bias")
+ self.use_tebd_bias = True
At minimum, emit a warning or honour the original flag unless NVNMD strictly requires the bias.
Document the behaviour in the class docstring as well.
🤖 Prompt for AI Agents
In deepmd/tf/utils/type_embed.py around lines 190 to 191, the code forcibly sets
self.use_tebd_bias to True when nvnmd_cfg.enable is True, ignoring any
user-provided value for use_tebd_bias. To fix this, modify the logic to either
honor the original use_tebd_bias value unless NVNMD strictly requires it to be
True, or emit a warning when overriding it. Additionally, update the class
docstring to clearly document this behavior so users understand when and why
use_tebd_bias might be changed automatically.
doc_device = "hardware used by model, vu9p or vu13p"
doc_net_size_file = ( |
🛠️ Refactor suggestion
`device` argument lacks value validation
A new `"device"` option is introduced but any string is currently accepted. Down-stream code (e.g. `NvnmdConfig`) seems to assume `"vu9p"` / `"vu13p"` or `"none"`.
-Argument("device", str, optional=False, default="none", doc=doc_device),
+Argument(
+ "device",
+ str,
+ optional=False,
+ default="none",
+ doc=doc_device,
+ choices=["none", "vu9p", "vu13p"], # enforce valid values
+),
Fail-fast validation here prevents cryptic errors later in the pipeline.
Also applies to: 30-31
🤖 Prompt for AI Agents
In deepmd/utils/argcheck_nvnmd.py around lines 12-13 and 30-31, the "device"
argument accepts any string but downstream code expects only "vu9p", "vu13p", or
"none". Add validation logic to check if the provided "device" value is one of
these allowed strings and raise an error immediately if not. This fail-fast
validation will prevent obscure errors later in the pipeline.
deepmd/tf/nvnmd/entrypoints/wrap.py
Outdated
NIX = dscp["NIX"]
print(dscp)
Replace print statement with proper logging.
Debug print statements should use the logging framework for consistency and control.
- print(dscp)
+ log.debug("Descriptor configuration: %s", dscp)
🤖 Prompt for AI Agents
In deepmd/tf/nvnmd/entrypoints/wrap.py at line 333, replace the print statement
with a logging call. Import the logging module if not already done, then use an
appropriate logging level such as logging.debug or logging.info to log the
variable dscp instead of printing it directly. This ensures consistent and
configurable output handling.
Actionable comments posted: 2
🧹 Nitpick comments (2)
deepmd/tf/nvnmd/descriptor/se_atten.py (2)
98-99
: Superfluous subtraction
`(h - 0) / std_h` is equivalent to `h / std_h` and just adds noise to the expression.
- h_norm = (h - 0) / std_h
+ h_norm = h / std_h
127-133
: Doc-string and dead code drift
The doc-string still says the routine maps to "s and h", but the code now produces `k` as well. Also, `ndescrpt` is computed but never used. Please update the documentation and remove the unused variable to keep the intent clear.
🧰 Tools
🪛 Ruff (0.11.9)
132-132: Local variable `ndescrpt` is assigned to but never used. Remove assignment to unused variable `ndescrpt` (F841)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- deepmd/tf/nvnmd/descriptor/se_atten.py (4 hunks)
- deepmd/tf/nvnmd/entrypoints/wrap.py (12 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- deepmd/tf/nvnmd/entrypoints/wrap.py
🔇 Additional comments (1)
deepmd/tf/nvnmd/descriptor/se_atten.py (1)
197-204
: New parameter not reflected in API contract
`filter_lower_R42GR()` now expects `recovered_switch`, yet the function doc-string and upstream callers were not updated.
For clarity and to avoid silent breakage:
- Document the new argument in the doc-string.
- Audit all call-sites to make sure the extra tensor is always supplied.
s_norm = op_module.flt_nvnmd(s_norm)
log.debug("#s: %s", s_norm)
s_norm = tf.ensure_shape(s_norm, [None, 1])
with tf.variable_scope("h", reuse=True):
    h_norm = op_module.flt_nvnmd(h_norm)
    log.debug("#h: %s", h_norm)
    h_norm = tf.ensure_shape(h_norm, [None, 1])
# merge into [sji, hji*xji, hji*yji, hji*zji]
🛠️ Refactor suggestion
`reuse=True` may fail on the first graph build
`tf.variable_scope(..., reuse=True)` assumes the scope already exists. When this function is executed in a fresh graph the variables/ops are not yet defined and TensorFlow will raise a `ValueError`.
Use `reuse=tf.AUTO_REUSE` (TF >= 1.4) or create the scope without `reuse` on the first pass.
🤖 Prompt for AI Agents
In deepmd/tf/nvnmd/descriptor/se_atten.py around lines 103 to 110, the use of
tf.variable_scope with reuse=True can cause a ValueError if the scope does not
exist yet during the first graph build. To fix this, change the reuse argument
to tf.AUTO_REUSE to allow automatic reuse or create the variable scope without
reuse on the first pass to ensure the scope is created before reuse is applied.
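A minimal TF1-style sketch of the AUTO_REUSE pattern suggested above; the scope and variable names are placeholders, not the se_atten.py code.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()


def scaled(x):
    # AUTO_REUSE creates "h/scale" on the first call and reuses it afterwards,
    # so the function works on a fresh graph and on later rebuilds alike.
    with tf.variable_scope("h", reuse=tf.AUTO_REUSE):
        scale = tf.get_variable("scale", shape=[], initializer=tf.ones_initializer())
    return x * scale


x = tf.placeholder(tf.float32, [None, 1])
y1 = scaled(x)
y2 = scaled(x)  # no ValueError: the same variable is reused
```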
def build_recovered(
    descrpt, t_avg, t_std, atype, Na, ntypes, rcut_r_smth, filter_precision
):
Remove or use the `filter_precision` parameter
`filter_precision` is accepted by `build_recovered()` but never referenced inside the function, which is misleading and may confuse future maintainers.
-def build_recovered(
- descrpt, t_avg, t_std, atype, Na, ntypes, rcut_r_smth, filter_precision
-):
+def build_recovered(
+ descrpt, t_avg, t_std, atype, Na, ntypes, rcut_r_smth
+):
Either drop the argument (and update all call-sites) or apply it where you originally intended.
🤖 Prompt for AI Agents
In deepmd/tf/nvnmd/descriptor/se_atten.py around lines 79 to 81, the function
build_recovered() accepts a parameter filter_precision that is never used inside
the function. To fix this, either remove the filter_precision parameter from the
function signature and update all places where build_recovered() is called to no
longer pass this argument, or if filter_precision was intended to be used,
incorporate it appropriately within the function logic.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## devel #4800 +/- ##
==========================================
- Coverage 84.79% 84.77% -0.02%
==========================================
Files 698 698
Lines 67816 67918 +102
Branches 3540 3541 +1
==========================================
+ Hits 57505 57580 +75
- Misses 9177 9202 +25
- Partials 1134 1136 +2
☔ View full report in Codecov by Sentry.
@jiongwalai please fix the precommit issues. |
Update NvNMD training code:
"stripped_type_embedding": true, "smooth_type_embdding": true, "set_davg_zero": false
in nvnmd-v1Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Chores