Hi,
I have noticed that there may be a bug in your modified evaluation code as follows.
`motion-diffusion-model/data_loaders/humanml/motion_loaders/comp_v6_model_dataset.py`, line 214 at commit `af061ca`
Because the tokens are all padded, if you use `len(tokens[bs_i])` to obtain `cap_len`, then all sentence lengths will be `max_text_len=20` + 2. This will affect the language feature extraction used for computing metrics.

The following is the original code in HumanML3D, which uses the correct token length:
`motion-diffusion-model/data_loaders/humanml/motion_loaders/comp_v6_model_dataset.py`, line 100 at commit `af061ca`
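To make the issue concrete, here is a minimal, self-contained sketch of the padding behavior (not the repository's exact code; the `pad_tokens` helper, the `"unk/OTHER"` pad token, and the exact padding scheme are illustrative assumptions based on HumanML3D-style loaders, where the +2 accounts for the sos/eos tokens):

```python
MAX_TEXT_LEN = 20  # illustrative; matches the max_text_len=20 mentioned above


def pad_tokens(tokens, max_text_len=MAX_TEXT_LEN, pad_token="unk/OTHER"):
    """Hypothetical helper mimicking the loader: pad a caption's token
    list up to max_text_len + 2 entries and keep the true length."""
    sent_len = len(tokens)  # true caption length, recorded BEFORE padding
    padded = tokens + [pad_token] * (max_text_len + 2 - sent_len)
    return padded, sent_len


tokens, sent_len = pad_tokens(["a/DET", "person/NOUN", "walks/VERB"])

# Buggy: after padding, every caption appears to be max_text_len + 2 tokens long.
cap_len_buggy = len(tokens)

# Correct: use the sentence length stored before padding,
# as the original HumanML3D code does.
cap_len_correct = sent_len

print(cap_len_buggy, cap_len_correct)  # 22 3
```

Since the text encoder truncates or masks the caption by `cap_len`, feeding it the padded length means the pad tokens are included in the language features, which is why the metrics are affected.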
I think this bug may lead to a performance drop in MatchingScore, R-Precision, and so on.