Hi. First of all, thank you for the great tool you are developing. However, I have been puzzled for a week about how to replicate `trainer.evaluate()` results when running inference with the model.

My initial idea was to truncate every session by one item (removing the last `item_id`), call `trainer.predict(truncated_sessions)`, and then compute `recall(last_item_ids, predictions[:20])`.
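For concreteness, the recall@20 computation I have in mind looks roughly like this (a minimal self-contained sketch with dummy data; `topk_predictions` and `last_item_ids` are placeholder names, not Transformers4Rec API):

```python
import numpy as np

def recall_at_k(last_item_ids, topk_predictions, k=20):
    """Fraction of sessions whose held-out item appears in the top-k list."""
    hits = [
        target in preds[:k]
        for target, preds in zip(last_item_ids, topk_predictions)
    ]
    return float(np.mean(hits))

# Toy example: 3 sessions, top-3 predicted item ids per session.
targets = [5, 9, 2]
preds = np.array([[5, 1, 7],   # hit at rank 1
                  [3, 4, 8],   # miss
                  [6, 2, 0]])  # hit at rank 2
print(recall_at_k(targets, preds, k=3))  # 2 of 3 sessions hit -> 0.666...
```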
However, I get a different recall metric.

The only way I managed to "replicate" the `evaluate()` results was by (1) providing the non-truncated inputs to `trainer.predict()` and (2) changing `-1` into `-2` in `Transformers4Rec/transformers4rec/torch/model/prediction_task.py`, line 460 (commit 348c963).

I am puzzled as to why, but this was the only way I could ensure that the `x` at line 464 of `prediction_task.py` (commit 348c963) matches the `x` at line 444 (commit 348c963).

Is it because `trainer.evaluate()` shifts the inputs to the left by one position? Or what am I doing incorrectly? Could anyone give me insights on how to do this "correctly", please? Thanks a lot.
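To make the shift I am asking about concrete, here is a toy sketch of next-item target construction (plain Python, not the library's actual masking code): if the output at position `t` is scored against the input at position `t + 1`, then the second-to-last position of the full session is the one predicting the held-out last item, which would explain why `-2` on the full input matched my truncated-input setup.

```python
# One toy session of item ids.
session = [10, 20, 30, 40]

# Causal next-item pairs: input at position t -> target at position t + 1.
inputs  = session[:-1]   # [10, 20, 30]
targets = session[1:]    # [20, 30, 40]

# The last (input, target) pair: the context item at index -2 of the full
# session is the position whose output predicts the held-out last item.
print(list(zip(inputs, targets))[-1])  # (30, 40)
```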