Hello, recently I have noticed this issue that when I try to standardize my test set using my training data generator (let's call it "datagen"), using the line:
X_TEST_scaled = datagen.standardize(X_TEST)
It returns the scaled version of X_TEST, but it also alters the original X_TEST tensor, which makes me lose my original X_TEST data.
Please let me know if I am doing something wrong, or if this is the intended behaviour (in which case it's pretty weird).
Of course I can create a copy; that's what I've been doing to work around this issue. But I wanted to know whether this is the intended behaviour or a bug in the implementation. Conventionally, functions don't modify their arguments: they return new values while leaving the arguments intact. That doesn't seem to be the case here.
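For anyone hitting the same thing, the copy-based workaround can be sketched like this. `standardize_inplace` below is a hypothetical stand-in that mimics the reported behaviour (mutating its argument and also returning it); with the real generator the idea is the same, i.e. pass `np.copy(X_TEST)` to `datagen.standardize` instead of the tensor itself.

```python
import numpy as np

def standardize_inplace(x, mean, std):
    """Hypothetical stand-in for a scaler that, like the one
    reported above, modifies its argument in place and returns it."""
    x -= mean
    x /= std
    return x

X_TEST = np.array([[1.0, 2.0], [3.0, 4.0]])
original = X_TEST.copy()

# Passing a copy: the scaled result is returned, and X_TEST is untouched.
X_TEST_scaled = standardize_inplace(X_TEST.copy(), mean=2.5, std=1.0)

assert np.array_equal(X_TEST, original)        # original data preserved
assert not np.array_equal(X_TEST_scaled, X_TEST)

# Passing the array directly: X_TEST itself is mutated.
standardize_inplace(X_TEST, mean=2.5, std=1.0)
assert not np.array_equal(X_TEST, original)    # original data lost
```

The copy costs one extra allocation of the test set's size, which is usually cheap compared to losing the unscaled data.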