In the wide model part, one-hot encoders are used to encode categorical features that have only a few unique values.
```python
# Wide feature 2: one-hot vector of variety categories
# Use sklearn utility to convert label strings to numbered indices
encoder = LabelEncoder()
encoder.fit(variety_train)
variety_train = encoder.transform(variety_train)
variety_test = encoder.transform(variety_test)
num_classes = np.max(variety_train) + 1

# Convert labels to one-hot
variety_train = keras.utils.to_categorical(variety_train, num_classes)
variety_test = keras.utils.to_categorical(variety_test, num_classes)
```
However, some values may occur only in the test set (fortunately, there is no such instance in the wine dataset). It is safer to fit the encoder on all possible values. Similarly, the tokenizer used to preprocess descriptions should also learn from as much information as possible, which the full dataset (including the test portion) can provide, and without data leakage, since no target labels are used.
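As a minimal sketch of fitting the encoder on all category values, here is a toy example (the variety strings below are invented for illustration, not taken from the wine dataset). Because only the feature values are concatenated, no target-label information leaks into training:

```python
from sklearn.preprocessing import LabelEncoder

# Hypothetical split: "cab" appears only in the test portion.
variety_train = ["merlot", "riesling", "merlot"]
variety_test = ["riesling", "cab"]

# Fit on the union of train and test category values, so every
# value seen at test time already has an index.
encoder = LabelEncoder()
encoder.fit(variety_train + variety_test)

train_idx = encoder.transform(variety_train)
test_idx = encoder.transform(variety_test)
num_classes = len(encoder.classes_)  # 3: covers "cab" as well
```

Had the encoder been fit on `variety_train` alone, `encoder.transform(variety_test)` would raise a `ValueError` on the unseen `"cab"`.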
I think in a real production environment, you can't get the "test" set because it arrives in the future. So only the training set is used when encoding. If a value in a future test set was not seen in the training set, there should be some other way to process it.
Thanks @sergiowang. Sure, I totally agree. This is exactly the practical problem I have come across many times in the real world.
An alternative solution for that situation is to export the factorization mapping vector (or the entire factorizer, if allowed) during training, and append to it a new categorical value named unknown, which is not actually used when training the network. When deploying and testing, before factorizing, first check whether every value of the categorical feature in the test/predict set is present in the mapping vector, in other words, whether it was already seen in the training set. If a value is not in the vector, factorize it as the integer corresponding to unknown. The whole program will then run stably instead of halting on an error, even though accuracy on those instances will be limited.
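The fallback scheme above can be sketched with a plain dictionary standing in for the exported mapping vector (all names and category values here are illustrative, not from the original post):

```python
# Mapping learned at training time and exported alongside the model.
train_mapping = {"merlot": 0, "riesling": 1, "pinot noir": 2}

# Append a reserved index for categories never seen during training.
unknown_index = len(train_mapping)
train_mapping["<unknown>"] = unknown_index

def factorize(value, mapping, unknown_index):
    """Return the trained index, or the reserved unknown index."""
    return mapping.get(value, unknown_index)

# "cabernet" was never seen in training, so it falls back to <unknown>.
test_values = ["riesling", "cabernet"]
indices = [factorize(v, train_mapping, unknown_index) for v in test_values]
# indices -> [1, 3]
```

Lookups that would otherwise crash the pipeline now degrade gracefully to the reserved slot, at the cost of prediction quality on those rows.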