
features fed into label encoders should come from the whole dataset instead of just the train set #5

Open
li-xin-yi opened this issue Jul 25, 2019 · 2 comments

Comments

li-xin-yi commented Jul 25, 2019

In the wide model part, one-hot encoders are used to encode categorical features with just a few unique values.

# Wide feature 2: one-hot vector of variety categories

# Use sklearn utility to convert label strings to a numbered index
from sklearn.preprocessing import LabelEncoder
from tensorflow import keras
import numpy as np

encoder = LabelEncoder()
encoder.fit(variety_train)
variety_train = encoder.transform(variety_train)
variety_test = encoder.transform(variety_test)
num_classes = np.max(variety_train) + 1

# Convert labels to one-hot vectors
variety_train = keras.utils.to_categorical(variety_train, num_classes)
variety_test = keras.utils.to_categorical(variety_test, num_classes)

However, some values may occur only in the test set (fortunately, there is no such instance in the wine dataset). It's safer to fit the encoder on all possible values. Similar to the label encoder, the tokenizer used to preprocess descriptions should also learn from as much information as possible, which the full dataset (including the test set) can provide, without any data leakage (since the target labels are never used).
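A minimal sketch of this fix, reusing the variable names from the snippet above (description_train, description_test, and the vocabulary size are assumptions for illustration):

import numpy as np
from sklearn.preprocessing import LabelEncoder
from tensorflow import keras

# Fit the label encoder on every value that appears anywhere in the data
encoder = LabelEncoder()
encoder.fit(np.concatenate([variety_train, variety_test]))
variety_train = keras.utils.to_categorical(encoder.transform(variety_train), len(encoder.classes_))
variety_test = keras.utils.to_categorical(encoder.transform(variety_test), len(encoder.classes_))

# Same idea for the description tokenizer: learn the vocabulary from all texts
tokenizer = keras.preprocessing.text.Tokenizer(num_words=12000)
tokenizer.fit_on_texts(list(description_train) + list(description_test))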

@Wang-Yu-Qing
I think in a real production environment, you can't get the "test" set because it comes in the future, so only the train set is used for encoding. If a value in a future test set was not seen in the training set, there should be some other way to process it.
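One such way, sketched here with scikit-learn's OneHotEncoder (its handle_unknown='ignore' option maps categories unseen during fit to an all-zero row; train_df, test_df, and the variety column are just for illustration):

from sklearn.preprocessing import OneHotEncoder

encoder = OneHotEncoder(handle_unknown='ignore')
encoder.fit(train_df[['variety']])  # fit on the training data only

# At serving time, varieties never seen during training become
# all-zero one-hot rows instead of raising an error
variety_test = encoder.transform(test_df[['variety']]).toarray()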


li-xin-yi commented Feb 27, 2020

I think in a real production environment, you can't get the "test" set because it comes in the future, so only the train set is used for encoding. If a value in a future test set was not seen in the training set, there should be some other way to process it.

Thanks, @sergiowang. Sure, I totally agree. It is exactly the practical problem I have come across many times in the real world.

An alternative solution for that situation is to export the factorization mapping vector (or the entire factorizer, if allowed) during training, after appending an extra categorical value named unseen that is never actually used in training the network. When deploying and testing, before factorizing, first check whether each value of such a categorical feature in the test/predict set appears in the mapping vector, in other words, whether it was already seen in the train set. If it is not in the vector, just factorize it as the integer corresponding to unseen. The whole program then runs stably instead of getting stuck on an error message, even though the accuracy for unseen values may be limited.

For instance, during training:

# Assumes train_df is a pandas DataFrame and cat_features lists its categorical columns
export_encoder = {}   # fitted encoder for each categorical feature
train_features = []   # one-hot matrix for each categorical feature

for cat in cat_features:
    encoder = LabelEncoder()
    # Fit on the training values plus a placeholder for categories unseen at training time
    cat_feat = train_df[cat].tolist()
    cat_feat.append("unseen")
    encoder.fit(cat_feat)
    num_classes = len(encoder.classes_)
    cat_train = encoder.transform(train_df[cat])
    cat_train = keras.utils.to_categorical(cat_train, num_classes)
    train_features.append(cat_train)
    export_encoder[cat] = encoder

Then, when the test set is fed into the pre-processing program:

test_features = []

for cat, encoder in export_encoder.items():
    num_classes = len(encoder.classes_)
    # Map each known class to its integer index
    label_dict = dict(zip(encoder.classes_, encoder.transform(encoder.classes_)))
    # Fall back to the index of "unseen" for values never seen during training
    feat = predict_df[cat].apply(lambda x: label_dict.get(x, label_dict["unseen"]))
    cat_test = keras.utils.to_categorical(feat, num_classes)
    test_features.append(cat_test)
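
To "export the entire factorizer" between the training and serving programs, the fitted encoders can be persisted; here is a minimal sketch using joblib (the file name is arbitrary):

from joblib import dump, load

# At the end of training
dump(export_encoder, "encoders.joblib")

# In the serving / pre-processing program
export_encoder = load("encoders.joblib")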
