This is the implementation of our work on "Predicting User Intents and Satisfaction with Dialogue-based Conversational Recommendations". This paper has been accepted to UMAP 2020. If you find this repository useful in your research, please cite our paper.
- Wanling Cai and Li Chen. 2020. Predicting User Intents and Satisfaction with Dialogue-based Conversational Recommendations. In Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '20), July 14-17, 2020.
Citation (Bibtex entry):
@inproceedings{IARD,
author = {Wanling Cai and Li Chen},
title = {Predicting User Intents and Satisfaction with Dialogue-based Conversational Recommendations},
booktitle = {Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization},
series = {UMAP '20},
year = {2020},
}
We used the IARD dataset (see below), which is also included in the data folder of this repo.
Intent Annotation of Recommendation Dialogue (IARD) Dataset [Download]
Python 3.7
Required Packages: Scikit-learn, Scikit-multilearn, xgboost, TensorFlow, Keras, NLTK, gensim (for word embeddings)
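The packages above can typically be installed with pip (the PyPI package names below are assumptions based on the list above; the repo does not pin exact versions, so treat this as a starting point):

```shell
# Install the required packages (PyPI names assumed; versions unpinned)
pip install scikit-learn scikit-multilearn xgboost tensorflow keras nltk gensim
```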
Below is an example of how to run our code for predicting user intents with ML models.
Go to the folder user_intent_prediction/machine_learning_model
and run the example script. For instance:
python Main.py \
--file_input_data ../../data/annotation_data.json \
--neural_model 0 \
--algorithm_adaption 0 \
--problem_transformation 1 \
--cross_validation 10 \
--feature_normalization 0 \
--content_features 1 \
--discourse_features 1 \
--sentiment_features 1 \
--conversational_features 1 \
--num_previous_turns 1 \
--problem_transformation_method BR \
--model_name XGBoost
Note: Running this example takes a long time (more than 1 hour), as we use 5-fold cross-validation to select the best hyper-parameters and 10-fold cross-validation to evaluate the model.
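As a rough illustration of this evaluation scheme (not the repo's actual code), binary relevance multi-label classification with nested cross-validation can be sketched with scikit-learn. The synthetic data, logistic-regression base learner, parameter grid, and smaller fold counts here are stand-ins for the paper's features and the XGBoost model selected by `--model_name XGBoost`:

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.multiclass import OneVsRestClassifier

# Synthetic multi-label data standing in for the annotated intent labels.
X, Y = make_multilabel_classification(n_samples=200, n_features=20,
                                      n_classes=5, random_state=0)

# Binary relevance (BR): one independent binary classifier per intent label.
base = OneVsRestClassifier(LogisticRegression(max_iter=1000))

# Inner CV selects hyper-parameters; outer CV estimates performance.
# (The repo uses 5 inner / 10 outer folds; 3/5 here to keep it quick.)
inner = GridSearchCV(base,
                     param_grid={"estimator__C": [0.1, 1.0, 10.0]},
                     cv=KFold(n_splits=3, shuffle=True, random_state=0))
scores = cross_val_score(inner, X, Y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(scores.mean())
```

The default score for a multi-label OneVsRestClassifier is subset accuracy; swapping in a different scorer via the `scoring` argument would mirror other metrics reported in the paper.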