This is the implementation of “A Context-Aware User-Item Representation Learning for Item Recommendation” by Libing Wu, Cong Quan, Chenliang Li, Qian Wang, Bolong Zheng, and Xiangyang Luo: https://dl.acm.org/citation.cfm?id=3298988
Requirements:
Tensorflow 1.2
Python 2.7
Numpy
Scipy
To run CARL, six files are required:
file_name=TrainInteraction.out
Each training sample is a line in the following format:
UserId\tItemId\tRating\tDate
Example: 0\t3\t5.0\t1393545600
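For illustration, a record in this format could be parsed as in the sketch below (the helper name `parse_interaction` is ours, not part of the original code):

```python
# Minimal sketch of parsing one interaction line in the
# UserId\tItemId\tRating\tDate format described above.
def parse_interaction(line):
    """Split a tab-separated interaction record into typed fields."""
    user_id, item_id, rating, date = line.rstrip("\n").split("\t")
    return int(user_id), int(item_id), float(rating), int(date)

print(parse_interaction("0\t3\t5.0\t1393545600"))  # (0, 3, 5.0, 1393545600)
```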
file_name=ValInteraction.out
The format is the same as the training data format.
file_name=TestInteraction.out
The format is the same as the training data format.
file_name=WordDict.out
Each line follows the format:
Word\tWord_Id
Example: love\t0
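A vocabulary in this format could be read into a dictionary as follows (an illustrative sketch; `load_word_dict` is a hypothetical helper, not from the code):

```python
# Hypothetical sketch: read WordDict.out lines (Word\tWord_Id) into a dict
# mapping each word to its integer id.
def load_word_dict(lines):
    word_to_id = {}
    for line in lines:
        word, word_id = line.rstrip("\n").split("\t")
        word_to_id[word] = int(word_id)
    return word_to_id

print(load_word_dict(["love\t0", "eat\t1"]))  # {'love': 0, 'eat': 1}
```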
file_name=UserReviews.out
Each line follows the format:
UserId\tWord1 Word2 Word3 …
Example: 0\tI love to eat hamburger …
file_name=ItemReviews.out
The format is the same as the user review doc format.
All files need to be located in the same directory.
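The review documents above could be loaded as shown in this sketch (the helper name `load_reviews` and the optional truncation behavior are our assumptions, not taken from the code):

```python
# Hedged sketch of reading UserReviews.out / ItemReviews.out:
# each line maps an entity id (user or item) to its aggregated review text.
def load_reviews(lines, max_len=None):
    reviews = {}
    for line in lines:
        entity_id, text = line.rstrip("\n").split("\t", 1)
        tokens = text.split()
        if max_len is not None:
            tokens = tokens[:max_len]  # truncate to the maximum doc length
        reviews[int(entity_id)] = tokens
    return reviews

print(load_reviews(["0\tI love to eat hamburger"], max_len=3))
# {0: ['I', 'love', 'to']}
```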
In addition, the code supports pretrained word embeddings: uncomment the loading function “word2vec_word_embed” in the main file.
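The general idea can be sketched as below; this is only an illustration of how a pretrained-embedding matrix is typically assembled from a word-id dictionary (the actual loading logic lives in “word2vec_word_embed”, and `build_embedding_matrix` is a hypothetical helper):

```python
import numpy as np

# Illustrative sketch: build an embedding matrix from a word->id dict and a
# word->vector mapping. Words missing from the pretrained vocabulary keep
# small random vectors.
def build_embedding_matrix(word_to_id, pretrained, dim):
    matrix = np.random.uniform(-0.1, 0.1, (len(word_to_id), dim))
    for word, idx in word_to_id.items():
        if word in pretrained:
            matrix[idx] = pretrained[word]
    return matrix

emb = build_embedding_matrix({"love": 0, "eat": 1},
                             {"love": np.ones(4)}, dim=4)
print(emb.shape)  # (2, 4)
```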
Carl.py implements the CARL model; Review.py implements the review-based component, and Interaction.py implements the interaction-based component. The main hyperparameters are:
word_latent_dim: the dimension size of word embeddings;
latent_dim: the latent dimension of the representation learned from the review documents (entity);
max_len: the maximum doc length;
num_filters: the number of CNN filters;
window_size: the length of the CNN sliding window;
learning_rate: learning rate;
lambda_1: the weight of the regularization term;
drop_out: the keep probability of dropout;
batch_size: batch size;
epochs: the number of training epochs;
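As a quick reference, the hyperparameters above could be collected into a single configuration like the sketch below. The values shown are placeholders chosen for illustration, not the paper's tuned settings:

```python
# Hypothetical default configuration; keys mirror the hyperparameters above,
# values are placeholders, not the paper's tuned settings.
config = {
    "word_latent_dim": 300,   # dimension of word embeddings
    "latent_dim": 30,         # entity representation dimension
    "max_len": 300,           # maximum review-document length
    "num_filters": 100,       # number of CNN filters
    "window_size": 3,         # CNN sliding-window length
    "learning_rate": 0.001,
    "lambda_1": 0.001,        # regularization weight
    "drop_out": 0.8,          # keep probability for dropout
    "batch_size": 128,
    "epochs": 10,
}
# Number of sliding-window positions the CNN sees per document:
print(config["max_len"] - config["window_size"] + 1)  # 298
```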