MONET: Modality-Embracing Graph Convolutional Network and Target-Aware Attention for Multimedia Recommendation

This repository provides a reference implementation of MONET as described in the following paper:

MONET: Modality-Embracing Graph Convolutional Network and Target-Aware Attention for Multimedia Recommendation
Yungi Kim, Taeri Kim, Won-Yong Shin, and Sang-Wook Kim
17th ACM Int'l Conf. on Web Search and Data Mining (ACM WSDM 2024)

Overview of MONET

(Figure: overview of MONET.)

Requirements

The code has been tested under Python 3.6.13. The required packages are as follows:

  • gensim==3.8.3
  • pytorch==1.10.2+cu113
  • torch_geometric==2.0.3
  • sentence_transformers==2.2.0
  • pandas
  • numpy
  • tqdm
  • torch-scatter
  • torch-sparse
  • torch-cluster
  • torch-spline-conv
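All packages can be installed with pip. The commands below are a sketch, assuming a CUDA 11.3 environment to match the pinned pytorch==1.10.2+cu113 build; the wheel index URLs are the standard PyTorch/PyG ones and may need to be adjusted for your setup:

pip install torch==1.10.2+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric==2.0.3 -f https://data.pyg.org/whl/torch-1.10.2+cu113.html
pip install gensim==3.8.3 sentence-transformers==2.2.0 pandas numpy tqdm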

Dataset Preparation

Dataset Download

Men Clothing and Women Clothing: Download the Amazon product dataset provided by MAML. Put the data folder into the directory data/.

Beauty and Toys & Games: Download the 5-core review data, metadata, and image features from the Amazon product dataset. Put them into the directory data/{folder}/meta-data/.

Dataset Preprocessing

Run python build_data.py --name={Dataset}
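For example, for the Women Clothing dataset used in the commands below (assuming the --name value matches the dataset name passed to --dataset):

python build_data.py --name=WomenClothing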

Usage

For simplicity, we provide usage examples for the Women Clothing dataset.


  • For MONET in RQ1,
python main.py --agg=concat --n_layers=2 --alpha=1.0 --beta=0.3 --dataset=WomenClothing --model_name=MONET_2_10_3

  • For RQ2, refer to the second cell in "Preliminaries.ipynb".

  • For MONET_w/o_MeGCN and MONET_w/o_TA in RQ3,
python main.py --agg=concat --n_layers=0 --alpha=1.0 --beta=0.3 --dataset=WomenClothing --model_name=MONET_wo_MeGCN
python main.py --target_aware --agg=concat --n_layers=2 --alpha=1.0 --beta=0.3 --dataset=WomenClothing --model_name=MONET_wo_TA

  • For RQ4 (sensitivity to the hyperparameters $\alpha$ and $\beta$), run the following with your chosen values (a concrete example follows below),
python main.py --agg=concat --n_layers=2 --alpha={value} --beta=0.3 --dataset=WomenClothing --model_name=MONET_2_{alpha}_3
python main.py --agg=concat --n_layers=2 --alpha=1.0 --beta={value} --dataset=WomenClothing --model_name=MONET_2_10_{beta}
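As a concrete illustration, assuming the model_name suffix encodes $\alpha$ and $\beta$ scaled by ten (following the MONET_2_10_3 naming used for $\alpha=1.0$, $\beta=0.3$ above), a run with $\alpha=0.5$ would be:

python main.py --agg=concat --n_layers=2 --alpha=0.5 --beta=0.3 --dataset=WomenClothing --model_name=MONET_2_05_3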

Cite

We encourage you to cite our paper if you have used the code in your work. You can use the following BibTeX citation:

@inproceedings{kim24wsdm,
  author    = {Yungi Kim and Taeri Kim and Won{-}Yong Shin and Sang{-}Wook Kim},
  title     = {MONET: Modality-Embracing Graph Convolutional Network and Target-Aware Attention for Multimedia Recommendation},
  booktitle = {ACM International Conference on Web Search and Data Mining (ACM WSDM 2024)},
  year      = {2024}
}

Acknowledgement

The structure of this code is largely based on LATTICE. We thank the authors for their work.
