Official implementation of "Multi-attention recommender system for non-fungible tokens" (Engineering Applications of Artificial Intelligence).
All experiments were repeated three times and can be replicated with three different random seeds (2022, 2023, 2024).
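The three repetitions can be reproduced by fixing every random-number generator before each run. The sketch below is illustrative only (the repository's actual seeding code may differ); `torch.manual_seed(seed)` would also be called when training the real model, but is omitted here to keep the sketch dependency-light.

```python
import random

import numpy as np


def set_seed(seed: int) -> None:
    """Fix the RNGs so one experiment run is reproducible.

    Illustrative sketch: the repo's training scripts would additionally
    seed torch (torch.manual_seed / torch.cuda.manual_seed_all).
    """
    random.seed(seed)
    np.random.seed(seed)


# One run per seed, matching the paper's three repetitions.
for seed in (2022, 2023, 2024):
    set_seed(seed)
    # ... train and evaluate the model here ...
```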
See our Data Description for detailed information on the data, and Experimental Details for the experiment settings.
- Install Python 3.10.9.
- Download the data. You can obtain all pre-processed data from Google Drive. (For a detailed description of the data, please refer to `Our model/Create_dataset.ipynb`.)
- Create a directory `dataset/collections` and place the downloaded data in that location.
- Install the required packages: `pip install -r requirements.txt`
- Train the model. We provide the experiment scripts for all datasets in the file `run.sh`. You can reproduce the experiment results by running: `bash run.sh`
- (Ablation studies) Train the model using a single graph. In this case, since the multi-modal attention used in the full NFT-MARS model cannot be applied, the model name is changed to "MO", which stands for Multi-Objective. For example, "MO_v" is a single-graph model that uses visual features. We provide the experiment scripts for all datasets in the file `run_MO.sh`. You can reproduce the experiment results by running: `bash run_MO.sh`
- We develop a model to address three unique challenges that NFT recommender systems face. Our method consists of three key components:
- Graph attention to handle extremely sparse user-item interactions
- Multi-modal attention to incorporate user-specific feature preferences
- Multi-task learning to address the dual nature of NFTs as artworks and investment assets
- We demonstrate the effectiveness of our model against various baseline models using actual NFT transaction data collected directly from the blockchain for four of the most popular NFT collections.
- We constructed a dataset by combining this transaction data with hand-crafted features, which can be used as a benchmark dataset for any NFT recommendation model. Datasets are available on Google Drive.
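To make the multi-modal attention component concrete, here is a minimal NumPy sketch of the general idea: each modality's item features are weighted by their affinity to a given user before fusion, so different users can emphasize different modalities. The function name, signatures, and modality names below are illustrative assumptions, not the paper's actual API.

```python
import numpy as np


def multimodal_attention(user, modal_feats):
    """Fuse per-modality item features with user-specific attention.

    user:        (d,) user embedding
    modal_feats: dict mapping modality name -> (d,) item feature vector
    Returns the fused item representation and the attention weights.
    Illustrative sketch only; NFT-MARS's actual formulation may differ.
    """
    names = list(modal_feats)
    # Affinity of the user to each modality (dot-product scoring).
    scores = np.array([user @ modal_feats[m] for m in names])
    scores = scores - scores.max()  # numerically stable softmax
    weights = np.exp(scores) / np.exp(scores).sum()
    # Weighted sum of modality features gives the fused representation.
    fused = sum(w * modal_feats[m] for w, m in zip(weights, names))
    return fused, dict(zip(names, weights))


# Example: a user whose embedding aligns with visual features
# receives a larger attention weight on the "visual" modality.
user = np.array([1.0, 0.0])
feats = {"visual": np.array([1.0, 0.0]), "text": np.array([0.0, 1.0])}
fused, weights = multimodal_attention(user, feats)
```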
Our repository includes two additional folders: `Baseline models (MGAT)` and `Baseline models (Others)`. All baselines except MGAT were implemented using RecBole, so they are separated into their own folder.

- `Baseline models (MGAT)` contains the code to implement the MGAT model. You can follow the steps in the "Get started" section above, but note that the experiment script name is different: `bash run_MGAT.sh`
- `Baseline models (Others)` contains the code to implement the other models, including Pop, ItemKNN, BPR, DMF, LightGCN, FM, DeepFM, WideDeep, DCN, and AutoInt. You can follow the steps in the "Get started" section above, but note that you need to install a different version of Python: Python 3.7.12.
We thank the following GitHub repositories for their valuable code bases and datasets:
