This repository extends Ken Gu's PyTorch implementation of "Graph Convolutional Networks for Text Classification" (AAAI 2019) by adding doc2doc edges from the WikiData knowledge graph to the word-document graph.
```sh
$ python main.py
```

- `--show_eval`: Prints all evaluation metrics to the console
- `--plot`: Plots textKGCN embeddings, training curves, and recent model performance
- `--word-window-size`: Specifies the window size used for the model (default: 15)
- `--use_edge_weights`: Defines whether edge weights should be used
- `--method`: Selects the doc2doc edge weighting method (`count`, `idf`, `idf_wiki`)
- `--threshold`: Filter threshold for doc2doc edges (default: 2)
- `--no_wiki`: Disables doc2doc edges to run the base textGCN model
- `--debug`: Activates debug mode (changes the number of epochs)
- `--version`: Specifies the version of filtered relations
- `--drop_out`: Performs random drop-out on the doc2doc edges
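For example, a run combining several of these flags might look like the following (the flag values are illustrative, not recommendations):

```sh
$ python main.py --show_eval --method idf --threshold 2 --word-window-size 15
```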
Other configuration options can be set in `config.py`.
The code runs with Python 3.6.
All dependencies can be installed automatically with the following command (CPU usage only; macOS and Linux only):

```sh
sh install_dependencies.sh
```
Note: The script requires python3.6 and pip to be installed. It is recommended to install all dependencies into a separate Python environment.
These dependencies will be installed:
```
torch==1.6.0
torchvision==0.7.0
torch-cluster==1.5.7
torch-scatter==2.0.5
torch-sparse==0.6.7
torch-spline-conv==1.2.0
torch-geometric==1.6.1
klepto==0.1.9
sklearn, matplotlib, seaborn, pytz, pandas, spacy, nltk, lxml
```
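If the script does not work on your system, a rough sketch of the equivalent manual installation is shown below; the wheel index URL for the `torch-*` extensions is an assumption based on the usual PyTorch Geometric installation instructions for this torch version, not something taken from the install script:

```sh
pip install torch==1.6.0 torchvision==0.7.0
# CPU wheels for the PyTorch Geometric extensions (index URL is an assumption)
pip install torch-cluster==1.5.7 torch-scatter==2.0.5 torch-sparse==0.6.7 \
    torch-spline-conv==1.2.0 -f https://pytorch-geometric.com/whl/torch-1.6.0+cpu.html
pip install torch-geometric==1.6.1 klepto==0.1.9 \
    sklearn matplotlib seaborn pytz pandas spacy nltk lxml
```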
Additionally, the model requires the spacy `en` dataset, which will be installed automatically when you run the bash script. Otherwise, it can be installed manually with `python -m spacy download en`.
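Once installed, you can verify the model loads correctly; this short check is hypothetical (not part of the repository) and assumes spacy 2.x, where `en` is a shortcut link for the English model:

```python
import spacy

# Load the 'en' shortcut model installed above (spacy 2.x style).
nlp = spacy.load("en")
print(nlp("A quick sanity check.")[1].pos_)  # e.g. 'ADJ'
```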
These datasets are already included and pre-processed:
- `r8` and `r52`
- `ohsumed`
- `20NG`
- `MR`
The `r8_small` dataset is a small subset of `r8` and is only intended for debugging purposes.
The following steps must be performed to include a custom dataset:
- Add the dataset name (`[dataset_name]`) to the dataset section in `config.py`
- Include `[dataset_name]_labels.txt` and `[dataset_name]_sentences.txt` in the `_data/corpus/[dataset_name]` directory. Each line of the sentences file should hold one document, and the corresponding line of the labels file its label (see the layout sketch after this list).
- Run `python prep_data.py` to generate the `[dataset_name]_sentences_clean.txt` and `[dataset_name]_vocab.txt` files
- Run `python prep_graph.py` to start the knowledge graph mapping process (may take several hours)
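For orientation, a minimal sketch of the corpus directory for a hypothetical dataset named `example`; the placement of the generated files is an assumption based on the naming pattern above:

```
_data/corpus/example/example_sentences.txt        # one raw document per line
_data/corpus/example/example_labels.txt           # one label per line, same order
_data/corpus/example/example_sentences_clean.txt  # generated by prep_data.py
_data/corpus/example/example_vocab.txt            # generated by prep_data.py
```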
To run the model, the dataset must first be mapped to the WikiData knowledge graph so that the doc2doc edges can be built.
The following steps are performed when you run `python prep_graph.py`:
- All available WikiData relations are downloaded and filtered by category and number of occurrences
- The dataset's vocabulary is mapped to WikiData entities, including relations to other entities (only the relevant relations from step 1 are used by the model)
- The doc2doc edges are generated by analyzing all relations between all documents (see the sketch after the note below)
Note: These steps require an active internet connection (steps 1 and 2) and may take several hours to complete (step 3).
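As a rough illustration of step 3, the sketch below counts the filtered WikiData relations linking entities of two documents and keeps a doc2doc edge when the count reaches the threshold, mirroring the `count` weighting method (the `idf`/`idf_wiki` methods would rescale these counts). The function and data structure names here are hypothetical, not the repository's actual code:

```python
from itertools import combinations

def build_doc2doc_edges(doc_entities, entity_relations, threshold=2):
    """Hypothetical sketch of doc2doc edge generation.

    doc_entities: dict mapping a document id to the set of WikiData entity
        ids matched in its vocabulary.
    entity_relations: dict mapping an (entity, entity) pair to the list of
        filtered WikiData relations connecting them.
    """
    edges = {}
    for d1, d2 in combinations(sorted(doc_entities), 2):
        # Count all filtered relations between any entity pair across the docs.
        count = sum(
            len(entity_relations.get((e1, e2), ()))
            for e1 in doc_entities[d1]
            for e2 in doc_entities[d2]
        )
        if count >= threshold:  # --threshold drops weakly connected pairs
            edges[(d1, d2)] = count  # 'count' weighting; idf variants rescale
    return edges
```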
All results are saved automatically.
Models and model information are stored in the `_logs` directory of each dataset directory.
All training metrics are stored in `_data/results_log` as `csv` files.
To see average metrics split by the model parameters, run `python analyze_results.py --dataset [dataset_name]`.
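For example, to average the logged results for the included `r8` dataset:

```sh
python analyze_results.py --dataset r8
```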
Graph Convolutional Networks for Text Classification. Liang Yao, Chengsheng Mao, Yuan Luo. AAAI, 2019.