textKGCN

A text classification model based on textGCN and the WikiData knowledge graph

License: MIT

This repository extends Ken Gu's PyTorch implementation of "Graph Convolutional Networks for Text Classification" (AAAI 2019) by introducing doc2doc edges from the WikiData knowledge graph into the word-document graph.

Running the project

$ python main.py
Optional Arguments:
  • --show_eval: Prints all evaluation metrics to the console
  • --plot: Plots textKGCN embeddings, training curves, and recent model performance
  • --word-window-size: Specifies the window size used for the model (default: 15)
  • --use_edge_weights: Defines whether edge weights should be used
  • --method: Selects the doc2doc edge weighting method (count, idf, idf_wiki)
  • --threshold: Sets the filter threshold for doc2doc edges (default: 2)
  • --no_wiki: Disables doc2doc edges to run the base textGCN model
  • --debug: Activates debug mode (changes the number of epochs)
  • --version: Specifies the version of filtered relations
  • --drop_out: Performs random dropout on the doc2doc edges

Other configuration options can be set in config.py.
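For example, to train with IDF-weighted doc2doc edges and print all evaluation metrics (this flag combination is purely for illustration):

$ python main.py --method idf --show_eval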

Dependencies

The code runs with Python 3.6. All dependencies can be installed automatically with the following command (CPU usage only; macOS and Linux only):

sh install_dependencies.sh
Note: The script requires python3.6 and pip to be installed. It is recommended to install all dependencies into a separate Python environment.
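One way to set up such an environment is with the standard venv module (a minimal sketch; the environment name textkgcn-env is arbitrary):

$ python3.6 -m venv textkgcn-env
$ source textkgcn-env/bin/activate
$ sh install_dependencies.sh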

These dependencies will be installed:

  • torch==1.6.0
  • torchvision==0.7.0
  • torch-cluster==1.5.7
  • torch-scatter==2.0.5
  • torch-sparse==0.6.7
  • torch-spline-conv==1.2.0
  • torch-geometric==1.6.1
  • klepto==0.1.9
  • sklearn, matplotlib, seaborn, pytz, pandas, spacy, nltk, lxml
Additionally, the model requires spaCy's 'en' language model, which will be installed automatically when you run the bash script. Otherwise, it can be installed manually with python -m spacy download en.

Datasets

These datasets are already included and pre-processed:

  • r8 and r52
  • ohsumed
  • 20NG
  • MR

The r8_small dataset is a small subset of r8 and is only intended for debugging purposes.

Custom datasets

The following steps must be performed to include a custom dataset:

  1. Add the dataset name ([dataset_name]) to the dataset section in config.py
  2. Include [dataset_name]_labels.txt and [dataset_name]_sentences.txt in the _data/corpus/[dataset_name] directory. Each line of [dataset_name]_sentences.txt should contain one document, and the corresponding line of [dataset_name]_labels.txt its label (see the example after this list).
  3. Run python prep_data.py to generate the [dataset_name]_sentences_clean.txt and [dataset_name]_vocab.txt files
  4. Run python prep_graph.py to start the knowledge graph mapping process (may take several hours)
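For illustration, a hypothetical dataset named my_news with two documents would look like this (all contents invented):

_data/corpus/my_news/my_news_sentences.txt:
    The central bank raised interest rates again this quarter.
    The striker scored twice in the final match of the season.

_data/corpus/my_news/my_news_labels.txt:
    economy
    sports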

Knowledge Graph Mapping

To build doc2doc edges, the dataset must be mapped to the WikiData knowledge graph. The following steps are performed when you run python prep_graph.py:

  1. All available WikiData relations are downloaded and filtered by category and number of occurrences
  2. The dataset's vocabulary is mapped to WikiData entities, including their relations to other entities (only the relevant relations from step 1 are used by the model)
  3. The doc2doc edges are generated by analyzing all relations between all documents
Note: These steps require an active internet connection (steps 1 and 2) and may take several hours to complete (step 3).
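As a rough illustration of step 2, a vocabulary word can be resolved to a WikiData entity ID through the public wbsearchentities API. This is a minimal sketch of the general idea, not the project's actual code, and doc2doc_edge_weight below is only one possible reading of the count method described above:

import requests

def wikidata_entity_id(word):
    # Query the public WikiData search API for the best-matching entity.
    response = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": word,
            "language": "en",
            "format": "json",
        },
    )
    results = response.json().get("search", [])
    # Return the top hit's entity ID (e.g. "Q...") or None if nothing matched.
    return results[0]["id"] if results else None

def doc2doc_edge_weight(doc_a_relations, doc_b_relations, threshold=2):
    # Assumed interpretation of the "count" method: the weight is the number
    # of WikiData relations the two documents share; pairs below the filter
    # threshold (default 2, as documented above) get no edge.
    shared = len(set(doc_a_relations) & set(doc_b_relations))
    return shared if shared >= threshold else 0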

Results

All results are saved automatically. Models and model information are stored in the _logs directory of each dataset directory. All training metrics are stored in _data/results_log as CSV files. To see average metrics split by model parameters, run python analyze_results.py --dataset [dataset_name].
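The metrics can also be inspected directly with pandas (a sketch; the file name and the method and accuracy columns are assumptions, so check the actual files in _data/results_log first):

import pandas as pd

# File name and column names are hypothetical; inspect _data/results_log
# to see the real files and schema.
df = pd.read_csv("_data/results_log/r8.csv")
print(df.groupby("method")["accuracy"].mean())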

Base Paper

Graph Convolutional Networks for Text Classification. Liang Yao, Chengsheng Mao, Yuan Luo. AAAI, 2019. (Paper)