Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs

Source code for our ACL 2019 paper: Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs. A companion blog post for this publication is also available.

Requirements

Please install Miniconda, then create the environment from the provided pytorch35.yml file using the following command:

    conda env create -f pytorch35.yml

Activate the environment before executing the program as follows:

    source activate pytorch35
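
To verify that the environment is set up correctly, you can run a quick sanity check (this assumes the environment provides PyTorch, as the pytorch35.yml name suggests):

    python -c "import torch; print(torch.__version__)"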

Datasets

We used five different datasets to evaluate our model. Each dataset and its corresponding folder name are listed below.

  • Freebase: FB15k-237
  • WordNet: WN18RR
  • NELL: NELL-995
  • Kinship: kinship
  • UMLS: umls
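
Each dataset folder is expected under ./data/. As a rough sketch, the contents follow the standard triple-split files used by these benchmarks (the exact file names below are an assumption based on common practice, not verified against this repository):

    $ ls ./data/WN18RR/
    train.txt  valid.txt  test.txt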

Training

Parameters:

--data: Specify the folder name of the dataset.

--epochs_gat: Number of epochs for GAT training.

--epochs_conv: Number of epochs for convolution training.

--lr: Initial learning rate.

--weight_decay_gat: L2 regularization for the GAT.

--weight_decay_conv: L2 regularization for the convolution network.

--get_2hop: Store a pickle object of the 2-hop neighbors.

--use_2hop: Use 2-hop neighbors for training.

--partial_2hop: Use only one 2-hop neighbor per node for training.

--output_folder: Path of the output folder for saving models.

--batch_size_gat: Batch size for the GAT model.

--valid_invalid_ratio_gat: Ratio of valid to invalid triples for GAT training.

--drop_gat: Dropout probability for the attention layer.

--alpha: LeakyReLU alpha (negative slope) for the attention layer.

--nhead_GAT: Number of heads for multi-head attention.

--margin: Margin used in the hinge loss.

--batch_size_conv: Batch size for the convolution model.

--alpha_conv: LeakyReLU alpha (negative slope) for the convolution layer.

--valid_invalid_ratio_conv: Ratio of valid to invalid triples for convolution training.

--out_channels: Number of output channels in the convolution layer.

--drop_conv: Dropout probability for the convolution layer.
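
For illustration, a training run that combines several of these flags could look like the following; all values here are placeholders chosen for the example, not recommended settings:

    $ python3 main.py --data ./data/WN18RR/ --epochs_gat 3600 --epochs_conv 200 --lr 0.001 --get_2hop True --use_2hop True --output_folder ./checkpoints/wn/out/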

Reproducing results

To reproduce the results published in the paper, first run the preparation script (needed only on the first run):

    $ sh prepare.sh

Then launch training for the desired dataset (see the note on the 2-hop pickle after this list):
  • Wordnet

      $ python3 main.py --get_2hop True
    
  • Freebase

      $ python3 main.py --data ./data/FB15k-237/ --epochs_gat 3000 --epochs_conv 150 --weight_decay_gat 0.00001 --get_2hop True --partial_2hop True --batch_size_gat 272115 --margin 1 --out_channels 50 --drop_conv 0.3 --output_folder ./checkpoints/fb/out/
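
A note on the 2-hop pickle: --get_2hop True computes and stores the 2-hop neighborhood once. Assuming the stored pickle is loaded whenever --use_2hop is set (check main.py for the exact behavior), subsequent runs can skip the recomputation:

    $ python3 main.py --use_2hop True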
    

Citation

Please cite the following paper if you use this code in your work.

    @InProceedings{KBGAT2019,
      author    = "Nathani, Deepak and Chauhan, Jatin and Sharma, Charu and Kaul, Manohar",
      title     = "Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs",
      booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
      year      = "2019",
      publisher = "Association for Computational Linguistics",
      location  = "Florence, Italy",
    }

For any clarifications, comments, or suggestions, please create an issue or contact deepakn1019@gmail.com.
