Auto LIWC

The code for Chinese LIWC Lexicon Expansion via Hierarchical Classification of Word Embeddings with Sememe Attention (AAAI18).

Datasets

The folder datasets contains two datasets.

  1. HowNet.txt is a Chinese knowledge base with annotated word-sense-sememe information.
  2. sc_liwc.dic is the Chinese LIWC lexicon, a revised version of the original C-LIWC file. The original contains part-of-speech (POS) categories such as verb, adverb, and auxverb; since we believe it is more accurate to rely on POS tagging tools when analyzing a given text, we removed the POS categories in our experiment. Furthermore, because its hierarchical structure differed slightly from the original English version of LIWC, we adjusted the hierarchy to match the English LIWC. For the exact meaning of each category, you can refer to here and here.

Please note that the above datasets files are for academic and educational use only. They are not for commercial use. If you have any questions, please contact us first before downloading the datasets.
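If you want to read sc_liwc.dic in your own scripts, the sketch below parses the standard LIWC .dic layout (a '%'-delimited category header followed by word entries). This is an illustrative assumption about the file format, not code from this repository; check the actual file before relying on it.

```python
def parse_liwc_dic(lines):
    """Parse LIWC-style .dic lines: a '%'-delimited category header
    (id<space>name per line) followed by word entries mapping each
    word to one or more category ids."""
    categories, words = {}, {}
    section = 0  # 0 = before header, 1 = category header, 2 = word entries
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line == "%":
            section += 1
            continue
        if section == 1:
            cat_id, cat_name = line.split(None, 1)
            categories[cat_id] = cat_name
        elif section == 2:
            parts = line.split()
            # Map the word to its category names via the header table.
            words[parts[0]] = [categories[c] for c in parts[1:]]
    return categories, words
```

For example, `parse_liwc_dic(open("datasets/sc_liwc.dic", encoding="utf-8"))` would return the category table and the word-to-categories mapping.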

Due to the large size of the embedding file, we can only release the code for training the word embeddings. Please see word2vec.py for details.

Run

Run the following command for training and testing:

python3 train_liwc.py

If the datasets are in a different folder, please change the path here.

The current code generates a different training/testing split on every run. To reproduce the results in the paper, load train.bin and test.bin from bin_data using pickle.
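Loading the released split can be sketched as follows (the paths assume you run from the repository root; the structure of the unpickled objects depends on how train_liwc.py built them):

```python
import pickle

def load_split(path):
    """Load a pickled data split, e.g. bin_data/train.bin or
    bin_data/test.bin released with this repository."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

Usage: `train_set = load_split("bin_data/train.bin")` and `test_set = load_split("bin_data/test.bin")`.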

Dependencies

  • tensorflow == 1.4.0
  • scipy == 0.19.0
  • numpy == 1.13.1
  • scikit-learn == 0.18.1
  • gensim == 2.0.0

Cite

If you use the code, please cite this paper:

Xiangkai Zeng, Cheng Yang, Cunchao Tu, Zhiyuan Liu, Maosong Sun. Chinese LIWC Lexicon Expansion via Hierarchical Classification of Word Embeddings with Sememe Attention. The 32nd AAAI Conference on Artificial Intelligence (AAAI 2018).