Please check out our new tool, AutoPhrase, which is significantly more efficient and supports multiple languages.
- Jialu Liu*, Jingbo Shang*, Chi Wang, Xiang Ren, and Jiawei Han, "Mining Quality Phrases from Massive Text Corpora", Proc. of 2015 ACM SIGMOD Int. Conf. on Management of Data (SIGMOD'15), Melbourne, Australia, May 2015. (* equally contributed, slides)
The current release also supports quality unigram mining, which is not covered in the original paper. We plan to improve this part in future updates.
Automatic labeling is another add-on feature based on Wikipedia entities. We suggest providing your own labels to achieve the best performance.
We will take Ubuntu as an example.
- g++ 4.8
$ sudo apt-get install g++-4.8
- python 2.7
$ sudo apt-get install python
$ sudo apt-get install python-pip
$ sudo pip install sklearn
- nltk (required only when WORDNET_NOUN=1)
$ sudo pip install nltk
SegPhrase can be easily built with the provided Makefile in the terminal.
$ make
$ ./train_toy.sh   # train a toy segmenter and output the phrase list as results/unified.csv
$ ./train_dblp.sh  # train a segmenter and output the phrase list for the DBLP data
$ ./parse.sh       # use the segmenter to parse new documents
Parameters - training
RAW_TEXT is the input of SegPhrase, where each line is a single document.
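As a concrete illustration of the one-document-per-line format (the file name and contents below are made up for this sketch, not SegPhrase defaults):

```python
# Sketch: format documents as SegPhrase's RAW_TEXT input,
# one document per line.
docs = [
    "Mining quality phrases from massive text corpora is an important task.",
    "Phrase mining extracts salient phrases\nsuch as support vector machine.",
]

# Collapse internal newlines so each document occupies exactly one line.
raw_text = "\n".join(doc.replace("\n", " ") for doc in docs) + "\n"

with open("raw_text_example.txt", "w") as f:
    f.write(raw_text)
```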
When AUTO_LABEL is set to 1, SegPhrase will automatically generate labels and save them to DATA_LABEL. Otherwise, it will load labels from DATA_LABEL.
When WORDNET_NOUN is set to 1, SegPhrase will resort to WordNet synsets to keep only noun candidates as the last step of training. This requires the nltk Python package to be installed.
We have two knowledge bases: the smaller one contains high-quality phrases used as positive labels, while the larger one is used to exclude medium-quality phrases when generating negative labels.
A hard threshold on raw frequency is specified for frequent phrase mining, which generates the candidate set.
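The candidate generation step can be sketched as counting contiguous n-grams and keeping those at or above the hard frequency threshold (the function name and parameters here are illustrative, not SegPhrase's actual interface):

```python
from collections import Counter

def candidate_phrases(docs, max_len=3, min_freq=2):
    """Count all contiguous n-grams (up to max_len words) and keep
    those whose raw frequency meets the hard threshold min_freq."""
    counts = Counter()
    for doc in docs:
        tokens = doc.lower().split()
        for n in range(1, max_len + 1):
            for i in range(len(tokens) - n + 1):
                counts[" ".join(tokens[i:i + n])] += 1
    return {g: c for g, c in counts.items() if c >= min_freq}

docs = [
    "data mining and text mining",
    "text mining of massive text corpora",
]
cands = candidate_phrases(docs, max_len=2, min_freq=2)
# "text mining" appears twice and survives; "of massive" appears once and is dropped.
```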
You can also specify how many threads SegPhrase may use.
The discard ratio (between 0 and 1) controls how many positive labels can be broken. It is typically small, e.g., 0.00, 0.05, or 0.10. It must be written with exactly two digits after the decimal point.
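If you compute the discard ratio programmatically, format it to exactly two decimal places before passing it to the scripts, for example:

```python
# Format the discard ratio with exactly two digits after the decimal
# point, as the training scripts expect (e.g. "0.05", not "0.050" or ".05").
ratio = 0.05
formatted = f"{ratio:.2f}"
```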
This is the number of iterations of Viterbi training.
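Viterbi training alternates between segmenting the corpus under the current phrase scores and re-estimating those scores. The segmentation step is a dynamic program; a minimal sketch (the scores and the default score for unknown segments are made-up illustrations, not SegPhrase's actual model):

```python
def best_segmentation(tokens, scores, max_len=3):
    """Dynamic program: choose the segmentation maximizing the total
    segment score. `scores` maps a candidate phrase to its quality;
    segments not in the table get a small default score."""
    n = len(tokens)
    best = [float("-inf")] * (n + 1)
    best[0] = 0.0
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for length in range(1, min(max_len, i) + 1):
            seg = " ".join(tokens[i - length:i])
            s = best[i - length] + scores.get(seg, 0.1)
            if s > best[i]:
                best[i], back[i] = s, i - length
    # Recover the segment boundaries by walking the backpointers.
    segments, i = [], n
    while i > 0:
        segments.append(" ".join(tokens[back[i]:i]))
        i = back[i]
    return segments[::-1]

scores = {"support vector machine": 3.0, "machine learning": 2.0}
tokens = "we use support vector machine models".split()
segments = best_segmentation(tokens, scores)
# -> ["we", "use", "support vector machine", "models"]
```

Each Viterbi training iteration would re-run this segmentation over the corpus and then update the phrase scores from the resulting segment counts.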
Alpha is used in the label propagation from phrases to unigrams.
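The exact propagation formula is internal to SegPhrase; as a loose sketch of the idea only, a unigram's quality can be interpolated between its own evidence and the scores of the quality phrases containing it, with Alpha weighting the propagated part (the formula and all numbers below are illustrative assumptions, not SegPhrase's actual computation):

```python
def propagate(unigram_score, phrase_scores, alpha):
    """Illustrative interpolation: mix a unigram's own score with the
    average score of the quality phrases that contain it."""
    if not phrase_scores:
        return unigram_score
    propagated = sum(phrase_scores) / len(phrase_scores)
    return (1 - alpha) * unigram_score + alpha * propagated

# e.g. "learning" on its own vs. inside "machine learning" / "deep learning"
score = propagate(0.2, [0.9, 0.8], alpha=0.5)
```

A larger Alpha lets high-quality phrases lift the scores of their constituent unigrams more strongly.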
Parameters - parse.sh
$ ./bin/segphrase_parser results/segmentation.model results/salient.csv 50000 ./data/test.txt ./results/parsed.txt 0
The first parameter is the segmentation model saved during training. The second is the ranked list of high-quality phrases (together with unigrams). The third determines how many of the top-ranked phrases (and unigrams) will be considered in this run of segmentation; this parameter is dataset- and application-specific. The next two are the input corpus and the output file. The last one is a debug flag; you can simply leave it as 0.
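For intuition on the third parameter, truncating a ranked phrase list can be sketched as below (the `phrase,score` CSV layout, sorted best-first, is an assumption for illustration, not a documented guarantee about results/salient.csv):

```python
import csv
import io

def top_phrases(csv_text, limit):
    """Keep only the top `limit` entries of a ranked phrase list.
    Assumes one `phrase,score` row per line, best phrases first."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    return [phrase for phrase, _score in rows[:limit]]

ranking = "support vector machine,0.98\ndata mining,0.95\nthe paper,0.10\n"
kept = top_phrases(ranking, limit=2)
# keeps the two highest-ranked phrases and drops "the paper"
```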