
Segmental Language Models


A PyTorch Implementation of Unsupervised Neural Word Segmentation for Chinese via Segmental Language Modeling

Implemented features


  • Unsupervised Learning with Segmental Language Models
  • Supervised Learning with Segmental Language Models


Chinese Corpus:

  • segmented.txt: segmented data set for supervised training
  • unsegmented.txt: unsegmented data set. You can use both this data set and test.txt for unsupervised training
  • test.txt: unsegmented data set for evaluation
  • test_gold.txt: gold segmented test data set
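As a rough illustration of how test.txt predictions are compared against test_gold.txt, the sketch below computes the standard word-level F1 score used for Chinese word segmentation: each space-segmented line is turned into a set of character spans, and predicted spans are matched exactly against gold spans. This is a self-contained stand-in, not the repo's own eval script, whose details may differ.

```python
def word_spans(segmented_line):
    """Convert a space-segmented line into a set of (start, end) character spans."""
    spans, pos = set(), 0
    for word in segmented_line.split():
        spans.add((pos, pos + len(word)))
        pos += len(word)
    return spans

def segmentation_f1(pred_lines, gold_lines):
    """Micro-averaged word F1 between predicted and gold segmentations."""
    tp = n_pred = n_gold = 0
    for pred, gold in zip(pred_lines, gold_lines):
        p, g = word_spans(pred), word_spans(gold)
        tp += len(p & g)          # words with exactly matching boundaries
        n_pred += len(p)
        n_gold += len(g)
    precision, recall = tp / n_pred, tp / n_gold
    return 2 * precision * recall / (precision + recall)
```

For example, predicting "ab c d" against gold "ab cd" matches only the word "ab", giving precision 1/3, recall 1/2, and F1 0.4.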


For example, the following command trains an unsupervised SLM on the PKU dataset with a maximal segment length of 4 on GPU 0:

bash train unsupervised pku 4 0

Check the argparse configuration under codes/ for more arguments and details.


To produce segmentation predictions with a trained model:

bash predict unsupervised pku 4 0

To evaluate the predictions against the gold segmentation:

bash eval unsupervised pku 4


The segmental language models usually take about 30-50 minutes to converge, depending on the maximal segment length (2-4).
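The maximal segment length k matters because an SLM scores a sentence by summing over every segmentation whose segments have at most k characters; larger k means more candidate segments per position. The forward dynamic program below sketches this marginal likelihood, with a toy `seg_logprob` function standing in for the segment decoder of a trained SLM (the real model assigns each candidate segment a learned log-probability).

```python
import math

def marginal_log_likelihood(chars, seg_logprob, max_len):
    """Forward DP: log-sum over all segmentations with segments up to max_len chars.

    alpha[t] is the total log-probability of all segmentations of chars[:t].
    seg_logprob(segment) is a stand-in for a trained SLM's segment scorer.
    """
    n = len(chars)
    alpha = [-math.inf] * (n + 1)
    alpha[0] = 0.0                       # empty prefix has probability 1
    for t in range(1, n + 1):
        # the last segment ends at t and starts at most max_len chars earlier
        scores = [alpha[j] + seg_logprob(chars[j:t])
                  for j in range(max(0, t - max_len), t)]
        m = max(scores)
        alpha[t] = m + math.log(sum(math.exp(s - m) for s in scores))
    return alpha[n]
```

With a toy scorer that gives every segment probability 0.1 and max_len=2, a 3-character string has three segmentations (1+1+1, 1+2, 2+1), so the marginal likelihood is 0.001 + 0.01 + 0.01 = 0.021.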

Unsupervised results of the SLM model (Maximal Segment Length = k)

Dataset   PKU            MSR            AS             CityU
k = 2     0.797 (0.802)  0.776 (0.785)  0.794 (0.794)  0.786 (0.782)
k = 3     0.803 (0.798)  0.784 (0.794)  0.800 (0.803)  0.803 (0.805)
k = 4     0.797 (0.792)  0.782 (0.790)  0.798 (0.804)  0.798 (0.797)

Note that this is a re-implementation of the SLM model. Due to differences in detailed settings, such as the data loader, dropout rate, and learning rate, the performance of this re-implementation differs slightly from the numbers reported in the paper.

Using the library

The Python library is organized around 4 objects:

  • InputDataset: prepares the data stream for training and evaluation
  • CWSTokenizer: works together with InputDataset for data pre-processing
  • SegmentalLM: builds the model and provides the train/test API for the SLM
  • SLMConfig: manages configurations for the SLM

The main file contains the main function, which parses arguments, reads data, initializes the model, and runs the training loop.
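A minimal skeleton of such a main function is sketched below. The flag names here are hypothetical (the real ones are defined in the argparse configuration under codes/); the comments map each step to the objects listed above.

```python
import argparse

def build_parser():
    """Hypothetical argument parser; the actual flags live under codes/."""
    parser = argparse.ArgumentParser(description="Train/evaluate an SLM")
    parser.add_argument("--mode", choices=["unsupervised", "supervised"],
                        default="unsupervised")
    parser.add_argument("--dataset", default="pku")
    parser.add_argument("--max-seg-len", type=int, default=4,
                        help="maximal segment length (2-4)")
    parser.add_argument("--gpu", type=int, default=0)
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    # 1. read data            -> InputDataset + CWSTokenizer
    # 2. initialize the model -> SegmentalLM built from an SLMConfig
    # 3. training loop: maximize the marginal likelihood of each sentence
    return args
```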


If you use the codes, please cite the following paper:

@inproceedings{sun2018unsupervised,
  title={Unsupervised Neural Word Segmentation for Chinese via Segmental Language Modeling},
  author={Sun, Zhiqing and Deng, Zhi-Hong},
  booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
  year={2018}
}