Learning Word Embeddings for Low-resource Languages by PU Learning

Chao Jiang, Hsiang-Fu Yu, Cho-Jui Hsieh, Kai-Wei Chang. NAACL 2018

To run the code, please see the Source Code section below.

For details, please refer to the paper above.

  • Abstract

Word embedding is a key component in many downstream applications for processing natural languages. Existing approaches often assume the existence of a large collection of text for learning effective word embeddings. However, such a corpus may not be available for some low-resource languages. In this paper, we study how to effectively learn a word embedding model on a corpus with only a few million tokens. In such a situation, the co-occurrence matrix is sparse, as the co-occurrences of many word pairs are unobserved. In contrast to existing approaches, which often only sample a few unobserved word pairs as negative samples, we argue that the zero entries in the co-occurrence matrix also provide valuable information. We then design a Positive-Unlabeled Learning (PU-Learning) approach to factorize the co-occurrence matrix and validate the proposed approach in four different languages.
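The core idea above — giving the zero entries of the co-occurrence matrix a small but nonzero weight instead of ignoring them — can be sketched as a weighted low-rank factorization. The snippet below is an illustrative sketch only (it uses a simple alternating least squares solver, not the authors' optimized solver), and the function name `pu_factorize` and the weight `rho` are our own notation:

```python
import numpy as np

def pu_factorize(C, rank=2, rho=0.05, lam=0.1, iters=20, seed=0):
    """Illustrative PU-style weighted factorization of a co-occurrence matrix C.

    Observed (nonzero) entries get full weight 1.0; unobserved (zero)
    entries are treated as weak negatives with a small weight rho,
    rather than being dropped or only sparsely sampled.
    """
    rng = np.random.RandomState(seed)
    n, m = C.shape
    W = np.where(C > 0, 1.0, rho)          # PU weighting: zeros still contribute
    U = 0.1 * rng.randn(n, rank)           # word vectors
    V = 0.1 * rng.randn(m, rank)           # context vectors
    I = lam * np.eye(rank)                 # L2 regularization
    for _ in range(iters):
        # Alternating least squares: each row solve is a weighted ridge problem.
        for i in range(n):
            A = (V * W[i][:, None]).T @ V + I
            b = (V * W[i][:, None]).T @ C[i]
            U[i] = np.linalg.solve(A, b)
        for j in range(m):
            A = (U * W[:, j][:, None]).T @ U + I
            b = (U * W[:, j][:, None]).T @ C[:, j]
            V[j] = np.linalg.solve(A, b)
    return U, V
```

Because every zero entry carries weight `rho`, the learned factors are pulled toward small (but not arbitrary) values on unobserved pairs, which is what distinguishes this objective from negative-sampling approaches.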

  • Source Code

To reproduce the results on the text8 dataset, please run the provided script; it will automatically generate all the results on the text8 dataset reported in our paper.

To run the code, please use Python 2.7.

  • Data

We provide test sets in four languages: English, Czech, Danish, and Dutch. The Czech, Danish, and Dutch test sets were translated from English with the Google Translate API. They are in the testsets_in_different_languages folder.

  • Reference

    Please cite:

    @inproceedings{jiang2018learning,
      author    = {Chao Jiang and Hsiang-Fu Yu and Cho-Jui Hsieh and Kai-Wei Chang},
      title     = {Learning Word Embeddings for Low-resource Languages by PU Learning},
      booktitle = {NAACL},
      year      = {2018},
    }
  • Acknowledgments