Library for analysing text documents: tf-idf transformation, computing similarities, visualisation, etc.

This repository contains several functions for analyzing text corpora. Mainly, text documents can be transformed into (sparse, dictionary-based) tf-idf features, based on which the similarities between the documents can be computed, the documents can be classified with k-nearest neighbors (knn), or the corpus can be visualized in two dimensions.
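As a rough illustration of this pipeline (a minimal sketch with plain dictionaries, not the library's actual API), sparse tf-idf features and a similarity between two documents can be computed like this:

```python
import math
from collections import Counter

def tfidf_dicts(docs):
    """Compute sparse tf-idf features as {doc_id: {word: weight}}.
    Illustrative only -- not this library's actual functions."""
    tf = {i: Counter(doc.lower().split()) for i, doc in enumerate(docs)}
    n = len(docs)
    # document frequency: in how many docs does each word occur
    df = Counter(w for counts in tf.values() for w in counts)
    return {
        i: {w: c * math.log(n / df[w]) for w, c in counts.items()}
        for i, counts in tf.items()
    }

def cosine_sim(a, b):
    """Cosine similarity between two sparse feature dicts."""
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Documents sharing many low-document-frequency words end up with a high cosine similarity, while documents with disjoint vocabularies get a similarity of 0.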

The individual library components are largely independent of one another (besides most of them relying on the same shared helper functions), which means you might also find only parts of this library interesting, e.g., the concise Python implementation of t-SNE, which can be used to embed data points in 2D based on any kind of similarity matrix, not necessarily one created with the scripts from this library.
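Both 2D embedding approaches used here take only a similarity matrix as input. For classical scaling (the simpler of the two), the idea fits in a few lines of numpy; this is a minimal sketch of the standard algorithm, not this library's implementation:

```python
import numpy as np

def classical_scaling(S, dims=2):
    """Embed n points in `dims` dimensions from a symmetric n x n
    similarity (Gram-like) matrix via classical scaling / classical MDS.
    A minimal sketch of the textbook algorithm."""
    n = S.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = J @ S @ J                         # double-center the similarities
    evals, evecs = np.linalg.eigh(B)      # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:dims]  # pick the largest `dims` eigenvalues
    scale = np.sqrt(np.clip(evals[idx], 0.0, None))
    return evecs[:, idx] * scale          # n x dims coordinates
```

If S is an inner-product matrix of some underlying points, this recovers their configuration up to rotation and reflection, i.e., pairwise distances are preserved.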

If any of this code was helpful for your research, please consider citing it:

  author       = {Franziska Horn},
  title        = {cod3licious/nlputils},
  month        = may,
  year         = 2018,
  doi          = {10.5281/zenodo.1254413},
  url          = {}

The code is intended for research purposes. It was programmed for Python 2.7, but should also run on newer Python 3 versions - please open an issue if you find something isn't working there!


You can either download the code from here and include the nlputils folder in your $PYTHONPATH, or install (the library components only) via pip:

$ pip install nlputils

nlputils library components

dependencies: numpy, scipy, unidecode, matplotlib

  • Various helper functions to manipulate dictionaries, e.g., to invert them on various levels (for example, transforming a dict of {document: {word: count}} into {word: {document: count}}).
  • Code to preprocess texts and transform them into tf-idf features. It is somewhat similar to sklearn's TfidfVectorizer, but based on (sparse) dictionaries instead of sparse vectors. These dictionary-based document features are the main input used by the other parts of this library. There is also a features2mat function to transform the dictionaries into a sparse feature matrix, which can be used with sklearn classifiers, for example.
  • One main function, compute_sim, which takes as input the tf-idf feature dictionaries of two documents and computes their similarity. For the type of similarity to compute between the documents, you can choose from a large variety of similarity coefficients, kernel functions, and distance measures, implemented based on [RIE08].
  • Wrapper functions to speed up the computation of the similarity matrix for a whole corpus.
  • A helper function to perform cross-validation.
  • K-nearest-neighbors classification based on a similarity matrix.
  • Projection of data points to 2D with classical scaling or t-SNE, based on a similarity matrix.
  • Helper functions to create a plot of the dataset based on the 2D embedding. These can also create a JSON file that can be used with d3.js to build an interactive visualization of the data.
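To make the dictionary-based data structures above concrete, here is a hypothetical re-implementation of the dictionary inversion and of a features2mat-style conversion to a scipy sparse matrix (the library's actual function names and signatures may differ):

```python
from scipy.sparse import csr_matrix

def invert_dict(docfeats):
    """Turn {doc: {word: count}} into {word: {doc: count}}.
    Illustrative re-implementation of the described helper."""
    inverted = {}
    for doc, counts in docfeats.items():
        for word, c in counts.items():
            inverted.setdefault(word, {})[doc] = c
    return inverted

def dicts_to_matrix(docfeats):
    """Stack sparse feature dicts into a scipy CSR matrix, also
    returning the row (document) and column (word) orderings.
    Similar in spirit to the features2mat function described above."""
    doc_ids = sorted(docfeats)
    vocab = sorted({w for counts in docfeats.values() for w in counts})
    col = {w: j for j, w in enumerate(vocab)}
    rows, cols, vals = [], [], []
    for i, d in enumerate(doc_ids):
        for w, v in docfeats[d].items():
            rows.append(i)
            cols.append(col[w])
            vals.append(v)
    X = csr_matrix((vals, (rows, cols)), shape=(len(doc_ids), len(vocab)))
    return X, doc_ids, vocab
```

The resulting CSR matrix can be passed directly to sklearn classifiers, which is the use case the component list mentions.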


additional dependencies: sklearn

The IPython notebook at examples/examples.ipynb contains several examples of how to use the library components described above.

If you have any questions, please don't hesitate to send me an email. And of course, if you find any bugs or want to contribute other improvements, pull requests are very welcome!

[RIE08] Rieck, Konrad, and Pavel Laskov. "Linear-time computation of similarity measures for sequential data." Journal of Machine Learning Research 9 (Jan 2008): 23-48.