SALM: Suffix Array and its Applications in Empirical Language Processing by Joy


SALM: Suffix Array toolkit for empirical Language Manipulations.
By Joy

1) Download the source code.
2) Build binaries:
	a) For Linux platform:
		cd Distribution/Linux
		make allO32 (for 32-bit platform)
		make allO64 (for 64-bit platform)
		Binaries are created under Bin/Linux
	b) For Win32 platform
		open project files under Distribution/Win32 and use Visual C++ to build executables.
		Executables are placed under Bin/Win32
3) Index a corpus.
	The first step is to index a corpus using the IndexSA program.
	There is no limit on the size of the corpus as long as there is enough RAM:
	indexing a corpus of N words requires 9N bytes of memory.
	One other constraint is that no sentence may have more than 254 words.
	Synopsis of IndexSA:
		IndexSA corpusFileName [existingIDVocabularyFile]
	The optional existingIDVocabularyFile can be used to specify an existing vocabulary.
	The vocabulary will be updated with any words in the corpus that are new to it.
	This is useful when several corpora need to share a common vocabulary.

4) Applications
	The key functions for suffix array applications are provided in the classes C_SuffixArraySearchApplicationBase and C_SuffixArrayScanningBase.
	Please check the documentation and API for more details.
	Sample programs include:
			Output the frequency of an n-gram in the training corpus
			Output the n-gram token matching statistics of a testing data
			Output the n-gram type matching statistics of a testing data
			Output the frequencies of all the embedded n-grams in a sentence
			Output the non-compositionalities of the embedded n-grams in a sentence
			Filter out duplicated sentences in the training corpus and output the unique ones
			Given a list of n-grams and a list of training corpora indexed by their suffix arrays, collect counts of the n-grams in these corpora. E.g., given a Chinese word list, one can collect the frequencies of these words (as character n-grams) from several large corpora (segmented into characters).

			Output the count-of-counts information of a corpus
			Specified by a configuration file, output the n-gram types that have frequencies higher than the threshold
			Output the type/token statistics of the corpus

5) Questions, comments and suggestions?
Please email