
# The SHOGUN machine learning toolbox

Develop branch build status:

Other links that may be useful:

• See INSTALL for first steps on installation and running SHOGUN.
• See README.developer for the developer documentation.
• See README.cmake for setting particular build options with SHOGUN and cmake.

## Introduction

The Shogun Machine Learning Toolbox provides a wide range of unified and efficient Machine Learning (ML) methods. The toolbox makes it easy to seamlessly combine multiple data representations, algorithm classes, and general-purpose tools. This enables both rapid prototyping of data pipelines and extensibility in terms of new algorithms. We combine modern software architecture in C++ with both efficient low-level computing backends and cutting-edge algorithm implementations to solve large-scale Machine Learning problems, for now on single machines.

One of Shogun's most exciting features is that you can use the toolbox through a unified interface from C++, Python, Octave, R, Java, Lua, C#, etc. This not only means that we are independent of trends in computing languages, but it also lets you use Shogun as a vehicle to expose your algorithm to multiple communities. We use SWIG to enable bidirectional communication between C++ and the target languages. Shogun runs under Linux/Unix, MacOS, and Windows.
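As a rough illustration of what the unified interface looks like, here is a minimal sketch using the modular Python interface. The module name (`modshogun`) and the class names (`RealFeatures`, `BinaryLabels`, `GaussianKernel`, `LibSVM`) are those used by the Python examples in this source tree; adjust the import if your build exposes a different module name. The same classes are available from the other SWIG-generated interfaces.

```python
# Minimal sketch of the modular Python interface (module and class names
# assumed from the examples shipped with this source tree).
import numpy as np
from modshogun import RealFeatures, BinaryLabels, GaussianKernel, LibSVM

# Toy data: two Gaussian blobs labelled -1/+1 (one column per example).
X = np.hstack((np.random.randn(2, 50) - 1, np.random.randn(2, 50) + 1))
y = np.concatenate((-np.ones(50), np.ones(50)))

features = RealFeatures(X)
labels = BinaryLabels(y)

kernel = GaussianKernel(features, features, 2.0)  # kernel width = 2.0
svm = LibSVM(1.0, kernel, labels)                 # regularisation C = 1.0
svm.train()

predictions = svm.apply(features)                 # BinaryLabels with -1/+1 outputs
```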

Originally focussing on large-scale kernel methods and bioinformatics (for a list of scientific papers mentioning Shogun, see here), the toolbox has seen massive extensions to other fields in recent years. It now offers features that span the whole space of Machine Learning methods, including many classical methods in classification, regression, dimensionality reduction, and clustering, as well as more advanced algorithm classes such as metric, multi-task, structured-output, and online learning, feature hashing, ensemble methods, and optimization, to name just a few. In addition, Shogun contains a number of exclusive state-of-the-art algorithms such as a wealth of efficient SVM implementations, Multiple Kernel Learning, kernel hypothesis testing, Krylov methods, etc. All algorithms are supported by a collection of general-purpose methods for evaluation, parameter tuning, preprocessing, serialisation and I/O; the resulting combinatorial possibilities are huge. See our feature list for more details.
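For example, the general-purpose evaluation classes can score the predictions from the sketch above; `AccuracyMeasure` is assumed here to be exposed by the modular Python interface of this tree (check the API documentation for your build):

```python
# Continuing the sketch above: scoring predictions with one of the
# general-purpose evaluation classes (class name assumed from this tree).
from modshogun import AccuracyMeasure

evaluator = AccuracyMeasure()
accuracy = evaluator.evaluate(predictions, labels)  # fraction correct, in [0, 1]
print("training accuracy: %.2f" % accuracy)
```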

The wealth of ML open-source software allows us to offer bindings to other sophisticated libraries including: LibSVM/LibLinear, SVMLight, LibOCAS, libqp, VowpalWabbit, Tapkee, SLEP, GPML and more. See our list of integrated external libraries.

Shogun was initiated in 1999 by Soeren Sonnenburg and Gunnar Raetsch (that is where the name ShoGun originates from). It is now developed by a much larger team (cf. the website and the AUTHORS file), and would not have been possible without the patches and bug reports contributed by various people. See CONTRIBUTIONS for a detailed list. Statistics on Shogun's development activity can be found on ohloh.

## Interfaces

SHOGUN is implemented in C++ and offers interfaces to Matlab(tm), R, Octave, Java, C#, Ruby, Lua, and Python.

The following table depicts the status of each interface available in SHOGUN:

| Interface | Status |
|-----------|--------|
| python_modular | mature (no known problems) |
| octave_modular | mature (no known problems) |
| java_modular | stable (no known problems; not all examples are ported) |
| ruby_modular | stable (no known problems; only a few examples ported) |
| csharp_modular | stable (no known problems; not all examples are ported) |
| lua_modular | alpha (some examples work, string typemaps are unstable) |
| perl_modular | pre-alpha (work-in-progress quality) |
| r_modular | pre-alpha (SWIG does not properly handle reference counting, so this is only for the brave; use --disable-reference-counting to get it to work, but beware that it will leak memory; disabled by default) |
| octave_static | mature (no known problems) |
| matlab_static | mature (no known problems) |
| python_static | mature (no known problems) |
| r_static | mature (no known problems) |
| libshogun_static | mature (no known problems) |
| cmdline_static | stable (but some data types incomplete) |
| elwms_static | this is the eierlegende Wollmilchsau interface, a chimera that in one file interfaces with Python, Octave, R, and Matlab, and provides the run_python command to run Python code using the variables available in Octave, R, or Matlab |

Visit http://www.shogun-toolbox.org/doc/en/current for further information.

## Platforms

Debian GNU/Linux, Mac OSX, and WIN32/CYGWIN are supported platforms (see the INSTALL file for generic and platform-specific installation instructions).

## Contents

The following directories are found in the source distribution.

• src - source code.
• data - data sets (required for some examples / applications; these need to be downloaded separately via the download site or with git submodule update --init from the root of the git checkout).
• doc - documentation (to be built using doxygen), ipython notebooks, and the PDF tutorial.
• examples - example files for all interfaces.
• applications - applications of SHOGUN.
• benchmarks - speed benchmarks.
• tests - unit and integration tests.
• cmake - cmake build scripts.

## Applications

We have successfully used this toolbox to tackle the following sequence analysis problems: Protein Super Family classification, Splice Site Prediction, Interpreting the SVM Classifier, Splice Form Prediction, Alternative Splicing, and Promoter Prediction. Some of them come with no fewer than 10 million training examples, others with 7 billion test examples.