If you require support or wish to ensure the continuation of this library, you must get your company to respond to the Call For Funding. I do not have the inclination to provide gratis assistance.
netlib-java is a wrapper for low-level BLAS,
LAPACK and ARPACK
that performs as fast as the C / Fortran interfaces with a pure JVM fallback.
netlib-java is included with recent versions of Apache Spark.
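netlib-java exposes the Fortran calling conventions directly: routine names like ddot, with explicit lengths and strides. As a purely illustrative sketch of that convention (this is not the library's code), a plain-Java ddot looks like:

```java
public class DdotSketch {
    // Fortran-style ddot: dot product of n elements of dx and dy, stepping
    // through each array with the given strides (positive strides only here;
    // the real BLAS routine also supports negative increments).
    static double ddot(int n, double[] dx, int incx, double[] dy, int incy) {
        double sum = 0.0;
        for (int i = 0, ix = 0, iy = 0; i < n; i++, ix += incx, iy += incy) {
            sum += dx[ix] * dy[iy];
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3};
        double[] y = {4, 5, 6};
        System.out.println(ddot(3, x, 1, y, 1)); // prints 32.0
    }
}
```

Because the signature is the same across implementations, code written against it can swap between native and pure-JVM backends without change.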
If you're a developer looking for an easy-to-use linear algebra library on the JVM, we strongly recommend Commons-Math, MTJ and Breeze:
- Apache Commons Math for the most popular mathematics library in Java (not using netlib-java)
- Matrix Toolkits for Java for high performance linear algebra in Java (builds on top of netlib-java)
- Breeze for high performance linear algebra in Scala and Spark (builds on top of netlib-java)

In netlib-java, implementations of BLAS/LAPACK/ARPACK are provided by:
- delegating builds that use machine optimised system libraries (see below)
- self-contained native builds using the reference Fortran from netlib.org
- F2J to ensure full portability on the JVM
The JNILoader will attempt to load the implementations in this order automatically.
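The try-in-order fallback can be pictured as the sketch below. The class names are placeholders (com.example.*), and this is illustrative only, not JNILoader's actual code:

```java
import java.util.Arrays;
import java.util.List;

public class FallbackLoader {
    // Try each candidate implementation in order, keeping the first that loads.
    static Object firstAvailable(List<String> classNames) {
        for (String name : classNames) {
            try {
                return Class.forName(name).getDeclaredConstructor().newInstance();
            } catch (ReflectiveOperationException | UnsatisfiedLinkError e) {
                // class or native library unavailable on this machine: try the next
            }
        }
        throw new IllegalStateException("no implementation available");
    }

    public static void main(String[] args) {
        // java.util.ArrayDeque stands in for the pure-JVM fallback here
        Object impl = firstAvailable(Arrays.asList(
                "com.example.NativeSystemBLAS", // machine-optimised system libraries
                "com.example.NativeRefBLAS",    // self-contained reference natives
                "java.util.ArrayDeque"));       // pure-JVM fallback (stand-in)
        System.out.println(impl.getClass().getName()); // prints java.util.ArrayDeque
    }
}
```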
All major operating systems are supported out-of-the-box:
- OS X (x86_64)
- Linux (i686, x86_64, Raspberry Pi armhf) (must have libgfortran installed)
- Windows (32 and 64 bit)
Machine Optimised System Libraries
High performance BLAS / LAPACK are available commercially and open source for specific CPU chipsets. It is worth noting that "optimised" here means a lot more than simply changing the compiler optimisation flags: specialist assembly instructions are combined with compile time profiling and the selection of array alignments for the kernel and CPU combination.
An alternative to optimised libraries is to use the GPU: e.g. cuBLAS or clBLAS. Setting up cuBLAS must be done via our NVBLAS instructions, since cuBLAS does not implement the actual BLAS API out of the box.
Be aware that GPU implementations have severe performance degradation for small arrays. MultiBLAS is an initiative to work around the limitation of GPU BLAS implementations by selecting the optimal implementation at runtime, based on the array size.
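The runtime-selection idea can be sketched as a size-based dispatch. The threshold and interfaces below are illustrative assumptions, not taken from MultiBLAS:

```java
public class SizeDispatch {
    interface Ddot {
        double apply(int n, double[] x, double[] y);
    }

    // Illustrative cut-over point: below it, GPU transfer overhead dominates.
    static final int GPU_THRESHOLD = 20_000;

    // Pick the backend expected to be fastest for this problem size.
    static Ddot choose(Ddot cpu, Ddot gpu, int n) {
        return n < GPU_THRESHOLD ? cpu : gpu;
    }

    public static void main(String[] args) {
        Ddot cpu = (n, x, y) -> {
            double s = 0.0;
            for (int i = 0; i < n; i++) s += x[i] * y[i];
            return s;
        };
        Ddot gpu = (n, x, y) -> {
            throw new UnsupportedOperationException("GPU stub for illustration");
        };

        // n = 3 is far below the threshold, so the CPU backend runs.
        System.out.println(choose(cpu, gpu, 3).apply(3,
                new double[]{1, 2, 3}, new double[]{4, 5, 6})); // prints 32.0
    }
}
```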
To enable machine optimised natives in netlib-java, end-users must make their machine-optimised libblas3 (CBLAS) and liblapack3 (Fortran) available as shared libraries at runtime.
If it is not possible to provide a shared library, the author may be available
to assist with custom builds (and further improvements to
netlib-java) on a commercial basis.
Make contact for availability (budget estimates are appreciated).
Apple OS X requires no further setup because OS X ships with the veclib framework, boasting incredible CPU performance that is difficult to surpass (performance charts below show that it out-performs ATLAS and is on par with the Intel MKL).
Linux (includes Raspberry Pi)
Generically-tuned ATLAS and OpenBLAS are available with most distributions (e.g. Debian) and must be enabled explicitly using the package-manager. e.g. for Debian / Ubuntu one would type
```
sudo apt-get install libatlas3-base libopenblas-base
sudo update-alternatives --config libblas.so
sudo update-alternatives --config libblas.so.3
sudo update-alternatives --config liblapack.so
sudo update-alternatives --config liblapack.so.3
```
selecting the preferred implementation.
However, these are only generic pre-tuned builds. To get optimal performance for a specific
machine, it is best to compile locally by grabbing the latest ATLAS or the latest OpenBLAS and following the compilation
instructions (don't forget to turn off CPU throttling and power management during the build!).
Install the shared libraries into a folder that is seen by the runtime linker (e.g. add your install folder to /etc/ld.so.conf then run ldconfig), ensuring that libblas.so.3 and liblapack.so.3 exist and point to your optimal builds.
If you have an Intel MKL licence, you could also create symbolic links from libblas.so.3 and liblapack.so.3 to libmkl_rt.so, or use Debian's alternatives system:
```
sudo update-alternatives --install /usr/lib/libblas.so libblas.so /opt/intel/mkl/lib/intel64/libmkl_rt.so 1000
sudo update-alternatives --install /usr/lib/libblas.so.3 libblas.so.3 /opt/intel/mkl/lib/intel64/libmkl_rt.so 1000
sudo update-alternatives --install /usr/lib/liblapack.so liblapack.so /opt/intel/mkl/lib/intel64/libmkl_rt.so 1000
sudo update-alternatives --install /usr/lib/liblapack.so.3 liblapack.so.3 /opt/intel/mkl/lib/intel64/libmkl_rt.so 1000
```
and don't forget to add the MKL library directory (e.g. /opt/intel/mkl/lib/intel64) to your /etc/ld.so.conf file and run sudo ldconfig.
NOTE: some distributions, such as Ubuntu precise, do not create the necessary symbolic links /usr/lib/libblas.so.3 and /usr/lib/liblapack.so.3 for the system-installed implementations, so they must be created manually.
Windows

The native_system builds expect to find libblas3.dll and liblapack3.dll on the %PATH% (or current working directory).
Besides vendor-supplied implementations, OpenBLAS provide generically tuned binaries, and it is possible to build ATLAS yourself.
Use Dependency Walker to help resolve any problems, such as UnsatisfiedLinkError (Can't find dependent libraries).
NOTE: OpenBLAS doesn't provide separate BLAS and LAPACK libraries, so you will have to customise the build or copy the binary into both libblas3.dll and liblapack3.dll, whilst also obtaining a copy of libgcc_s_seh-1.dll from MinGW.
A specific implementation may be forced like so:
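For example, to force the self-contained reference natives (the system properties name the implementation class to load):

```
-Dcom.github.fommil.netlib.BLAS=com.github.fommil.netlib.NativeRefBLAS
-Dcom.github.fommil.netlib.LAPACK=com.github.fommil.netlib.NativeRefLAPACK
-Dcom.github.fommil.netlib.ARPACK=com.github.fommil.netlib.NativeRefARPACK
```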
A specific (non-standard) JNI binary may be forced like so:
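e.g. (the natives property takes the filename of the JNI wrapper binary; the filename below is a placeholder for your platform's binary):

```
-Dcom.github.fommil.netlib.NativeSystemBLAS.natives=netlib-native_system-myos-myarch.so
```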
(note that this is not your system liblapack.so.3; it is the netlib-java native wrapper component, which automatically detects and loads your system's libraries).
To turn off natives altogether, add these to the JVM flags:
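e.g. to fall back to the pure-JVM F2J implementations:

```
-Dcom.github.fommil.netlib.BLAS=com.github.fommil.netlib.F2jBLAS
-Dcom.github.fommil.netlib.LAPACK=com.github.fommil.netlib.F2jLAPACK
-Dcom.github.fommil.netlib.ARPACK=com.github.fommil.netlib.F2jARPACK
```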
Java has a reputation for being slow among older-generation developers because Java applications were slow in the 1990s. Nowadays, the JIT ensures that Java applications keep pace with, or exceed the performance of, C / C++ / Fortran applications.
The following performance charts give an idea of the performance ratios of Java vs the native
implementations. Also shown are pure C performance runs that show that
dropping to C at the application layer gives no performance benefit.
If anything, the Java version is faster for smaller matrices, and is consistently faster than the "optimised" implementations for some types of operations.
One can expect machine-optimised natives to out-perform the reference implementation – especially for larger arrays – as demonstrated below by Apple's veclib framework, Intel's MKL and (to a lesser extent) ATLAS.
Of particular note is cuBLAS (NVIDIA's graphics card BLAS), which performs as well as ATLAS on DGEMM for arrays of ~20,000+ elements (but as badly as the Raspberry Pi for smaller arrays!), and not so well on other routines.
Included in the CUDA performance results is the time taken to set up the CUDA interface and copy the matrix elements to the GPU device. The nooh run is a version that does not include the overhead of transferring arrays to/from the GPU device: to take full advantage of the GPU, developers must re-write their applications with GPU devices in mind. For example, a re-written implementation of LAPACK that took advantage of the GPU BLAS would give a much better performance improvement than dipping in and out of GPU address space.
The DSAUPD benchmark measures the
calculation of 10% of the eigenvalues for sparse matrices (
N rows by
N columns). Not included in this benchmark is the time taken to perform the matrix multiplication at each iteration.
NOTE: larger arrays were called first, so the JIT had already kicked in for the F2J implementations: on a cold start the F2J implementations are about 10 times slower, reaching peak performance after about 20 calls of a function (the Raspberry Pi doesn't seem to have a JIT).
Releases are distributed on Maven central:
```xml
<dependency>
  <groupId>com.github.fommil.netlib</groupId>
  <artifactId>all</artifactId>
  <version>1.1.2</version>
  <type>pom</type>
</dependency>
```
SBT developers can use
```scala
libraryDependencies += "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()
```
Those wanting to preserve the pre-1.0 API can use the legacy package (but note that it will be removed in the next release):
```xml
<dependency>
  <groupId>com.googlecode.netlib-java</groupId>
  <artifactId>netlib</artifactId>
  <version>1.1</version>
</dependency>
```
and developers who feel the native libs are too much bandwidth can depend on a subset of implementations instead of the all artifact.
Snapshots (preview releases, when new features are in active development) are distributed on Sonatype's Snapshot Repository, e.g.:
<dependency> <groupId>com.github.fommil.netlib</groupId> <artifactId>all</artifactId> <version>1.2-SNAPSHOT</version> </dependency>
If the above fails, ensure you have the following in your pom.xml:
```xml
<repositories>
  <repository>
    <id>sonatype-snapshots</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
    <releases>
      <enabled>false</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
```