tsnnls 2.4.5 - A fast block pivoting algorithm for sparse linear least squares
problems with non-negative variables designed to be embedded in applications

Jason Cantarella 
Michael Piatek
Eric Rawdon

28 September 2022

Contents
0. What is tsnnls?
1. Building
2. License
3. Quick start
4. Supporting functions
5. References

0. What is tsnnls?

tsnnls is a fast solver for least-squares problems of the form Ax = b
under the constraint that all entries in the solution vector x are
non-negative. The solver handles the case in which the matrix A has
full column rank (so the solution is unique). The tsnnls code is
primarily designed for large sparse problems, but works well for dense
problems too. We designed tsnnls to be called from larger
applications, and are including it in a large simulation ourselves. We
would be interested to hear about your application for tsnnls; please
feel free to write to us and tell us about your uses for the code!

The tsnnls library also includes a fast unconstrained least-squares
solver based on the method of normal equations. For well-conditioned
problems, this is a reasonable solution. For ill-conditioned problems,
you are probably better off with an iterative method like the Stanford
LSQR code, which is also freely available on the web.

For a more complete report on tsnnls, see our paper "tsnnls: A solver 
for large sparse least squares problems with non-negative variables",
which is available on arXiv.


1. Building

TSNNLS uses the standard GNU Autotools installation method:

./configure
make
make check
make install

TSNNLS depends on OpenBLAS to provide LAPACK and BLAS functions; if you
install via Autotools, you may have to tell ./configure where to find
the cblas.h and lapacke.h headers and the OpenBLAS library
(libopenblas.dylib on macOS, libopenblas.so on Linux). To build the
test program tsnnls_test, you will also need to install the argparse
library.

Installation can also be accomplished via Homebrew using the cantarellalab tap:

brew tap designbynumbers/cantarellalab
brew install tsnnls

See the file INSTALL in the main distribution directory for more information.

Also note that our distribution includes a limited version of the
TAUCS package for sparse matrix manipulation; future updates to that
library may provide performance benefits for tsnnls. See
http://www.tau.ac.il/~stoledo/taucs/ for information on the TAUCS
library. We include source from version 2.2. A statement regarding the
TAUCS license and its availability is provided in
'TAUCS_LICENSE_AND_AVAILABILITY' in the same directory as this readme.

2. License

This software is distributed under the terms of the GNU Lesser GPL. A
full description of the terms of this license can be found in the
files 'LICENSE', 'COPYING', and 'COPYING.LESSER' in the same directory
as this readme, or the license can be read on the web at
http://www.gnu.org/licenses/gpl.txt


3. Quick start

Once you have built the library, the main functionality is exposed
through the header tsnnls.h. The functions of primary interest are:

taucs_ccs_matrix*   
taucs_construct_sorted_ccs_matrix( double* values, int cols, int rows );

This function takes an array of double values in row-major order
representing a dense matrix of size rows by cols and converts it into
a compressed column storage (CCS) structure for use with our
algorithm. For a complete discussion of the taucs_ccs_matrix
structure, see the taucs.pdf documentation included with this package.
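
For example, a minimal sketch (assuming the header is installed as
tsnnls.h and is on your include path; the matrix entries are
illustrative) that builds the CCS form of a small 3 x 2 dense matrix:

#include "tsnnls.h"   /* assumed install location; adjust to your setup */

/* 3 x 2 dense matrix, stored row by row (row-major order) */
double vals[6] = { 1.0, 0.0,
                   0.0, 2.0,
                   3.0, 0.0 };

/* note the argument order: columns first, then rows */
taucs_ccs_matrix *A = taucs_construct_sorted_ccs_matrix( vals, 2, 3 );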

taucs_double*
t_snnls( taucs_ccs_matrix *A_original_ordering, taucs_double *b,
         double* outResidualNorm, double inRelErrTolerance,
         int inPrintErrorWarnings );

This function solves the problem Ax = b in the least squares sense
subject to the constraint that all entries of x are non-negative
(x >= 0). The value returned is an array of doubles of size A->n,
where A->n is the number of columns of the matrix A. b must be of size
A->m, where A->m is the number of rows of the matrix A. On return,
*outResidualNorm holds the 2-norm of the residual b - Ax.
inRelErrTolerance is the threshold for relative error (estimated using
(condition number)^2 * (machine epsilon)) at which to switch to the
slower but more numerically robust LSQR algorithm (see references). If
you *never* wish to use LSQR steps (for maximum speed), pass a value
greater than 1 for this parameter. If you always want to perform a
final LSQR step (for maximum accuracy), pass a value less than 0 for
this parameter. The latter practice is recommended.
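
Continuing the sketch above, a hedged example of a complete call (the
right-hand side b and the printed output are illustrative; printf
requires <stdio.h>):

taucs_double b[3] = { 1.0, 2.0, 3.0 };   /* must have A->m entries */
double residualNorm;

/* tolerance < 0 forces a final LSQR step (the recommended setting) */
taucs_double *x = t_snnls( A, b, &residualNorm, -1.0, 1 );

if (x != NULL) {
  printf( "x = (%g, %g), ||b - Ax|| = %g\n", x[0], x[1], residualNorm );
}
/* cleanup of x and A omitted for brevity */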

double*
t_lsqr( taucs_ccs_matrix *A, taucs_double *b );

This function solves the unconstrained problem Ax = b in the least
squares sense using a Cholesky factorization of the normal equations.
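
The call is analogous to t_snnls (a sketch, reusing A and b from the
examples above):

/* unconstrained least squares: entries of x_free may be negative */
double *x_free = t_lsqr( A, b );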

For examples of how these functions can be used in practice, see the
tsnnls test application source in the file 'tsnnls_test.c' in the same
directory as this readme.


4. Supporting functions

A variety of debugging and support functions are included with tsnnls
that may be useful in your application. We describe several here:

double
taucs_rcond( taucs_ccs_matrix* A );

This function returns an estimate of the reciprocal condition number
of A as computed by LAPACK.
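
For instance, one might use this estimate to choose the
inRelErrTolerance argument of t_snnls (a sketch; the 1e-10 threshold
is illustrative only):

#include <float.h>    /* DBL_EPSILON */

double rcond  = taucs_rcond( A );
double relErr = DBL_EPSILON / (rcond * rcond);  /* ~ (condition number)^2 * eps */

/* request the final LSQR step only when A looks ill-conditioned */
double tol = (relErr > 1e-10) ? -1.0 : 2.0;   /* -1: always LSQR, >1: never */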

double*	
taucs_convert_ccs_to_doubles( const taucs_ccs_matrix* A );

This function converts a sparse matrix A to an array of doubles in
row-major order of size A->m * A->n.

taucs_ccs_matrix*  
taucs_ccs_transpose( const taucs_ccs_matrix* A );

This function returns a sparse representation of the transpose of A.

void		
taucs_print_ccs_matrix( const taucs_ccs_matrix* A );

This function prints a representation of the sparse matrix A on
stdout.
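
A short debugging sketch combining these helpers (again reusing the
matrix A from the Quick start section; printf requires <stdio.h>):

/* dump the sparse matrix and its transpose to stdout */
taucs_print_ccs_matrix( A );

taucs_ccs_matrix *At = taucs_ccs_transpose( A );
taucs_print_ccs_matrix( At );

/* expand A back into a dense row-major array of size A->m * A->n */
double *dense = taucs_convert_ccs_to_doubles( A );
printf( "A[0][0] = %g\n", dense[0] );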

See the tsnnls.h header for a complete list of functions available
from the library.


5. References

Version 2.0 of tsnnls uses the block3 algorithm of Mikael Adlers. A
description of the algorithm can be found in his licentiate thesis
"Sparse Least Squares Problems with Box Constraints", available at

http://www.mai.liu.se/~milun/

The self-test code in tsnnls 2.0 and higher uses a problem generation
strategy proposed in

@article {MR95a:90059,
    AUTHOR = {Portugal, Lu{\'{\i}}s F. and J{\'u}dice, Joaqu{\'{\i}}m J. and
              Vicente, Lu{\'{\i}}s N.},
     TITLE = {A comparison of block pivoting and interior-point algorithms
              for linear least squares problems with nonnegative variables},
   JOURNAL = {Math. Comp.},
  FJOURNAL = {Mathematics of Computation},
    VOLUME = {63},
      YEAR = {1994},
    NUMBER = {208},
     PAGES = {625--643},
      ISSN = {0025-5718},
     CODEN = {MCMPAF},
   MRCLASS = {90C20 (65K05)},
  MRNUMBER = {95a:90059},
}

The paper is available through JSTOR.

LSQR sparse unconstrained least squares algorithm:
http://www.stanford.edu/group/SOL/software.html

LAPACK and BLAS: http://www.netlib.org/lapack/

TAUCS library for sparse matrices: http://www.tau.ac.il/~stoledo/taucs/

