FMCA

Fast multiresolution covariance analysis

FMCA is a header-only library for the multiresolution analysis of scattered data and kernel matrices. It is developed at the Università della Svizzera italiana in the research group of Michael Multerer.

Currently, the library features the construction of samplet bases and different versions of the pivoted Cholesky decomposition, as well as the fast samplet covariance compression introduced in Samplets: Construction and scattered data compression.

Different scaling distributions and samplets on a Sigma-shaped point cloud may, for example, look as depicted below.

[Figure: scaling distributions and samplets on the Sigma-shaped point cloud]

Representing an exponential covariance kernel with respect to this basis and truncating small entries leads to a sparse matrix, which can be factorized using nested dissection.

[Figure: the left panel shows the kernel matrix, the middle panel the reordered matrix, and the right panel the Cholesky factor]

Installation

FMCA is header-only. It depends on Eigen, which has to be installed in advance.

Moreover, thanks to pybind11, FMCA may be compiled into a Python module. To this end, pybind11 needs to be installed as well. Afterwards, the module can simply be compiled using CMake:

mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ../
make

Example files and the compiled library are then located in build/py.
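To use the compiled module from Python, the build directory has to be on the module search path. A minimal sketch, assuming the pybind11 module is exposed under the name FMCA (the exact name and location may differ in your build):

```python
import sys

# make the compiled extension visible; adjust the path to your build tree (assumption)
sys.path.append("build/py")

import FMCA  # module name is an assumption
print(FMCA.__doc__)
```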

Samplets

FMCA features a samplet basis, which can be used to localize a given signal in the frequency domain. Consider, for example, a signal sampled at 100000 random locations:

[Figure: signal sampled at 100000 random locations]

The first 500 coefficients of the transformed signal then look like this:

[Figure: first 500 coefficients of the samplet-transformed signal]

The example above can be found and modified in the Jupyter notebook FMCA_Samplets.
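As a rough sketch of what such a transform might look like through the Python bindings. The class and function names (SampletTree, sampletTransform), their signatures, and the column-wise point layout are assumptions; the notebook shows the actual calls:

```python
import numpy as np
import FMCA  # compiled module from build/py (name is an assumption)

# 100000 random sample locations in the unit square and a signal evaluated there
pts = np.random.rand(2, 100000)           # assumption: points stored column-wise (dim x N)
signal = np.cos(4 * np.pi * pts[0, :]) * np.sin(4 * np.pi * pts[1, :])

# hypothetical call: build a samplet tree with a given number of vanishing moments
tree = FMCA.SampletTree(pts, 2)

# hypothetical call: transform the signal into samplet coordinates
coeffs = FMCA.sampletTransform(tree, signal.reshape(-1, 1))

# most of the energy concentrates in few coefficients; inspect the first 500
print(np.abs(coeffs[:500]).ravel())
```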

Gaussian process learning

FMCA provides different variants of the pivoted (truncated) Cholesky decomposition, cf. On the low-rank approximation by the pivoted Cholesky decomposition and the references therein, that can be used for Gaussian process learning.
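For illustration, a generic truncated pivoted Cholesky decomposition can be sketched in a few lines of numpy; this is a conceptual stand-in, not FMCA's implementation:

```python
import numpy as np

def pivoted_cholesky(K, tol=1e-8, max_rank=None):
    # generic sketch: low-rank factor L with K ~= L @ L.T, pivoting on the
    # largest remaining diagonal entry and stopping once the trace of the
    # Schur complement drops below tol
    n = K.shape[0]
    max_rank = max_rank if max_rank is not None else n
    d = np.diag(K).astype(float)          # diagonal of the Schur complement
    L = np.zeros((n, max_rank))
    for m in range(max_rank):
        if d.sum() <= tol:
            return L[:, :m]
        i = int(np.argmax(d))             # pivot index
        L[:, m] = (K[:, i] - L[:, :m] @ L[i, :m]) / np.sqrt(d[i])
        d -= L[:, m] ** 2
    return L

# usage: low-rank approximation of an exponential kernel on random points
x = np.sort(np.random.rand(500))
K = np.exp(-np.abs(x[:, None] - x[None, :]))
L = pivoted_cholesky(K, tol=1e-10)
print(L.shape[1], np.linalg.norm(K - L @ L.T) / np.linalg.norm(K))
```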

[Figure: posterior mean (red) and posterior standard deviation (green) conditioned on the blue data points]

The example above can be found and modified in the Jupyter notebook FMCA_GP.

Samplet Gaussian process filtering

A samplet matrix compression based approach is also available. In particular, it allows for filtering of the (compressed) kernel matrix, thus mitigating its severe ill-conditioning.

[Figure: Matérn-3/2 kernel matrix (left) and the diagonal block associated with the 40 largest entries (right)]

For the Matérn-3/2 kernel shown on the left, considering just the diagonal block associated with the 40 largest entries, shown on the right, leads to a relative approximation error of the kernel matrix of about 3e-5 in the Frobenius norm. Solving the associated system for the noisy data set shown below leads to an effective denoising. The corresponding expectation is shown in orange.

[Figure: noisy data set and the corresponding expectation (orange)]

This example can be found and modified in the Jupyter notebook FMCA_Samplet_GP_Filtering.
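Conceptually, the filtering can be illustrated with a toy example in which an orthonormal Haar basis, used here only as a simple stand-in for samplets on sorted one-dimensional points, transforms a Matérn-3/2 kernel matrix, and only the coarse-scale diagonal block is used to compute the expectation. This is a sketch of the idea, not the library's compression:

```python
import numpy as np

def haar_matrix(n):
    # orthonormal Haar transform for n = 2^k, rows ordered coarse to fine;
    # a toy stand-in for a samplet basis on sorted 1d points
    if n == 1:
        return np.array([[1.0]])
    H = haar_matrix(n // 2)
    top = np.kron(H, [1.0, 1.0])
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bottom]) / np.sqrt(2.0)

# noisy observations of a smooth function on [0, 1]
n, ell, sigma = 256, 0.1, 0.1
x = np.sort(np.random.rand(n))
y = np.sin(2 * np.pi * x) + sigma * np.random.randn(n)

# Matérn-3/2 kernel matrix
D = np.abs(x[:, None] - x[None, :]) / ell
K = (1.0 + np.sqrt(3.0) * D) * np.exp(-np.sqrt(3.0) * D)

# transform the kernel matrix and keep only the leading m x m (coarse-scale) block
T = haar_matrix(n)
Kt = T @ K @ T.T
m = 40
alpha = np.linalg.solve(Kt[:m, :m] + sigma**2 * np.eye(m), T[:m, :] @ y)

# expectation (filtered posterior mean) evaluated at the data sites
mean = K @ T[:m, :].T @ alpha
print(np.linalg.norm(mean - np.sin(2 * np.pi * x)) / np.sqrt(n))
```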
