release 5.1.0 (#134)
* - citation info, typos

* - typo

* Update README.md; Cite Me with Saskia as co-author, beta release info "no warranty" not after UniQC, but before all beta toolboxes.

* Update CHANGELOG.md; PhysIO v8.0 (Hilbert RVT, resp preprocessing)

* Update CITATION.cff; typos Saskia

* - HUGE: release 202104

* - HUGE: file permissions

* - release: update changelog

* Update CITATION.cff

Co-authored-by: Lars Kasper <kasper@biomed.ee.ethz.ch>
TNU-yaoy and mrikasper committed Apr 23, 2021
1 parent a3b1378 commit 2013ba8
Showing 22 changed files with 1,757 additions and 57 deletions.
12 changes: 11 additions & 1 deletion CHANGELOG.md
@@ -1,12 +1,22 @@
# Changelog
TAPAS toolbox
## [5.1.0] 2021-04-23

### Added
- HUGE: Added Monte Carlo based inference (https://doi.org/10.1002/hbm.25431)
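The Monte Carlo inference added here is implemented as Metropolized Gibbs sampling (see the new `mh_init.m` below). As a rough illustration of the accept/reject rule at the core of such samplers — a generic Python sketch, not TAPAS code:

```python
import math
import random

def metropolis_step(x, log_post, step_size=0.5):
    """One random-walk Metropolis update for a scalar parameter.

    Illustrative only: HUGE updates many parameter blocks with tuned
    step sizes; this shows just the generic accept/reject rule.
    """
    proposal = x + random.gauss(0.0, step_size)
    log_ratio = log_post(proposal) - log_post(x)
    # Accept with probability min(1, exp(log_ratio)).
    if log_ratio >= 0 or math.log(random.random()) < log_ratio:
        return proposal, True   # accepted
    return x, False             # rejected

# Draw from a standard normal through its log-density.
random.seed(42)
log_post = lambda v: -0.5 * v * v
x, n_accept = 0.0, 0
for _ in range(2000):
    x, accepted = metropolis_step(x, log_post)
    n_accept += accepted
acceptance_rate = n_accept / 2000
```

Tracking the acceptance rate, as HUGE does via its `nProp`/`nAccept` counters, is what allows step sizes to be tuned.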

### Fixed
- Fixed typos


## [5.0.0] 2021-03-15

### Added
- [genbed](genbed/README.md): Python package for Generative Embedding
- [UniQC](UniQC/README.md): unified neuroimaging quality control (beta release)
- [task/BLT](task/BLT/README.md): breathing learning task

- [PhysIO](PhysIO/CHANGELOG.md): Release 2021a-v8.0; new method for estimating respiratory volume per unit time (RVT) via the Hilbert transform ([Harrison et al., NeuroImage 230 (2021)](https://doi.org/10.1016/j.neuroimage.2021.117787))
- also: robust respiratory preprocessing: optional de-spiking (median filter) and window-padded detrending
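The Hilbert-transform RVT method derives instantaneous breathing depth and rate from the analytic signal of the respiratory trace. A rough Python sketch of the idea — not the PhysIO implementation; the function name, the 5-sample median filter, and the 10 s moving-average detrend are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert, medfilt

def rvt_hilbert(resp, fs):
    """Sketch of Hilbert-based RVT estimation (after Harrison et al. 2021).

    resp: 1-D respiratory trace, fs: sampling rate in Hz.
    Preprocessing loosely mirrors the de-spiking/detrending step above.
    """
    # De-spike with a short median filter, then remove slow drift
    # with a moving-average detrend (10 s window, illustrative).
    resp = medfilt(resp, kernel_size=5)
    win = int(10 * fs)
    resp = resp - np.convolve(resp, np.ones(win) / win, mode="same")

    analytic = hilbert(resp)
    amplitude = np.abs(analytic)                   # instantaneous breathing depth
    phase = np.unwrap(np.angle(analytic))
    rate = np.gradient(phase) * fs / (2 * np.pi)   # breaths per second
    return amplitude * rate                        # RVT ~ depth x rate

# Synthetic 0.25 Hz breathing trace sampled at 10 Hz
fs = 10.0
t = np.arange(0, 60, 1 / fs)
rvt = rvt_hilbert(np.sin(2 * np.pi * 0.25 * t), fs)
```

For the validated implementation and parameter choices, see the PhysIO code and the Harrison et al. (2021) paper cited above.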

## [4.0.0] 2020-09-09

10 changes: 5 additions & 5 deletions CITATION.cff
@@ -3,8 +3,8 @@ message: If you use this software, please cite it as below.
authors:
- family-names: Aponte
given-names: Eduardo
-- family-names: Bollman
-  given-names: Sakia
+- family-names: Bollmann
+  given-names: Saskia
- family-names: Brodersen
given-names: Kay
- family-names: Frässle
@@ -28,8 +28,8 @@ authors:
- family-names: Yao
given-names: Yu
title: "TAPAS - Translational Algorithms for Psychiatry-Advancing Science"
-version: 5.0.0
-date-released: 2021-03-12
+version: 5.1.0
+date-released: 2021-04-23
repository-code: "https://github.com/translationalneuromodeling/tapas"
url: "http://www.translationalneuromodeling.org/tapas"
-license: GPL-3.0-or-later
\ No newline at end of file
+license: GPL-3.0-or-later
16 changes: 9 additions & 7 deletions README.md
@@ -1,6 +1,6 @@
![TAPAS Logo](misc/TapasLogo.png?raw=true "TAPAS Logo")

-*Version 5.0.0*
+*Version 5.1.0*

T A P A S - Translational Algorithms for Psychiatry-Advancing Science.
========================================================================
@@ -39,10 +39,9 @@ And the following tasks:
- [BLT](task/BLT/README.md): Breathing Learning Task.


-TAPAS also includes beta versions the following toolboxes:
+TAPAS also includes beta versions of the following toolboxes. Please note that these toolboxes have not been extensively tested and are still in active development:
 - [h2gf](h2gf/README.md): hierarchical extension of HGF.
 - [UniQC](UniQC/README.md): unified neuroimaging quality control.
-Please note that these toolboxes have not been extensively tested and are still in active development.


TAPAS is written in MATLAB and Python and distributed as open source code under
@@ -53,9 +52,11 @@ the GNU General Public License (GPL, Version 3).
INSTALLATION
------------

-TAPAS is a collection of toolboxes written in MATLAB (Version R2016b) and Python. The key
-requirement is the installation of MATLAB (produced by The MathWorks, Inc.
-Natick, MA, USA. http://www.mathworks.com/). Toolboxes written in Python currently include: genbed. For requirements see the genbed [documentation](genbed/README.md).
+TAPAS is a collection of toolboxes written in MATLAB (Version R2016b) and Python.
+- The key requirement is the installation of MATLAB (produced by The MathWorks, Inc.
+  Natick, MA, USA. http://www.mathworks.com/).
+- Toolboxes written in Python currently include:
+  - genbed: For requirements see the genbed [documentation](genbed/README.md).

Please download TAPAS from our
[Github Release Page](https://github.com/translationalneuromodeling/tapas/releases).
@@ -101,7 +102,8 @@ Cite Me

Please cite this toolbox as:

-S. Frässle, E. A. Aponte, K. H. Brodersen, C. T. Do, O. Harrison, S. J. Harrison, J. Heinzle, S. Iglesias, L. Kasper, E. I. Lomakina, C. D. Mathys, M. Müller-Schrader, I. Pereira, F. H. Petzschner, S. Raman, D. Schöbi, B. Toussaint, L. A. Weber, Y. Yao, and K. E. Stephan, “TAPAS: an open-source toolbox for translational neuromodeling and computational psychiatry,” \[Online\]. Available: https://www.biorxiv.org/content/10.1101/2021.03.12.435091v1
+Frässle, S., Aponte, E.A., Bollmann, S., Brodersen, K.H., Do, C.T., Harrison, O.K., Harrison, S.J., Heinzle, J., Iglesias, S., Kasper, L., Lomakina, E.I., Mathys, C., Müller-Schrader, M., Pereira, I., Petzschner, F.H., Raman, S., Schöbi, D., Toussaint, B., Weber, L.A., Yao, Y., Stephan, K.E., 2021. TAPAS: an open-source software package for Translational Neuromodeling and Computational Psychiatry. bioRxiv 2021.03.12.435091. https://doi.org/10.1101/2021.03.12.435091


For additional information about citations and current version, enter `tapas_version(1);` on your MATLAB command line.

1 change: 0 additions & 1 deletion huge/.gitignore
@@ -2,4 +2,3 @@
/*.mex*
/trash
/@tapas_Huge/*.asv
-/spm
4 changes: 4 additions & 0 deletions huge/@tapas_Huge/default_options.m
@@ -30,10 +30,13 @@

% name value pairs
options.nvp = struct(...
'burnin' , [], ...
'confounds' , [], ...
'confoundsvariant' , 'default', ...
'dcm' , [], ...
'inversetemperature' , 1, ...
'k' , 1, ...
'kernelratio' , [10 100], ...
'method' , 'VB', ...
'numberofiterations' , [], ...
'omitfromclustering' , [], ...
@@ -42,6 +45,7 @@
'priordegree' , 100, ...
'priorvarianceratio' , .1, ...
'randomize' , false, ...
'retainsamples' , 1000, ...
'saveto' , [], ...
'seed' , [], ...
'startingvaluedcm' , 'prior', ...
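The hunks above extend HUGE's name-value defaults with Monte-Carlo-related settings such as `retainsamples`. A minimal Python sketch of the same defaults-plus-override pattern — an illustration, not TAPAS code; only a few option names are reproduced from the struct above, and `None` stands in for MATLAB's `[]`:

```python
def merge_options(user_options=None):
    """Merge user-supplied name-value pairs over toolbox defaults.

    Unknown option names are rejected rather than silently ignored;
    names are matched case-insensitively, as MATLAB nvp parsing
    typically is.
    """
    defaults = {
        "burnin": None,            # [] in MATLAB: derived from iteration count
        "kernelratio": (10, 100),
        "method": "VB",
        "randomize": False,
        "retainsamples": 1000,
    }
    merged = dict(defaults)
    for name, value in (user_options or {}).items():
        key = name.lower()
        if key not in defaults:
            raise KeyError(f"unknown option: {name}")
        merged[key] = value
    return merged

opts = merge_options({"Method": "MH", "Burnin": 500})
```

Validating names against the default set is what lets a typo in an option name fail loudly instead of falling back to a default.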
4 changes: 2 additions & 2 deletions huge/@tapas_Huge/estimate.m
@@ -182,7 +182,7 @@
prior.tau_0 = obj.options.nvp.priorvarianceratio;
assert(all(prior.tau_0(:) > 0), 'TAPAS:HUGE:Prior', ...
'PriorCovarianceRatio must be positive scalar.');
-% xxxTODO check size consistent


% mu_h
prior.mu_h = obj.options.prior.mu_h;
@@ -298,7 +298,7 @@

% randomize starting values
if obj.options.nvp.randomize && ~(strcmpi(obj.options.nvp.method, 'MH') && ...
-    ~obj.options.mh.nSteps.clusters) %%% TODO remove exception and introduce new method for clustering only
+    ~obj.options.mh.nSteps.clusters)
stVal2 = stVal2 + randn(size(stVal2)).*obj.options.start.gmm;
end

177 changes: 177 additions & 0 deletions huge/@tapas_Huge/mh_init.m
@@ -0,0 +1,177 @@
function [ obj ] = mh_init( obj )
% Initialize Markov chain state for Metropolized Gibbs sampling.
% Requires obj.dcm and obj.prior to be initialized.
%
% This is a protected method of the tapas_Huge class. It cannot be called
% from outside the class.
%


% fMRI forward model
obj.options.fncBold = @bold_gen;

%% priors
if isvector(obj.prior.tau_0)
obj.prior.tau_0 = diag(obj.prior.tau_0);
end
% adapt obj.prior to collapsed HUGE
prior = struct();
prior.alpha_0 = obj.prior.alpha_0;
prior.m_0 = obj.prior.m_0;
prior.T_0 = inv(obj.prior.S_0).*obj.prior.tau_0;
prior.s_0 = - log(diag(obj.prior.S_0)');
prior.nu_0 = obj.prior.nu_0;
if isscalar(prior.nu_0)
prior.nu_0 = repmat(prior.nu_0, obj.idx.P_c, 1);
end
prior.Pi_h = inv(obj.prior.Sigma_h);
prior.mu_h = obj.prior.mu_h;
% lambda: BOLD-to-Noise ratio in log-space
prior.lambda_0 = repmat(log(obj.prior.mu_lambda), 1, obj.R);
prior.omega_0 = repmat(4/(log(obj.prior.s2_lambda).^2), obj.R, 1);

obj.prior = prior;

%% starting values
init.pi = ones(1, obj.K)./obj.K;
init.mu = obj.options.start.clusters;
init.kappa = repmat(obj.prior.s_0, obj.K, 1);
init.theta_c = obj.options.start.subjects(:, 1:obj.idx.P_c);
init.theta_h = obj.options.start.subjects(:, obj.idx.P_c+1:end);
init.lambda = repmat(obj.prior.lambda_0, obj.N, 1);
% randomizing starting values
if obj.options.nvp.randomize
tmp = obj.options.start.gmm;
if obj.options.mh.nSteps.weights
init.pi = init.pi + exp(randn(size(init.pi))*tmp); %%% TODO draw from prior
init.pi = init.pi./sum(init.pi); %%% requires samples from dirichlet
end
if obj.options.mh.nSteps.clusters
init.kappa = init.kappa + randn(size(init.kappa))*tmp;
end
tmp = obj.options.start.dcm;
if obj.options.mh.nSteps.dcm
init.lambda = init.lambda + randn(size(init.lambda))*tmp;
end
end

%% switch to single precision
if obj.options.bSinglePrec
init.pi = single(init.pi);
init.mu = single(init.mu);
init.kappa = single(init.kappa);
init.theta_c = single(init.theta_c);
init.theta_h = single(init.theta_h);
init.lambda = single(init.lambda);
end

%% initialize state of Markov chain
obj.aux.l2pi = -.5*obj.idx.P_c*log(2*pi);
% step size
obj.aux.step = obj.options.mh.stepSize;
obj.aux.step.mu = repmat(obj.aux.step.mu, obj.K, 1);
% obj.aux.step.mu = ones(obj.K, 1);
obj.aux.step.kappa = repmat(obj.aux.step.kappa, obj.K, 1);
obj.aux.step.theta = repmat(obj.aux.step.theta, obj.N, 1);
% obj.aux.step.theta = ones(obj.N, 1);
obj.aux.step.lambda = repmat(obj.aux.step.lambda, obj.N, 1);

% transforming proposal distribution
obj.aux.transform = struct();
obj.aux.transform.mu = repmat(eye(obj.idx.P_c), 1, 1, obj.K);
obj.aux.logdet.mu = zeros(obj.K, 1);
% obj.aux.transform.theta = repmat(eye(obj.idx.P_c + obj.idx.P_h), 1, 1, obj.N);
obj.aux.transform.theta = eye(obj.idx.P_c + obj.idx.P_h);
% obj.aux.logdet.theta = zeros(obj.N, 1);
obj.aux.logdet.theta = 0;

obj.aux.sample = init; % auxiliary sample
obj.aux.nProp = struct(); % number of proposals
obj.aux.nAccept = struct(); % number of accepted proposals
obj.aux.lpr = struct(); % log prior
obj.aux.lpr.llh = zeros(obj.N,1); % log likelihood

% weights
obj.aux.nProp.pi = 0;
obj.aux.nAccept.pi = 0;
pic = fliplr(cumsum(obj.aux.sample.pi(end:-1:1)));
pis = obj.aux.sample.pi./pic;
pic = pic(2:end-1);
pis = pis(1:end-1);
obj.aux.sample.pi_u = tapas_huge_logit(pis) + log(obj.K-1:-1:1);

obj.aux.lpr.pi = max(log(obj.aux.sample.pi)*(obj.prior.alpha_0 - 1) ...
+ sum(log(pis)) + sum(log(1-pis)) + sum(log(pic)), -realmax);
% dPi = obj.aux.sample.pi_u - obj.prior.mu_pi; % Gaussian prior in
% obj.aux.lpr.pi = -dPi'.^2*obj.prior.pi_pi/2; % unconstrained space

% clusters
obj.aux.nProp.mu = 0;
obj.aux.nAccept.mu = zeros(obj.K, 1);
obj.aux.nAccept.mu_rsk = zeros(obj.K, 2);
obj.aux.nProp.kappa = 0;
obj.aux.nAccept.kappa = zeros(obj.K, 1);
obj.aux.rho = zeros(obj.N, obj.K);
obj.aux.lpr.mu = zeros(1, obj.K);
obj.aux.lpr.kappa = zeros(1, obj.K);
for k = 1:obj.K
dmu_k = obj.aux.sample.mu(k,:) - obj.prior.m_0;
obj.aux.lpr.mu(k) = -.5*dmu_k*obj.prior.T_0*dmu_k';
obj.aux.lpr.kappa(k) = -.5*(obj.aux.sample.kappa(k,:) - ...
obj.prior.s_0).^2*obj.prior.nu_0;
dtheta_c = bsxfun(@minus, obj.aux.sample.theta_c, obj.aux.sample.mu(k,:));
obj.aux.rho(:,k) = -.5*dtheta_c.^2*exp(obj.aux.sample.kappa(k,:)') ...
+ .5*sum(obj.aux.sample.kappa(k,:)) + obj.aux.l2pi;
end
obj.aux.rho_max = max(obj.aux.rho, [], 2);
obj.aux.rho = bsxfun(@minus, obj.aux.rho, obj.aux.rho_max);

% DCM parameters
obj.aux.nProp.theta = 0;
obj.aux.nAccept.theta = zeros(obj.N, 1);
obj.aux.lpr.theta_c = log(exp(obj.aux.rho)*obj.aux.sample.pi(:)) ...
+ obj.aux.rho_max;
dtheta_h = bsxfun(@minus, obj.aux.sample.theta_h, obj.prior.mu_h);
obj.aux.lpr.theta_h = -.5*sum((dtheta_h*obj.prior.Pi_h).*dtheta_h, 2);

obj.aux.lvarBold = zeros(obj.N, obj.R); % log-variance of BOLD response
obj.aux.q_r = zeros(obj.N, 1); % number of scans
obj.aux.epsilon = cell(obj.N, 1);
for n = 1:obj.N
% number of scans
obj.aux.q_r(n) = size(obj.data(n).bold, 1);
% log-variance of BOLD response (per subject and region)
obj.aux.lvarBold(n,:) = max(log(var(obj.data(n).bold)), ...
obj.const.minLogVar);
theta = [obj.aux.sample.theta_c(n,:), obj.aux.sample.theta_h(n,:)];
epsilon = obj.bold_gen( theta, obj.data(n), obj.inputs(n), ...
obj.options.hemo, obj.R, obj.L, obj.idx );
assert(~(any(isnan(epsilon(:))) || any(isinf(epsilon(:)))), ...
'TAPAS:HUGE:Init:Stability',...
'Starting values for subject %u lead to an unstable DCM.', n);
obj.aux.epsilon{n} = epsilon;
end

% Noise precision (hyperparameters)
obj.aux.nProp.lambda = 0;
obj.aux.nAccept.lambda = zeros(obj.N,1);
dLambda = bsxfun(@minus, obj.aux.sample.lambda, obj.prior.lambda_0);
obj.aux.lpr.lambda = -.5*dLambda.^2*obj.prior.omega_0(:);

%%% TODO add support for confounds

% log-likelihood
for n = 1:obj.N
obj.aux.lpr.llh(n) = ...
-.5*sum(obj.aux.epsilon{n}.^2*exp(obj.aux.sample.lambda(n,:) ...
- obj.aux.lvarBold(n,:))') ...
+.5*obj.aux.q_r(n)*sum(obj.aux.sample.lambda(n,:)...
- obj.aux.lvarBold(n,:));
end

% special proposals
obj.aux.nProp.sp = [0;0];
obj.aux.nAccept.sp = [0;0];

end
