Fit parameters #372

Merged
merged 54 commits into brainiak:master Sep 2, 2018

3 participants
@CameronTEllis
Contributor

CameronTEllis commented Jun 5, 2018

Added the ability to fit SFNR, SNR and AR noise parameters. Also added the 'compute_signal_magnitude' function to make signal creation easier. Minor fixes elsewhere

CameronTEllis added some commits Feb 4, 2018

Improved export_epoch, added more logging info, zscoring AR, reduced AR coefs to 1, added some warnings/tests of input data quality, split fitting up into 3 functions
@CameronTEllis

Contributor

CameronTEllis commented Jun 6, 2018

@mihaic Will codecov not run if Travis failed? The Travis failure is with Ruby again

CameronTEllis added some commits Jun 7, 2018

@lcnature

Hi @CameronTEllis I have not finished reading the code yet, but here are some partial comments for you to consider :)

@@ -624,7 +625,7 @@ def export_epoch_file(stimfunction,
label them as different epochs
filename : str

@lcnature

lcnature Jun 20, 2018

Contributor

Sorry I did not provide feedback earlier. But when I read the code this time, I feel that the docstring did not mention exactly how the epoch file is related to the task structure except for "This is a list with each entry a 3d matrix corresponding to a participant. The dimensions of the 3d matrix are condition by epoch by time". Would it help to add in the docstring some description like "For the i-th condition, if its k-th epoch spans time points t_m to t_n-1, then [i, k, t_m:t_n] are 1 in the epoch file."?
I actually personally feel that this way of coding the epoch file seems a little redundant :) stimfunction (or its sub-sampled version) seems to already capture all the info in the epoch file. But maybe it was decided earlier that this way of coding has some specific use?
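The docstring convention suggested above can be sketched directly (a standalone illustration with made-up sizes, not fmrisim code):

```python
import numpy as np

# Hypothetical sizes: 2 conditions, 3 epochs per condition, 100 time points
n_cond, n_epochs, n_time = 2, 3, 100
epoch_file = np.zeros((n_cond, n_epochs, n_time), dtype=int)

# "For the i-th condition, if its k-th epoch spans time points
# t_m to t_n-1, then [i, k, t_m:t_n] are 1 in the epoch file."
i, k, t_m, t_n = 0, 1, 20, 30
epoch_file[i, k, t_m:t_n] = 1

print(epoch_file[0, 1].sum())  # 10 time points marked as belonging to the epoch
```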

@CameronTEllis

CameronTEllis Jun 26, 2018

Contributor

Hi Mingbo,

Firstly thank you so much for the review, it is really appreciated!

Secondly, yes I will add that info, that is a good idea

Thirdly, I agree that the epoch file contains redundant information but I think the reason they made them that way is to ensure that they are as flexible as possible for different design types to make the code as robust as possible.

# If you have multiple TRs
if len(brain_voxels.shape) > 1:
    brain_voxels = np.mean(brain_voxels, 1)
    nonbrain_voxels = np.mean(nonbrain_voxels, 3)

@lcnature

lcnature Jun 20, 2018

Contributor

If I understood correctly, this step calculates the mean intensity of nonbrain_voxels over time, and then std_voxels takes the spatial std of that mean intensity of non-brain voxels.
Do you think it makes more sense to calculate the std over both spatial and temporal dimension, instead of std over spatial dimension on a temporal mean? Obviously people may have different definition of noise level but I worry that the first step of calculating mean of each time series might make the estimation of noise level too small (it would be close to the expected spatial variation of noise at any given TR divided by a factor of sqrt(n_TR))
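The shrinkage described here is easy to demonstrate numerically (a standalone sketch with arbitrary sizes, not fmrisim code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trs = 10000, 100
# iid unit-sd Gaussian noise for the non-brain voxels over time
nonbrain = rng.standard_normal((n_voxels, n_trs))

# Averaging over time first, then taking the spatial std
std_of_mean = np.std(nonbrain.mean(axis=1))

# Pooling over both spatial and temporal dimensions instead
pooled_std = np.std(nonbrain)

# Averaging first shrinks the estimate by roughly sqrt(n_trs)
print(std_of_mean, pooled_std)
```

With unit-sd noise, `std_of_mean` comes out near `1 / sqrt(n_trs)` while `pooled_std` stays near 1, which is the underestimation the comment warns about.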

@CameronTEllis

CameronTEllis Jun 26, 2018

Contributor

This functionality is there as a stop gap. Usually for this calculation (and everywhere else in the code) only one TR is provided so this if statement is skipped.

mask,
auto_reg_order=1,
ma_order=1,
sample_num=100,

@lcnature

lcnature Jun 20, 2018

Contributor

If the recommended sample_num is 1000 below, would it be better by leaving the default value as 1000 here?

@CameronTEllis

CameronTEllis Jun 26, 2018

Contributor

It would be advised if it weren't so slow. I now changed the doc string to reflect the statistics for 100 samples

auto_reg_sigma = np.sqrt(auto_reg_sigma[1])
# Pull out the ARMA values (depends on order)
try:
    model = ARMA(demeaned_timecourse, [auto_reg_order, ma_order])

@lcnature

lcnature Jun 20, 2018

Contributor

Non-essential suggestion: since the same model is used for all voxels, would it be better to move this line above the current loop?

@CameronTEllis

CameronTEllis Jun 26, 2018

Contributor

This would be great if I could avoid it because this is the most time consuming part of the whole code. However, this line seems necessary since the model also takes in the data (demeaned_timecourse), which changes on each iteration, right?

@lcnature

lcnature Jul 20, 2018

Contributor

Ah, sorry you are right, I somehow thought it was a module in nitime and had a vague impression that a class instance can be initiated without input data.

@@ -1362,15 +1461,9 @@ def generate_noise_volume(dimensions,
# If this is below zero then all the noise will be temporal

@lcnature

lcnature Jun 20, 2018

Contributor

I just realized that the assumption that temporal_sd reflects the variance of both temporal and spatial noise is not reflected in the docstring. Would you consider making them more consistent?

@CameronTEllis

CameronTEllis Jun 26, 2018

Contributor

Not sure what you mean by this. I am pretty sure that temporal_sd only varies the temporal component of the noise but is limited when spatial_sd is greater

@lcnature

lcnature Jul 20, 2018

Contributor

Ah, sorry I think I meant spatial_sd. Does the line 1461 spatial_sd = np.sqrt(spatial_sd ** 2 - temporal_sd ** 2) mean the users should add both the variance of spatial and temporal noise when they feed spatial_sd? They might not be aware that this subtraction is performed. Or maybe I misunderstood this line.
Sorry I am slightly confused with the formula in the following lines as well.
Line 1478 seems to say temporal_noise becomes the demeaned temporal noise plus spatial noise. Line 1481 adds this temporal_noise (which already added spatial_noise) with spatial_noise again. Does it mean spatial_noise is added twice in system_noise?

@CameronTEllis

CameronTEllis Jul 23, 2018

Contributor

This is a complicated part of the simulation that I could be doing wrong. The goal is to make a baseline volume of noise where each voxel has a specific mean and standard deviation. Importantly, this noise will contribute to any measurement of SFNR or SNR I want to do, so I need to be careful about how these different types of noise sum together. So for instance when generating this volume I need to add the spatial noise (gaussian) to the temporal noise (which is gaussian around the spatial noise). But after that addition I don't want to have changed the mean or variability of the spatial and temporal noise. I think that this code achieves it but I could be wrong and I would appreciate any suggestions to either fix it or make it more clear.

@lcnature

lcnature Aug 8, 2018

Contributor

Unfortunately I still did not understand the rationale of the if/else clause. Does it work without this?

@CameronTEllis

CameronTEllis Aug 10, 2018

Contributor

Yeah, good question. The problem is that I want to subtract the temporal variance from the spatial variance, but if the spatial variance is smaller than the temporal variance this would be negative. In such cases I just set the spatial variance to zero.
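The clamping described here can be written as a small helper (an illustrative sketch; the function name is made up and this is not the fmrisim implementation):

```python
import numpy as np

def constrained_spatial_sd(spatial_sd, temporal_sd):
    """Subtract the temporal variance from the spatial variance.

    If the spatial variance is smaller than the temporal variance the
    difference would be negative, so the spatial component is clamped
    to zero and all of the noise becomes temporal.
    """
    var_diff = spatial_sd ** 2 - temporal_sd ** 2
    if var_diff < 0:
        return 0.0
    return float(np.sqrt(var_diff))

print(constrained_spatial_sd(5.0, 3.0))  # sqrt(25 - 9) = 4.0
print(constrained_spatial_sd(2.0, 3.0))  # would be negative, so clamped to 0.0
```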

@@ -1382,7 +1475,7 @@ def generate_noise_volume(dimensions,
1)
temporal_noise = temporal_noise - (temporal_noise_mean - spatial_noise)

@lcnature

lcnature Jun 20, 2018

Contributor

I actually think this may not be necessary - a zero-mean noise time series just means that the statistical expectation of the noise needs to be zero, but does not mean that the mean of an instance of the noise time series has to be zero. But of course when n_TR is large, doing this does not change much.

@CameronTEllis

CameronTEllis Jun 26, 2018

Contributor

Since I multiply these time series, it is critical that they have a mean of zero to ensure that the variance of the time course is the only thing I am manipulating. Hence it is necessary that each time series is mean zero
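The reason demeaning matters before multiplying can be shown in a few lines (standalone sketch, not fmrisim code):

```python
import numpy as np

rng = np.random.default_rng(1)
# A noise instance whose sample mean is far from zero
ts = rng.standard_normal(200) + 5.0

# Demean so that a multiplicative scale changes only the variance
demeaned = ts - ts.mean()
scaled = demeaned * 3.0

# The mean stays (numerically) zero and the sd scales exactly by 3;
# without demeaning, multiplying by 3 would also triple the ~5 offset
print(scaled.mean(), scaled.std() / demeaned.std())
```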

auto_reg_order : float
How many timepoints ought to be taken into consideration for the
autoregression function
noise_dict : dict

@lcnature

lcnature Jun 20, 2018

Contributor

Do you want to mention all the keys allowed for noise_dict here, or at least in a function which the users are supposed to interact with? (Please dismiss this if it is already documented elsewhere -- sorry I did not search everywhere)

@CameronTEllis

CameronTEllis Jun 26, 2018

Contributor

Good point. I added some description to _noise_dict_update and generate_noise that should help.

# this timecourse 1, it will change the autoregression coefficient to be
# much lower.
# Collect the noise from the previous TRs
err_vols[:, :, :, tr_counter] = noise

@lcnature

lcnature Jun 20, 2018

Contributor

This line could be moved above the loop over pCounter, I think, since it only depends on tr_counter.

@CameronTEllis

CameronTEllis Jun 26, 2018

Contributor

Good point

@@ -1594,10 +1732,10 @@ def _generate_noise_temporal_phys(timepoints,
What time points, in seconds, are sampled by a TR
resp_freq : float
What is the frequency of respiration
What is the frequency of respiration (in s)

@lcnature

lcnature Jun 20, 2018

Contributor

Maybe in Hz instead, since you refer to frequency?

@CameronTEllis

CameronTEllis Jun 26, 2018

Contributor

Good catch

if len(dimensions) == 4:
return
raise IndexError('4 dimensions have been supplied, only using 3')

@lcnature

lcnature Jun 20, 2018

Contributor

I think this line would just raise an error and the code below won't be executed, so there is no need for the next line, right?

@CameronTEllis

CameronTEllis Jun 26, 2018

Contributor

Good point, turned it into a warning instead

CameronTEllis added some commits Jun 26, 2018

@mihaic

Contributor

mihaic commented Jul 17, 2018

@lcnature, what do you think about the latest commits?

@lcnature

Sorry for my delay. It looks much nicer! I also like the notebook example. I left some comments for your consideration.

to the brain. If you set the noise dict to matched then it will fit
the parameters to match the participant as best as possible.
The noise variables are as follows:
snr [float]: Size of the spatial noise

@lcnature

lcnature Jul 20, 2018

Contributor

I think snr and sfnr are both sorts of signal-to-noise ratios. Perhaps you should describe them as such ratios, instead of as the size of the noise?

@CameronTEllis

CameronTEllis Jul 23, 2018

Contributor

Good point, added

physiological_sigma [float]: Size of the variance of physiological
noise
auto_reg_rho [list]: The coefficients of the autoregressive

@lcnature

lcnature Jul 20, 2018

Contributor

For these two parameters maybe it helps to point out that they correspond to ARMA model of noise. It might also be helpful to mention fwhm is for describing spatial smoothness of noise.

@CameronTEllis

CameronTEllis Jul 23, 2018

Contributor

Good point, added

if iterations == 0:
    logger.info('No fitting iterations were run')
elif iteration == iterations:
    logger.info('SNR failed to converge.')

@lcnature

lcnature Jul 20, 2018

Contributor

might be worth using logger.warning for these instead. The same suggestion for _fit_temporal

@CameronTEllis

CameronTEllis Jul 23, 2018

Contributor

I think that when iterations fail to converge this should throw a warning, but I don't think a warning is useful when iterations is set (by the user) to zero, although it would be strange to call the 'fit*' functions and not specify iterations.

# Calculate the current signal amplitude (likely to be 1,
# but not necessarily)
sig_amp = np.max(np.abs(sig_voxel))

@lcnature

lcnature Jul 20, 2018

Contributor

CNR_Amp/Noise-SD seems to calculate CNR based on the change of signal in the voxel with the largest change (because every voxel is scaled only to the maximum absolute value of the whole signal_function). But here CNR seems to be calculated for each voxel separately because sig_amp is calculated for each voxel. I am not sure which is better. The current approach can have a problem if signal_function has an all-zero time course in any voxel. The approach in CNR_Amp/Noise-SD can also be a bit weird if signal_function has different magnitudes in different voxels originally and if the magnitude parameter is a vector -- the final CNR for each voxel won't be what is asked for in the magnitude parameter. But regardless of which one you choose, I think it may be better that the different metrics take a consistent way of scaling, and it may be good to specify what is required for signal_function (is it expected that it only reflects the hypothetical time courses of signals in each voxel but should not reflect magnitude differences across voxels?)

@CameronTEllis

CameronTEllis Jul 23, 2018

Contributor

Wow, great catch, this is perceptive. I decided to scale the CNR choices to the maximum amplitude (which is almost always 1 anyway). I added a note to the description to clarify that voxel values are all relative to each other.

# Rearrange the equation to compute the size of signal change in
# decibels
scale = 10 ** ((magnitude_voxel / sig_amp) + np.log10(noise_std

@lcnature

lcnature Jul 20, 2018

Contributor

I think (magnitude_voxel / sig_amp) and np.log10(noise_std ** 2) are not compatible to be both in the power. Only magnitude_voxel is defined on a log scale (since you are using dB), but sig_amp and noise_std should not be used as a power for 10.
I think it should be (10 ** (magnitude_voxel/20)) * noise_std. And if you want to scale signal_function by the maximum absolute value separately for different voxels, then (10 ** (magnitude_voxel/20)) * noise_std / sig_amp. The division by 20 is because the name of the metric seems to indicate that the dB value is calculated based on power: dB=10log(Power1/Power2)=20log(magnitude1/magnitude2)
And if you use this formula, then the next line does not need to take sqrt.
But anyway I think it is worth double checking the formula with some example simulated time courses.
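The suggested formula can be sanity-checked with a round trip (standalone sketch; `db_to_scale` is a made-up helper name, not an fmrisim function):

```python
import numpy as np

def db_to_scale(decibels, noise_std):
    # dB defined on power: dB = 10*log10(P_sig/P_noise) = 20*log10(A_sig/A_noise)
    return (10 ** (decibels / 20)) * noise_std

# A 20 dB signal over unit-sd noise is a factor of 10 in amplitude
print(db_to_scale(20, 1.0))  # 10.0

# Round trip: recover the dB value from the amplitude ratio
print(20 * np.log10(db_to_scale(14, 2.0) / 2.0))  # recovers 14 (up to float error)
```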

@CameronTEllis

CameronTEllis Jul 23, 2018

Contributor

Good point, changed accordingly and checked with the simulation below:

import numpy as np
import matplotlib.pyplot as plt
from brainiak.utils.fmrisim import _double_gamma_hrf

def sim_deci(decibels):
    noise_size = 1
    sig_voxel = np.asarray(_double_gamma_hrf())
    noise_voxel = np.random.randn(len(sig_voxel)) * noise_size
    noise_std = np.std(noise_voxel)

    scale = (10 ** (decibels / 20)) * noise_std

    func = (sig_voxel * scale) + noise_voxel

    plt.plot(func)

sim_deci(10)
sim_deci(20)
sim_deci(30)
# Rearrange the equation to compute the size of signal change in
# decibels
scale = 10 ** ((magnitude_voxel / sig_std) + np.log10(noise_std

@lcnature

lcnature Jul 20, 2018

Contributor

The same as above, I think (magnitude_voxel / sig_std) should not be in the power of the exponent.

@CameronTEllis

CameronTEllis Jul 23, 2018

Contributor

Sounds good, changed it to this:

scale = (10 ** (magnitude_voxel / 20)) * noise_std / (max_amp * sig_std)

'PSC',
)
assert (abs(sig_b) - abs(sig_a)).min() >= 0, 'Magnitude modulation failed'

@lcnature

lcnature Jul 20, 2018

Contributor

If the noise_function and signal_function are shared between sig_a and sig_b, is it expected that sig_b is twice as big as sig_a for all time points and voxels? If so, maybe this can be tested as well.
And I think it might be worth testing the result of the other metrics as well, in addition to verifying they can run.
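The stronger check suggested here could look something like the following (a sketch with a random stand-in for signal_function, not the actual test fixtures):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical signal time course: 50 time points in one voxel
signal_function = rng.standard_normal((50, 1))

# The same signal scaled by two magnitudes, standing in for sig_a and sig_b
sig_a = signal_function * 1.0
sig_b = signal_function * 2.0

# Stronger than (abs(sig_b) - abs(sig_a)).min() >= 0: every time point
# in every voxel should be exactly twice as large
assert np.allclose(sig_b, 2 * sig_a)
```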

@CameronTEllis

CameronTEllis Jul 23, 2018

Contributor

Good suggestion, I added these tests, it is a good safety precaution

system_high = np.std(noise_high[mask > 0], 1).mean()
system_low = np.std(noise_low[mask > 0], 1).mean()
assert system_low < system_high, "SFNR noise could not be manipulated"

@lcnature

lcnature Jul 20, 2018

Contributor

I am slightly confused here. If SFNR is larger, does it mean that mean intensity is higher with the same amount of noise? Does that mean noise_low should have a higher average value than noise_high (given that sfnr is larger in noise_low)?
EDIT: Ah sorry I realized the std is taken first. So there is no problem here.

Changed the fitting procedure of _fit_spatial and _fit_temporal so that they are not made negative by default, added more description throughout
@CameronTEllis

Contributor

CameronTEllis commented Aug 8, 2018

@lcnature have you had a chance to review the latest commit and responses? It seems almost good to go.

@lcnature

@CameronTEllis Yes I think it is almost good to go. Please see the two added comments. Regarding SFNR - I know that there is no correct way to do it. Maybe it is OK to leave it as is. But we should be very cautious as this might cause some problem if other properties in noise_dict interact with this parameter.

@@ -1382,7 +1478,7 @@ def generate_noise_volume(dimensions,
1)
temporal_noise = temporal_noise - (temporal_noise_mean - spatial_noise)

@lcnature

lcnature Aug 8, 2018

Contributor

I think the combination of Line 1479 (where you essentially added spatial_noise to the de-meaned temporal_noise) and 1482 (which adds spatial_noise again to temporal_noise) inflates the size of spatial_noise. I suggest changing Line 1479 to temporal_noise = temporal_noise - temporal_noise_mean.
Further, if spatial_std means the std of noise for the mean intensity across voxels, and temporal_std is the std of noise beyond the variation in spatial_noise, then demeaning temporal_noise already guarantees this. So I think you do not need the if/else clause in Line 1461 which attempts to correct the variance as you said.
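The proposed fix — demean each voxel's time course, then add spatial_noise exactly once — preserves both components, which can be checked numerically (standalone sketch with arbitrary sizes, not fmrisim code):

```python
import numpy as np

rng = np.random.default_rng(3)
n_voxels, n_trs = 2000, 50
spatial_sd, temporal_sd = 4.0, 2.0

spatial_noise = rng.standard_normal(n_voxels) * spatial_sd
temporal_noise = rng.standard_normal((n_voxels, n_trs)) * temporal_sd

# Suggested change: demean each voxel's time course ...
temporal_noise = temporal_noise - temporal_noise.mean(axis=1, keepdims=True)
# ... then add spatial_noise only once when combining
system_noise = temporal_noise + spatial_noise[:, None]

# Spatial variation of the temporal means recovers spatial_sd, and the
# within-voxel temporal variation recovers temporal_sd
spatial_spread = system_noise.mean(axis=1).std()
temporal_spread = system_noise.std(axis=1).mean()
print(spatial_spread, temporal_spread)
```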

@CameronTEllis

CameronTEllis Aug 10, 2018

Contributor

Yeah good catch, that is right, I am effectively doubling the spatial noise.

However I am not sure I understand the second part. The variance of the spatial and temporal noise can be anything; if I don't put constraints on it I will get problems as described above.

@lcnature

lcnature Aug 10, 2018

Contributor

Sorry then maybe I have misunderstanding of some definitions. Can you help me sort them out?

Function noise_volume generates an iid volume of noise with specified noise size and noise type, so voxels both in and out of the brain have the same noise variance in spatial_noise. temporal_noise is also generated with noise_volume but then temporally demeaned. This means that its temporal average has no spatial variation, and therefore contributes nothing to spatial noise. It is then added to spatial_noise, which essentially biases the time series in each voxel by the value in the corresponding voxel of spatial_noise. Based on these, I assume spatial_sd means the size of variation in the mean intensity (averaged across time) across voxels, and temporal_sd means the temporal variation within each voxel relative to its mean intensity. I think your code guarantees these, so it does not seem necessary to correct spatial_noise as in Line 1461. And I think _calc_snr seems also based on this definition. Lines like 2345 seem to be consistent with this definition.

But maybe you intended to use spatial_sd to mean global_sd, that is, the standard deviation of the whole 4-D matrix that includes the variance both in space and in time? In that case, I can see that you want to subtract the temporal variance from the variance of whole data to get a spatial_sd that follows the definition above. And with this definition what you did in Line 1461 makes sense. But then I think _calc_snr does not reflect this assumption because it estimates the real "spatial" standard deviation.

@CameronTEllis

CameronTEllis Aug 10, 2018

Contributor

Ah I think that is a clear explanation of what you are thinking. Let me clarify my thinking and then we can see if you agree. _generate_system_noise seeks to simulate the thermal noise properties of the scanner. This is added to the brain specific noise and also the baseline mean MR values of the brain (and skull and rest of the MR sensitive stuff in the FOV). The system noise has a big effect on the SNR since I am comparing the brain voxels to the non brain voxels. Critically, the SNR is calculated using only a single time point, not the average of the brain across time.

The system noise in this simulation (and real data) looks like the middle image below:

[image omitted]

The idea is that each voxel has a mean activity with variability in time around that mean. The mean is mainly set by the baseline, but an additional component is the spatial noise that is added on top (parameterized by spatial_sd) in order to make sure there is variability between simulations (the baseline is identical between simulations with the same input/reference data). This spatial noise is consistent between time points. Added to the mean of each voxel is the temporal variability, which is characterized by temporal_sd. The idea of temporal_sd is simply to capture how thermal noise fluctuates across time. However, this temporal_sd is not there to affect/control SFNR but instead SNR...

That may be confusing but let me explain. Remember we want SNR to reflect the brain mean to nonbrain variance at a given time point. Hence both the spatial_sd AND temporal_sd will contribute to the sd at any given timepoint (this wouldn't be true if SNR used the mean across time). Hence I want to ensure that there is a specific amount of variability for any given time point and to do that I need to constrain one of these two values not to exceed that target variability. I chose to constrain/limit spatial_sd but I could have done the opposite, I just think that makes less sense.
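The claim that both components contribute to the sd at a single time point is just independent variances adding, which can be checked in a standalone sketch (arbitrary sizes, not fmrisim code):

```python
import numpy as np

rng = np.random.default_rng(4)
n_voxels = 20000
spatial_sd, temporal_sd = 3.0, 4.0

spatial = rng.standard_normal(n_voxels) * spatial_sd    # fixed across TRs
temporal = rng.standard_normal(n_voxels) * temporal_sd  # one TR's fluctuation

# At a single time point the two independent components sum, so the
# nonbrain sd is sqrt(spatial_sd**2 + temporal_sd**2) = 5
combined_sd = np.std(spatial + temporal)
print(combined_sd)
```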

SFNR is less affected by all of this: the size of the variability in time accounted for by the system noise is constrained by 'temporal_proportion'.

Upon explaining this, I am questioning the necessity of spatial_sd, specifically whether the complexity it adds is justifiable. According to this logic, I could have only temporal_sd added to the baseline and that should achieve an appropriate simulation of the system noise. Although I think spatial_sd is potentially important theoretically, it may just make things unnecessarily complicated.

Interested to hear your thoughts

@lcnature

lcnature Aug 10, 2018

Contributor

I see your point. So spatial_sd only matters when there are multiple runs. If there is a single run, it is undetectable since no one knows the real intensity at each voxel. Then I agree that temporal_noise is sufficient. It would only be a bit odd if the baseline outside of the brain is all set to zero and temporal_noise is also demeaned - which makes every voxel's temporal mean strictly zero - which is unlikely but probably no one cares much. If you do not demean temporal_noise, then it seems fine.
About _calc_snr, I think it is called by _fit_spatial in 2362 with the input noise, and the noise fed to _fit_spatial in line 2761 seems to be a spatial-temporal matrix. So I think _calc_snr actually deals with 4D data. And I think the resulting SNR is based on the spatial variation of temporally demeaned noise.

@CameronTEllis

CameronTEllis Aug 13, 2018

Contributor

My sincerest apologies but in the process of this discussion and thinking about the code, I have realized that spatial_sd was necessary. Firstly, I made a mistake in what I said above: SNR is by default calculated based on the temporal average of all data:

_calc_snr does take in a 4D volume; however, one of its inputs is 'tr', which specifies the tr to be used for calculating the brain mean and nonbrain standard deviation. By default it uses all TRs (contrary to what I said previously, sorry).

I have added some documentation and changed the names to make this more clear

The reason this matters is that the temporal_sd now doesn't affect the SNR (which is good to control for) since it is zero mean. But now if spatial_sd were zero then the spatial variance would simply be based on the template. As it turns out, that variability is too low in real data, which has more non-brain variance than the template (which makes sense because the template is a temporal average and so partially denoised). This means I need to add some amount of noise to the nonbrain voxels. spatial_sd allows me to do this and _fit_spatial is the main way this is done.

However, this does not mean that I am doing this correctly. For instance, I could remove the step where I subtract temporal_sd from spatial_sd. I do this, as described above, to limit the total variance of the system noise but that could be wrong.

Happy to discuss further, this is a very thorny issue that I am not sure about.

@lcnature

lcnature Aug 17, 2018

Contributor

@CameronTEllis
I agree with all the reasoning. And I think as long as reference_tr=None is used when calling _calc_snr from _fit_spatial, subtracting temporal_sd from spatial_sd can be safely removed.

@CameronTEllis

CameronTEllis Aug 18, 2018

Contributor

Great, I will remove the subtraction step and run through my thorough tests to see if it still works

# Sum up the noise of the brain
noise = base + (noise_temporal * temporal_sd_element) + noise_system
noise = base + (noise_temporal * (1 - temporal_sd)) + noise_system

@lcnature

lcnature Aug 10, 2018

Contributor

I may be wrong, but I am not sure if (1 - temporal_sd) is what you intended.

@CameronTEllis

CameronTEllis Aug 10, 2018

Contributor

I think this code is out of date (github doesn't make it easy to track comments). The current code reads: noise = base + (noise_temporal * temporal_sd) + noise_system

@lcnature

lcnature Aug 10, 2018

Contributor

I see. Sorry, then maybe I went to the old code...

CameronTEllis added some commits Aug 13, 2018

Added documentation to clarify how nonbrain voxels affect SNR, how SNR is calculated and the importance of using unmasked data for generating templates
no_dilation_snr = sim._calc_snr(noise_matched,
mask,
dilation=0,
tr=tr_duration,

@lcnature

lcnature Aug 19, 2018

Contributor

The error in test seems to be related to calling _calc_snr with the old argument tr.

@CameronTEllis

CameronTEllis Aug 19, 2018

Contributor

Great catch, I must have missed that test. Thanks for the commit. I have also run through the simulation and the results are better with the new updates (specifically without subtracting temporal noise from spatial noise). It seems like the code is now ready to go, right? Thanks so much for all of the help!

lcnature added some commits Aug 19, 2018

Update test_fmrisim.py
Changed the argument `tr` to `reference_tr` for `_calc_snr` in the test code.
@CameronTEllis

Contributor

CameronTEllis commented Aug 19, 2018

@mihaic Any idea why Ruby is failing on Travis? Is this what your recent commit fixed?

@CameronTEllis

Contributor

CameronTEllis commented Sep 1, 2018

Jenkins, retest this please

lcnature added some commits Sep 2, 2018

adding a check that `magnitude` is indeed a list
@CameronTEllis I added a line in 2864 of fmrisim.py to check for the type of `magnitude` (in case a numpy array of size 1 is fed). Please check if this is appropriate.
Update fmrisim.py
corrected a typo
@CameronTEllis

Owner

CameronTEllis commented on d6e3959 Sep 2, 2018

Update fmrisim.py
Not sure why two checks were pending. Just made small changes to force it to retest.

@lcnature lcnature merged commit 48a1b8b into brainiak:master Sep 2, 2018

5 checks passed
codecov/patch: 93.24% of diff hit (target 88.76%)
codecov/project: 91.23% (+2.46%) compared to 014c418
continuous-integration/travis-ci/pr: The Travis CI build passed
linux: Build finished
macos: Build finished