On-line neuroimaging resources
I try to list here links to software packages, databases, tutorials, blogs and other resource lists that I or others have found relevant for learning about neuroimaging or for helping with neuroimaging analysis. Most of the things listed here are for fMRI, but feel free to point towards EEG, MEG or TMS things too. Feel free to add things: see the How to contribute section below.
This document is mostly meant for me to be able to quickly find things without having to google them or browse through my bookmarks, pocket, github stars and repos. But if it can help others, that's great.
Also, I am by no means an expert, nor have I used or done all the things I list... But I wish I had, and I wish someone had told me some of these things 5 years ago.
I am also working on a companion [reading list] ( ??? ).
How to use this document
Most people don't use a map by starting in the upper left corner and scanning horizontally until they end up in the bottom right corner (or however it is that people read in your region of the world). Similarly, this document is obviously not meant to be read from top to bottom. The best approach is to browse the table of contents below and jump to the sections that interest you. For that reason there is some redundancy in the content. This also means that this document is not a cookbook: I just try to list things that could apply to a wide variety of topics and contexts, but in many cases only a handful of them will be relevant to you.
Note also that some of the sectioning is a bit arbitrary: I try to put cross-links where useful.
How to contribute
Feel free to add your own resources or any material you have found useful. Send me a pull request to this repository or raise an issue. Or, if you don't know how to do that, you can reach me on twitter https://twitter.com/RemiGau.
You can check the looking for section right below to see what sections of this document need populating. I have also tried to flag with ??? in the table of contents and in the main document the areas where I am pretty sure I have missed existing gems.
- Material on the BOLD signal: origin and biophysics
- Material on preprocessing, denoising
- Material on statistical inference in neuroimaging: peak, voxel, cluster based
- Material on multiple comparison correction in neuroimaging
- Material on DTI, ASL
- Material on connectivity: PPI, DCM, granger causality
To add in the list
specific talks from mumfordbrainstats, OHBM conference and other video series
Table of contents
- Online courses
- Video series
- Where to ask for help
- UNIX command line
- Matlab and SPM specific resources
- The python ecosystem ( ??? )
- Web apps ( ??? )
- BEFORE YOU START: Reproducibility ( ??? )
- BEFORE YOU START: Ethics and consent forms
- BEFORE YOU START: Code and data management ( ??? )
- PLANNING YOUR STUDY
- Reusing data
- Defining your terms and your task
- Piloting ( ??? )
- Optimizing your design
- Defining your region of interest ( ??? )
- Non-standard templates ( ??? )
- ONCE YOU HAVE DATA: quality control
- ONCE YOU HAVE DATA: preprocessing
- ANALYSIS: general linear model
- ANALYSIS: Resting state ( ??? )
- ANALYSIS: Model selection ( ??? )
- ANALYSIS: Statistical inferences and multiple comparison correction (MCP) ( ??? )
- ANALYSIS: Multivariate analysis ( ??? )
- ANALYSIS: Robustness checks
- ANALYSIS: Computational neuroscience
- ANALYSIS: Laminar and high-resolution MRI
- ANALYSIS: Meta analysis ( ??? )
- REPORTING METHODS AND RESULTS (also useful for reviewing papers)
- YOU ARE NOT DONE YET: sharing your code, data and your results
There are tons of on-line resources for neuroimaging data analysis so the following list is not meant to be exhaustive. There are also similar lists here and there that might partly overlap with this one, so here is a list of lists.
Neuroimaging Informatics Tools and Resources Clearinghouse
The most obvious place where everything is centralized is the Neuroimaging Informatics Tools and Resources Clearinghouse. Many tools, atlases and courses are there, but if your favorite isn't, make sure to add it.
Lab guides and lab wikis
If your lab does not have a lab guide/wiki, it is well worth the time to make one. Lab wikis can save newcomers a lot of time getting set up and started (rather than reinventing the wheel or taking time from other lab members), while lab guides will also help PIs, PhD students and post-docs know what to expect from each other and promote a healthier lab culture. Mariam Aly explains that well here.
If your lab does not have a guide and/or wiki, here is a list you can use to create your own. But I suggest that you go beyond a copy-paste, as the ones you find in there might not be tailored to your lab's needs.
And here are some neuroimaging oriented lab wikis:
- Mariam Aly's lab wiki is there.
- Jonathan Peelle has a great list of resources for beginners.
- Check the wiki from the CBU in Cambridge.
- The one from Tor Wager's lab
- Michael Beauchamp's lab wiki
- Chris Rorden's Neuropsychology Lab wiki
- The Kording lab and Kendrick Kay's lab are more computationally oriented, so check them out if this is what you do.
- Stephan Heunis has a list of many SPM and Matlab materials.
- https://github.com/brainhack101 also has a collection of links to courses, data...
- ReproNim is a good site to get up to date on doing reproducible neuroimaging research.
- Open neuroscience points to a lot of open things related to neuroscience.
- Their MOOC on open science is still under construction but already has an insane list of resources.
Math and linear algebra courses
Khan Academy is a great free resource for all sorts of topics.
- Their series on linear algebra is particularly useful and relevant to our needs.
- The Fourier series and the statistics videos may also prove useful (h/t [Sam Jones] ( ??? )).
If you feel that your background in mathematics and signal processing is a bit weak please have a look at these slides. This file was put together by Joana Leitao and covers several topics that are important to be familiar with in neuroimaging:
- basic linear algebra
- ordinary least square solution for the general linear model
- the BOLD response and convolution: what is a linear time-invariant system and why it matters when doing an fMRI study
- how to do t-tests and ANOVAs within a general linear model
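The convolution and GLM topics above can be sketched in a few lines of python. This is a toy example of my own (not from the slides): the double-gamma HRF below uses the commonly cited SPM-style defaults, and the simulated effect size and noise level are made up.

```python
import numpy as np
from math import factorial

TR = 2.0                              # repetition time (s)
n_scans = 100
t = np.arange(0, 32, TR)              # HRF support, sampled every TR

# Canonical-style double-gamma HRF: peak around 5 s, undershoot around 15 s
hrf = t**5 * np.exp(-t) / factorial(5) - (t**15 * np.exp(-t) / factorial(15)) / 6

# Boxcar task regressor: 10 scans on / 10 scans off
boxcar = np.zeros(n_scans)
for onset in range(0, n_scans, 20):
    boxcar[onset:onset + 10] = 1

# Linear time-invariant assumption: predicted BOLD = stimulus convolved with HRF
regressor = np.convolve(boxcar, hrf)[:n_scans]

# Simulate noisy data with a true amplitude of 2, then fit the GLM by OLS
rng = np.random.default_rng(0)
X = np.column_stack([regressor, np.ones(n_scans)])   # design matrix + constant
y = 2 * regressor + rng.normal(0, 0.5, size=n_scans)

beta = np.linalg.lstsq(X, y, rcond=None)[0]          # ordinary least squares
resid = y - X @ beta
sigma2 = resid @ resid / (n_scans - X.shape[1])
t_stat = beta[0] / np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
```

The recovered `beta[0]` should land close to the true amplitude of 2, and the t-statistic is exactly the "t-test within a GLM" idea from the slides.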
MRI courses ( ??? )
If you need to brush up on your knowledge about MRI:
- the e-MRI course
- MRI fundamentals on coursera
- A Magnetic Resonance Imaging Lab from UCSD
- MRI Questions is not a course but a VERY VERY comprehensive FAQ on MRI.
There are also blog posts series on practicalfmri (and on its companion winnower account) that cover
- the physics of fMRI artefacts
- the type of artefacts you might encounter and what to look for
- a list of potential confounds to the BOLD signal
- quality control on MRI and fMRI
fMRI courses ( ??? )
There are quite a few courses for fMRI analysis out there that I am aware of.
- A few courses on coursera, notably:
Machine learning ( ??? )
If you are going to do some multivariate analysis, it is likely you will need to know a lot of machine learning. I found that this class on coursera covered a lot of ground. It is not specific to neuroimaging but gives you a good overview of the basic concepts you need to understand.
Worst case, it will let you understand why John von Neumann said:
With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.
Resting state courses ( ??? )
There is one on the rMRI website.
Neurohackademy is more than a neuroimaging course: it is broader in scope as it covers reproducibility and open science issues in neuroimaging. It is also very practical and definitely python oriented. To know more, see this post by Tal Yarkoni about the 2018 edition of Neurohackademy.
Software specific ( ??? )
Most of the main analysis packages, on top of their in-person courses, usually have a video series that works as a course and/or tutorial.
- mailing list
- [course/tutorial]: Video recordings from the AFNI bootcamp, with slides, and example data.
Nipype is best viewed as a way to create and run software-agnostic preprocessing/analysis pipelines. It becomes very powerful when you need to use different software packages in your analysis.
Tim Van Mourik and a few other people have developed tools to facilitate building pipelines with nipype:
- Porcupine stands for "PORcupine Creates Ur PipelINE" which is probably the worst recursive acronym with bad capitalisation and annoying use of slang. This software allows researchers to build pipelines using a GUI and generates the code that is needed to run the pipeline created.
- Giraffe is a web-based "Graphical Interface for Reproducible Analysis oF workFlow Experiments" that can take advantage of Porcupine to create pipelines.
Others ( ??? )
Some of those are clearly not specific to neuroimaging but are well worth going through even if you are a PI.
- If you have no idea what the distribution of p-values would look like if there were only noise in your data, then the odds are you will learn at least one thing in Daniel Lakens' course on how to improve your statistical inferences. Most likely you will learn more than one thing.
Daniel also has a blog that is a very useful source of stats-related knowledge. Similarly, Guillaume Rousselet has a series of posts on his blog where you can learn more about robust statistics and how to improve your data visualizations.
Open-science and reproducibility
The MOOC on open science is still under construction but, on top of an insane list of resources, already has module 5 up and running to teach you how to use github and zenodo to create a time-stamped snapshot of your code to link to in your papers.
If you run out of things to binge on Netflix, Youtube has some useful channels if you want to learn more about fMRI data analysis. I also list here other repositories of MRI-related videos.
Center for Brains, Minds and Machines
Organization for Human Brain Mapping (OHBM)
The videos of the lectures and workshops from the previous HBM conferences are available online here.
fMRIf summer courses from the NIH
Conference on Cognitive Computational Neuroscience (CCN)
There are many excellent blogs run by neuroscientists where you can find interesting and more or less technical information on neuroimaging analysis. I list a few below, but you can find a subsample of my neuroscience blogroll in the file blogroll_a_sample.opml that you can import into your favorite news reader (e.g. feedly).
- Jo Etzel has a great blog if you want to know more about multivariate analysis: MVPA meandering
- practiCal fMRI has good blog posts that cover the basics of fMRI, MRI artefacts, as well as all the things that can affect the BOLD signal
- techniCal fMRI is a companion to practiCal fMRI that covers topics related to ancillary equipment for fMRI scanning.
- Chris Chambers' blog is NeuroChambers
- Neuroskeptic blogs at Neuroskeptic
- Dorothy Bishop is there.
- Russell Poldrack's posts can be found here
- Peter Bandettini blogs at the brain blog
- Peter Molfese blogs at Crash Log, somewhat AFNI focused.
Where to ask for help
If you have a question linked to a specific software package, check the documentation/FAQ/manual/wiki/tutorial for that software first. Then you can turn to the mailing list related to that software: but always start by looking through the archives of those mailing lists before sending a question that has already been answered.
But if you have more general questions you can also try:
- the neurostars forum
- social media: there are some specialised Facebook groups and good hashtags on twitter that will succeed when your google fu fails you.
- the slack channel of brainhack
UNIX command line
Even if you have only ever used Windows, the odds are that you will at some point have to use a UNIX command line (like the one you find on a linux computer or a Mac) to do some of your MRI analysis. Best case scenario, you might only need it to explore some folder structure on a server; worst case, you might have to write scripts to automate some tasks. Either way, having some basic ideas about how to interact with a UNIX shell will serve you well.
- from the FSL website
- on the MRC-CBU wiki
- from software carpentry for beginners and more advanced users
- also for beginners
- for more advanced scripting (h/t Tom Nichols)
Matlab and SPM specific resources
Matlab ( ??? )
- tutorials: I learnt matlab with a book, by reading others' scripts, and with a lot of coffee, patience, sweat, tears and trial and error. I am sure there are better ways to do it, but I don't really know what the best tutorials are these days.
- The first place to look is the SPM wiki book, which could become an even better resource if users contributed even more to it.
- Then you can check the add-ons for SPM.
- The SPM.mat is the file where SPM stores all the information about your analysis. This page explains its organization.
- If you want to write scripts and use batches efficiently using SPM see what I wrote here
- The clever machine blog has some very useful matlab codes for fMRI analysis
- Tom Nichols has tagged SPM related posts on his website if you are looking for some good code snippets: see for example some of John Ashburner's gems.
- Check out Cyril Pernet website for SPM/matlab code: here or there
- Some good tutorials on the CBU if you want to understand design efficiency, smoothing, SPM GLM stats or how random field theory works to correct for multiple comparison
- Quite a few others on the web
- There are also too many repos on Github to list them all, but here are some you might come across: Rik Henson's, the canlab
The python ecosystem ( ??? )
Matlab must still be the most used "language" in neuroimaging (citation needed), but there is a huge neuroscience-oriented python ecosystem out there taking advantage of the scientific python community. On top of the financial aspect (those matlab licenses can be quite expensive), there are many good reasons why you might want to switch, if only because matlab breeds bad coding habits.
Here too there are plenty of generic python courses on datacamp, code academy or kaggle. You can also check things that are more scientific python oriented like the scipy lectures or Jake Vanderplas's jupyter notebooks Whirlwind Tour Of Python and Python Data Science Handbook.
Web apps ( ??? )
R based apps
- confidence intervals
- p curves and why, with decent power and a large effect size, it is relatively unlikely to find a p-value between .01 and .05
- null hypothesis significance testing
- p hacking
- positive predictive values
- the bioimagesuite seems like a convenient way to visualize and do some processing of your images on the fly via a web browser. (h/t Renzo)
Anatomy atlases ( ??? )
Some of those might help you learn or revise your neuroanatomy:
BEFORE YOU START: Reproducibility ( ??? )
There are a few options you can investigate to make your analysis more replicable and reproducible. On top of [sharing your data and your code](#you-are-not-done-yet-sharing-your-code-data-and-your-results) you can use containers like docker or singularity, which allow you to run your analysis in a contained environment that has an operating system, the software you need and all their dependencies.
In practice this means that by using this container:
- other researchers can reproduce your analysis now on their computer (e.g. you can run a linux container with freesurfer on your windows computer),
- you can reproduce your own analysis 5 years from now without facing the problem of remembering which version of the software you used.
BEFORE YOU START: Ethics and consent forms
The open brain consent form tries to facilitate neuroimaging data sharing by providing an “out of the box” solution addressing human subjects concerns and consisting of
- a widely acceptable consent form allowing deposition of anonymized data in public data archives
- a collection of tools/pipelines to help anonymize neuroimaging data, making it ready for sharing
BEFORE YOU START: Code and data management ( ??? )
For managing your code, if you don't already, I suggest you make version control with GIT part of your everyday workflow. GIT might seem scary and confusing at first, but it is well worth the effort: the good news is that there are plenty of tutorials available (for example: here, there or there). Another advantage of using GIT is that it allows you to collaborate on projects via github, which already makes a lot of sense even simply at the scale of a lab.
Even though GIT is most powerful when using the command line, there are also many graphical interfaces that might be enough for what you need. Plus, a graphical interface can help you get started before you move on to using the command line only. There is no shame in using a GUI: just don't tell the GIT purists this is what you do, otherwise you will never hear the end of it.
Another good coding practice to have is a consistent coding style. For python you have the PEP8 standard and some tools like pylint, pycodestyle, or pep8online that help you make sure that your code complies with this standard.
Avoid selective debugging: unit tests, positive and negative control
Having a bug is annoying. Having your code run but give you an obviously wrong answer is more annoying. Having your code run and give you a plausible but wrong answer is scary (and potentially expensive when it crashes a spaceship onto a planet). Having your code run and give you the answer you want but not the true answer is the worst and keeps me up at night.
Selective debugging happens when we don't check the code that gives us the answer we want but we do check it when it gives us an answer that goes against our expectation. In a way it is a quite insidious form of p-hacking.
Some things that can be done about it:
- organize code reviews in your lab: basically make sure that the code has been checked by another person. Pairing a beginner with a more senior member of the lab can also be a way to improve learning and skill transfer in the lab.
- test your code. These tests can be run automatically on your project by continuous integration services like Travis.
- test your pipeline with positive and negative controls. A negative control tests your analysis by running it on random noise or on data that should have no signal in it. The latter was the approach used by Anders Eklund and Tom Nichols in their cluster failure paper series. A positive control makes sure that your analysis can detect VERY obvious things it should detect (e.g. motor cortex activation following button presses, or classifying responses to auditory versus visual stimuli in V1, …). Jo Etzel has a post about this.
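The control logic above can be sketched in a few lines. Here a one-sample t-test stands in for whatever your real pipeline computes; the point is the shape of the check, not the statistics: noise should NOT come out significant, an injected obvious effect SHOULD.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Negative control: pure noise -> we expect a non-significant result
noise = rng.normal(0, 1, size=50)
_, p_noise = stats.ttest_1samp(noise, 0)

# Positive control: a huge, obvious effect (d = 3) -> we expect significance
signal = rng.normal(3, 1, size=50)
_, p_signal = stats.ttest_1samp(signal, 0)
```

In a real pipeline you would replace the t-test with the full analysis and the two datasets with, say, scanner noise and a button-press run.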
Data: BIDS, Datalad and YODA
If you are going to do some fMRI analysis you will quickly drown in data if you are not a bit organized, so I highly recommend you use the brain imaging data structure standard (BIDS) to organize your data. The current version of BIDS only talks about raw data but it should soon cover derivatives (e.g preprocessed data). In general BIDS also allows you to more easily share your data and use plenty of analytical tools.
If you would like to use BIDS but you have no idea what a JSON file is, or the length of the specification document scares you, head over to the BIDS starter kit to find tutorials and scripts to help you rearrange your data.
Datalad is to data what git is to code. It allows curation and version control of data, but also lets you crawl databases to explore and download data from them, and it facilitates data sharing. Several of these features are described here with scripts that act as tutorials. There are video presentations of it there.
Having a standard way to organize not only your data but also your code, the results, the documentation... from the beginning of a project can go a long way towards saving you a lot of time down the line (when communicating within or outside your lab, or when you have to wrap things up when moving to a new project/job). The YODA template is a folder structure recommended by ReproNim that you can use.
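A project skeleton like that can even be created programmatically. This is a minimal sketch under my own assumptions about a YODA-like layout (the folder names below are illustrative, not the official template):

```python
from pathlib import Path
import tempfile

def make_project(root):
    """Create a bare-bones project layout: code, inputs and outputs
    kept in clearly separated directories (YODA-style assumption)."""
    root = Path(root)
    for sub in ["code", "inputs/raw", "outputs/derivatives", "docs"]:
        (root / sub).mkdir(parents=True, exist_ok=True)
    (root / "README.md").write_text("# My study\n\nSee docs/ for details.\n")
    # return the created tree, relative to the project root
    return sorted(p.relative_to(root).as_posix() for p in root.rglob("*"))

# demo on a throwaway temporary directory
project = make_project(tempfile.mkdtemp())
```

Running this once per new study means every project in the lab looks the same, which is most of the point.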
Other good habits:
- a simple, transparent and systematic filenaming is a good start
- if you have to deal with data in spreadsheets, I think you will enjoy this paper and this cookbook
Documentation ( ??? )
It is often said:
Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.
Proper documentation of a project and good commenting of your code will help others to use it or pick it up later. But there are good selfish reasons to document your project and comment your code: it will most likely help future you when you have to respond to reviewers or when you want to check something in that data set or in that function you used 6 months ago.
- Most likely, you will have to re-run your analysis more than once.
- In the future, you or a collaborator may have to re-visit part of the project.
- Your most likely collaborator is your future self, and your past self doesn’t answer emails.
See here for more.
In terms of code, I guess the ideal is self-documenting code. Read the docs is a good option that also allows for continuous integration. Python also has this thing called Sphinx that helps create intelligent and beautiful documentation (that alone should make matlab users envious). There are also ways to make it part of continuous integration.
PLANNING YOUR STUDY
Some of the main databases are:
But there are many databases where you can find raw and/or pre-processed data. Maybe your university or your institute already has a repository of published data (e.g. the Donders institute).
The recent google dataset search can also be useful to locate datasets that might be of interest.
There are some tools that help you search through them like the metasearch tool on the Open Neuroimaging Laboratory but this is also where Datalad can become useful to browse or crawl those databases.
Defining your terms and your task
Inigo Montoya: You keep using that word. I don't think it means what you think it means. Ayotnom Ogini: Funny you should say that! I was about to tell you the same thing.
The use of alternate and even competitive terminologies can often impede scientific discoveries.
Piloting ( ??? )
Good piloting is very important, but piloting is not meant to be about finding a hypothesis you want to test: because of the small sample size of pilot studies, anything interesting you see there is very likely to be a fluke. Piloting is more about checking the overall feasibility of the experiment and that you can get high [quality data](#once-you-have-data-quality-control), judged by criteria that are unrelated to your hypothesis.
Piloting is usually a phase where it would be good to check with your local MRI physicist and statistician. And you also might already have to make choices about pre-processing and data analysis.
If your work is not purely exploratory, you might want to consider pre-registering your study. It is a good way to decide in advance how you are going to collect and analyze your data. It helps make it clear to yourself and to others what part of your study was predicted (i.e. confirmatory) and which part wasn't (i.e. exploratory). This way, pre-registrations are a good way to restrict the number of researchers' degrees of freedom and limit the possibility of engaging (most often unknowingly) in questionable research practices like procedural overfitting (also known as p-hacking) or HARKing (Hypothesising After the Results are Known). You can also opt for registered reports, where you submit your methods to a journal and get reviews on the protocol before the data collection and analysis are conducted. At the moment more than 140 journals accept registered reports.
Pre-registering neuroimaging studies can be quite challenging and comes with a whole set of constraints that might be absent in other fields. Jessica Flannery has created a template for pre-registering fMRI studies that you might find useful.
Optimizing your design
Before you run your study there are a few things you can do to optimize your design. Two of them are doing a power analysis and optimizing the efficiency of your fMRI design.
Design efficiency ( ??? )
If you need a reminder about what design efficiency is, see this. When you want to optimize it, you have a few options:
- you can compute the efficiency by hand and tweak your design to see what options work best
- but there are also more systematic ways to optimize your protocol: see here, here or there
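Computing efficiency "by hand" amounts to e = 1 / trace(c (X'X)⁻¹ cᵀ) for a contrast c, where only relative values across candidate designs are meaningful. A toy python sketch (TR of 1 s, a single-gamma HRF and the two candidate designs are all made-up assumptions) showing why a slow blocked design beats a very rapid alternation for detecting a main effect:

```python
import numpy as np
from math import factorial

def efficiency(X, c):
    """Design efficiency for a contrast (row) vector c."""
    c = np.atleast_2d(np.asarray(c, dtype=float))
    return 1.0 / np.trace(c @ np.linalg.inv(X.T @ X) @ c.T)

t = np.arange(30)
hrf = t**5 * np.exp(-t) / factorial(5)           # single-gamma HRF, TR = 1 s

n = 100
blocked = np.tile(np.repeat([1.0, 0.0], 10), 5)  # 10 s on / 10 s off
rapid = np.tile([1.0, 0.0], 50)                  # 1 s on / 1 s off

def design(stim):
    """Convolve a stimulus train with the HRF and add a constant term."""
    reg = np.convolve(stim, hrf)[:n]
    return np.column_stack([reg, np.ones(n)])

e_block = efficiency(design(blocked), [1, 0])
e_rapid = efficiency(design(rapid), [1, 0])
```

The rapid alternation is smoothed away by the sluggish HRF, leaving a near-constant regressor that is almost collinear with the intercept, so `e_block` comes out much larger than `e_rapid`.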
In order to investigate whether an effect exists, one should design an experiment that has a reasonable chance of detecting it. I take this insight as common sense. In statistical language, an experiment should have sufficient statistical power. Yet the null [hypothesis significance testing] ritual knows no statistical power.
Gerd Gigerenzer in Statistical Rituals: The Replication Delusion and How We Got There, DOI: 10.1177/2515245918771329
There is good evidence that average statistical power has remained low for several decades in psychology, which increases the false negative rate and reduces the positive predictive value of findings (i.e. the chance that a significant finding is actually true). Maybe neuroimaging could learn from that mistake, especially since a large majority of neuroimaging studies seem to have even lower statistical power.
fMRI power is a matlab-based toolbox to help you run your power analysis.
The website neuropowertools actually offers options to run both your design efficiency optimization and your power analysis. They also have their respective python packages.
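The generic calculation behind such tools is not magic. Here is a sketch of power for a two-sided one-sample t-test via the noncentral t distribution (this is the textbook formula, not the neuropowertools API; the effect size and sample size are made up):

```python
import numpy as np
from scipy import stats

def one_sample_power(effect_size, n, alpha=0.05):
    """Power of a two-sided one-sample t-test with Cohen's d = effect_size."""
    df = n - 1
    ncp = effect_size * np.sqrt(n)              # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)     # critical value under H0
    # probability of landing beyond either critical value under H1
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# e.g. a medium effect (d = 0.5) with 30 subjects gives power around 0.75
power = one_sample_power(effect_size=0.5, n=30)
```

Inverting this relation (solving for n at a target power of 0.8 or 0.9) is what a sample-size calculation does; group fMRI adds the complication of estimating the effect size per voxel or region.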
For MVPA: same analysis approach
If you intend to run an MVPA/classification analysis on your data, there are a few things you can do BEFORE you start collecting data to optimize your design. There is no app/toolbox for that, so I am afraid you will have to read the paper.
Defining your region of interest ( ??? )
If you don't want to run a whole-brain analysis, then you will most likely need to define your regions of interest (ROI). This must be done using data that is independent from the data you will use in the end, otherwise you will have a [circularity] ( ??? ) problem (also known as double dipping or [voodoo correlation] ( ??? )).
- around a coordinate identified in a previous study or in a [meta-analysis](#meta-analysis-( ??? )), or by using Neurosynth.
- using a localizer
- or relying on a functional or anatomical atlas.
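For the coordinate-based option, a spherical ROI mask is easy to build. A sketch in voxel space (the image shape, center and radius below are made-up; real code would first convert MNI mm coordinates to voxel indices through the image affine):

```python
import numpy as np

def sphere_roi(shape, center, radius):
    """Boolean mask of voxels within `radius` voxels of `center`."""
    grids = np.ogrid[tuple(slice(0, s) for s in shape)]
    dist2 = sum((g - c) ** 2 for g, c in zip(grids, center))
    return dist2 <= radius ** 2

# 5-voxel-radius sphere around a hypothetical peak in a 64x64x40 volume
mask = sphere_roi(shape=(64, 64, 40), center=(30, 20, 15), radius=5)
n_vox = int(mask.sum())
```

The mask can then be multiplied with (or used to index) your statistical maps to extract ROI data.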
Using previous results ( ??? )
Neurosynth can help you run a meta-analysis to create a mask to define your ROI. See for example this if you wanted a ROI for brain regions matching the search term auditory, and see here for a tutorial.
Localizers ( ??? )
There are many atlases you could use to create ROIs. Some ship with certain software packages; otherwise you can find lists on the
Some other retinotopic maps are apparently not listed in the above, so here they are:
- An anatomical template of human striate retinotopy (https://cfn.upenn.edu/aguirre/wiki/public:data_currbio_2012_benson)
- The HCP 7T Retinotopy Dataset: data1(https://balsa.wustl.edu/study/show/9Zkk); data2; paper
- Probabilistic Maps of Visual Topography in Human Cortex: data; paper
The problem then becomes which atlas to choose. To help you with this, the Online Brain Atlas Reconciliation Tool can show the overlap that exists between some of those atlases. The links I had to the website (here and there) are broken at the moment, so at least here is a link to the paper.
Some toolboxes out there also allow you to create your own ROI and rely on anatomical / cytoarchitectonic atlases:
Non-standard templates ( ??? )
In case you want to normalize brains of children it might be better to use a pediatric template. Some of them are listed here.
ONCE YOU HAVE DATA: quality control
MRIQC: MRI quality control. A BIDS app that runs a pipeline to assess the quality of your data.
the PCP Quality Assessment Protocol is another BIDS app, based on the protocol of [the connectome project data](http://preprocessed-connectomes-project.org/quality-assessment-protocol/)
ONCE YOU HAVE DATA: preprocessing
Pipelines ( ??? )
There are some ready-made pipelines packaged as BIDS apps that already exist and have been tested. Using them might save you time and make your results more reproducible.
- AFNI based
- HCP Pipelines: a set of tools (primarily, but not exclusively, shell scripts) for processing MRI images for the Human Connectome Project.
- The NeuroImaging Analysis Kit: NIAK is a library of pipelines for the preprocessing and mining of large functional neuroimaging data.
- Automatic Analysis: is a pipeline system for neuroimaging written primarily in Matlab. It robustly supports recent versions of SPM, as well as selected functions from other software packages. The goal is to facilitate automatic, flexible, and replicable neuroimaging analyses through a comprehensive pipeline system.
- Configurable Pipeline for the Analysis of Connectomes: C-PAC is a software for performing high-throughput preprocessing and analysis of functional connectomes data using high-performance computers.
There is also OPPNI, for Optimization of Preprocessing Pipelines for NeuroImaging.
Artefact/Noise removal ( ??? )
PCA ( ??? )
ICA ( ??? )
ART ( ??? )
ART repair ( ??? )
Physiological noise ( ??? )
ANALYSIS: general linear model
- a FAQ article on the GLM by Cyril Pernet with matlab code to go through
- see the section on percent signal change to better understand how to report results
- orthogonalization of regressors can be a bit hard to wrap your head around at first, but Jeanette Mumford ( ??? ) has a great paper on the topic with a jupyter notebook.
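The mechanics of orthogonalization fit in a few lines. A toy sketch (simulated regressors): x2 is residualized against x1, so any shared variance is assigned to x1, which is exactly why the order of orthogonalization matters for interpreting the betas.

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.6 * x1 + rng.normal(size=200)      # x2 is correlated with x1

# Residualize x2 against x1: subtract the part of x2 predicted by x1
slope = (x1 @ x2) / (x1 @ x1)
x2_orth = x2 - slope * x1

# x2_orth is now orthogonal to x1; in a GLM with [x1, x2_orth] the beta
# for x1 absorbs all of the variance x1 and x2 originally shared
overlap = float(x1 @ x2_orth)
```

Note that orthogonalizing x1 against x2 instead would flip which regressor gets credit for the shared variance.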
ANALYSIS: Resting state ( ??? )
I know almost nothing about resting state but I have been told this site is worth having a look at.
- Tools ( ??? )
ANALYSIS: Model selection ( ??? )
If several analyses are attempted, it can be good to have ways to decide amongst them. There are bad ways to do this, like the one described in the overfitting toolbox.
But there are better ways to do it:
ANALYSIS: Statistical inferences and multiple comparison correction (MCP) ( ??? )
Cluster based inference ( ??? )
Family wise error (FWE) ( ??? )
In case you do not remember how random field theory works to correct for multiple comparisons, check this.
False discovery rate (FDR) ( ??? )
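The standard FDR procedure is Benjamini-Hochberg: sort the m p-values, find the largest k with p(k) ≤ (k/m)·q, and reject hypotheses 1..k. A minimal sketch (the toy p-values are made up):

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Boolean mask of hypotheses rejected at FDR level q (Benjamini-Hochberg)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m          # the BH step-up line
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])          # largest index on/under the line
        reject[order[:k + 1]] = True              # reject everything up to k
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.2, 0.6]
mask = fdr_bh(pvals, q=0.05)
```

Unlike FWE correction, this controls the expected proportion of false positives among the rejections, which is why it is typically more lenient on fMRI maps.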
Permutation tests ( ??? )
A talk by Carsten Allefeld on permutation test at OHBM 2018: https://www.pathlms.com/ohbm/courses/8246/sections/12542/video_presentations/116074
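The core idea of a one-sample permutation test by sign flipping (the approach wrapped, per voxel and with careful handling of exchangeability, by tools like SnPM, randomise and PALM) can be sketched on toy data; it is valid under the assumption that errors are symmetric around zero:

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(1.0, 1.0, size=50)      # toy sample with a true mean of 1
observed = data.mean()

# Build the null distribution by randomly flipping the sign of each value
n_perm = 5000
null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1, 1], size=data.size)
    null[i] = (signs * data).mean()

# Two-sided p-value, with +1 correction so p can never be exactly zero
p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)
```

The payoff of the permutation approach is that it makes no parametric assumption about the null distribution of the statistic, which is what made it survive the cluster-failure critique better than parametric cluster inference.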
The prevalence test
SnPM ( ??? )
FSL PALM and randomise ( ??? )
Freesurfer PALM ( ??? )
ANALYSIS: Multivariate analysis ( ??? )
A talk by Pradeep Reedy Raamana at OHBM 2018 on cross-validation: https://www.pathlms.com/ohbm/courses/8246/sections/12542/video_presentations/116075
Neuroimaging toolboxes for representation similarity analysis (RSA), support vector machine (SVM), population receptive field (pRF), encoding model and others...
TDT is the The Decoding Toolbox.
PRoNTo is the Pattern Recognition for Neuroimaging Toolbox developed at UCL (UK).
The pattern components modelling toolbox of the Diedrichsen lab
From Carsten Allefeld
A pRF analysis toolbox called the Seriously Annoying Matlab SuRFer from Sam Schwarzkopf.
Intended to ease statistical learning analyses of large datasets.
Nilearn is a Python module for fast and easy statistical learning on NeuroImaging data.
For pRF analysis.
R based ( ??? )
ANALYSIS: Robustness checks
Non neuroimaging cases
ANALYSIS: Computational neuroscience
As someone said on twitter, there is a cottage industry of blog posts trying to understand/explain this:
And a tutorial
Dynamic causal modelling
ANALYSIS: Laminar and high-resolution MRI
Renzo Huber is keeping track of the most recent developments in laminar MRI on twitter but also on his blog. He also curates laminar-fMRI-related talks on his Youtube channel and papers in this google spreadsheet.
- This blog post has a list of most of the software packages related to laminar fMRI.
- A more recent tool, not listed there, for creating equivolumetric surfaces.
ANALYSIS: Meta analysis ( ??? )
A talk on ALE and BrainMap: https://www.pathlms.com/ohbm/courses/8246/sections/12542/video_presentations/116066
For coordinate based meta-analysis:
For image based meta-analysis:
- IBMA is the Image-Based Meta-Analysis toolbox for SPM.
REPORTING METHODS AND RESULTS (also useful for reviewing papers)
A checklist: COBIDAS report
The Organization for Human Brain Mapping (OHBM) created the Committee on Best Practices in Data Analysis and Sharing (COBIDAS), which published a report with a set of guidelines and an appended checklist on how to conduct and report fMRI studies. It is a very useful resource to make sure you are not forgetting anything when writing up your article. See also Jeanette Mumford's video about it.
- Journal specific requirements or checklists
- [21 words solution] ( ??? )
- [Constraints on generality] ( ??? )
- Other checklists:
Percent signal change ( ??? )
- A FAQ article on the GLM by Cyril Pernet, with MATLAB code to go through, has some mentions of reporting PSC.
- See also this FSL guide by Jeanette Mumford ( ??? ) for reporting results in PSC.
- This post by Tom Nichols ( ??? ) can help you understand the units in which SPM parameter estimates are reported.
- The MarsBAR SPM toolbox can also help you deal with PSC.
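The basic definition of percent signal change can be illustrated with a few lines of numpy. This hypothetical snippet only shows the textbook formula, scaling a time course relative to a baseline value; how PSC relates to GLM parameter estimates depends on the scaling conventions of your software, which is exactly what the posts above discuss.

```python
import numpy as np

def percent_signal_change(timecourse, baseline_value=None):
    """Convert a voxel/ROI time course to percent signal change.

    By default the baseline is the mean of the whole time course
    (one common convention; using only rest periods is another).
    """
    timecourse = np.asarray(timecourse, dtype=float)
    if baseline_value is None:
        baseline_value = timecourse.mean()
    return 100.0 * (timecourse - baseline_value) / baseline_value

# toy time course with mean 100 (arbitrary scanner units)
signal = np.array([100.0, 102.0, 101.0, 99.0, 98.0])
psc = percent_signal_change(signal)  # → [0., 2., 1., -1., -2.]
```

Note that the choice of baseline changes the result, which is one reason PSC values are hard to compare across studies unless the scaling is reported.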
Making figures ( ??? )
I keep hearing that the books by Edward R. Tufte are great:
- https://www.amazon.com/dp/0961392118/?tag=codihorr-20
- https://www.amazon.com/dp/0961392126/?tag=codihorr-20

Other resources on data visualization:
- http://mkweb.bcgsc.ca/essentials.of.data.visualization/
- https://www.jisc.ac.uk/full-guide/data-visualisation
- https://jimgrange.wordpress.com/2016/06/15/solution-to-barbarplots-in-r/
- https://f1000research.com/articles/4-466/v1
Colorblind-friendly color maps
Tools to check results/statistics ( ??? )
These recent tools cannot be applied to statistical maps, but they can be useful for any behavioural results. Many of them can be used on a paper you are about to publish to check for errors, or on a paper you are reviewing or reading.
- Statcheck automatically checks for errors in statistical reporting, making sure that your p values match your t/F values and degrees of freedom.
- The GRIM test checks for Granularity-Related Inconsistency of Means. Developed by Nick Brown and James Heathers, it makes sure that the means reported are plausible given a measurement scale (like a Likert scale or a visual analog scale) and a sample size. There are extensions of the test to standard deviations (GRIMMER, http://www.prepubmed.org/grimmer/) and to F values (GRIMMEST).
- SPRITE stands for Sample Parameter Reconstruction via Iterative TEchniques and allows you to generate the possible data distributions given a scale, a mean and a variability measure: web app, shiny app, code.
- Test of insufficient variance
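The GRIM idea is simple enough to sketch in a few lines of plain Python. This is a hypothetical illustration of the principle, not the authors' actual implementation: for integer-valued responses, the sum of scores must be an integer, so only certain means are attainable for a given sample size.

```python
def grim_consistent(reported_mean, n, decimals=2):
    """Toy GRIM check (illustrative, not the official tool).

    Returns True if `reported_mean`, rounded to `decimals`, can arise
    from `n` integer-valued responses (e.g. a Likert scale), i.e. if
    some integer total divided by n reproduces the reported mean.
    """
    target = round(reported_mean, decimals)
    # scan integer totals whose mean lies near the reported mean
    lo = int((reported_mean - 0.5) * n)
    hi = int((reported_mean + 0.5) * n) + 1
    return any(round(total / n, decimals) == target
               for total in range(lo, hi))

# With n = 17, a reported mean of 3.21 is impossible for integer data:
# the closest attainable means are 54/17 ≈ 3.18 and 55/17 ≈ 3.24.
ok = grim_consistent(3.18, 17)    # plausible
bad = grim_consistent(3.21, 17)   # GRIM-inconsistent
```

This is why the test needs both the sample size and the measurement scale: the smaller the sample, the coarser the grid of attainable means, and the easier it is to spot an impossible value.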
Peer review ( ??? )
- Peer review openness (PRO) initiative to ask for data: https://opennessinitiative.org/
YOU ARE NOT DONE YET: sharing your code, data and your results
There should be at least 4 boxes to tick on your to-do list once your study is completed.
- sharing the code
- sharing the data
- sharing the statistical map
- updating meta-analysis databases

If the first 3 points are done before an article submission, it can be useful for reviewers to check what you have done. All of these points are also important for future researchers who would like to base new research on your results or to run a meta-analysis of similar studies.
NeuroImaging Data Model (NIDM)
If you want to share your results, I suggest you export your final results using the NIDM format, which is supported natively by SPM12. There are also tools for exporting FSL results, and things are under development for AFNI. The NIDM format makes your results easily viewable by other software packages (check the INCF-NIDASH repo for more information). There are extensions in development for NIDM to cover not only non-parametric statistical maps but also to export in a very compact way many of the details about your experiment and analysis.
Another good reason to use the NIDM format is that it facilitates uploading your results to a site like NeuroVault, where you can store and share them with others.
Sharing your data
Some of the main databases where you can put your data are: