Commit

Merge branch 'main' into zenodo

sappelhoff committed Aug 1, 2022
2 parents 3fb17a2 + b90dde8 commit c5380c3
Showing 13 changed files with 237 additions and 76 deletions.
1 change: 1 addition & 0 deletions doc/authors.rst
@@ -33,3 +33,4 @@
.. _Simon Kern: https://github.com/skjerns
.. _Yorguin Mantilla: https://github.com/yjmantilla
.. _Swastika Gupta: https://swastyy.github.io
.. _Scott Huberty: https://github.com/scott-huberty
8 changes: 8 additions & 0 deletions doc/whats_new.rst
@@ -45,9 +45,13 @@ Detailed list of changes

- You can now write raw data and an associated empty-room recording with just a single call to :func:`mne_bids.write_raw_bids`: the ``empty_room`` parameter now also accepts an :class:`mne.io.Raw` data object. The empty-room session name will be derived from the recording date automatically, by `Richard Höchenberger`_ (:gh:`998`)

- :func:`~mne_bids.write_raw_bids` now stores participant weight and height in ``participants.tsv``, by `Richard Höchenberger`_ (:gh:`1031`)

🧐 API and behavior changes
^^^^^^^^^^^^^^^^^^^^^^^^^^^

- In many places, we used to infer the ``datatype`` of a :class:`~mne_bids.BIDSPath` from the ``suffix``, if not explicitly provided. However, this has led to trouble in certain edge cases. In an effort to reduce the amount of implicit behavior in MNE-BIDS, we now require users to explicitly specify a ``datatype`` whenever the invoked functions or methods expect one, by `Richard Höchenberger`_ (:gh:`1030`)

- :func:`mne_bids.make_dataset_description` now accepts keyword arguments only, and can now also write the following metadata: ``HEDVersion``, ``EthicsApprovals``, ``GeneratedBy``, and ``SourceDatasets``, by `Stefan Appelhoff`_ (:gh:`406`)

- The deprecated function ``mne_bids.mark_bad_channels`` has been removed in favor of :func:`mne_bids.mark_channels`, by `Richard Höchenberger`_ (:gh:`1009`)
@@ -72,6 +76,10 @@ Detailed list of changes

- :func:`~mne_bids.print_dir_tree` now correctly expands ``~`` to the user's home directory, by `Richard Höchenberger`_ (:gh:`1013`)

- :func:`~mne_bids.write_raw_bids` now correctly excludes stim channels when writing to electrodes.tsv, by `Scott Huberty`_ (:gh:`1023`)

- :func:`~mne_bids.read_raw_bids` no longer populates ``raw.info['subject_info']`` with invalid values, which previously could prevent users from writing the data to disk again, by `Richard Höchenberger`_ (:gh:`1031`)

:doc:`Find out what was new in previous releases <whats_new_previous_releases>`

.. include:: authors.rst
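The changelog entry about the new ``empty_room`` parameter says the empty-room session name is derived from the recording date automatically. A minimal sketch of that derivation, using only the standard library (the helper name and exact format are illustrative, not the internal MNE-BIDS API):

```python
from datetime import datetime, timezone


def er_session_from_meas_date(meas_date: datetime) -> str:
    """Derive a BIDS empty-room session label (YYYYMMDD) from a recording date."""
    return meas_date.strftime("%Y%m%d")


meas_date = datetime(2002, 12, 3, 19, 1, 10, tzinfo=timezone.utc)
print(er_session_from_meas_date(meas_date))  # prints: 20021203
```

The resulting label would then be used as the ``session`` entity of the ``sub-emptyroom`` recording.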
67 changes: 64 additions & 3 deletions examples/convert_mne_sample.py
@@ -10,7 +10,10 @@
In this example we will use MNE-BIDS to organize the MNE sample data according
to the BIDS standard.
In a second step we will read the organized dataset using MNE-BIDS.
""" # noqa: D400 D205
.. _BIDS dataset_description.json definition: https://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html#dataset-description
.. _ds000248 dataset_description.json: https://github.com/sappelhoff/bids-examples/blob/master/ds000248/dataset_description.json
""" # noqa: D400 D205 E501

# Authors: Mainak Jas <mainak.jas@telecom-paristech.fr>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
@@ -24,14 +27,17 @@
# First we import some basic Python libraries, followed by MNE-Python and its
# sample data, and then finally the MNE-BIDS functions we need for this example

import json
import os.path as op
from pprint import pprint
import shutil

import mne
from mne.datasets import sample

from mne_bids import (write_raw_bids, read_raw_bids, write_meg_calibration,
write_meg_crosstalk, BIDSPath, print_dir_tree)
write_meg_crosstalk, BIDSPath, print_dir_tree,
make_dataset_description)
from mne_bids.stats import count_events

# %%
@@ -82,10 +88,11 @@
raw.info['line_freq'] = 60
raw_er.info['line_freq'] = 60

task = 'audiovisual'
bids_path = BIDSPath(
subject='01',
session='01',
task='audiovisual',
task=task,
run='1',
root=output_path
)
@@ -170,3 +177,57 @@
with open(readme, 'r', encoding='utf-8-sig') as fid:
text = fid.read()
print(text)

# %%
# It is also generally a good idea to add a description of your dataset,
# see the `BIDS dataset_description.json definition`_ for more information.

how_to_acknowledge = """\
If you reference this dataset in a publication, please acknowledge its \
authors and cite MNE papers: A. Gramfort, M. Luessi, E. Larson, D. Engemann, \
D. Strohmeier, C. Brodbeck, L. Parkkonen, M. Hämäläinen, \
MNE software for processing MEG and EEG data, NeuroImage, Volume 86, \
1 February 2014, Pages 446-460, ISSN 1053-8119 \
and \
A. Gramfort, M. Luessi, E. Larson, D. Engemann, D. Strohmeier, C. Brodbeck, \
R. Goj, M. Jas, T. Brooks, L. Parkkonen, M. Hämäläinen, MEG and EEG data \
analysis with MNE-Python, Frontiers in Neuroscience, Volume 7, 2013, \
ISSN 1662-453X"""

make_dataset_description(
path=bids_path.root,
name=task,
authors=["Alexandre Gramfort", "Matti Hämäläinen"],
how_to_acknowledge=how_to_acknowledge,
acknowledgements="""\
Alexandre Gramfort, Mainak Jas, and Stefan Appelhoff prepared and updated the \
data in BIDS format.""",
data_license='CC0',
ethics_approvals=['Human Subjects Division at the University of Washington'], # noqa: E501
funding=[
"NIH 5R01EB009048",
"NIH 1R01EB009048",
"NIH R01EB006385",
"NIH 1R01HD40712",
"NIH 1R01NS44319",
"NIH 2R01NS37462",
"NIH P41EB015896",
"ANR-11-IDEX-0003-02",
"ERC-StG-263584",
"ERC-StG-676943",
"ANR-14-NEUC-0002-01"
],
references_and_links=[
"https://doi.org/10.1016/j.neuroimage.2014.02.017",
"https://doi.org/10.3389/fnins.2013.00267",
"https://mne.tools/stable/overview/datasets_index.html#sample"
],
doi="doi:10.18112/openneuro.ds000248.v1.2.4",
overwrite=True
)
desc_json_path = bids_path.root / 'dataset_description.json'
with open(desc_json_path, 'r', encoding='utf-8-sig') as fid:
pprint(json.loads(fid.read()))

# %%
# This should be very similar to the `ds000248 dataset_description.json`_!
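For reference, a stripped-down version of what ``make_dataset_description`` writes can be produced with the standard library alone. The field names follow the BIDS specification; the ``BIDSVersion`` value below is illustrative (MNE-BIDS fills in the version it targets):

```python
import json

description = {
    "Name": "audiovisual",
    "BIDSVersion": "1.7.0",  # illustrative value, not necessarily what MNE-BIDS writes
    "DatasetType": "raw",
    "Authors": ["Alexandre Gramfort", "Matti Hämäläinen"],
    "License": "CC0",
    "EthicsApprovals": ["Human Subjects Division at the University of Washington"],
    "DatasetDOI": "doi:10.18112/openneuro.ds000248.v1.2.4",
}

# this is essentially the content that ends up in dataset_description.json
print(json.dumps(description, indent=4))
```

Comparing this dictionary against the file written by ``make_dataset_description`` is a quick sanity check that all intended metadata made it to disk.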
6 changes: 5 additions & 1 deletion mne_bids/dig.py
@@ -138,7 +138,11 @@ def _write_electrodes_tsv(raw, fname, datatype, overwrite=False):
# create list of channel coordinates and names
x, y, z, names = list(), list(), list(), list()
for ch in raw.info['chs']:
if (
if ch['kind'] == FIFF.FIFFV_STIM_CH:
logger.debug(f"Not writing stim chan {ch['ch_name']} "
f"to electrodes.tsv")
continue
elif (
np.isnan(ch['loc'][:3]).any() or
np.allclose(ch['loc'][:3], 0)
):
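The logic of the stim-channel fix in ``_write_electrodes_tsv``, reduced to plain Python (string channel kinds and a tuple check stand in for the FIFF constants and NumPy calls used in ``mne_bids.dig``; in MNE-BIDS, channels with unknown positions are written with ``n/a`` coordinates rather than skipped):

```python
import math


def writable_electrodes(channels):
    """Return names of channels that are not stim channels and have a valid position.

    ``channels`` is a list of dicts with keys ``name``, ``kind``, and ``loc``
    (a 3-tuple) -- a loose stand-in for the entries of ``raw.info['chs']``.
    """
    rows = []
    for ch in channels:
        if ch["kind"] == "stim":
            continue  # stim channels have no physical electrode
        x, y, z = ch["loc"]
        if any(math.isnan(c) for c in (x, y, z)) or (x, y, z) == (0.0, 0.0, 0.0):
            continue  # unknown position: skipped in this sketch
        rows.append(ch["name"])
    return rows


channels = [
    {"name": "EEG 001", "kind": "eeg", "loc": (-0.03, 0.05, 0.09)},
    {"name": "STI 014", "kind": "stim", "loc": (0.0, 0.0, 0.0)},
    {"name": "EEG 002", "kind": "eeg", "loc": (float("nan"),) * 3},
]
print(writable_electrodes(channels))  # prints: ['EEG 001']
```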
35 changes: 16 additions & 19 deletions mne_bids/path.py
@@ -22,7 +22,7 @@
from mne_bids.config import (
ALLOWED_PATH_ENTITIES, ALLOWED_FILENAME_EXTENSIONS,
ALLOWED_FILENAME_SUFFIX, ALLOWED_PATH_ENTITIES_SHORT,
ALLOWED_DATATYPES, SUFFIX_TO_DATATYPE, ALLOWED_DATATYPE_EXTENSIONS,
ALLOWED_DATATYPES, ALLOWED_DATATYPE_EXTENSIONS,
ALLOWED_SPACES,
reader, ENTITY_VALUE_TYPE)
from mne_bids.utils import (_check_key_val, _check_empty_room_basename,
@@ -51,10 +51,6 @@ def _find_matched_empty_room(bids_path):
'date set. Cannot get matching empty-room file.')

ref_date = raw.info['meas_date']
if not isinstance(ref_date, datetime): # pragma: no cover
# for MNE < v0.20
ref_date = datetime.fromtimestamp(raw.info['meas_date'][0])

emptyroom_dir = BIDSPath(root=bids_root, subject='emptyroom').directory

if not emptyroom_dir.exists():
@@ -95,6 +91,7 @@ def _find_matched_empty_room(bids_path):
er_bids_path = get_bids_path_from_fname(er_fname, check=False)
er_bids_path.subject = 'emptyroom' # er subject entity is different
er_bids_path.root = bids_root
er_bids_path.datatype = 'meg'
er_meas_date = None

# Try to extract date from filename.
@@ -229,7 +226,7 @@ class BIDSPath(object):
Generate a BIDSPath object and inspect it
>>> bids_path = BIDSPath(subject='test', session='two', task='mytask',
... suffix='ieeg', extension='.edf')
... suffix='ieeg', extension='.edf', datatype='ieeg')
>>> print(bids_path.basename)
sub-test_ses-two_task-mytask_ieeg.edf
>>> bids_path
@@ -733,11 +730,6 @@ def update(self, *, check=None, **kwargs):
getattr(self, f'{key}') if hasattr(self, f'_{key}') else None
setattr(self, f'_{key}', val)

# infer datatype if suffix is uniquely the datatype
if self.datatype is None and \
self.suffix in SUFFIX_TO_DATATYPE:
self._datatype = SUFFIX_TO_DATATYPE[self.suffix]

# Perform a check of the entities and revert changes if check fails
try:
self._check()
@@ -899,7 +891,11 @@ def find_empty_room(self, use_sidecar_only=False, verbose=None):
'Please use `bids_path.update(root="<root>")` '
'to set the root of the BIDS folder to read.')

sidecar_fname = _find_matching_sidecar(self, extension='.json')
sidecar_fname = _find_matching_sidecar(
# needed to deal with inheritance principle
self.copy().update(datatype=None),
extension='.json'
)
with open(sidecar_fname, 'r', encoding='utf-8') as f:
sidecar_json = json.load(f)

@@ -1237,14 +1233,15 @@ def _parse_ext(raw_fname):
return fname, ext


def _infer_datatype_from_path(fname):
def _infer_datatype_from_path(fname: Path):
# get the parent
datatype = Path(fname).parent.name

if any([datatype.startswith(entity) for entity in ['sub', 'ses']]):
datatype = None

if not datatype:
if fname.exists():
datatype = fname.parent.name
if any([datatype.startswith(entity) for entity in ['sub', 'ses']]):
datatype = None
elif fname.stem.split('_')[-1] in ('meg', 'eeg', 'ieeg'):
datatype = fname.stem.split('_')[-1]
else:
datatype = None

return datatype
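The updated inference in ``_infer_datatype_from_path`` can be paraphrased with ``pathlib`` alone: for an existing file, trust the parent directory name (unless it is a subject or session folder); otherwise fall back to a recognized suffix at the end of the stem. This is a sketch of the behavior, not the exact MNE-BIDS code:

```python
from pathlib import Path


def infer_datatype(fname: Path):
    """Sketch: prefer the parent folder of an existing file, else the stem suffix."""
    if fname.exists():
        datatype = fname.parent.name
        if datatype.startswith(("sub", "ses")):
            return None  # parent is a subject/session folder, not a datatype folder
        return datatype
    suffix = fname.stem.split("_")[-1]
    return suffix if suffix in ("meg", "eeg", "ieeg") else None


# the file does not exist here, so the suffix fallback is used
print(infer_datatype(Path("sub-01/ses-02/meg/sub-01_ses-02_task-test_meg.fif")))
# prints: meg
```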
23 changes: 17 additions & 6 deletions mne_bids/read.py
@@ -194,10 +194,11 @@ def _handle_participants_reading(participants_fname, raw, subject):
participants_tsv = _from_tsv(participants_fname)
subjects = participants_tsv['participant_id']
row_ind = subjects.index(subject)
raw.info['subject_info'] = dict() # start from scratch

# set data from participants tsv into subject_info
for col_name, value in participants_tsv.items():
if col_name == 'sex' or col_name == 'hand':
if col_name in ('sex', 'hand'):
value = _map_options(what=col_name, key=value[row_ind],
fro='bids', to='mne')
# We don't know how to translate to MNE, so skip.
@@ -206,15 +207,24 @@
info_str = 'subject sex'
else:
info_str = 'subject handedness'
warn(f'Unable to map `{col_name}` value to MNE. '
warn(f'Unable to map "{col_name}" value "{value}" to MNE. '
f'Not setting {info_str}.')
elif col_name in ('height', 'weight'):
try:
value = float(value[row_ind])
except ValueError:
value = None
else:
value = value[row_ind]
if value[row_ind] == 'n/a':
value = None
else:
value = value[row_ind]

# add data into raw.Info
if raw.info['subject_info'] is None:
raw.info['subject_info'] = dict()
key = 'his_id' if col_name == 'participant_id' else col_name
raw.info['subject_info'][key] = value
if value is not None:
assert key not in raw.info['subject_info']
raw.info['subject_info'][key] = value

return raw
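The core of the new value handling in ``_handle_participants_reading``: height and weight are coerced to ``float``, unparseable or ``n/a`` entries become ``None``, and ``None`` values are no longer written into ``subject_info``. A simplified sketch (the sex/hand-to-MNE mapping is omitted here):

```python
def parse_participant_value(col_name, value):
    """Normalize one participants.tsv cell: floats for height/weight,
    None for 'n/a' or unparseable entries, everything else unchanged."""
    if col_name in ("height", "weight"):
        try:
            return float(value)
        except ValueError:
            return None  # e.g. 'n/a' cannot be parsed as a float
    return None if value == "n/a" else value


subject_info = {}
row = {"participant_id": "sub-01", "height": "170.5", "weight": "n/a", "hand": "n/a"}
for col, val in row.items():
    parsed = parse_participant_value(col, val)
    if parsed is not None:  # None values are not written into subject_info
        key = "his_id" if col == "participant_id" else col
        subject_info[key] = parsed

print(subject_info)  # prints: {'his_id': 'sub-01', 'height': 170.5}
```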

@@ -763,6 +773,7 @@ def read_raw_bids(bids_path, extra_params=None, verbose=None):
)
else:
warn(f"participants.tsv file not found for {raw_path}")
raw.info['subject_info'] = dict()

assert raw.annotations.orig_time == raw.info['meas_date']
return raw
3 changes: 2 additions & 1 deletion mne_bids/sidecar_updates.py
@@ -85,7 +85,8 @@ def update_sidecar_json(bids_path, entries, verbose=None):
>>> from pathlib import Path
>>> root = Path('./mne_bids/tests/data/tiny_bids').absolute()
>>> bids_path = BIDSPath(subject='01', task='rest', session='eeg',
... suffix='eeg', extension='.json', root=root)
... suffix='eeg', extension='.json', datatype='eeg',
... root=root)
>>> entries = {'PowerLineFrequency': 60}
>>> update_sidecar_json(bids_path, entries, verbose=False)
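What ``update_sidecar_json`` does to the file on disk can be approximated with the standard library: read the JSON sidecar, merge in the new entries, and write it back. A sketch (the sidecar filename is made up for illustration):

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmp:
    sidecar = Path(tmp) / "sub-01_ses-eeg_task-rest_eeg.json"
    sidecar.write_text(json.dumps({"PowerLineFrequency": "n/a", "EEGReference": "Cz"}))

    # merge the new entries into the existing sidecar, preserving other keys
    data = json.loads(sidecar.read_text())
    data.update({"PowerLineFrequency": 60})
    sidecar.write_text(json.dumps(data, indent=4))

    print(json.loads(sidecar.read_text())["PowerLineFrequency"])  # prints: 60
```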
25 changes: 25 additions & 0 deletions mne_bids/tests/test_dig.py
@@ -325,3 +325,28 @@
assert pos['coord_frame'] == 'mri'
assert_almost_equal(pos['ch_pos']['EEG 001'],
[-0.0313669, 0.0540269, 0.0949191])


def test_electrodes_io(tmp_path):
    """Ensure only electrodes end up in *_electrodes.tsv."""
raw = _load_raw()
raw.pick_types(eeg=True, stim=True) # we don't need meg channels
bids_root = tmp_path / 'bids1'
bids_path = _bids_path.copy().update(root=bids_root, datatype='eeg')
write_raw_bids(raw=raw, bids_path=bids_path)

electrodes_path = (
bids_path.copy()
.update(
task=None,
run=None,
space='CapTrak',
suffix='electrodes',
extension='.tsv'
)
)
with open(electrodes_path, encoding='utf-8') as sidecar:
n_entries = len([line for line in sidecar
if 'name' not in line]) # don't need the header
# only eeg chs w/ electrode pos should be written to electrodes.tsv
assert n_entries == len(raw.get_channel_types('eeg'))
18 changes: 9 additions & 9 deletions mne_bids/tests/test_path.py
@@ -260,10 +260,8 @@ def test_parse_ext():
@pytest.mark.parametrize('fname', [
'sub-01_ses-02_task-test_run-3_split-01_meg.fif',
'sub-01_ses-02_task-test_run-3_split-01',
('/bids_root/sub-01/ses-02/meg/' +
'sub-01_ses-02_task-test_run-3_split-01_meg.fif'),
('sub-01/ses-02/meg/' +
'sub-01_ses-02_task-test_run-3_split-01_meg.fif')
'/bids_root/sub-01/ses-02/meg/sub-01_ses-02_task-test_run-3_split-01_meg.fif', # noqa: E501
'sub-01/ses-02/meg/sub-01_ses-02_task-test_run-3_split-01_meg.fif'
])
def test_get_bids_path_from_fname(fname):
bids_path = get_bids_path_from_fname(fname)
@@ -598,7 +596,7 @@ def test_bids_path(return_bids_test_dir):
# ... but raises an error with check=True
match = r'space \(foo\) is not valid for datatype \(eeg\)'
with pytest.raises(ValueError, match=match):
BIDSPath(subject=subject_id, space='foo', suffix='eeg')
BIDSPath(subject=subject_id, space='foo', suffix='eeg', datatype='eeg')

# error check on space for datatypes that do not support space
match = 'space entity is not valid for datatype anat'
@@ -612,7 +610,8 @@
bids_path_tmpcopy.update(space='CapTrak', check=True)

# making a valid space update works
bids_path_tmpcopy.update(suffix='eeg', space="CapTrak", check=True)
bids_path_tmpcopy.update(suffix='eeg', datatype='eeg',
space="CapTrak", check=True)

# suffix won't be error checks if initial check was false
bids_path.update(suffix=suffix)
@@ -628,7 +627,7 @@

# test repr
bids_path = BIDSPath(subject='01', session='02',
task='03', suffix='ieeg',
task='03', suffix='ieeg', datatype='ieeg',
extension='.edf')
assert repr(bids_path) == ('BIDSPath(\n'
'root: None\n'
@@ -680,7 +679,8 @@ def test_make_filenames():
# All keys work
prefix_data = dict(subject='one', session='two', task='three',
acquisition='four', run=1, processing='six',
recording='seven', suffix='ieeg', extension='.json')
recording='seven', suffix='ieeg', extension='.json',
datatype='ieeg')
expected_str = ('sub-one_ses-two_task-three_acq-four_run-01_proc-six_'
'rec-seven_ieeg.json')
assert BIDSPath(**prefix_data).basename == expected_str
@@ -974,7 +974,7 @@ def test_find_emptyroom_ties(tmp_path):
'sample_audvis_trunc_raw.fif')

bids_root = str(tmp_path)
bids_path = _bids_path.copy().update(root=bids_root)
bids_path = _bids_path.copy().update(root=bids_root, datatype='meg')
session = '20010101'
er_dir_path = BIDSPath(subject='emptyroom', session=session,
datatype='meg', root=bids_root)