Merge pull request #7 from mad-lab-fau/reviewDocumentation
Review documentation
richrobe authored Aug 2, 2021
2 parents e80aa6f + 53b6a1d commit f848adc
Showing 15 changed files with 384 additions and 344 deletions.
2 changes: 1 addition & 1 deletion src/biopsykit/example_data.py
@@ -251,7 +251,7 @@ def get_time_log_example() -> pd.DataFrame:
-------
data : :class:`~pandas.DataFrame`
dataframe with example time log information. The time log match the data from the two ECG data example
functions :func:`~biosykit.utils.get_ecg_example` and :func:`~biosykit.utils.get_ecg_example_02`
functions :func:`~biopsykit.example_data.get_ecg_example` and :func:`~biopsykit.example_data.get_ecg_example_02`
"""
return load_time_log(_EXAMPLE_DATA_PATH.joinpath("ecg_time_log.xlsx"))
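For illustration, a minimal usage sketch of the example-data helper documented above (nothing beyond the documented return type is assumed):

```python
from biopsykit.example_data import get_time_log_example

# Load the bundled time log; per the docstring it matches the two ECG example recordings.
time_log = get_time_log_example()
print(time_log.head())
```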
6 changes: 3 additions & 3 deletions src/biopsykit/questionnaires/utils.py
@@ -483,7 +483,7 @@ def bin_scale(
----------
data : :class:`~pandas.DataFrame` or :class:`~pandas.Series`
data with scales to be binned
bins : The criteria to bin by. ``bins``can have one of the following types:
bins : The criteria to bin by. ``bins`` can have one of the following types:
* ``int`` : Defines the number of equal-width bins in the range of ``data``. The range of ``x`` is extended by
0.1% on each side to include the minimum and maximum values of `x`.
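To illustrate the ``bins`` semantics described above, a small sketch; only the documented ``data`` and ``bins`` arguments are used, and the ``pandas.cut`` comparison is an assumption about the equal-width behaviour, not a statement about the internal implementation:

```python
import pandas as pd
from biopsykit.questionnaires.utils import bin_scale

scores = pd.Series([1, 2, 3, 4, 5, 6], name="scale_score")

# bins as int: three equal-width bins spanning the range of the data
binned = bin_scale(scores, bins=3)

# rough equal-width reference using pandas (for comparison only)
reference = pd.cut(scores, bins=3, labels=False)
```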
@@ -607,8 +607,8 @@ def compute_scores(
quest_dict : dict
dictionary with questionnaire names to be computed (keys) and columns of the questionnaires (values)
quest_kwargs : dict
dictionary with optional arguments to be passed to questionnaire functions. The dictionary is expected
consist of questionnaire names (keys) and ``**kwargs`` dictionaries (values) with arguments per questionnaire
dictionary with optional arguments to be passed to questionnaire functions. The dictionary is expected to
consist of questionnaire names (keys) and \*\*kwargs dictionaries (values) with arguments per questionnaire
Returns
-------
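A minimal sketch of the expected dictionary shapes for ``compute_scores``; the questionnaire name ``"pss"``, the item column names, and the value range are assumptions made for illustration:

```python
import pandas as pd
from biopsykit.questionnaires.utils import compute_scores

# hypothetical questionnaire data: 10 PSS items, three participants
data = pd.DataFrame({"PSS_{:02d}".format(i): [2, 3, 1] for i in range(1, 11)})

quest_dict = {"pss": list(data.columns)}   # questionnaire name -> columns belonging to it
quest_kwargs = {"pss": {}}                 # questionnaire name -> **kwargs for that scoring function

scores = compute_scores(data, quest_dict=quest_dict, quest_kwargs=quest_kwargs)
```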
1 change: 1 addition & 0 deletions src/biopsykit/signals/ecg/ecg.py
@@ -1115,6 +1115,7 @@ def _correct_outlier_correlation(rpeaks: pd.DataFrame, bool_mask: np.array, corr
corr_thres : float
threshold for cross-correlation coefficient. Beats below that threshold will be marked as outlier
**kwargs : additional parameters required for this outlier function, such as:
* ecg_signal :class:`~pandas.DataFrame`
dataframe with processed ECG signal. Output from :meth:`biopsykit.signals.ecg.EcgProcessor.ecg_process()`
* sampling_rate : float
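The correlation-based criterion can be pictured with a small, self-contained sketch (conceptual only, not the library's internal code): beats whose correlation with the average beat template falls below ``corr_thres`` are flagged as outliers.

```python
import numpy as np

def mark_low_correlation_beats(beats: np.ndarray, corr_thres: float = 0.3) -> np.ndarray:
    """Flag beats (rows of ``beats``, equally long windows around each R peak) whose
    correlation with the mean beat template is below ``corr_thres``."""
    template = beats.mean(axis=0)
    corr = np.array([np.corrcoef(beat, template)[0, 1] for beat in beats])
    return corr < corr_thres  # boolean outlier mask

rng = np.random.default_rng(0)
beats = np.tile(np.sin(np.linspace(0, np.pi, 50)), (8, 1)) + 0.05 * rng.standard_normal((8, 50))
beats[3] = rng.standard_normal(50)  # one distorted beat
print(mark_low_correlation_beats(beats))
```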
2 changes: 1 addition & 1 deletion src/biopsykit/signals/ecg/plotting.py
@@ -579,7 +579,7 @@ def rr_distribution_plot(
dataframe with R peaks. Output of :meth:`~biopsykit.signals.ecg.ecg.EcgProcessor.ecg_process()`.
sampling_rate : float, optional
Sampling rate of recorded data in Hz. Default: 256
*+kwargs
**kwargs
Additional parameters to configure the plot. Parameters include:
* ``figsize``: Figure size
43 changes: 23 additions & 20 deletions src/biopsykit/sleep/plotting.py
@@ -40,10 +40,10 @@ def sleep_imu_plot(
Parameters
----------
data : :class:`~pandas.DataFrame´
data : :class:`~pandas.DataFrame`
data to plot. Data must either be acceleration data (:obj:`~biopsykit.utils.datatype_helper.AccDataFrame`),
gyroscope data (:obj:`~biopsykit.utils.datatype_helper.GyrDataFrame`), or IMU data
(:obj:`~biopsykit.utils.datatype_helper.ImuDataFrame`)
(:obj:`~biopsykit.utils.datatype_helper.ImuDataFrame`).
datastreams : str or list of str, optional
list of datastreams indicating which type of data should be plotted or ``None`` to only plot acceleration data.
If more than one type of datastream is specified each datastream is plotted row-wise in its own subplot.
@@ -53,37 +53,40 @@
downsample_factor : int, optional
downsample factor to apply to raw input data before plotting or ``None`` to not downsample data before
plotting (downsample factor 1). Default: ``None``
kwargs
**kwargs
optional arguments for plot configuration.
To configure which type of sleep endpoint annotations to plot:
* ``plot_sleep_onset``: whether to plot sleep onset annotations or not: Default: ``True``
* ``plot_wake_onset``: whether to plot wake onset annotations or not: Default: ``True``
* ``plot_bed_start``: whether to plot bed interval start annotations or not: Default: ``True``
* ``plot_bed_end``: whether to plot bed interval end annotations or not: Default: ``True``
* ``plot_sleep_wake``: whether to plot vspans of detected sleep/wake phases or not: Default: ``True``
* ``plot_sleep_onset``: whether to plot sleep onset annotations or not: Default: ``True``
* ``plot_wake_onset``: whether to plot wake onset annotations or not: Default: ``True``
* ``plot_bed_start``: whether to plot bed interval start annotations or not: Default: ``True``
* ``plot_bed_end``: whether to plot bed interval end annotations or not: Default: ``True``
* ``plot_sleep_wake``: whether to plot vspans of detected sleep/wake phases or not: Default: ``True``
To style general plot appearance:
* ``axs``: pre-existing axes for the plot. Otherwise, a new figure and axes objects are created and
returned.
* ``colormap``: colormap to plot different axes from input data
* ``figsize``: tuple specifying figure dimensions
* ``axs``: pre-existing axes for the plot. Otherwise, a new figure and axes objects are created and
returned.
* ``colormap``: colormap to plot different axes from input data
* ``figsize``: tuple specifying figure dimensions
To style axes:
* ``xlabel``: label of x axis. Default: "Time"
* ``ylabel``: label of y axis. Default: "Acceleration [$m/s^2$]" for acceleration data and
"Angular Velocity [$°/s$]" for gyroscope data
* ``xlabel``: label of x axis. Default: "Time"
* ``ylabel``: label of y axis. Default: "Acceleration [$m/s^2$]" for acceleration data and
"Angular Velocity [$°/s$]" for gyroscope data.
To style legend:
* ``legend_loc``: location of legend. Default: "lower left"
* ``legend_fontsize``: font size of legend labels. Default: "smaller"
* ``legend_loc``: location of legend. Default: "lower left"
* ``legend_fontsize``: font size of legend labels. Default: "smaller"
Returns
-------
fig : :class:`matplotlib.figure.Figure`
fig : :class:`~matplotlib.figure.Figure`
figure object
axs : list of :class:`matplotlib.axes.Axes`
axs : list of :class:`~matplotlib.axes.Axes`
list of subplot axes objects
"""
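A usage sketch based only on the parameters documented above; the ``acc_``/``gyr_`` column naming of the dummy dataframe is an assumption about the expected IMU dataframe format:

```python
import numpy as np
import pandas as pd
from biopsykit.sleep.plotting import sleep_imu_plot

# dummy overnight IMU recording (column names follow the acc_/gyr_ convention -- an assumption)
index = pd.date_range("2021-08-01 23:00", periods=1000, freq="1s")
rng = np.random.default_rng(0)
imu_data = pd.DataFrame(
    rng.standard_normal((1000, 6)) * 0.1,
    index=index,
    columns=["acc_x", "acc_y", "acc_z", "gyr_x", "gyr_y", "gyr_z"],
)

# datastreams=None -> only acceleration is plotted; kwargs as documented above
fig, axs = sleep_imu_plot(imu_data, downsample_factor=10, figsize=(12, 6))
```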
35 changes: 18 additions & 17 deletions src/biopsykit/sleep/sleep_endpoints/sleep_endpoints.py
@@ -14,23 +14,24 @@ def compute_sleep_endpoints(
"""Compute a set of sleep endpoints based on sleep/wake information and time spent in bed.
This functions computes the following sleep endpoints:
* ``date``: date of recording if input data is time-aware, ``0`` otherwise. **NOTE**: If the participant went
to bed between 12 am and 6 am (i.e, the beginning of ``bed_interval`` between 12 am and 6 am) ``date`` will
be set to the day before (because this night is assumed to "belong" to the day before)
* ``sleep_onset``: Sleep Onset, i.e., time of falling asleep, in absolute time
* ``wake_onset``: Wake Onset, i.e., time of awakening, in absolute time
* ``net_sleep_duration``: Net duration spent sleeping, in minutes
* ``bed_interval_start``: Bed Interval Start, i.e, time when participant went to bed, in absolute time
* ``bed_interval_end``: Bed Interval End, i.e, time when participant left bed, in absolute time
* ``sleep_efficiency``: Sleep Efficiency, defined as the ratio between net sleep duration and sleep duration
in percent
* ``sleep_onset_latency``: Sleep Onset Latency, i.e., time in bed needed to fall asleep, in minutes
* ``getup_latency``: Get Up Latency, i.e., time in bed after awakening until getting up, in minutes
* ``wake_after_sleep_onset``: Wake After Sleep Onset (WASO), i.e., total time awake after falling asleep, in
minutes
* ``sleep_bouts``: List with start and end times of sleep bouts
* ``wake_bouts``: List with start and end times of wake bouts
* ``number_wake_bouts``: Total number of wake bouts
* ``date``: date of recording if input data is time-aware, ``0`` otherwise. **NOTE**: If the participant went
to bed between 12 am and 6 am (i.e, the beginning of ``bed_interval`` between 12 am and 6 am) ``date`` will
be set to the day before (because this night is assumed to "belong" to the day before).
* ``sleep_onset``: Sleep Onset, i.e., time of falling asleep, in absolute time
* ``wake_onset``: Wake Onset, i.e., time of awakening, in absolute time
* ``net_sleep_duration``: Net duration spent sleeping, in minutes
* ``bed_interval_start``: Bed Interval Start, i.e, time when participant went to bed, in absolute time
* ``bed_interval_end``: Bed Interval End, i.e, time when participant left bed, in absolute time
* ``sleep_efficiency``: Sleep Efficiency, defined as the ratio between net sleep duration and sleep duration
in percent
* ``sleep_onset_latency``: Sleep Onset Latency, i.e., time in bed needed to fall asleep, in minutes
* ``getup_latency``: Get Up Latency, i.e., time in bed after awakening until getting up, in minutes
* ``wake_after_sleep_onset``: Wake After Sleep Onset (WASO), i.e., total time awake after falling asleep, in
minutes
* ``sleep_bouts``: List with start and end times of sleep bouts
* ``wake_bouts``: List with start and end times of wake bouts
* ``number_wake_bouts``: Total number of wake bouts
Parameters
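Several of these endpoints follow directly from a per-epoch sleep/wake vector. The toy sketch below (1-minute epochs) illustrates the definitions listed above; it is not the function's implementation or input format:

```python
import numpy as np

# 1 = asleep, 0 = awake; one value per 1-minute epoch within the bed interval
sleep_wake = np.array([0, 0, 1, 1, 1, 0, 1, 1, 0, 0])

sleep_idx = np.flatnonzero(sleep_wake)
sleep_onset, wake_onset = sleep_idx[0], sleep_idx[-1] + 1     # epoch indices within the bed interval

net_sleep_duration = int(sleep_wake.sum())                    # minutes actually asleep -> 5
sleep_duration = wake_onset - sleep_onset                     # minutes from sleep onset to wake onset -> 6
sleep_efficiency = 100 * net_sleep_duration / sleep_duration  # -> 83.3 %
sleep_onset_latency = int(sleep_onset)                        # minutes in bed before falling asleep -> 2
getup_latency = len(sleep_wake) - wake_onset                  # minutes in bed after waking up -> 2

within = sleep_wake[sleep_onset:wake_onset]
wake_after_sleep_onset = int((within == 0).sum())             # WASO -> 1
number_wake_bouts = int((np.diff(within) == -1).sum())        # sleep -> wake transitions -> 1
```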
@@ -16,13 +16,14 @@ def predict_pipeline_acceleration(
"""Apply sleep processing pipeline on raw acceleration data.
This function processes raw acceleration data collected during sleep. The pipeline consists of the following steps:
* *Activity Count Conversion*: Convert raw acceleration data into activity counts. Most sleep/wake detection
algorithms use activity counts (as typically provided by Actigraphs) as input data
* *Wear Detection*: Detect wear and non-wear periods. Cut data to longest continuous wear block
* *Rest Periods*: Detect rest periods, i.e., periods with large physical inactivity. The longest continuous
rest period (*Major Rest Period*) is used to determine the *Bed Interval*, i.e., the period spent in bed
* *Sleep/Wake Detection*: Apply sleep/wake detection algorithm to classify phases of sleep and wake.
* *Sleep Endpoint Computation*: Compute Sleep Endpoints from sleep/wake detection results and bed interval
* *Activity Count Conversion*: Convert raw acceleration data into activity counts. Most sleep/wake detection
algorithms use activity counts (as typically provided by Actigraphs) as input data.
* *Wear Detection*: Detect wear and non-wear periods. Cut data to longest continuous wear block.
* *Rest Periods*: Detect rest periods, i.e., periods with large physical inactivity. The longest continuous
rest period (*Major Rest Period*) is used to determine the *Bed Interval*, i.e., the period spent in bed.
* *Sleep/Wake Detection*: Apply sleep/wake detection algorithm to classify phases of sleep and wake.
* *Sleep Endpoint Computation*: Compute Sleep Endpoints from sleep/wake detection results and bed interval.
Parameters
----------
@@ -34,10 +35,10 @@
``True`` if input data is provided in :math:`m/s^2` and should be converted in :math:`g`, ``False`` if input
data is already in :math:`g` and does not need to be converted.
Default: ``True``
kwargs :
**kwargs :
additional parameters to configure sleep/wake detection. The possible parameters depend on the selected
sleep/wake detection algorithm and are passed to
:class:`~biopsykit.sleep.sleep_wake_detection.SleepWakeDetection`
:class:`~biopsykit.sleep.sleep_wake_detection.SleepWakeDetection`.
Returns
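A hedged end-to-end call might look as follows; the import path, the ``sampling_rate`` argument, and the shape of the returned result are assumptions based on this docstring fragment and are not verified here:

```python
import pandas as pd
from biopsykit.sleep.sleep_processing_pipeline import predict_pipeline_acceleration

# raw 3-axis acceleration recorded over night (placeholder file and column names)
acc_data = pd.read_csv("acc_night.csv", index_col=0, parse_dates=True)

results = predict_pipeline_acceleration(
    acc_data,
    sampling_rate=102.4,   # Hz; assumed to be a required argument
    convert_to_g=True,     # input is in m/s^2 per the parameter description above
)
sleep_endpoints = results["sleep_endpoints"]   # assumed key in the returned result dictionary
```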
4 changes: 2 additions & 2 deletions src/biopsykit/stats/stats.py
@@ -292,7 +292,7 @@ def display_results(self, sig_only: Optional[Union[str, bool, Sequence[str], Dic
**kwargs
additional arguments to be passed to the function, such as:
* category names: ``True`` to display results of this category, ``False`` to skip displaying results
* ``category`` names: ``True`` to display results of this category, ``False`` to skip displaying results
of this category. Default: show results from all categories
* ``grouped``: ``True`` to group results by the variable "groupby" specified in the parameter
dictionary when initializing the ``StatsPipeline`` instance.
@@ -439,7 +439,7 @@ def sig_brackets(
* ``str``: only one feature is plotted in the boxplot
(returns significance brackets of only one feature)
* ``list``: multiple features are combined into *one* :class:`matplotlib.axes.Axes` object
* ``list``: multiple features are combined into *one* :class:`~matplotlib.axes.Axes` object
(returns significance brackets of multiple features)
* ``dict``: dictionary with feature (or list of features) per subplot if boxplots are structured in
subplots (``subplots`` is ``True``) (returns dictionary with significance brackets per subplot)
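The three accepted shapes of the ``features`` argument can be written out as plain literals (the feature names below are hypothetical):

```python
# str: one feature plotted in a single boxplot
features_single = "HR_mean"

# list: several features combined into one Axes object
features_combined = ["HR_mean", "RMSSD"]

# dict: features per subplot (used together with subplots=True)
features_per_subplot = {"hr": ["HR_mean"], "hrv": ["RMSSD", "SDNN"]}
```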
36 changes: 19 additions & 17 deletions src/biopsykit/utils/array_handling.py
@@ -9,7 +9,7 @@


def sanitize_input_1d(data: arr_t) -> np.ndarray:
"""Convert 1-d array-like data (numpy array, pandas dataframe/series) to a numpy array.
"""Convert 1-d array-like data (:class:`~numpy.ndarray`, :class:`~pandas.DataFrame`/:class:`~pandas.Series`) to a numpy array.
Parameters
----------
Expand All @@ -19,8 +19,8 @@ def sanitize_input_1d(data: arr_t) -> np.ndarray:
Returns
-------
array_like
data as 1-d numpy array
:class:`~numpy.ndarray`
data as 1-d :class:`~numpy.ndarray`
"""
if isinstance(data, (pd.Series, pd.DataFrame)):
@@ -36,7 +36,7 @@ def sanitize_input_nd(
data: arr_t,
ncols: Optional[Union[int, Tuple[int, ...]]] = None,
) -> np.ndarray:
"""Convert n-d array-like data (numpy array, pandas dataframe/series) to a numpy array.
"""Convert n-d array-like data (:class:`~numpy.ndarray`, :class:`~pandas.DataFrame`/:class:`~pandas.Series`) to a numpy array.
Parameters
----------
@@ -49,7 +49,7 @@
Returns
-------
array_like
:class:`~numpy.ndarray`
data as n-d numpy array
"""
@@ -86,10 +86,12 @@ def find_extrema_in_radius(
array with indices for which to search for extrema values around
radius: int or tuple of int
radius around ``indices`` to search for extrema:
* if ``radius`` is an ``int`` then search for extrema equally in both directions in the interval
[index - radius, index + radius].
* if ``radius`` is a ``tuple`` then search for extrema in the interval
[ index - radius[0], index + radius[1] ]
* if ``radius`` is an ``int`` then search for extrema equally in both directions in the interval
[index - radius, index + radius].
* if ``radius`` is a ``tuple`` then search for extrema in the interval
[ index - radius[0], index + radius[1] ]
extrema_type : {'min', 'max'}, optional
extrema type to be searched for. Default: 'min'
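The interval semantics of ``radius`` can be illustrated with plain NumPy; this mirrors the description above, not the function's internal code:

```python
import numpy as np

signal = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5])
idx = 5            # search around index 5
radius = (2, 3)    # tuple: asymmetric search window [idx - 2, idx + 3]

window = signal[idx - radius[0]: idx + radius[1] + 1]
local_min_idx = (idx - radius[0]) + int(np.argmin(window))   # extrema_type='min'
print(local_min_idx, signal[local_min_idx])                  # -> 3 1
```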
@@ -168,9 +170,9 @@
x_old: Optional[np.ndarray] = None,
desired_length: Optional[int] = None,
) -> np.ndarray:
"""Remove outlier, impute missing values and optionally interpolate data to desired length.
"""Remove outliers, impute missing values and optionally interpolate data to desired length.
Detected outlier are removed from array and imputed by linear interpolation.
Detected outliers are removed from array and imputed by linear interpolation.
Optionally, the output array can be linearly interpolated to a new length.
@@ -190,8 +192,8 @@
Returns
-------
array_like
data with removed and imputed outlier, optionally interpolated to desired length
:class:`~numpy.ndarray`
data with removed and imputed outliers, optionally interpolated to desired length
Raises
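The removal-and-imputation step can be pictured with ``numpy.interp`` (a conceptual sketch of the behaviour described above, not the implementation; the RR-interval framing is only an example):

```python
import numpy as np

rr_intervals = np.array([800.0, 810.0, 1500.0, 805.0, 795.0])   # one implausible value
outlier_mask = np.array([False, False, True, False, False])

x = np.arange(len(rr_intervals))
# drop flagged samples and fill them by linear interpolation over the remaining points
cleaned = np.interp(x, x[~outlier_mask], rr_intervals[~outlier_mask])

# optionally interpolate the cleaned signal to a desired output length
desired_length = 9
resampled = np.interp(np.linspace(0, x[-1], desired_length), x, cleaned)
```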
@@ -260,13 +262,13 @@ def sliding_window(
Returns
-------
array_like
:class:`~numpy.ndarray`
sliding windows from input array.
See Also
--------
`sliding_window_view`
:func:`~biopsykit.utils.array_handling.sliding_window_view`
create sliding window of input array. low-level function with less input parameter configuration possibilities
"""
@@ -358,7 +360,7 @@ def sliding_window_view(array: np.ndarray, window_length: int, overlap: int, nan
.. warning::
This function will return by default a view onto your input array, modifying values in your result will directly
affect your input data which might lead to unexpected behaviour! If padding is disabled (default) last window
affect your input data which might lead to unexpected behaviour! If padding is disabled (default), last window
fraction of input may not be returned! However, if `nan_padding` is enabled, this will always return a copy
instead of a view of your input data, independent if padding was actually performed or not!
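NumPy's own ``numpy.lib.stride_tricks.sliding_window_view`` shows the same view-vs-copy behaviour and serves as a quick illustration of windowing with overlap; the overlap handling via slicing below is an illustration, not biopsykit's parameterization:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view as np_sliding_window_view

data = np.arange(10)
window_length, overlap = 4, 2
step = window_length - overlap

# all length-4 windows as a view, then keep every `step`-th one -> 50 % overlap
windows = np_sliding_window_view(data, window_length)[::step]
print(windows)
# [[0 1 2 3]
#  [2 3 4 5]
#  [4 5 6 7]
#  [6 7 8 9]]

# `windows` is a (read-only) view: no data is copied, the buffer is shared with `data`
```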
@@ -378,7 +380,7 @@
Returns
-------
array_like
:class:`~numpy.ndarray`
windowed view (or copy if ``nan_padding`` is ``True``) of input array as specified,
last window might be nan-padded if necessary to match window size