Commit

docs complete first draft
renatoparedes committed Nov 12, 2021
1 parent 91a5af3 commit 6a17888
Showing 4 changed files with 244 additions and 3 deletions.
32 changes: 32 additions & 0 deletions README.md
@@ -15,3 +15,35 @@ Research on the neural process by which unisensory signals are combined to f
## Contact
Renato Paredes (paredesrenato92@gmail.com)

## Features

**Scikit-neuromsi** currently has three classes that implement neurocomputational
models of multisensory integration.

The available modules are:

- **alais_burr2004**: implements the near-optimal bimodal integration
employed by Alais and Burr (2004) to reproduce the Ventriloquist Effect.

- **ernst_banks2002**: implements the visual-haptic maximum-likelihood
integrator employed by Ernst and Banks (2002) to reproduce the visual-haptic task.

- **kording2007**: implements the Bayesian Causal Inference model for
Multisensory Perception employed by Kording et al. (2007) to reproduce
the Ventriloquist Effect.

In addition, there is a **core** module with features to facilitate the implementation of new models of multisensory integration.
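To illustrate the kind of computation these models perform, here is a minimal sketch of maximum-likelihood (inverse-variance-weighted) cue combination, the principle behind models such as **ernst_banks2002**. This is a standalone illustration, not scikit-neuromsi code; the function name and the example values are made up:

```python
# Minimal sketch of maximum-likelihood cue combination (illustration only,
# not part of the scikit-neuromsi API). Each unisensory estimate is weighted
# by its reliability, i.e. the inverse of its variance.

def mle_integration(estimate_a, var_a, estimate_b, var_b):
    """Combine two unisensory estimates into a single multisensory one."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = (1 / var_b) / (1 / var_a + 1 / var_b)
    combined = w_a * estimate_a + w_b * estimate_b
    # The combined variance is never larger than either input variance.
    combined_var = 1 / (1 / var_a + 1 / var_b)
    return combined, combined_var

# A reliable cue at 0.0 (variance 1.0) and a noisier cue at 4.0 (variance
# 4.0): the combined estimate is pulled toward the more reliable cue.
print(mle_integration(0.0, 1.0, 4.0, 4.0))
```

The combined estimate here is 0.8, much closer to the reliable cue at 0.0 than to the noisy cue at 4.0, and its variance (0.8) is lower than that of either cue alone.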

## Requirements

You need Python 3.9+ to run scikit-neuromsi.

## Installation

Run the following command:

    $ pip install scikit-neuromsi

or clone this repo and then inside the local directory execute:

$ pip install -e .
1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -40,6 +40,7 @@ https://github.com/renatoparedes/scikit-neuromsi
:maxdepth: 1
:caption: Contents:

installation.rst
tutorial.ipynb
license.rst
api.rst
48 changes: 48 additions & 0 deletions docs/source/installation.rst
@@ -0,0 +1,48 @@
Installation
============


This is the recommended way to install scikit-neuromsi.

Installing with pip
^^^^^^^^^^^^^^^^^^^^

Make sure that the Python interpreter can load scikit-neuromsi code.
The most convenient way to do this is to use virtualenv, virtualenvwrapper, and pip.

After setting up and activating the virtualenv, run the following command:

.. code-block:: console

    $ pip install scikit-neuromsi

That should be enough.



Installing the development version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you’d like to be able to update your scikit-neuromsi code occasionally with the
latest bug fixes and improvements, follow these instructions:

Make sure that you have Git installed and that you can run its commands from a shell.
(Enter *git help* at a shell prompt to test this.)

Check out the scikit-neuromsi main development branch like so:

.. code-block:: console

    $ git clone https://github.com/renatoparedes/scikit-neuromsi.git

This will create a directory *scikit-neuromsi* in your current directory.

Then install it with the commands:

.. code-block:: console

    $ cd scikit-neuromsi
    $ pip install -e .
166 changes: 163 additions & 3 deletions docs/source/tutorial.ipynb
@@ -30,7 +30,7 @@
"metadata": {},
"outputs": [],
"source": [
"from skneuromsi import core\n",
"import skneuromsi\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
@@ -472,12 +472,172 @@
"## Build your own scikit-neuromsi model!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can implement your own model by importing the `core` module and decorating a class with `neural_msi_model`. The decorated class must define two attributes, `stimuli` and `integration`, as shown below:\n",
"\n",
"> `stimuli` must be a list of functions (each representing an unimodal sensory estimator), and `integration` must be a single function (representing the multisensory estimator)."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": []
"source": [
"from skneuromsi import core\n",
"\n",
"\n",
"def unisensory_estimator_a():\n",
" return \"zaraza\"\n",
"\n",
"\n",
"def unisensory_estimator_b():\n",
" return \"zaraza\"\n",
"\n",
"\n",
"def multisensory_estimator():\n",
" return \"zaraza\"\n",
"\n",
"\n",
"@core.neural_msi_model\n",
"class MyModel:\n",
"\n",
" # estimators\n",
" stimuli = [unisensory_estimator_a, unisensory_estimator_b]\n",
" integration = multisensory_estimator"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can provide further specifications by declaring `hyperparameters` and `internals` in the model class:"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"101.16"
]
},
"execution_count": 55,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def unisensory_estimator_a(a_weight, baseline):\n",
" u_estimate_a = baseline + a_weight * 2\n",
" return u_estimate_a\n",
"\n",
"\n",
"def unisensory_estimator_b(b_weight, baseline):\n",
" u_estimate_b = baseline + b_weight * 2\n",
" return u_estimate_b\n",
"\n",
"\n",
"def multisensory_estimator(\n",
" unisensory_estimator_a, unisensory_estimator_b, a_weight, b_weight\n",
"):\n",
" return (\n",
" unisensory_estimator_a * a_weight + unisensory_estimator_b * b_weight\n",
" )\n",
"\n",
"\n",
"@core.neural_msi_model\n",
"class MyModel:\n",
"\n",
" # hyperparameters\n",
" baseline = core.hparameter(default=100)\n",
"\n",
" # internals\n",
" a_weight = core.internal(default=0.7)\n",
" b_weight = core.internal(default=0.3)\n",
"\n",
" # estimators\n",
" stimuli = [unisensory_estimator_a, unisensory_estimator_b]\n",
" integration = multisensory_estimator\n",
"\n",
"\n",
"model = MyModel()\n",
"model.run()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also configure parameters that are specific to each model run. To do so, include them as parameters of the unisensory estimator functions (here named `input`):"
]
},
{
"cell_type": "code",
"execution_count": 54,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"106.16"
]
},
"execution_count": 54,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def unisensory_estimator_a(a_weight, baseline, input):\n",
" u_estimate_a = baseline + a_weight * 2 + input\n",
" return u_estimate_a\n",
"\n",
"\n",
"def unisensory_estimator_b(b_weight, baseline, input):\n",
" u_estimate_b = baseline + b_weight * 2 + input\n",
" return u_estimate_b\n",
"\n",
"\n",
"def multisensory_estimator(\n",
" unisensory_estimator_a, unisensory_estimator_b, a_weight, b_weight\n",
"):\n",
" return (\n",
" unisensory_estimator_a * a_weight + unisensory_estimator_b * b_weight\n",
" )\n",
"\n",
"\n",
"@core.neural_msi_model\n",
"class MyModel:\n",
"\n",
" # hyperparameters\n",
" baseline = core.hparameter(default=100)\n",
"\n",
" # internals\n",
" a_weight = core.internal(default=0.7)\n",
" b_weight = core.internal(default=0.3)\n",
"\n",
" # estimators\n",
" stimuli = [unisensory_estimator_a, unisensory_estimator_b]\n",
" integration = multisensory_estimator\n",
"\n",
"\n",
"model = MyModel()\n",
"model.run(input=5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For more details about model building, please refer to the [documentation](https://scikit-neuromsi.readthedocs.io/en/latest/api.html#module-skneuromsi.core). "
]
}
],
"metadata": {
