diff --git a/README.md b/README.md
index b4b9e27..1ef8149 100644
--- a/README.md
+++ b/README.md
@@ -15,3 +15,35 @@ Research on the the neural process by which unisensory signals are combined to f
 ## Contact
 
 Renato Paredes (paredesrenato92@gmail.com)
+## Features
+
+**Scikit-neuromsi** currently has three modules that implement neurocomputational
+models of multisensory integration.
+
+The available modules are:
+
+- **alais_burr2004**: implements the near-optimal bimodal integration
+  employed by Alais and Burr (2004) to reproduce the Ventriloquist Effect.
+
+- **ernst_banks2002**: implements the visual-haptic maximum-likelihood
+  integrator employed by Ernst and Banks (2002) to reproduce the visual-haptic task.
+
+- **kording2007**: implements the Bayesian Causal Inference model for
+  Multisensory Perception employed by Kording et al. (2007) to reproduce
+  the Ventriloquist Effect.
+
+In addition, there is a **core** module with features to facilitate the implementation of new models of multisensory integration.
+
+## Requirements
+
+You need Python 3.9+ to run scikit-neuromsi.
+
+## Installation
+
+Run the following command:
+
+    $ pip install scikit-neuromsi
+
+or clone this repo and then, inside the local directory, execute:
+
+    $ pip install -e .
\ No newline at end of file
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 56173a4..44f1550 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -40,6 +40,7 @@ https://github.com/renatoparedes/scikit-neuromsi
    :maxdepth: 1
    :caption: Contents:
 
+   installation.rst
    tutorial.ipynb
    license.rst
    api.rst
diff --git a/docs/source/installation.rst b/docs/source/installation.rst
new file mode 100644
index 0000000..317435d
--- /dev/null
+++ b/docs/source/installation.rst
@@ -0,0 +1,48 @@
+Installation
+============
+
+
+This is the recommended way to install scikit-neuromsi.
+
+Installing with pip
+^^^^^^^^^^^^^^^^^^^^
+
+Make sure that the Python interpreter can load scikit-neuromsi code.
+The most convenient way to do this is to use virtualenv, virtualenvwrapper, and pip.
+
+After setting up and activating the virtualenv, run the following command:
+
+.. code-block:: console
+
+    $ pip install scikit-neuromsi
+    ...
+
+That should be enough.
+
+
+
+Installing the development version
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you’d like to be able to update your scikit-neuromsi code occasionally with the
+latest bug fixes and improvements, follow these instructions:
+
+Make sure that you have Git installed and that you can run its commands from a shell.
+(Enter *git help* at a shell prompt to test this.)
+
+Check out the scikit-neuromsi main development branch like so:
+
+.. code-block:: console
+
+    $ git clone https://github.com/renatoparedes/scikit-neuromsi.git
+    ...
+
+This will create a directory *scikit-neuromsi* in your current directory.
+
+Then you can proceed to install it with the commands:
+
+.. code-block:: console
+
+    $ cd scikit-neuromsi
+    $ pip install -e .
+    ...
diff --git a/docs/source/tutorial.ipynb b/docs/source/tutorial.ipynb
index dcde61a..265629d 100644
--- a/docs/source/tutorial.ipynb
+++ b/docs/source/tutorial.ipynb
@@ -30,7 +30,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "from skneuromsi import core\n",
+    "import skneuromsi\n",
     "import numpy as np\n",
     "import matplotlib.pyplot as plt"
    ]
@@ -472,12 +472,172 @@
    "## Build your own scikit-neuromsi model!"
   ]
  },
+ {
+  "cell_type": "markdown",
+  "metadata": {},
+  "source": [
+   "You can implement your own model by importing the `core` module and creating a class with the decorator `neural_msi_model`. The decorated class must define two attributes, `stimuli` and `integration`, as shown below:\n",
+   "\n",
+   "> `stimuli` must be a list of functions (each representing a unimodal sensory estimator) and `integration` a function (representing the multisensory estimator)."
+  ]
+ },
  {
   "cell_type": "code",
-  "execution_count": null,
+  "execution_count": 22,
   "metadata": {},
   "outputs": [],
-  "source": []
+  "source": [
+   "from skneuromsi import core\n",
+   "\n",
+   "\n",
+   "def unisensory_estimator_a():\n",
+   "    return \"zaraza\"\n",
+   "\n",
+   "\n",
+   "def unisensory_estimator_b():\n",
+   "    return \"zaraza\"\n",
+   "\n",
+   "\n",
+   "def multisensory_estimator():\n",
+   "    return \"zaraza\"\n",
+   "\n",
+   "\n",
+   "@core.neural_msi_model\n",
+   "class MyModel:\n",
+   "\n",
+   "    # estimators\n",
+   "    stimuli = [unisensory_estimator_a, unisensory_estimator_b]\n",
+   "    integration = multisensory_estimator"
+  ]
+ },
+ {
+  "cell_type": "markdown",
+  "metadata": {},
+  "source": [
+   "You can provide further specifications to the model by including `hyperparameters` and `internals` in the model class:"
+  ]
+ },
+ {
+  "cell_type": "code",
+  "execution_count": 55,
+  "metadata": {},
+  "outputs": [
+   {
+    "data": {
+     "text/plain": [
+      "101.16"
+     ]
+    },
+    "execution_count": 55,
+    "metadata": {},
+    "output_type": "execute_result"
+   }
+  ],
+  "source": [
+   "def unisensory_estimator_a(a_weight, baseline):\n",
+   "    u_estimate_a = baseline + a_weight * 2\n",
+   "    return u_estimate_a\n",
+   "\n",
+   "\n",
+   "def unisensory_estimator_b(b_weight, baseline):\n",
+   "    u_estimate_b = baseline + b_weight * 2\n",
+   "    return u_estimate_b\n",
+   "\n",
+   "\n",
+   "def multisensory_estimator(\n",
+   "    unisensory_estimator_a, unisensory_estimator_b, a_weight, b_weight\n",
+   "):\n",
+   "    return (\n",
+   "        unisensory_estimator_a * a_weight + unisensory_estimator_b * b_weight\n",
+   "    )\n",
+   "\n",
+   "\n",
+   "@core.neural_msi_model\n",
+   "class MyModel:\n",
+   "\n",
+   "    # hyperparameters\n",
+   "    baseline = core.hparameter(default=100)\n",
+   "\n",
+   "    # internals\n",
+   "    a_weight = core.internal(default=0.7)\n",
+   "    b_weight = core.internal(default=0.3)\n",
+   "\n",
+   "    # estimators\n",
+   "    stimuli = [unisensory_estimator_a, unisensory_estimator_b]\n",
+   "    integration = multisensory_estimator\n",
+   "\n",
+   "\n",
+   "model = MyModel()\n",
+   "model.run()"
+  ]
+ },
+ {
+  "cell_type": "markdown",
+  "metadata": {},
+  "source": [
+   "You can also configure parameters that are specific to each model run. For this purpose, you can include them as parameters of the unisensory estimator functions (here, `input`):"
+  ]
+ },
+ {
+  "cell_type": "code",
+  "execution_count": 54,
+  "metadata": {},
+  "outputs": [
+   {
+    "data": {
+     "text/plain": [
+      "106.16"
+     ]
+    },
+    "execution_count": 54,
+    "metadata": {},
+    "output_type": "execute_result"
+   }
+  ],
+  "source": [
+   "def unisensory_estimator_a(a_weight, baseline, input):\n",
+   "    u_estimate_a = baseline + a_weight * 2 + input\n",
+   "    return u_estimate_a\n",
+   "\n",
+   "\n",
+   "def unisensory_estimator_b(b_weight, baseline, input):\n",
+   "    u_estimate_b = baseline + b_weight * 2 + input\n",
+   "    return u_estimate_b\n",
+   "\n",
+   "\n",
+   "def multisensory_estimator(\n",
+   "    unisensory_estimator_a, unisensory_estimator_b, a_weight, b_weight\n",
+   "):\n",
+   "    return (\n",
+   "        unisensory_estimator_a * a_weight + unisensory_estimator_b * b_weight\n",
+   "    )\n",
+   "\n",
+   "\n",
+   "@core.neural_msi_model\n",
+   "class MyModel:\n",
+   "\n",
+   "    # hyperparameters\n",
+   "    baseline = core.hparameter(default=100)\n",
+   "\n",
+   "    # internals\n",
+   "    a_weight = core.internal(default=0.7)\n",
+   "    b_weight = core.internal(default=0.3)\n",
+   "\n",
+   "    # estimators\n",
+   "    stimuli = [unisensory_estimator_a, unisensory_estimator_b]\n",
+   "    integration = multisensory_estimator\n",
+   "\n",
+   "\n",
+   "model = MyModel()\n",
+   "model.run(input=5)"
+  ]
+ },
+ {
+  "cell_type": "markdown",
+  "metadata": {},
+  "source": [
+   "For more details about model building, please refer to the [documentation](https://scikit-neuromsi.readthedocs.io/en/latest/api.html#module-skneuromsi.core)."
+  ]
+ }
 ],
 "metadata": {