Modelling Phenomenological Differences in Aetiologically Distinct Visual Hallucinations Using Deep Neural Networks
This repository contains source code necessary to reproduce some of the main results in the paper:
Suzuki K, David S, Seth A. "Modelling Phenomenological Differences in Aetiologically Distinct Visual Hallucinations Using Deep Neural Networks." PsyArXiv.
For more information regarding the project, please visit the project website on OSF.
Our model is largely based on Nguyen et al. (2016) [2]. Please also follow the installation instructions of the original repository when setting up the model.
This code is built on top of Caffe. You'll need to install the following:
- Install Caffe; follow the official installation instructions. You will need a Caffe build that supports upconvolution
- Build the Python bindings for Caffe
- If you have an NVIDIA GPU, you can optionally build Caffe with the GPU option to make it run faster
- Make sure the path to your `caffe/python` folder in settings.py is correct
- Install the ImageMagick command-line interface on your system
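As a quick sanity check before running anything, one might verify that the `caffe/python` directory configured in settings.py actually exists. This is a hypothetical helper only: the repository itself does not define this function, and the candidate paths below are illustrative, not the ones settings.py uses.

```python
import os
import sys

def resolve_caffe_python(candidates):
    """Return the first existing directory from a list of candidate
    caffe/python locations, or None if none exist.

    Hypothetical helper for checking the path configured in
    settings.py; edit the candidates to match your setup.
    """
    for path in candidates:
        expanded = os.path.expanduser(path)
        if os.path.isdir(expanded):
            return expanded
    return None

if __name__ == "__main__":
    # Illustrative candidate locations only.
    caffe_python = resolve_caffe_python(["~/caffe/python", "/opt/caffe/python"])
    if caffe_python is None:
        print("caffe/python not found -- fix the path in settings.py")
    else:
        sys.path.insert(0, caffe_python)  # makes `import caffe` work
```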
You will need to download a few models. `download.sh` scripts are provided for your convenience.
- The deep generator network (upconvolutional network) from Dosovitskiy & Brox (2016) [3]. You can download it directly from their website or use the provided script:
cd nets/upconv && ./download.sh
- The deep convolutional neural network can be downloaded from the BVLC reference CaffeNet page or with the provided script:
cd nets/caffenet && ./download.sh
Settings:
- Paths to the downloaded models are set in settings.py. They are relative and should work if the `download.sh` scripts ran correctly.
The main algorithm is in act_max2.py, which is a standalone Python script; you can pass various command-line arguments to run different experiments.
In our model, three different parameters can be modified to simulate different types of visual hallucinations.
Model architecture and 3 manipulations applied to our model to simulate specific hallucinatory phenomenology
All the scripts call image_generation.sh, which handles the parameters before calling act_max2.py.
Key parameters in image_generation.sh are as follows. These variables can be specified with command-line arguments; check out run_experiment.sh.
- `act_layer`: specifies the target layer in the DCNN at which to terminate the activation maximisation.
- `gen_type`: specifies the type of activation maximisation. Both the DGN and the DCNN are used with DGN activation maximisation, whereas only the DCNN is used with DCNN activation maximisation.
- `act_mode`: specifies the error function. `winner`: winner-take-all error function; `l2norm`: deep-dream error function; `fixed`: fixed error function.
- `init_img`: specifies the input images. `blurred` is for simulating the CBS hallucinations; `original` should be used otherwise.
- The target categories for the `fixed` error function are specified separately; this value is ignored by the `winner` and `l2norm` error functions.
- `export`: if set to `1`, the program exports images to the `export` folder at the iteration numbers defined by `export_mode`: iterations 10, 50, 100, and 1000 in `exp` mode; 50, 100, and 1000 in `validation` mode; and 5, 10, 50, 100, 200, 400, 600, 800, and 1000 in `interview` mode.
- When the `debug` option in the script is enabled, the activations of intermediate images are visualised in the 'Debug' directory.
- When the `stats` option in the script is enabled, category information is exported to the 'stats' directory.
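To make the three `act_mode` options concrete, here is a minimal NumPy sketch of a loss in the spirit of each error function. This is an illustration only: the actual implementations live in act_max2.py and operate on Caffe blobs, so the function names and details here are assumptions.

```python
import numpy as np

def winner_take_all_loss(acts):
    # Winner-take-all: maximise only the single most active unit,
    # whichever one the network currently prefers.
    return -acts.max()

def l2norm_loss(acts):
    # Deep-dream-style error: maximise the L2 norm of all
    # activations in the target layer at once.
    return -np.sum(acts ** 2)

def fixed_loss(acts, unit):
    # Fixed error function: maximise a pre-specified target unit,
    # regardless of what the network currently responds to most.
    return -acts[unit]
```

Minimising each loss by gradient descent on the input image (or on the DGN's latent code) drives the corresponding activations up, which is what distinguishes the three simulated phenomenologies.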
Here we provide the six different simulations reported in the paper.
The input images used in the following simulations (except for complex and simple CBS).
Simulating benchmark (non-hallucinatory) perceptual phenomenology using `act_layer=fc8 gen_type=DGN act_mode=winner`. Using the 'winner-take-all' error function, images with the categories present in the input images are synthesised. See Sec. 3.1 in our paper for more details.
- Running `./1_veridical_perception.sh` produces this result:
Simulating the phenomenology of complex neurological visual hallucinations using `act_layer=fc8 gen_type=DGN act_mode=fixed`. Using the `fixed` error function, images with categories different from those in the input images are synthesised. See Sec. 3.2 in our paper for more details.
- Running `./2_complex_neurological.sh` produces this result:
Simulating complex visual hallucinations resulting from visual loss in Charles Bonnet Syndrome (CBS) using `act_layer=fc8 gen_type=DGN act_mode=fixed init_img=blurred`. Input images blurred at their centres are used to simulate the visual deficits associated with CBS. See Sec. 3.3 in our paper.
- Specifying `init_img=blurred` blurs the input image.
The input images used for the CBS simulations
- Running `./3_complex_CBS.sh` produces this result:
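The centre-blurred inputs described above can be approximated with a short NumPy sketch. This is an illustration of the idea only: the repository's actual preprocessing of the `blurred` images may differ in filter type and mask size, so every name and parameter here is an assumption.

```python
import numpy as np

def mean_filter(img, k=3):
    """Naive k x k mean filter for a 2-D grayscale image; edge pixels
    are averaged over their in-bounds neighbourhood."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    r = k // 2
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def blur_centre(img, radius_frac=0.4, k=9):
    """Blend a blurred copy into a central disc of the image,
    mimicking the central visual-field loss associated with CBS."""
    h, w = img.shape
    blurred = mean_filter(img, k)
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    dist = np.hypot(yy - cy, xx - cx)
    mask = dist <= radius_frac * min(h, w)
    out = img.astype(float).copy()
    out[mask] = blurred[mask]  # blur only inside the central disc
    return out
```

The periphery is left intact while detail is removed from the centre, which matches the description of the blurred CBS inputs above.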
Simulating simple visual hallucinations resulting from visual loss in Charles Bonnet Syndrome (CBS) using `act_layer=conv3` or `act_layer=conv4` with `gen_type=DGN act_mode=fixed init_img=blurred`. Input images blurred at their centres are used to simulate the visual deficits associated with CBS. See Sec. 3.3 in our paper.
- Running `./4_simple_CBS.sh conv3` produces this result:
- Running `./4_simple_CBS.sh conv4` produces this result:
From left to right are units that were semantically labelled by humans in [2] as: lighthouse, building, bookcase, food, and painting.
Simulating complex psychedelic visual hallucinations using `act_layer=fc8 gen_type=DCNN act_mode=l2norm`. Using DCNN-AM and the deep-dream error function, the model simulates the original deep-dream algorithm [1]. See Sec. 3.4 in our paper.
- Running `./5_complex_psychedelic.sh` produces this result:
Simulating simple psychedelic visual hallucinations using `act_layer=conv3` or `act_layer=conv4` with `gen_type=DCNN act_mode=l2norm`. See Sec. 3.4 in our paper.
- Running `./6_simple_psychedelic.sh conv3` produces this result:
- Running `./6_simple_psychedelic.sh conv4` produces this result:
Run all of the above and generate the images in the `Result` folder.
An example of veridical perception at different iteration points.
An example of complex psychedelic visual hallucinations at different iteration points.
run_validation.sh: Runs all the conditions with 32 different initial images, which were used for the psychedelic survey. The generated images can also be found in the OSF storage.
Note that the code in this repository is licensed under the MIT License, but the pre-trained models used by the code have their own licenses. Please check them carefully before use.
- The image generator networks (in nets/upconv/) are for non-commercial use only. See their page for more information.
[1] Suzuki K, Roseboom W, Schwartzman DJ, Seth A. "A deep-dream virtual reality platform for studying altered perceptual phenomenology." Scientific Reports 7(1), 1–11. 2017.
[2] Nguyen A, Dosovitskiy A, Yosinski J, Brox T, Clune J. "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks." In Advances in Neural Information Processing Systems, pages 3387–3395. 2016.
[3] Dosovitskiy A, Brox T. "Generating images with perceptual similarity metrics based on deep networks." arXiv preprint arXiv:1602.02644. 2016.