- 🆕 01/2026: Code released.
- 🥳 01/2026: Paper accepted at Computer Vision and Image Understanding Volume 264, February 2026!
- ⭐ 12/2025: We have released the EventSleep2 data and paper! 🔥
To ensure a smooth setup, we include an EventSleepEnv.yml file that lists all the necessary libraries and their versions. You can easily create a Python environment with the required dependencies by executing `conda env create -f EventSleepEnv.yml`.

Additionally, we include a subfolder named "Models" which contains the trained models for our main approaches. It also serves as a repository for storing any new models trained with the scripts.
Here is a list of the scripts included in this folder:
- data_tools.py: A script with a wide range of preprocessing and post-processing tools for working with the EventSleep dataset (extracting labels from folder names, label dictionaries to obtain label names, resizing frames, cropping frames to focus on the bed area, etc.).
- events_to_frames.py: A script to transform the event data into frames according to the frame-based event representation explained in the paper. As a result, it generates a folder named "EventFrames" containing the resulting frames saved in .npy format.
- render_recordings*.py: Scripts for visualizing the content of the event data, leveraging the frame representation and providing a view synchronized with the infrared recordings and the fine-grained ground-truth labels. They generate a video located in the Renders folder.
- train_ResNet-Ev2.py: A script to train the EvS2-net model. As a result, it generates a folder named with the date, containing a checkpoint.pth snapshot of the trained model and a train_details.json file with the details of the run. This folder is saved under Models/Events, in the folder corresponding to the labels and configurations used for training.
- test_ResNet-Ev2.py: A script to test a trained EvS2-net model. As input, you must provide the path to a checkpoint stored in the Models folder. As a result, it prints the confusion matrices (per configuration and averaged) and saves them in the checkpoint's parent folder, along with a test_details.json file with the details of the run.
- ft-Ev2.py: A script to fine-tune the model.
- viz_EV_EEG.py: A script for coordinated visualization of event and EEG data. It takes the recording string (e.g., 'subject01_seq02') as input and produces an interactive figure with the event count and both EEG channels over time. The figure can be navigated with the arrow keys, scrolling sideways and zooming in/out on the time axis.
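As a rough illustration of what a frame-based event representation looks like (the exact representation used by events_to_frames.py is defined in the paper; the two-channel polarity layout and function names below are assumptions for this sketch, not the repository's API), events within a time window can be accumulated into a 2D frame with NumPy and saved in .npy format:

```python
import numpy as np

def events_to_frame(x, y, p, height, width):
    """Accumulate a batch of events into a two-channel frame,
    one channel per polarity (illustrative representation only)."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    # np.add.at handles repeated (polarity, y, x) coordinates correctly
    np.add.at(frame, (p.astype(int), y, x), 1.0)
    return frame

# Toy example: four events on a 4x4 sensor
x = np.array([0, 1, 1, 3])
y = np.array([0, 0, 2, 3])
p = np.array([0, 1, 1, 0])  # polarity: 0 = OFF, 1 = ON
frame = events_to_frame(x, y, p, height=4, width=4)
np.save("example_frame.npy", frame)  # .npy, as used by the scripts
```

Frames produced this way can then be loaded back with `np.load` for training or visualization.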
This work is under the AGPL-3.0 license.
If you find our work inspiring, please cite our papers:
@inproceedings{plou2024eventsleep,
title={EventSleep: Sleep Activity Recognition with Event Cameras},
author={Plou, Carlos and Gallego, Nerea and Sabater, Alberto and Urcola, Pablo and Montijano, Eduardo and Montesano, Luis and Martinez-Cantin, Ruben and Murillo, Ana C},
booktitle={Computer Vision -- ECCV 2024 Workshops},
pages={52--69},
year={2025},
organization={Springer}
}
@article{gallego2026eventsleep2,
title={EventSleep2: Sleep activity recognition on complete night sleep recordings with an event camera},
author={Gallego, Nerea and Plou, Carlos and Marcos, Miguel and Urcola, Pablo and Montesano, Luis and Montijano, Eduardo and Martinez-Cantin, Ruben and Murillo, Ana C},
journal={Computer Vision and Image Understanding},
pages={104619},
year={2026},
publisher={Elsevier}
}

This work was supported by the PID2024-159284NB-I00, PID2021-125514NB-I00, PID2024-158322OB-I00, PID2021-125209OB-I00, and AIA2025-1635 grants funded by MCIN/AEI/10.13039/501100011033/ERDF/NextGenerationEU/PRTR, grant no. 101135782 (MANOLO project) funded by the European Union, two DGA scholarships, and project T45_23R.