
MSD-IRIMAS/DomainFoundationModelsTSC


Finding Foundation Models for Time Series Classification with a PreText Task

Authors: Ali Ismail-Fawaz1, Maxime Devanne1, Stefano Berretti2, Jonathan Weber1 and Germain Forestier1,3

1 MSD-IRIMAS, Université de Haute-Alsace, France; 2 MICC, University of Florence, Italy; 3 DSAI, Monash University, Australia

This repository contains the source code for our foundation model paper titled "Finding Foundation Models for Time Series Classification with a PreText Task". In this paper, we pre-train a deep learning model on a pretext task using multiple datasets from the same domain, and then fine-tune this model on each dataset independently for its own classification task. A summary figure of the approach is shown below. A preprint of our paper is now available on arXiv.

[figure: summary of the proposed approach]
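As a rough, hedged illustration of this two-phase setup, the sketch below uses a toy Keras backbone as a stand-in for H-Inception (described in the next section); the function names and layers here are assumptions for illustration only, and the real training code lives in this repository.

```python
import tensorflow as tf

def make_backbone(n_channels: int) -> tf.keras.Model:
    # Toy stand-in for the backbone pre-trained on the pretext task
    # (the paper uses H-Inception, see the next section).
    inp = tf.keras.Input(shape=(None, n_channels))
    x = tf.keras.layers.Conv1D(32, 9, padding="same", activation="relu")(inp)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    return tf.keras.Model(inp, x)

def fine_tune_model(backbone: tf.keras.Model, n_classes: int) -> tf.keras.Model:
    # Second phase: attach a fresh classification head to the pre-trained backbone
    # and train it independently on one dataset's own classification task.
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(backbone.output)
    model = tf.keras.Model(backbone.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```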

Architecture backbone

We utilize the H-Inception architecture from Ismail-Fawaz et al. 2022.

[figure: the H-Inception architecture]

We address the domain-shift distribution problem in the batch normalization layers by proposing the Batch Normalization Multiplexer (BNM).

[figure: the Batch Normalization Multiplexer (BNM)]
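A minimal sketch of what such a multiplexer could look like in Keras is shown below; the class name and interface are assumptions for illustration, and the actual BNM implementation is in this repository's source code.

```python
import tensorflow as tf

class BatchNormalizationMultiplexer(tf.keras.layers.Layer):
    """Keeps one BatchNormalization per source dataset and routes each batch to
    the instance matching its dataset id, so domain statistics are never mixed."""

    def __init__(self, n_datasets: int, **kwargs):
        super().__init__(**kwargs)
        self.norms = [tf.keras.layers.BatchNormalization() for _ in range(n_datasets)]

    def call(self, inputs, training=None):
        x, dataset_id = inputs  # dataset_id: scalar integer identifying the batch's domain
        index = tf.cast(tf.reshape(dataset_id, []), tf.int32)
        branches = [lambda bn=bn: bn(x, training=training) for bn in self.norms]
        return tf.switch_case(index, branches)
```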

Code Usage

This code can be run using Docker with a TensorFlow image for GPU support.

Creating the docker image

To create the docker image, run the following command from the repository root:

docker build -t IMAGE_NAME .

All dependencies can be found in the dockerfile.

Creating the docker container

To create a docker container, you first need to download the UCR archive datasets to your own machine; they will be mounted into the docker container when it is created. Run the following command from the repository root:

docker run --gpus all -it --name CONTAINER_NAME -v "$(pwd):/pretext-code" -v "/path/to/ucr/on/your/pc:/ucr_archive" IMAGE_NAME bash

This will open a terminal inside the container. Run the main.py file from this terminal to launch the experiments; the main file is located in the /pretext-code directory of the container, as specified in the docker run command above.

Code configuration

The code is configured with hydra; all the parameters, such as the list of datasets, the number of epochs, and the batch size, are set in the config/config.yaml file.
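Below is a minimal sketch of how Hydra typically binds config/config.yaml to an entry point; the decorator is standard Hydra, but the parameter names used here (datasets, epochs, batch_size) are only assumptions, so check config/config.yaml for the real keys.

```python
import hydra
from omegaconf import DictConfig

@hydra.main(config_path="config", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    # Assumed key names, shown only to illustrate how Hydra exposes the config.
    print(cfg.datasets, cfg.epochs, cfg.batch_size)

if __name__ == "__main__":
    main()
```

With Hydra, parameters can also be overridden from the command line, e.g. python main.py batch_size=64 (assuming that key exists in the config file).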

Results

All results can be found in the results_ucr.csv file, which contains the accuracy results over the used datasets for the baselines H-InceptionTime, ResNet, and MultiROCKET, as well as for PHIT. Results with HC2 and HydraMultiROCKET can be found in results_ucr_hc2_hydraMR.csv.

Results compared to the baseline

[figure: results compared to the baseline]

Results compared to the state-of-the-art in Time Series Classification

We compare to HIVE-COTE 2.0 (HC2) and HydraMR from the recent Time Series Classification bake off of Middlehurst et al. 2023.

[figure: results compared to the bake off state-of-the-art]

Multi-Comparison Matrix

We use the code of the Multi-Comparison Matrix proposed in Ismail-Fawaz et al. 2023.

[figure: Multi-Comparison Matrix of the results]

Visualization of filters space

Below we use t-SNE to compare the filters space on two datasets, ECG200 and NonInvasiveFetalECGThorax1, among the baseline model, the pre-trained model, and the fine-tuned model.

[figure: t-SNE visualization of the filters space]
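As an illustrative-only sketch of how Conv1D filters can be projected with t-SNE (the figures in this repository are generated by its own plotting code, not by this snippet, and the function name is hypothetical):

```python
import numpy as np
from sklearn.manifold import TSNE

def project_filters(conv_weights: np.ndarray) -> np.ndarray:
    """conv_weights: a Conv1D kernel of shape (kernel_length, in_channels, n_filters)."""
    filters = conv_weights.reshape(-1, conv_weights.shape[-1]).T  # one row per filter
    return TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(filters)
```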

Acknowledgments

This work was supported by the ANR DELEGATION project (grant ANR-21-CE23-0014) of the French Agence Nationale de la Recherche. The authors would like to acknowledge the High Performance Computing Center of the University of Strasbourg for supporting this work by providing scientific support and access to computing resources. Part of the computing resources were funded by the Equipex Equip@Meso project (Programme Investissements d’Avenir) and the CPER Alsacalcul/Big Data. The authors would also like to thank the creators and providers of the UCR Archive.
