This is the repository for the paper: "Making Time Series Embeddings More Interpretable in Deep Learning: Extracting Higher-Level Features via Symbolic Approximation Representations"
It provides examples of how SFA and SAX can be used as deep-learning embeddings for a Transformer model on time series data.
All univariate UCR & UEA datasets are supported.
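For orientation, the following sketch (not the repository's exact pipeline) loads a univariate UCR dataset with tslearn and converts it into SAX and SFA symbol sequences with pyts; such symbol sequences are what the model consumes as embedding inputs. The dataset name GunPoint and all parameter values are only examples.

```python
# Hedged sketch: turning a univariate UCR series into SAX/SFA symbols with pyts.
# Illustrative only; the repository's own pipeline lives in the notebook / SEML code.
from tslearn.datasets import UCR_UEA_datasets
from pyts.approximation import SymbolicAggregateApproximation, SymbolicFourierApproximation

# Load an example univariate dataset (any univariate UCR/UEA dataset name works).
X_train, y_train, X_test, y_test = UCR_UEA_datasets().load_dataset("GunPoint")
X = X_train[:, :, 0]  # tslearn returns (n_series, length, 1); pyts expects (n_series, length)

# SAX: discretize the raw values into a small alphabet, one symbol per time step.
sax = SymbolicAggregateApproximation(n_bins=8, strategy="quantile")
X_sax = sax.fit_transform(X)          # array of letters, e.g. 'a'..'h'

# SFA: discretize selected Fourier coefficients instead of the raw values.
sfa = SymbolicFourierApproximation(n_coefs=16, n_bins=8, strategy="quantile")
X_sfa = sfa.fit_transform(X)          # one shorter symbolic word per series

print(X_sax.shape, X_sfa.shape)
```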
All required dependencies are listed below (other versions may work, but are not guaranteed to); a quick way to check the installed versions is sketched after the list:
- python==3.9.16
- tensorflow-gpu==2.4.1
- tslearn==0.5.2
- pyts==0.12.0
- seml==0.3.7
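A quick sanity check of an installed environment against these pins might look like this (the distribution names are assumed to match the list above):

```python
# Minimal environment check: print installed versions to compare against the pins above.
import sys
from importlib.metadata import version, PackageNotFoundError

print("python", sys.version.split()[0])
for pkg in ("tensorflow-gpu", "tslearn", "pyts", "seml"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```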
To use the Jupyter notebook:
pip install --upgrade ipykernel jupyter notebook pyzmq
There are two ways to run the experiments: test single configurations interactively in the Jupyter notebook, or sweep over multiple parameter combinations with SEML experiments.
Option 1: Jupyter notebook
- Open SFATester.ipynb
- Change the parameters
- Have fun (a rough sketch of the general idea is shown after this list)
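For orientation only, the sketch below shows the general idea of using symbolic words as Transformer inputs with Keras: the SAX/SFA letters are mapped to integer token ids, embedded, and processed by self-attention. It is a simplified stand-in, not the architecture from the paper or from SFATester.ipynb, and all sizes and the toy data are made-up examples.

```python
# Hedged sketch (not the repository's exact model): symbolic words as Transformer inputs.
import numpy as np
import tensorflow as tf

alphabet_size = 8      # e.g. SAX/SFA with n_bins=8
word_length = 16       # length of the symbolic word per series
n_classes = 2

# Toy token ids standing in for SAX/SFA letters mapped to integers.
tokens = np.random.randint(0, alphabet_size, size=(32, word_length))
labels = np.random.randint(0, n_classes, size=(32,))

inputs = tf.keras.Input(shape=(word_length,), dtype="int32")
x = tf.keras.layers.Embedding(input_dim=alphabet_size, output_dim=32)(inputs)
attn = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=16)(x, x)
x = tf.keras.layers.LayerNormalization()(x + attn)          # residual + norm
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(tokens, labels, epochs=1, verbose=0)
```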
Option 2: SEML experiments
- Set up SEML with seml configure (yes, you need a MongoDB server for this, and yes, the results are saved in a separate file; however, SEML does a really good job of managing the parameter combinations in combination with Slurm)
- Configure the yaml file you want to run. You probably only need to change the maximum number of parallel experiments ('experiments_per_job' and 'max_simultaneous_jobs') and the memory and CPU usage ('mem' and 'cpus-per-task').
- Add and start the SEML experiment, for example:
- seml sfaModel add sfaModel.yaml
- seml sfaModel start
- Check with "seml sfaModel status" until all your experiments have finished
- The results can be found in the results folder as a dict, which can be further processed with the code in sfaresultprocessing.py (a loading sketch is shown below)
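As an illustration only, and assuming the result file is a pickled Python dict (the actual format and post-processing utilities are defined in sfaresultprocessing.py), loading it could look like this; the filename is hypothetical:

```python
# Hedged sketch: load a result dict from the results folder for further processing.
# The filename and the exact structure are assumptions; see sfaresultprocessing.py for
# the actual format and the provided post-processing code.
import pickle
from pathlib import Path

result_file = Path("results") / "sfaModel_results.pkl"  # hypothetical filename
with result_file.open("rb") as f:
    results = pickle.load(f)

# Inspect the top-level keys before handing the dict to sfaresultprocessing.py.
print(type(results), list(results)[:10] if isinstance(results, dict) else None)
```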
This code implements the model used in the following publication:
"Making Time Series Embeddings More Interpretable in Deep Learning: Extracting Higher-Level Features via Symbolic Approximation Representations" (TODO Link)
If you use or build upon this work, or if it helped you in any other way, please cite the publication above.