- Scale input projection. Since Mamba's output is modulated by a gating unit, insufficient input context can bottleneck performance, motivating a larger input projection receptive field for densely sampled time series.
- Modularize time (in)variance. As time series often exhibit near linear time-invariant behavior, we decouple the time variance of Mamba as a hyperparameter. Simpler configurations often perform better, contradicting the ablation results from Mamba.
- Remove skip connection. Time series models often have shallow networks, and skip connections do not always yield performance gains in such cases. Given Mamba's strong long-range memory, we remove skip connections and construct logits solely from hidden states.
- Aggregate via adaptive pooling. Time series classification spans both global and event-driven patterns, which conventional pooling cannot accommodate. We therefore propose a multi-head adaptive pooling that weights temporal features in a dataset-specific manner.
- Achieve the best average accuracy and rank on the UEA benchmark with a single-layer Mamba structure.
- Highlight systematic differences across backbone structures via UMAP visualization.

- Obtain the best-performing checkpoints of MambaSL and recent TSC baselines on all 30 datasets.

- Install Python 3.12 (tested on 3.12.8).

  ```shell
  conda create -n ts312 python=3.12
  conda activate ts312
  ```
- Install dependencies.
  - For convenience, follow the instructions in `./notebooks/initial setting.ipynb` to set up the environment.
  - Or install the required packages as below:

    ```shell
    pip install -r "requirements (no version).txt"
    ```

    If you want to use the exact versions of the libraries that we used for our experiments, you can try the following command:

    ```shell
    pip install -r "requirements (now version).txt" --force-reinstall
    ```

  - As can be seen from the requirements files, we commented out `mamba-ssm` and `causal-conv1d` since they take a long time to build and may cause errors during installation. We highly recommend installing `mamba-ssm` and `causal-conv1d` manually. See more details in the `./notebooks/initial setting.ipynb` notebook.
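As a minimal sketch of the manual installation (the package names are the upstream PyPI packages; any version pinning is up to you and should match your CUDA/PyTorch setup):

```shell
# Install the CUDA kernel package first, then the Mamba package itself.
# Versions are deliberately unpinned here; see the notebook for the exact setup we used.
pip install causal-conv1d
pip install mamba-ssm
```
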
- Prepare Data.
  - The original and preprocessed UEA30 datasets can be downloaded from [Google Drive].
  - Two versions are provided:
    (1) the original (.ts) and preprocessed (.pkl) UEA30 datasets, and
    (2) the dataset files above with additional feature files for the TSCMamba model.
  - Place the datasets in the folder that you want and add or modify the `--root_path` flag while running the `run.py` script for training or evaluation.
- Prepare Checkpoints.
  - The checkpoints of MambaSL and the other baselines are also provided in the [Google Drive].
  - Place the checkpoints in the folder that you want and add or modify the `--checkpoints` flag while running the `run.py` script for evaluation.
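As a small sketch of how the two flags fit together (the directory names below are placeholders, and `run.py` takes additional task flags that the provided scripts set for you):

```shell
# Placeholder locations; adjust to wherever you downloaded the files.
data_dir="./datasets"            # parent of the dataset folders
checkpoint_dir="./checkpoints"   # parent of the per-model checkpoint folders
dataset="Handwriting"
model="MambaSL"

# --root_path points at one dataset folder, --checkpoints at the model's checkpoint folder.
echo "python run.py --root_path ${data_dir}/${dataset}/ --checkpoints ${checkpoint_dir}/${model}/"
```
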
If you just want to test the best model on each dataset, go to the [05-scripts_final] section.

We provide all files related to our experiments under the `./scripts_classification/` directory.
- The numbers in the directory names correspond to the order in which the experiments were actually performed.
- The directory structure is as follows:

```
./scripts_classification/
├── 01-make_scripts
│   └── make_cls_script (${model}).sh
├── 02-run_scripts
│   └── run_cls_script (${model}).sh
├── 03-full_results
│   └── ${model} (${experiment})
│       └── (experiment scripts and logs)
├── 04-retrieve_results
│   ├── retrieve_results (MambaSL, multilayer).ipynb
│   ├── retrieve_results (TSLib models).ipynb
│   └── ...
├── 05-scripts_final
│   ├── _template
│   ├── _test_results
│   └── ${model}
│       ├── All_UEA30.sh
│       ├── ${dataset}.sh
│       └── run_scripts.sh
├── 06-visualize_results
│   └── ${model}
│       ├── get_results (${model}).ipynb
│       └── uea_interpgn.csv
├── 07-analysis_results
│   ├── ablate_MambaSL_TV.ipynb
│   ├── dataset_len.ipynb
│   └── ...
├── data_classification.yaml : metadata of UEA30
└── ...
```
This folder contains the `.sh` files that we used to make scripts for hyperparameter grid search.
- In each file, you can see the details of the hyperparameters that we chose for a certain model.
- You can modify the `data_path` and other features in the files to generate your own set of experiment scripts.
- The generated scripts will be saved in either `./scripts_classification/scripts_baseline/` or `./scripts_classification/scripts_mamba/` by default.
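A minimal sketch of what such a generator does, with a hypothetical grid (`learning_rate`, `d_model`) and a throwaway output directory; the actual hyperparameters live in the `make_cls_script (${model}).sh` files:

```shell
# Hypothetical grid; not the repository's real search space.
model="MambaSL_CLS"
out_dir="./scripts_demo"
mkdir -p "${out_dir}"
for lr in 0.01 0.001; do
  for d_model in 64 128; do
    # One generated script per hyperparameter combination.
    fname="${out_dir}/${model}_lr${lr}_dm${d_model}.sh"
    printf 'python run.py --model %s --learning_rate %s --d_model %s\n' \
      "${model}" "${lr}" "${d_model}" > "${fname}"
  done
done
ls "${out_dir}"   # one .sh file per grid point (4 in total)
```
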
This folder contains the `.sh` files that we used to run the experiment scripts generated by [01-make_scripts].
- The experiment logs will be saved in `./scripts_classification/results/` by default.
- For instance, below is an example of running grid-search experiments for MambaSL on the DuckDuckGeese and PEMS-SF datasets simultaneously:

```shell
UEA_MTSC30=("DuckDuckGeese" "PEMS-SF")
exp="proposed"
model="MambaSL_CLS"
for dataset in ${UEA_MTSC30[@]}
do
    datasetexp="${dataset}_${exp}"
    nohup bash ./scripts_classification/scripts_mamba/${exp}/${model}_${datasetexp}.sh > ./scripts_classification/results/${model}_${datasetexp}.out &
done
```

- Be aware of the memory limit of your GPU since the scripts will run simultaneously, especially for long sequence lengths (e.g., EigenWorms, MotorImagery) or high dimensionality (e.g., DuckDuckGeese, PEMS-SF).
- You can run the scripts sequentially by removing the `&` at the end of the `nohup` command.
This folder contains the full results (scripts and logs) of all experiments that we performed.
- We organized the results, which were temporarily saved in `./scripts_classification/results/` and `./scripts_classification/scripts_baseline/`, by model.
- You can check the performance of each hyperparameter setting for each model and dataset in the logs.
This folder contains the notebook files to retrieve the best checkpoints and the corresponding scripts from [03-full_results].
This folder contains the final scripts to test the best model on each dataset.
- `_template/`: Script templates for the UEA30 datasets. You can modify the template scripts to test the best model on each dataset.
- `_test_results/`: All test results of the final scripts that we ran for the paper.
- `${model}/`
  - `All_UEA30.sh`: A script to run the final scripts for all UEA30 datasets sequentially. Each script refers to the best & lightest checkpoint for each dataset.
  - `${dataset}.sh`: A script to run the final script for each dataset. It might contain multiple scripts if there are multiple best checkpoints for the dataset.
  - `run_scripts.sh`: Scripts that we used to run multiple final scripts with a for loop.
- You have to modify the `gpu_id`, `resource_dir`, `data_dir`, and `checkpoint_dir` in the scripts properly before running them.
  - `gpu_id`: GPU id (an integer) to run the script on.
  - `resource_dir`: (optional) the path where you placed the datasets and checkpoints.
  - `data_dir`: the parent directory of the dataset folder. `${data_dir}/${dataset}/` will be used as the `--root_path` flag in `run.py`.
  - `checkpoint_dir`: the path to the folder where you placed the best checkpoints downloaded from the Google Drive. `${checkpoint_dir}/${model}/` will be used as the `--checkpoints` flag in `run.py`.
- Below is an example of running the final scripts for MambaSL on the EthanolConcentration and Handwriting datasets sequentially:

```shell
UEA_MTSC30=("EthanolConcentration" "Handwriting")
model="MambaSL"
for dataset in ${UEA_MTSC30[@]}
do
    sh_fname="./scripts_classification/05-scripts_final/${model}/${dataset}.sh"
    out_fname="./scripts_classification/05-scripts_final/_test_results/${model}_${dataset}.out"
    nohup bash ${sh_fname} > ${out_fname}
done
```

If you add `&` at the end of the `nohup` command, you can run the scripts simultaneously. We don't recommend running too many scripts simultaneously due to the memory limit of the GPU.
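As a sketch of what the edited header of one final script might end up as (every value below is a placeholder for your own machine, and `Heartbeat` is just an example dataset name):

```shell
# Placeholder configuration; edit before running.
gpu_id=0
resource_dir="/path/to/resources"             # optional common root
data_dir="${resource_dir}/datasets"           # parent of the dataset folders
checkpoint_dir="${resource_dir}/checkpoints"  # best checkpoints from Google Drive

dataset="Heartbeat"
model="MambaSL"

# The derived paths feed run.py's --root_path and --checkpoints flags;
# CUDA_VISIBLE_DEVICES pins the job to the chosen GPU.
echo "CUDA_VISIBLE_DEVICES=${gpu_id}"
echo "--root_path   ${data_dir}/${dataset}/"
echo "--checkpoints ${checkpoint_dir}/${model}/"
```
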
This folder contains the notebook files to visualize the results of the experiments.
- `${model}/`: This folder contains the outputs from the `get_results (${model}).ipynb` notebook.
- `get_results (${model}).ipynb`: This notebook contains the code to summarize the accuracy results and draw some plots (e.g., a line plot of accuracy vs. hyperparameters), although these were not included in our main paper.
- `uea_interpgn.csv`: full InterpGN results from the original repo.
This folder contains the final analysis notebooks and materials to reproduce the analysis results in our main paper.
- `ablate_MambaSL_TV.ipynb`: notebook to generate Figure 7
- `dataset_len.ipynb`: notebook to get the sequence-length range for variable-length datasets, which was used for Table 4
- `visualization (adaptive pooling).ipynb`: notebook to generate Figure 8
- `visualization (UEA30 barplot).ipynb`: notebook to generate Figure 4
- `visualization (UMAP along ...).ipynb`: notebook to generate Figures 5 and 6
- `Wilcoxon test.ipynb`: notebook to perform the Wilcoxon test for UEA30 results, which was used for Table 5
- The code is fundamentally built upon Time-Series-Library#4ddf869.
  - We modified the dataloader to save and load the preprocessed datasets in pickle format for faster loading.
  - We added the code for MambaSL and other baselines that weren't included in the original tslib.
  - We modified some models to enable a proper hyperparameter search. The details can be found in each model file if there are any modifications.
    - e.g., changed `seg_len` from a fixed value to a hyperparameter for Crossformer.
  - We added experimental code for the InceptionTime setting (to only use train loss for model selection) and the Medformer setting (to test ADFTD and FLAAP).
- Since TSLANet has a pretraining phase which makes it difficult to merge into the Time-Series-Library pipeline, we simply added a `_run_TSLANet` directory to run the TSLANet pipeline. Still, the scripts can be generated and executed via the `./scripts_classification/` directory.
  - We changed the dataloader and test code of TSLANet to make it work with the original UEA30 datasets and saved model checkpoints.
- Non-DL models were tested via the aeon-toolkit.
  - The notebooks in the `_run_non-DL_models (aeon)` directory include the results.
  - For MultiRocket+Hydra, padding was required for the PenDigits dataset to avoid errors (seq_len 8 -> 9).
- The scripts were tested on:
  - Four NVIDIA GTX 1080 Ti (11GB)
  - NVIDIA A100 (40GB) in Google Colab for some baselines due to memory issues
  - Python 3.12.8 and PyTorch 2.5.1 (packages listed in `requirements (now version).txt`)
If you find this repo useful, please consider citing our paper:

```bibtex
@inproceedings{
jung2026mambasl,
title={Mamba{SL}: Exploring Single-Layer Mamba for Time Series Classification},
author={Yoo-Min Jung and Leekyung Kim},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026},
url={https://openreview.net/forum?id=YDl4vqQqGP}
}
```
- Mamba: We are really grateful to the authors of Mamba for sharing their code and providing us with the opportunity to explore the potential of Mamba in time series classification. In particular, leaving the parameters for Mamba's ablation study in the codebase was a great help for our research.
- Time-Series-Library / aeon-toolkit: We are also grateful to the creators and maintainers of the two time series libraries, which provided us with the codebase for DL and non-DL models, respectively.
- UEA Archive / Medformer: We thank the authors of the UEA Archive and Medformer for sharing the well-preprocessed datasets which we used for our experiments.
- We also thank the authors of the baselines that we compared with for sharing their code and scripts, which we used to test the performance of the baselines in our experiments.
