Note
OpenLTM is an open codebase that collects prevailing architectures of large time-series models. It is not intended for checkpoint reproduction. We aim to provide a pipeline for developing and evaluating large time-series models, covering three tasks: supervised training, large-scale pre-training, and adaptation.
For deep time series models and task-specific benchmarks, we recommend Time-Series-Library and this comprehensive Survey.
🚩 News (2025.03) Many thanks for the implementation of TTMs from frndtls.
🚩 News (2024.12) Many thanks for the implementation of GPT4TS from khairulislam.
🚩 News (2024.10) We include five large time-series models, release pre-training logic, and provide scripts.
LTM (Large Time-Series Model) refers to a family of scalable deep models built on foundation backbones (e.g., Transformers) and large-scale pre-training, applicable to a variety of time series data and diverse downstream tasks. For more information, we list some related slides: [CN], [Eng].
- Timer-XL - Timer-XL: Long-Context Transformers for Unified Time Series Forecasting. [ICLR 2025], [Code]
- Moirai - Unified Training of Universal Time Series Forecasting Transformers. [ICML 2024], [Code]
- Timer - Timer: Generative Pre-trained Transformers Are Large Time Series Models. [ICML 2024], [Code]
- Moment - MOMENT: A Family of Open Time-series Foundation Models. [ICML 2024], [Code]
- TTMs - Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series. [Arxiv 2024], [Code]
- GPT4TS - One Fits All: Power General Time Series Analysis by Pretrained LM. [NeurIPS 2023], [Code]
- AutoTimes - AutoTimes: Autoregressive Time Series Forecasters via Large Language Models. [NeurIPS 2024], [Code]
- LLMTime - Large Language Models Are Zero-Shot Time Series Forecasters. [NeurIPS 2023], [Code]
- Time-LLM - Time-LLM: Time Series Forecasting by Reprogramming Large Language Models. [ICLR 2024], [Code]
- Chronos - Chronos: Learning the Language of Time Series. [TMLR 2024], [Code]
- Time-MoE - Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts. [ICLR 2025], [Code]
- TimesFM - A Decoder-Only Foundation Model for Time-Series Forecasting. [ICML 2024], [Code]
- Install Python 3.11. For convenience, execute the following command:
pip install -r requirements.txt
- Place the downloaded data in the folder `./dataset`. Here is a dataset summary.
- For univariate pre-training:
  - UTSD contains 1 billion time points for large-scale pre-training (in numpy format): [Download]. A minimal windowing sketch follows this list.
  - ERA5-Family (40-year span, thousands of variables) for domain-specific models: [Download].
- For supervised training or model adaptation:
  - Datasets from TSLib: [Download].
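Since the UTSD arrays come in numpy format, here is a minimal, repository-agnostic sketch of slicing one series into fixed-length context/target windows for pre-training. The file name and window sizes are placeholders, not the defaults used by the provided scripts.

```python
import numpy as np

# Placeholder path: any 1-D array saved with np.save works for this sketch.
series = np.load("./dataset/example_series.npy")   # shape: (num_points,)
context_len, pred_len, stride = 672, 96, 96        # assumed window sizes

# Slide a (context, target) window over the series with the given stride.
windows = [
    (series[s : s + context_len],                               # model input
     series[s + context_len : s + context_len + pred_len])      # prediction target
    for s in range(0, len(series) - context_len - pred_len + 1, stride)
]
print(f"{len(windows)} training windows from {len(series)} points")
```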
- We provide pre-training and adaptation scripts under the folder `./scripts/`. You can conduct experiments using the following examples; a model-agnostic sketch of rolling forecasting follows them:
# Supervised training
# (a) one-for-one forecasting
bash ./scripts/supervised/forecast/moirai_ecl.sh
# (b) one-for-all (rolling) forecasting
bash ./scripts/supervised/rolling_forecast/timer_xl_ecl.sh
# Large-scale pre-training
# (a) pre-training on UTSD
bash ./scripts/pretrain/timer_xl_utsd.sh
# (b) pre-training on ERA5
bash ./scripts/pretrain/timer_xl_era5.sh
# Model adaptation
# (a) full-shot fine-tune
bash ./scripts/adaptation/full_shot/timer_xl_etth1.sh
# (b) few-shot fine-tune
bash ./scripts/adaptation/few_shot/timer_xl_etth1.sh
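To clarify the two supervised settings above: one-for-one forecasting predicts a fixed horizon in a single pass, while one-for-all (rolling) forecasting feeds each prediction back as context to extend the horizon autoregressively. The sketch below is model-agnostic; the `model` callable is a placeholder, not an interface from this repository.

```python
import numpy as np

def rolling_forecast(model, context: np.ndarray, pred_len: int, horizon: int) -> np.ndarray:
    """Autoregressively roll `model` until `horizon` points are produced."""
    history = context.copy()
    outputs = []
    while sum(len(o) for o in outputs) < horizon:
        step = model(history[-len(context):])        # predict the next pred_len points
        outputs.append(step[:pred_len])
        history = np.concatenate([history, step[:pred_len]])
    return np.concatenate(outputs)[:horizon]

# Toy usage: a "model" that simply repeats the last observed value.
naive = lambda ctx: np.full(96, ctx[-1])
forecast = rolling_forecast(naive, np.arange(672, dtype=float), pred_len=96, horizon=288)
print(forecast.shape)   # (288,)
```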
- Develop your large time-series model.
- Add the model file to the folder `./models`. You can follow the example of `./models/timer_xl.py` (see the sketch after this list).
- Include the newly added model in `Exp_Basic.model_dict` of `./exp/exp_basic.py`.
- Create the corresponding scripts under the folder `./scripts`.
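As a starting point, here is a minimal sketch of a model file in the style that `./models/timer_xl.py` follows: a `Model` class constructed from a `configs` namespace. The attribute names (`seq_len`, `pred_len`) and the linear backbone are illustrative assumptions; match them to the existing models in `./models`.

```python
import torch.nn as nn

class Model(nn.Module):
    """Placeholder architecture: a single linear map from context to horizon."""
    def __init__(self, configs):
        super().__init__()
        self.seq_len = configs.seq_len      # assumed config fields
        self.pred_len = configs.pred_len
        self.projection = nn.Linear(self.seq_len, self.pred_len)

    def forward(self, x):                   # x: [batch, seq_len, num_variates]
        out = self.projection(x.permute(0, 2, 1))   # project along the time axis
        return out.permute(0, 2, 1)                 # [batch, pred_len, num_variates]
```

Once the file is in place, map its name to the module in `Exp_Basic.model_dict`, mirroring the existing entries in `./exp/exp_basic.py`.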
- Or evaluate the zero-shot performance of large time-series models. Here we list some available checkpoints (a loading sketch follows the list):
- Chronos: https://huggingface.co/amazon/chronos-t5-base
- Moirai: https://huggingface.co/Salesforce/moirai-1.0-R-base
- TimesFM: https://huggingface.co/google/timesfm-1.0-200m
- Timer-XL: https://huggingface.co/thuml/timer-base-84m
- Time-MoE: https://huggingface.co/Maple728/TimeMoE-50M
- TTMs: https://huggingface.co/ibm-research/ttm-research-r2
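For instance, a zero-shot forecast with the Chronos checkpoint above can be obtained via the `chronos-forecasting` package. The snippet below follows its documented interface, but argument names and defaults may differ across versions, and the context tensor is random placeholder data.

```python
import torch
from chronos import ChronosPipeline   # pip install chronos-forecasting

pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-base",
    device_map="cpu",                  # or "cuda"
    torch_dtype=torch.float32,
)

context = torch.randn(1, 512)          # placeholder for a real univariate series
forecast = pipeline.predict(context, prediction_length=24)   # [series, samples, pred_len]
point_forecast = forecast.median(dim=1).values               # per-step sample median
print(point_forecast.shape)            # torch.Size([1, 24])
```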
Note
LTMs are still small compared to foundation models of other modalities (for example, RTX 4090s suffice for adaptation and A100s for pre-training).
If you find this repo helpful, please cite our papers.
@inproceedings{liutimer,
title={Timer: Generative Pre-trained Transformers Are Large Time Series Models},
author={Liu, Yong and Zhang, Haoran and Li, Chenyu and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
booktitle={Forty-first International Conference on Machine Learning}
}
@article{liu2024timer,
title={Timer-XL: Long-Context Transformers for Unified Time Series Forecasting},
author={Liu, Yong and Qin, Guo and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
journal={arXiv preprint arXiv:2410.04803},
year={2024}
}
We appreciate the following GitHub repos a lot for their valuable code and efforts:
- Time-Series-Library (https://github.com/thuml/Time-Series-Library)
- Large-Time-Series-Model (https://github.com/thuml/Large-Time-Series-Model)
- AutoTimes (https://github.com/thuml/AutoTimes)
If you have any questions or want to use the code, feel free to contact:
- Yong Liu (liuyong21@mails.tsinghua.edu.cn)
- Guo Qin (qinguo24@mails.tsinghua.edu.cn)