DTS - Deep Time-Series Forecasting

⚠️ ⚠️ The library is not actively maintained ⚠️ ⚠️

DTS is a Keras library that provides multiple deep architectures aimed at multi-step time-series forecasting.

The Sacred library is used to keep track of the different experiments and make them reproducible.

Installation

DTS is compatible with Python 3.5+ and is tested on Ubuntu 16.04.

The setup.py script of DTS will not attempt to install Sacred, Keras, or a backend for Keras. Thus, before installing DTS, you have to manually install:

  • The CPU or GPU version of TensorFlow <=1.14.0 (GPU recommended)
  • Keras <=2.2.4
  • Sacred <=0.7.5
  • (Optional, but recommended) MongoDB

This choice has been made in order to avoid any possible dependency problem for the user. If you are already a Keras/TensorFlow user, mind that if your version of TensorFlow is greater than or equal to 1.14.0 then you need to check out this issue to install Sacred correctly.

I have tested dts with the following dependencies:

ENV 1                 ENV 2
numpy==1.14.2         numpy==1.17.0
tensorflow==1.12.0    tensorflow==1.14.0
keras==2.1.6          keras==2.2.4
sacred==0.7.4         sacred==0.7.5
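
For example, to reproduce ENV 2 before installing DTS (a sketch; use tensorflow-gpu==1.14.0 instead for GPU support, and mind the Sacred/TensorFlow 1.14 note above):

pip install numpy==1.17.0 tensorflow==1.14.0 keras==2.2.4 sacred==0.7.5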

To install dts from source:

git clone https://github.com/albertogaspar/dts.git
cd dts
pip install -e .

What's in it & How to use

Time-Series Forecasting

The package includes several deep learning architectures that can be used for multi-step time-series forecasting. The package also provides several utilities to cast the forecasting problem into a supervised machine learning problem. Specifically, a sliding-window approach is used: each model is given a time window of size nT and asked to output a prediction for the following nO timesteps (the sketch below illustrates this windowing).
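
As an illustration of the idea only (this is not the library's own utility; the function below is hypothetical):

import numpy as np

def sliding_window(series, n_T, n_O):
    # Pair each window of n_T consecutive observations with the
    # n_O observations that immediately follow it.
    X, y = [], []
    for t in range(len(series) - n_T - n_O + 1):
        X.append(series[t:t + n_T])              # input window (size n_T)
        y.append(series[t + n_T:t + n_T + n_O])  # target window (size n_O)
    return np.asarray(X), np.asarray(y)

# 100 timesteps: predict the next 5 values from the previous 20
X, y = sliding_window(np.arange(100, dtype=float), n_T=20, n_O=5)
print(X.shape, y.shape)  # (76, 20) (76, 5)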

Run Experiment

python FILENAME.py --add_config FULLPATH_TO_YAML_FILE 

or:

python FILENAME.py --add_config FULLPATH_TO_YAML_FILE --grid_search 

grid_search: defines whether or not you are searching for the best hyperparameters. If True, multiple experiments are run, each with a different combination of hyperparameters; the process terminates when all possible combinations of hyperparameters have been explored.
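
Conceptually (an illustration of the behaviour, not the actual dts implementation; the grid below is hypothetical), exhausting the grid amounts to:

from itertools import product

grid = {'units': [32, 64], 'learning_rate': [1e-3, 1e-4]}  # hypothetical grid

keys = list(grid)
for values in product(*grid.values()):  # all 2 x 2 = 4 combinations
    params = dict(zip(keys, values))
    print(params)                       # one experiment per combination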

add_config: The experiment's hyperparameters should be defined in a yaml file inside the config folder (see How to write a config file for more details). FULLPATH_TO_YAML_FILE is the full path to the .yaml file that stores your configuration. The main function for your model should always look similar to this one:
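
A minimal sketch (the experiment name and hyperparameters are placeholders; only the Sacred calls are real API):

from sacred import Experiment

ex = Experiment('my_model')

@ex.config
def config():
    # defaults; overridden by the values in --add_config FULLPATH_TO_YAML_FILE
    n_T = 20             # input window size
    n_O = 5              # forecast horizon
    learning_rate = 1e-3

@ex.automain
def main(n_T, n_O, learning_rate, _run):
    # build, train and evaluate the Keras model here,
    # then log and return the metric of interest
    score = 0.0  # placeholder for the validation metric
    _run.log_scalar('val_loss', score)
    return score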

observer: all the important information about an experiment can be stored either in MongoDB (the default choice) or in multiple files (txt and json) inside a given folder (dts/logs/sacred/). The Mongo Observer stores all information in a MongoDB database. If you want to use the file-based logger instead, launch the script with the additional argument --observer file (once again, the default choice is --observer mongodb).

If you want to train a model using pretrained weights, just run the model providing the parameter --load followed by the full path to the file containing the weights.

python FILENAME.py --add_config FULLPATH_TO_YAML_FILE --load FULLPATH_TO_WEIGHTS 

The model will be initialized with these weights before training.
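
In Keras terms this is roughly the following (the toy model here is illustrative only; the architecture must match the one that produced the checkpoint):

from keras.models import Sequential
from keras.layers import Dense

# the architecture must match the saved weights
model = Sequential([Dense(5, input_shape=(20,))])
model.compile(optimizer='adam', loss='mse')
model.load_weights('FULLPATH_TO_WEIGHTS')  # the path passed via --load
# training then starts from these weights instead of a random initialization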

Datasets

  • Individual household electric power consumption Data Set: Measurements of electric power consumption in one household with a one-minute sampling rate over a period of almost 4 years. Dataset & Description.
  • GEFCom 2014: hourly consumption data coming from ISO New England (aggregated consumption). Dataset & Description, Paper. If you use the GEFCom2014 data you should cite this paper to acknowledge the source.

With DTS you can model your input values in many different ways and then feed them to your favourite deep learning architectures. E.g.:

  • you can decide to include exogenous features (like temperature readings) if they are available.

  • you can decide to apply detrending to the time series (see dts.datasets.*.apply_detrend for more details).

See how to format your data or check out the examples in dts.examples to learn more about data formatting and the possibilities offered by DTS.
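
As a rough illustration of detrending in general (this is not dts.datasets.*.apply_detrend, whose exact behaviour may differ):

import numpy as np

def remove_linear_trend(series):
    # Fit a linear trend, subtract it, and keep it around so that
    # forecasts on the detrended series can be re-trended later.
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, deg=1)
    trend = slope * t + intercept
    return series - trend, trend

series = np.linspace(0.0, 10.0, 200) + np.random.randn(200) * 0.1
detrended, trend = remove_linear_trend(series)  # forecast on `detrended`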

Available architectures

Included architectures are:

  • Recurrent Neural Networks (Elman, LSTM, GRU) with different training procedures:

    • MIMO: a Dense network is used to map the last state of the RNN to the output space of size nO. The training and inference procedures are the same.

    • Recursive: the RNN is trained to predict the next step, i.e. the output space during training has size 1. During inference, the network is fed with (part of) the input plus its own predictions in a recurrent fashion, until an output vector of length nO is obtained (see the sketch after this list).

  • Seq2Seq:

    different training procedures are available (see Professor Forcing: A New Algorithm for Training Recurrent Networks for more details):

    • Teacher Forcing
    • Self-Generated Samples
    • Professor Forcing: TODO

  • Temporal Convolutional Neural Networks:

    • MIMO training/inference
    • Recursive training/inference: TODO (the method to perform prediction with this strategy is available in dts.models.TCN.py, but it has not been tested, and there is no example of using a TCN in this mode in dts.examples.tcn.py)
  • Feedforward Neural Networks:

    • MIMO training/inference
    • Recursive training/inference
  • ResNet, a feedforward neural network with residual connections:

    • MIMO training/inference
    • Recursive training/inference
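
A minimal sketch of the recursive strategy described above (a toy one-step Keras model; names and shapes are illustrative, not the dts implementation):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

n_T, n_O = 20, 5

# one-step-ahead model: maps a window of n_T values to the next value
model = Sequential([LSTM(32, input_shape=(n_T, 1)), Dense(1)])
model.compile(optimizer='adam', loss='mse')

def recursive_forecast(model, window):
    # Feed the model its own predictions until n_O steps are produced.
    history = list(window)
    preds = []
    for _ in range(n_O):
        x = np.asarray(history[-n_T:], dtype=float).reshape(1, n_T, 1)
        y_hat = float(model.predict(x)[0, 0])  # next-step prediction
        preds.append(y_hat)
        history.append(y_hat)                  # slide the window forward
    return np.asarray(preds)

forecast = recursive_forecast(model, np.zeros(n_T))  # shape (n_O,)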

Project Structure & TODO list

  • dts: contains models, utilities and examples to train and test different deep learning models.
  • data: contains raw data and already preprocessed data (.npz, .npy files).
  • config: yaml files used for the hyperparameter grid search of all architectures.
  • weights: contains models' weights. If you use Sacred, the artifactID field in each document/json file contains the name of the trained model that achieved the related performance.
  • log: if you use Sacred without MongoDB, all the relevant files are stored in this directory.

Sacred Collected Information

The animation below provides an intuitive explanation of the information collected by Sacred (using MongoDB as Observer). The example refers to a completed experiment of a TCN model trained on the Individual household electric power consumption Data Set (for brevity, 'uci'):

When MongoDB is used as the Observer, the information collected for an experiment is stored in a single document. In the animation above, documents are visualized using MongoDB Compass.
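
For instance, completed runs can be inspected with pymongo (assuming Sacred's default database name sacred and collection runs; adjust if you configured the observer differently):

from pymongo import MongoClient

client = MongoClient('localhost', 27017)
runs = client['sacred']['runs']  # Sacred's default db/collection

# fetch the most recent completed experiment and inspect its document
run = runs.find_one({'status': 'COMPLETED'}, sort=[('_id', -1)])
if run is not None:
    print(run['experiment']['name'], run['result'], run['config'])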

Reference

This is the code used for the Deep Learning for Time Series Forecasting: The Electric Load Case paper. Mind that the code has changed a bit since then, so you may notice some differences from the models described in the paper. If you encounter any problem or have any doubt, don't hesitate to contact me.

If you find it interesting, please consider citing us:

@article{gasparin2019deep,
  title={Deep Learning for Time Series Forecasting: The Electric Load Case},
  author={Gasparin, Alberto and Lukovic, Slobodan and Alippi, Cesare},
  journal={arXiv preprint arXiv:1907.09207},
  year={2019}
}