One for All: Unified Workload Prediction for Dynamic Multi-tenant Edge Cloud Platforms (KDD'23 Research Track)
This is the official PyTorch implementation of DynEformer from the paper: One for All: Unified Workload Prediction for Dynamic Multi-tenant Edge Cloud Platforms.
🚩News(May 31, 2023): We will soon release an updated mechanism for global pooling.
Before predicting with DynEformer, first build a Global Pool for your data using vade_main, which identifies and stores patterns in your time series data. In our work, the Global Pool is built from the seasonal component of edge cloud server load; when replicating the experiment, make sure the decomposed data has been generated before running vade_main.
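Since the Global Pool is built from the seasonal component of server load, you need a decomposition step first. Below is a minimal illustrative sketch using a rolling-mean detrend plus per-phase averaging; it is not the repo's exact preprocessing, and the `period` value (24 for hourly data with a daily cycle) is an assumption you should set for your own data.

```python
import numpy as np
import pandas as pd

def seasonal_component(series: pd.Series, period: int = 24) -> pd.Series:
    """Extract a simple seasonal component: detrend with a centered rolling
    mean, then average each phase of the period. Illustrative only; the
    repo's decomposition may differ."""
    trend = series.rolling(period, center=True, min_periods=1).mean()
    detrended = series - trend
    # average all observations that share the same phase within the period
    phase = np.arange(len(series)) % period
    return detrended.groupby(phase).transform("mean")

# toy example: a noisy daily cycle over one week of hourly load
t = np.arange(7 * 24)
load = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(0).normal(size=len(t))
seasonal = seasonal_component(pd.Series(load), period=24)
```

By construction the result repeats every `period` steps, which is the shape the pattern-pooling step expects.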
- Python 3.7
- matplotlib == 3.5.3
- numpy == 1.21.6
- pandas == 1.3.5
- scikit_learn == 1.0.2
- torch == 1.13.0
Dependencies can be installed using the following command:

```
pip install -r requirements.txt
```
The ECW dataset used in the paper can be downloaded from the ECWDataset repo. The required data files should be placed in the data folder. A demo slice of the ECW data is illustrated in the following figure.
Note that the data in the ECWDataset are min-max normalized values; to reproduce the paper, you can use the data in this repository directly.
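If you bring your own raw data, it should be min-max normalized to match the ECW format. A minimal sketch (column-wise scaling is an assumption; adjust the axis to your data layout):

```python
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """Scale each column (one series per column) to [0, 1]."""
    x_min = x.min(axis=0, keepdims=True)
    x_max = x.max(axis=0, keepdims=True)
    # small epsilon guards against constant (zero-range) series
    return (x - x_min) / (x_max - x_min + 1e-8)

# toy example: two load series with different scales
loads = np.array([[10.0, 300.0],
                  [20.0, 100.0],
                  [30.0, 200.0]])
normalized = min_max_normalize(loads)
```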
To reproduce the experiments or to adapt the whole process to your own dataset, follow these steps:
Reproduce:
- run `models/GlobalPooling/vade_pooling/global_pooling_main.py` to generate the global pool
- run `models/DynEformer/dyneformer_main.py` to train and save the DynEformer
- run `models/DynEformer/dynet_app_switch.py` or `models/DynEformer/dynet_newadded.py` to run inference
Adapt to your own dataset:
- run `models/GlobalPooling/vade_search_cluster.py` to find the best K for the global pool
- run `models/GlobalPooling/vade_pooling/global_pooling_main.py` to generate the global pool
- run `models/DynEformer/dyneformer_main.py` to train and save the DynEformer
- run `models/DynEformer/dynet_app_switch.py` or `models/DynEformer/dynet_newadded.py` to run inference
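The best-K search in the first step amounts to picking a cluster count for the pooled patterns. The repo's `vade_search_cluster.py` does this with a VaDE-based search; as a simplified stand-in for illustration, the same idea can be sketched with plain KMeans and the silhouette score (both from the scikit-learn version in the requirements):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k(patterns: np.ndarray, k_range=range(2, 8)) -> int:
    """Pick a cluster count by maximizing the silhouette score.
    Simplified stand-in for the repo's VaDE-based search."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(patterns)
        scores[k] = silhouette_score(patterns, labels)
    return max(scores, key=scores.get)

# toy seasonal patterns: three well-separated groups of 12-step patterns
rng = np.random.default_rng(0)
patterns = np.vstack([rng.normal(c, 0.1, size=(20, 12)) for c in (0.0, 1.0, 2.0)])
k = best_k(patterns)  # → 3 for these well-separated groups
```

The chosen K then becomes the number of pattern clusters stored in the global pool.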