DFTS: Deep Feature Transmission Simulator
DFTS is a simulator intended for studying deep feature transmission over unreliable channels. If you use this simulator in your work, please cite the following paper:
H. Unnibhavi, H. Choi, S. R. Alvar, and I. V. Bajić, "DFTS: Deep Feature Transmission Simulator," demo paper at IEEE MMSP'18, Vancouver, BC, Aug. 2018. [link]
A recent study has shown that the power usage and latency of inference by deep AI models can be reduced if the model is split into two parts:
- One that runs on the mobile device
- The other that runs in the cloud
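The split can be illustrated with a minimal sketch, using plain Python functions to stand in for the layers of a deep model (the `split_pipeline` helper, the toy layers, and the split index below are all hypothetical, not part of the DFTS API):

```python
def split_pipeline(layers, split_index):
    """Split a sequential stack of layer functions into a device part and a
    cloud part, analogous to splitting a Keras model at a chosen layer."""
    def device_part(x):
        # Early layers run on the mobile device, producing a feature tensor.
        for layer in layers[:split_index]:
            x = layer(x)
        return x

    def cloud_part(features):
        # Remaining layers run in the cloud, producing the final output.
        x = features
        for layer in layers[split_index:]:
            x = layer(x)
        return x

    return device_part, cloud_part

# Three toy "layers"; split after the first one.
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
device, cloud = split_pipeline(layers, 1)
features = device(5)     # this intermediate tensor is what gets transmitted
result = cloud(features)
```

In DFTS the two parts are sub-models of a Keras network, and it is the intermediate feature tensor, produced at the split layer, that is transmitted over the unreliable channel.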
Our simulator is written in Python and works with Keras models. The user can choose a Keras model and specify the following:
- Layer at which the model is split
- Transmission parameters (currently supported):
- n-bit quantization
- error concealment techniques
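For intuition, uniform n-bit quantization maps each feature value onto one of 2^n discrete levels before transmission. A minimal sketch in pure Python (not the simulator's implementation; the function names and the explicit min/max arguments are assumptions):

```python
def quantize(values, n_bits, v_min, v_max):
    """Map each value in [v_min, v_max] to an integer index in [0, 2**n_bits - 1]."""
    step = (v_max - v_min) / (2 ** n_bits - 1)
    return [round((v - v_min) / step) for v in values]

def dequantize(indices, n_bits, v_min, v_max):
    """Recover approximate values from quantization indices."""
    step = (v_max - v_min) / (2 ** n_bits - 1)
    return [v_min + i * step for i in indices]

codes = quantize([0.0, 0.5, 1.0], 8, 0.0, 1.0)   # 8-bit quantization
approx = dequantize(codes, 8, 0.0, 1.0)           # reconstruction at the receiver
```

Error concealment then estimates the feature values lost in transmission (e.g., from neighbouring received values) before the cloud part of the model runs.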
Creating your environment
First clone this repository onto your local machine.
```
git clone https://github.com/SFU-Multimedia-Lab/DFTS.git
```
Create and activate a virtual environment on your machine, and navigate to the directory containing the repository.
Install the required dependencies.
```
pip install -r requirements.txt
```
If installation fails, delete the version pins (everything from '==' onward after each library name) in requirements.txt and run the command again.
The user interacts with the simulator mainly through the configuration files.
After initializing these with the desired configurations, run
```
python main.py -p params.yml
```
The params configuration file consists of the following:
| Parameter | Description | Example |
|-----------|-------------|---------|
| SplitLayer | Layer at which the model is split; must be one of the model's layer names | `block1_pool` in the case of VGG16 |
| OutputDir | Directory where the results of the simulation are stored | `'../simData'` |
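As a sketch, a `params.yml` fragment with these two entries might look like the following (the values are illustrative only; consult the sample folder for the full, authoritative file):

```yaml
SplitLayer: block1_pool   # one of the model's layer names
OutputDir: '../simData'   # where simulation results are written
```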
The taskParams configuration file consists of the following parameters for each selected task:
- reshapeDims: list denoting the reshape dimensions of the images
- num_classes: integer denoting the number of classes in the dataset
- metrics: a dictionary containing the metrics the model needs to be evaluated against
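A sketch of what a taskParams entry might look like for classification (the field names are taken from the list above; the values and the metric name are illustrative, not canonical):

```yaml
reshapeDims: [224, 224]   # reshape dimensions of the input images
num_classes: 1000         # number of classes in the dataset
metrics:                  # metrics to evaluate the model against
  accuracy: {}
```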
Currently, only the parameters provided in the configuration files are supported; the simulation will fail if the parameter names are changed.
Sample configuration files for classification and object detection are provided in the sample folder.
Examples of how to organize data for input to the simulator are available in the repository: a small subset of sample images is included in the repository itself. First switch to the test-images branch by executing the following:
```
git checkout test-images
```
Navigate to the sampleImages folder contained in the sample directory.
The simulator outputs timing and indexing information to the terminal.
The data produced by the simulator is stored in the specified directory as a NumPy array in a .npy file. The name of the file reflects the parameters of the simulation.
For example, if the following parameters are used:
- splitlayer: block1_pool
- Gilbert channel with 10 percent loss and a burst length of 1
- 8-bit quantization
- error concealment enabled
The resulting file name encodes all of these settings.
We appreciate all contributions. If you are planning to contribute bug fixes, please do so without further discussion.
If you plan to contribute new features, utility functions, or extensions to the core, please open an issue and discuss the feature with us first. Sending a PR without discussion may result in a rejected PR, because we might be taking the core in a direction you are not aware of.