Datasets and PyTorch code for RFBoost: Understanding and Boosting Deep WiFi Sensing via Physical Data Augmentation.
- Clone this repo and download the preprocessed Widar3 dataset from the link (password: hku-aiot-rfboost24). You can also download the raw data from the Widar3 website.

  ```shell
  unzip NPZ-pp.zip -d "dataset/NPZ-pp/"
  ```
- (Optional) Set the cache path in `config.yaml`:

  ```yaml
  cache_folder: "/path/to/cache/dir/"
  ```
- Use Conda to manage the Python environment:

  ```shell
  # creates the rfboost-pytorch2 conda env
  conda env create -f environment.yml
  ```
- Start the batch runner with:

  ```shell
  python source/batch_runner.py
  ```
- If everything goes well, training logs are recorded in `./log/<dataset>/<model>/`, final results are available under `./record`, and TensorBoard logs are located at `./runs`.
The current version supports data augmentation methods for the Widar3 dataset and models using DFS input. In `batch_runner.py`, uncomment the method you want to use. Available options include "PCA", "All Subcarriers", "RDA", and "ISS-6". Note that customized augmentation methods are defined in `augment.py`. (TODO: We will refactor the definition logic in the future.)
The batch runner is a multi-task queue that lets you submit multiple augmentation combinations at once to compare their performance. Currently, you can adjust the dataset, model, `default_window`, augmentation, hyperparameters, and so on. By default, it runs the RFNet model on the Widar3 dataset with default parameters, evaluating the Cross-RX setting.
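The queue pattern can be sketched as follows. The field names (`dataset`, `model`, `augment`, `default_window`) and the dispatch step are illustrative assumptions, not the repo's actual `batch_runner.py` API:

```python
from itertools import product

# Hypothetical task grid: each combination becomes one training run.
datasets = ["Widar3"]
models = ["RFNet"]
augmentations = ["PCA", "All Subcarriers", "RDA", "ISS-6"]

tasks = [
    {"dataset": d, "model": m, "augment": a, "default_window": 256}
    for d, m, a in product(datasets, models, augmentations)
]

# Each task would then be dispatched sequentially (e.g. by invoking the
# training entry point with these settings); here we only inspect the queue.
for t in tasks:
    print(t["dataset"], t["model"], t["augment"])
```

Submitting the grid once and letting the runner drain it avoids hand-launching one training job per augmentation.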
The original data are stored in the `dataset/` directory, but different tasks require different data splits, so the split files are saved in the `source/<model>/` folder. By default, `main.py` also supports K-fold cross-validation. Users can write their own augmentation rules in `augment.py`.
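A custom rule might look like the sketch below: a subcarrier-selection augmentation that keeps the most informative subcarriers of a CSI window. The function name, signature, and variance criterion are assumptions for illustration, not the repo's actual ISS implementation:

```python
import numpy as np

def select_subcarriers_by_variance(csi, k=6):
    """Illustrative augmentation rule: keep the k subcarriers whose
    amplitude varies most over time (a rough proxy for motion sensitivity).

    csi: array of shape (time, subcarriers); returns shape (time, k).
    """
    variances = np.var(np.abs(csi), axis=0)   # per-subcarrier variance
    top_k = np.argsort(variances)[::-1][:k]   # indices of the k largest
    return csi[:, np.sort(top_k)]             # keep original subcarrier order

# Usage: 100 time steps x 30 subcarriers -> 100 x 6 after selection.
dummy = np.random.randn(100, 30)
out = select_subcarriers_by_variance(dummy, k=6)
print(out.shape)
```

Each selected subset can then be fed through the DFS pipeline as an additional augmented view of the same sample.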
This repository is built upon the UniTS repo. We are grateful for their initial work.
```bibtex
@article{hou2024rfboost,
  author  = {Hou, Weiying and Wu, Chenshu},
  title   = {RFBoost: Understanding and Boosting Deep WiFi Sensing via Physical Data Augmentation},
  year    = {2024},
  journal = {Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},
}
```
This project is licensed under the GPL v3 License - see the LICENSE file for details.