
AudioIMU_new

GitHub repository for our paper: AudioIMU: Enhancing Inertial Sensing-Based Activity Recognition with Acoustic Models (to appear at ACM ISWC 2022)

IMU model

Train and evaluate DeepConvLSTM activity recognition models with IMU inputs only: lab_motion_train.py
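
For reference, below is a minimal PyTorch sketch of a DeepConvLSTM-style network; the layer sizes, window length, and class count are illustrative placeholders, not the exact configuration used in lab_motion_train.py.

```python
import torch
import torch.nn as nn

class DeepConvLSTM(nn.Module):
    """DeepConvLSTM-style sketch: temporal convolutions over the IMU window,
    followed by an LSTM and a per-window classifier."""
    def __init__(self, n_channels=6, n_classes=10, conv_filters=64, lstm_hidden=128):
        super().__init__()
        # Four 1-D convolutions along the time axis, one input channel per IMU axis.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, conv_filters, kernel_size=5), nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size=5), nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size=5), nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size=5), nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_filters, lstm_hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(lstm_hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, channels); Conv1d expects (batch, channels, time)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])   # logits from the last time step

# Example: a batch of 8 windows, 200 samples each, 6 IMU channels (accel + gyro).
logits = DeepConvLSTM()(torch.randn(8, 200, 6))
```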

Teacher models

Train and evaluate teacher model 1 (audio inputs): lab_audio_train.py

Train and evaluate teacher model 2 (audio + IMU inputs): lab_multimodal_train.py
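
The audio + IMU teacher fuses features from both modalities before classification. The sketch below illustrates a generic late-fusion design with assumed encoders and embedding sizes; it is not the exact architecture in lab_multimodal_train.py.

```python
import torch
import torch.nn as nn

class MultimodalTeacher(nn.Module):
    """Late-fusion teacher sketch: one encoder per modality, concatenated
    embeddings, and a shared classification head."""
    def __init__(self, audio_encoder, imu_encoder, audio_dim, imu_dim, n_classes):
        super().__init__()
        self.audio_encoder = audio_encoder   # e.g. a CNN over log-spectrograms
        self.imu_encoder = imu_encoder       # e.g. a DeepConvLSTM trunk
        self.head = nn.Sequential(
            nn.Linear(audio_dim + imu_dim, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, audio, imu):
        fused = torch.cat([self.audio_encoder(audio), self.imu_encoder(imu)], dim=-1)
        return self.head(fused)              # logits later used as soft targets

# Toy usage with placeholder encoders: 64-dim audio and 32-dim IMU embeddings, 10 classes.
teacher = MultimodalTeacher(
    audio_encoder=nn.Sequential(nn.Flatten(), nn.LazyLinear(64)),
    imu_encoder=nn.Sequential(nn.Flatten(), nn.LazyLinear(32)),
    audio_dim=64, imu_dim=32, n_classes=10,
)
logits = teacher(torch.randn(4, 1, 128, 100), torch.randn(4, 200, 6))
```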

Student model guided by teacher outputs:

Train and evaluate the student models with 15 participants: joint_trainfixlr_loso_individual.py
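
The student is trained on both the ground-truth labels and the teacher's soft outputs. Below is a standard soft-target distillation loss as a sketch; the temperature T and weight alpha are illustrative hyperparameters, not necessarily the values used in joint_trainfixlr_loso_individual.py.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=3.0, alpha=0.5):
    """Combine cross-entropy on hard labels with KL divergence between the
    temperature-softened teacher and student distributions."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                      # rescale gradients after dividing logits by T
    return alpha * hard + (1.0 - alpha) * soft
```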

If you want to run a parameter search for your own setting (especially if you experiment with a new model architecture or your own data), you can follow the approach in main_args_individuals.py

If you just want to run inference on the participants' data with our trained models, see sample_inference.ipynb
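
Roughly, inference amounts to loading a trained checkpoint and classifying segmented windows, as in the sketch below; the imported class name, checkpoint path, and window shape are assumptions for illustration, and sample_inference.ipynb shows the actual workflow.

```python
import torch
from models import DeepConvLSTM   # hypothetical class name; architectures live in models.py

model = DeepConvLSTM()
state = torch.load("student_participant01.pt", map_location="cpu")  # placeholder checkpoint path
model.load_state_dict(state)
model.eval()

with torch.no_grad():
    windows = torch.randn(32, 200, 6)            # stand-in for segmented IMU windows
    predictions = model(windows).argmax(dim=-1)  # one predicted activity per window
```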

====

All of the model architectures and FFT functions are defined in models.py
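
As one example of such a function, the snippet below computes a log-magnitude spectrogram with torch.stft; the FFT size and hop length are illustrative, not the exact settings in models.py.

```python
import torch

def log_spectrogram(waveform, n_fft=1024, hop_length=512):
    """Log-magnitude STFT features for a mono audio segment (illustrative parameters)."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(waveform, n_fft=n_fft, hop_length=hop_length,
                      window=window, return_complex=True)
    return torch.log(spec.abs() + 1e-6)          # (freq_bins, frames)

features = log_spectrogram(torch.randn(16000))   # e.g. one second of 16 kHz audio
```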

Weights of our tested models can be accessed at https://doi.org/10.18738/T8/S5RTFH. The accompanying data file is named rawAudioSegmentedData_window_10_hop_0.5_Test_NEW.pkl
