ASC-CL: Official PyTorch Implementation

Official PyTorch implementation of Continual Learning for On-Device Environmental Sound Classification.

If you have any questions on this repository or the related paper, feel free to create an issue or send me an email.

Abstract

Continuously learning new classes without catastrophic forgetting is a challenging problem for on-device environmental sound classification given the restrictions on computation resources (e.g., model size, running memory). To address this issue, we propose a simple and efficient continual learning method. Our method selects the historical data for the training by measuring the per-sample classification uncertainty. Specifically, we measure the uncertainty by observing how the classification probability of data fluctuates against the parallel perturbations added to the classifier embedding. In this way, the computation cost can be significantly reduced compared with adding perturbation to the raw data. Experimental results on the DCASE 2019 Task 1 and ESC-50 dataset show that our proposed method outperforms baseline continual learning methods on classification accuracy and computational efficiency, indicating our method can efficiently and incrementally learn new classes without the catastrophic forgetting problem for on-device environmental sound classification.
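The core idea of the uncertainty measure described above can be sketched as follows. This is an illustrative sketch only, not the repository's implementation; the function name, perturbation count, and noise scale are hypothetical. It adds several parallel Gaussian perturbations to a classifier embedding and scores uncertainty by how much the predicted-class probability fluctuates:

```python
import torch

def embedding_uncertainty(classifier, embedding, n_perturb=8, sigma=0.1):
    """Illustrative sketch: estimate per-sample uncertainty by perturbing
    the classifier embedding rather than the raw audio input."""
    # Stack n_perturb copies of the embedding and add parallel Gaussian noise.
    noisy = embedding.unsqueeze(0) + sigma * torch.randn(n_perturb, *embedding.shape)
    with torch.no_grad():
        # Classify every perturbed copy in one batched forward pass.
        probs = torch.softmax(classifier(noisy), dim=-1)
    # Uncertainty = fluctuation (std) of the top-class probability
    # across the perturbed copies.
    return probs.max(dim=-1).values.std().item()
```

Because the perturbations are applied to the compact embedding instead of the raw waveform, the extra cost is only a small batched forward pass through the classifier head.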

Getting Started

Setup Environment

Create the running environment with Anaconda:

conda env create -f environment.yml
conda activate asc

Results

Experiments produce two kinds of output: logs and results. Log files are saved under the logs directory, and the results, which contain the accuracy of each task and the memory-updating time, are saved under the results directory, both inside workspace:

workspace
    |_ logs 
        |_ [dataset]
            |_.log
            |_ ...
    |_ results
        |_ [dataset]
            |_.npy
            |_...

Data

We use the TAU-ASC and ESC-50 datasets as the training data. Put them into:

your_project_path/data/TAU_ASC
your_project_path/data/ESC-50-master

Then run ./data/generate_json.py:

python ./data/generate_json.py --mode train --dpath your_project_path/data
python ./data/generate_json.py --mode test --dpath your_project_path/data

Usage

To run the experiments in the paper, simply run train.sh. For other experiments, the role of each argument is as follows:

  • MODE: whether to use a CL method [finetune, replay]
  • MODEL: baseline CNN model or BC-ResNet [baseline, BC-ResNet]
  • MEM_MANAGE: memory update method [random, reservoir, uncertainty, prototype]
  • RND_SEED: random seed number
  • DATASET: dataset name [TAU-ASC, ESC-50]
  • MEM_SIZE: memory size, k = {300, 500}
  • UNCERT_MERTIC: perturbation method for uncertainty [shift, noise, noisytune (ours)]
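A hypothetical invocation combining the arguments above might look like the following; the variable names mirror the list, but the actual interface is defined in train.sh:

```shell
# Hypothetical example configuration; check train.sh for the real interface.
export MODE=replay              # use the replay CL method
export MODEL=BC-ResNet          # BC-ResNet backbone
export MEM_MANAGE=uncertainty   # uncertainty-based memory update (ours)
export RND_SEED=1
export DATASET=ESC-50
export MEM_SIZE=500
export UNCERT_MERTIC=noisytune  # perturbation method for uncertainty
# bash train.sh                 # launch the experiment
```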

Acknowledgements

Our implementations use the source code from the following repositories and users:

License

The project is available as open source under the terms of the MIT License.
