
Title

Functional Knowledge Transfer with Self-supervised Representation Learning

Venue

Accepted at IEEE International Conference on Image Processing (ICIP 2023)

Chhipa, Prakash Chandra, Muskaan Chopra, Gopal Mengi, Varun Gupta, Richa Upadhyay, Meenakshi Subhash Chippa, Kanjar De, Rajkumar Saini, Seiichi Uchida, and Marcus Liwicki. "Functional Knowledge Transfer with Self-supervised Representation Learning." In 2023 IEEE International Conference on Image Processing (ICIP), pp. 3339-3343. IEEE, 2023.

Article

IEEE | arXiv version

Poster & Presentation Video

Click here for enlarged view

Video presentation (5+ minutes) describing the work


Abstract

This work investigates the unexplored usability of self-supervised representation learning in the direction of functional knowledge transfer. Here, functional knowledge transfer is achieved by jointly optimizing a self-supervised pseudo-task and a supervised learning task, improving the supervised task's performance. Recent progress in self-supervised learning relies on large volumes of data, which constrains its application to small-scale datasets. This work presents a simple yet effective joint training framework in which human-supervised task learning is reinforced by learning self-supervised representations just-in-time, and vice versa. Experiments on three public datasets from different visual domains (Intel Image, CIFAR, and APTOS) show consistent performance improvements on classification tasks during joint optimization. Qualitative analysis also supports the robustness of the learnt representations.

Method

The SimCLR contrastive learning method is employed for the self-supervised representation learning component.
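A minimal sketch of the joint objective, assuming a PyTorch setup: a SimCLR-style NT-Xent contrastive loss over two augmented views is optimized together with the supervised cross-entropy loss. Function names and the loss weighting below are illustrative, not the repository's exact API.

```python
# Hedged sketch of joint optimization: supervised CE + SimCLR NT-Xent.
# The weighting (ssl_weight) and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR NT-Xent loss over a batch of paired view embeddings (N, d)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d)
    sim = z @ z.T / temperature                          # cosine similarities
    n = z1.size(0)
    # Mask self-similarities so a sample is never its own candidate.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))
    # The positive for sample i is its other augmented view.
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def joint_loss(logits, labels, z1, z2, ssl_weight=1.0):
    """Supervised task loss plus the self-supervised pseudo-task loss."""
    return F.cross_entropy(logits, labels) + ssl_weight * nt_xent_loss(z1, z2)
```

Both terms backpropagate through the shared encoder, which is what allows the self-supervised pseudo-task to reinforce the supervised task just-in-time, as described in the abstract.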

Datasets

Three publicly available datasets from diverse visual domains are chosen for experimentation (a data-loading sketch follows the list).

  1. CIFAR-10 - The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck), with 6,000 images per class. There are 50,000 training images and 10,000 test images.
  2. Intel Images - Images of natural scenes from around the world. The dataset contains around 25,000 images of size 150x150 (17,034 of which are used), distributed across six categories (buildings, forest, glacier, mountain, sea, and street).
  3. APTOS 2019 - A set of 3,662 retina images captured using fundus photography under a variety of imaging conditions. A clinician has rated each image for the severity of diabetic retinopathy on a scale of 0 to 4 (0: No DR, 1: Mild, 2: Moderate, 3: Severe, and 4: Proliferative DR).
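As a sketch of how such data might be prepared for the contrastive pseudo-task, here is a SimCLR-style two-view augmentation pipeline for CIFAR-10 using torchvision; the transform parameters are assumptions, not the repository's exact settings.

```python
# Hedged sketch: two independently augmented views per image, as SimCLR needs.
import torchvision.transforms as T
from torchvision.datasets import CIFAR10

simclr_augment = T.Compose([
    T.RandomResizedCrop(32),
    T.RandomHorizontalFlip(),
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])

class TwoViews:
    """Return two independently augmented views of the same image."""
    def __init__(self, transform):
        self.transform = transform
    def __call__(self, x):
        return self.transform(x), self.transform(x)

# Each sample yields ((view1, view2), label): the views feed the contrastive
# loss, while the label feeds the supervised loss during joint training.
train_set = CIFAR10(root="data", train=True, download=True,
                    transform=TwoViews(simclr_augment))
```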

Results

All experiments use a batch size of 256 and a ResNet50 encoder.
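A sketch of what a shared ResNet50 encoder with both task heads might look like; the projection-head shape and class count are assumptions, not the repository's exact configuration.

```python
# Hedged sketch: shared ResNet50 backbone with a supervised classification
# head and a SimCLR-style projection head for the contrastive pseudo-task.
import torch.nn as nn
from torchvision.models import resnet50

class JointEncoder(nn.Module):
    def __init__(self, proj_dim=128, num_classes=10):
        super().__init__()
        backbone = resnet50(weights=None)
        feat_dim = backbone.fc.in_features          # 2048 for ResNet50
        backbone.fc = nn.Identity()                 # drop the ImageNet head
        self.backbone = backbone
        # Projection head for the contrastive loss (common SimCLR practice).
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )
        # Classification head for the supervised task.
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        h = self.backbone(x)
        return self.classifier(h), self.projector(h)
```

The supervised head consumes the 2048-d backbone features directly, while the projection head feeds the contrastive loss, so both objectives update the same encoder.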

Qualitative

Model Weights

  1. Functional Knowledge Transfer Jointly Trained Models
     a. ResNet50 on CIFAR10
     b. ResNet50 on APTOS 2019
     c. ResNet50 on Intel Images

  2. SSL Pretrained Models
     a. ResNet50 on CIFAR10
     b. ResNet50 on APTOS 2019
     c. ResNet50 on Intel Images

Commands

  1. Pretrain (for representational transfer)

python -m pretrain <resnet_version> <device> <dataset>

  2. Finetune - downstream task

python -m finetune train <resnet_version> <device> <dataset> <pretrained_model_weights_path>

  3. Joint training (for functional knowledge transfer; an example invocation is shown below)

python -m joint_train <resnet_version> <device> <dataset>
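For example, a hypothetical invocation (the argument values here are illustrative; check the scripts for the exact strings they accept):

python -m joint_train resnet50 cuda:0 cifar10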
