Comparison of Feature Extraction Methods on Free-hand Sketches
Updated Apr 3, 2021 - Jupyter Notebook
A one-class classification approach that uses the error of transforming images into a single image.
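A rough sketch of this idea under my own reading of the description (not the repository's actual method): a network is trained to map in-class images onto one fixed target image, and the transformation error at test time serves as the one-class score. The toy network, target image, and dummy data below are assumptions.

```python
# Sketch: one-class classification via error of transforming images into one image.
# Not the repository's code; network, target, and data are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(                 # toy image-to-image network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
target = torch.rand(1, 3, 32, 32)    # the single target image (assumed fixed)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(100):              # dummy in-class images stand in for a real loader
    x = torch.rand(8, 3, 32, 32)
    out = net(x)
    loss = F.mse_loss(out, target.expand_as(out))
    opt.zero_grad()
    loss.backward()
    opt.step()

def anomaly_score(image):
    """Higher transformation error suggests the image is out-of-class."""
    with torch.no_grad():
        return F.mse_loss(net(image), target).item()
```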
Pretext pre-training -> image segmentation pipeline. Uses contrastive learning with a ViT encoder. Studies the effects of dataset size, dataset similarity, and fine-tuning.
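As a generic illustration of this kind of two-stage pipeline (not the repository's code), the sketch below pre-trains an encoder with a simplified contrastive loss and then fine-tunes it with a segmentation head. A small convolutional encoder stands in for the ViT, and the augmentations, shapes, and training loops are assumptions.

```python
# Stage 1: contrastive pretext pre-training; Stage 2: segmentation fine-tuning.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                         # toy conv encoder standing in for a ViT
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
proj = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 32))

def nt_xent(z1, z2, tau=0.5):
    """Simplified NT-Xent loss: each view's positive is its counterpart."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)  # (2n, d)
    sim = (z @ z.t()) / tau                      # pairwise cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

opt = torch.optim.Adam(list(encoder.parameters()) + list(proj.parameters()), lr=1e-3)
for step in range(100):                          # dummy images stand in for a real loader
    x = torch.rand(8, 3, 64, 64)
    v1 = x + 0.1 * torch.randn_like(x)           # toy augmentation: noise
    v2 = torch.flip(x, dims=[3])                 # toy augmentation: horizontal flip
    loss = nt_xent(proj(encoder(v1)), proj(encoder(v2)))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: attach a segmentation head to the pre-trained encoder and fine-tune.
seg_head = nn.Sequential(
    nn.Conv2d(64, 2, 1),                         # 2-class toy segmentation
    nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
)
seg_opt = torch.optim.Adam(list(encoder.parameters()) + list(seg_head.parameters()), lr=1e-4)
masks = torch.randint(0, 2, (8, 64, 64))         # dummy segmentation labels
seg_opt.zero_grad()
logits = seg_head(encoder(torch.rand(8, 3, 64, 64)))
seg_loss = F.cross_entropy(logits, masks)
seg_loss.backward()
seg_opt.step()
```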
Deep Learning Course | Homeworks | Spring 2021 | Dr. MohammadReza Mohammadi
Overview of self-supervised video representation learning methods.
Official implementation of "Any Region Can Be Perceived Equally and Effectively on Rotation Pretext Task Using Full Rotation and Weighted-Region Mixture"
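For context on the rotation pretext task itself, here is a minimal, generic sketch of rotation prediction as a pretext objective: images are rotated by a random multiple of 90 degrees and the network is trained to predict the rotation. This is not the paper's weighted-region method; the encoder, shapes, and training loop are assumptions.

```python
# Minimal rotation-prediction pretext task (generic illustration).
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(images):
    """Rotate each image by a random multiple of 90 degrees; the rotation index (0-3) is the pretext label."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(images, labels)]
    )
    return rotated, labels

encoder = nn.Sequential(             # toy encoder; any backbone could be used here
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(32, 4)              # 4-way rotation classifier
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

for step in range(100):              # dummy data stands in for a real loader
    images = torch.rand(8, 3, 64, 64)
    rotated, labels = rotate_batch(images)
    loss = F.cross_entropy(head(encoder(rotated)), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```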
A Python implementation of “Self-Supervised Learning of Spatial Acoustic Representation with Cross-Channel Signal Reconstruction and Multi-Channel Conformer” [TASLP 2024]
[IEEE T-IP 2022] TCGL: Temporal Contrastive Graph for Self-supervised Video Representation Learning
Overview of unsupervised visual representation learning (or self-supervised learning, unsupervised pre-training) methods.
[TNSRE 2023] Self-supervised Learning for Label-Efficient Sleep Stage Classification: A Comprehensive Evaluation
This repository is mainly dedicated to listing recent research advances in applying self-supervised learning to medical image computing.
Code for TKDE paper "Self-supervised learning on graphs: Contrastive, generative, or predictive"