Selfie: Self-supervised Pretraining for Image Embedding #384

Open
chullhwan-song opened this issue May 31, 2020 · 1 comment
Open

Selfie: Self-supervised Pretraining for Image Embedding #384

chullhwan-song opened this issue May 31, 2020 · 1 comment

Comments

@chullhwan-song
Copy link
Owner

https://arxiv.org/abs/1906.02940

chullhwan-song commented May 31, 2020

Abstract

Introduction

(figure: Fig. 1 from the paper)

  • A study that carries the idea of BERT over to image classification.
  • The core idea is self-supervised pretraining:
    • Input - to turn the image into a sequence with order information, like words in an NLP sentence, it is cut along a grid into Patch1~Patch9.
    • During training, the encoder features (v) and the position of the patch to be predicted (the Positional Embedding of Patch 4 in Fig. 1) are given to the decoder; the decoder output (h) is scored against candidate patches by similarity (v^T h), and a softmax over these scores picks the correct patch, which provides the self-supervised training signal (see the sketch after this list).
  • Transfer learning - for the actual classification task, the patch feature-extraction part trained during pretraining is reused (second sketch below).
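To make the v^T h softmax step concrete, here is a minimal PyTorch sketch of a Selfie-style pretext task. `PatchEncoder`, `patchify`, `pretext_loss`, the patch/feature sizes, and the mean-pool stand-in for the paper's attention-pooling decoder are all illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    """Toy patch network: maps one image patch to a d-dim feature vector v."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):                        # x: (N, 3, p, p)
        return self.net(x)                       # (N, dim)

def patchify(img, grid=3):
    """Cut an image batch (B, 3, H, W) into a grid*grid patch sequence."""
    B, C, H, W = img.shape
    ph, pw = H // grid, W // grid
    p = img.unfold(2, ph, ph).unfold(3, pw, pw)          # (B, C, grid, grid, ph, pw)
    p = p.permute(0, 2, 3, 1, 4, 5).contiguous()
    return p.view(B, grid * grid, C, ph, pw)             # (B, 9, 3, ph, pw)

def pretext_loss(img, encoder, decoder, pos_emb, mask_idx=3, grid=3):
    """Mask one patch, summarize the visible ones plus the masked slot's
    positional embedding, then pick the true patch by softmax over v^T h."""
    B = img.size(0)
    patches = patchify(img, grid)                        # (B, 9, 3, p, p)
    n = patches.size(1)
    v = encoder(patches.flatten(0, 1)).view(B, n, -1)    # patch features v: (B, 9, d)

    visible = torch.cat([v[:, :mask_idx], v[:, mask_idx + 1:]], dim=1)
    # Mean pooling stands in for the paper's attention-pooling "decoder".
    h = decoder(visible.mean(dim=1) + pos_emb[mask_idx])         # (B, d)

    # Candidates = all patches of this image (the true one at mask_idx
    # plus the rest as distractors); logits_i = v_i^T h.
    logits = torch.einsum("bnd,bd->bn", v, h)            # (B, 9)
    target = torch.full((B,), mask_idx, dtype=torch.long)
    return F.cross_entropy(logits, target)

# Usage: one pretraining step on a random batch of 96x96 images.
encoder = PatchEncoder(dim=128)
decoder = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))
pos_emb = nn.Parameter(torch.randn(9, 128) * 0.02)       # learnable positional embeddings
loss = pretext_loss(torch.randn(4, 3, 96, 96), encoder, decoder, pos_emb)
loss.backward()
```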

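And a correspondingly minimal sketch of the transfer step, reusing `PatchEncoder` and `patchify` from the sketch above; the mean pooling and linear head are assumed placeholders, not the paper's exact fine-tuning setup.

```python
class PatchClassifier(nn.Module):
    """Reuse the pretrained patch encoder as the feature extractor for classification."""
    def __init__(self, pretrained_encoder, num_classes, dim=128, grid=3):
        super().__init__()
        self.encoder = pretrained_encoder        # weights carried over from pretraining
        self.grid = grid
        self.head = nn.Linear(dim, num_classes)  # new task-specific head

    def forward(self, img):
        B = img.size(0)
        patches = patchify(img, self.grid)
        v = self.encoder(patches.flatten(0, 1)).view(B, patches.size(1), -1)
        return self.head(v.mean(dim=1))          # pool patch features, then classify

clf = PatchClassifier(encoder, num_classes=100)
logits = clf(torch.randn(4, 3, 96, 96))          # (4, 100); fine-tune with cross-entropy
```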