This is a study group that ports code examples from the Keras documentation to PyTorch.

Original GitHub address: https://github.com/keras-team/keras-io

Start date: 2021.03.04 (Thu)
| Hyeongwon | Jeongsub | Jaehyuk | Subin |
|---|---|---|---|
| Github | Github | Github | Github |
- Image segmentation with a U-Net-like architecture
- 3D image classification from CT scans
- Semi-supervision and domain adaptation with AdaMatch
- Convolutional autoencoder for image denoising
- OCR model for reading Captchas
- Compact Convolutional Transformers
- Consistency training with supervision
- Next-Frame Video Prediction with Convolutional LSTMs
- CutMix data augmentation for image classification
- Multiclass semantic segmentation using DeepLabV3+
- Monocular depth estimation
- Grad-CAM class activation visualization
- Gradient Centralization for Better Training Performance
- Image Captioning
- Image classification with Vision Transformer
- Model interpretability with Integrated Gradients
- Involutional neural networks
- Keypoint Detection with Transfer Learning
- Knowledge Distillation
- Learning to Resize in Computer Vision
- Metric learning for image similarity search
- MixUp augmentation for image classification
- Image classification with modern MLP models
- 3D volumetric rendering with NeRF
- Self-supervised contrastive learning with NNCLR
- Image classification with Perceiver
- Point cloud classification with PointNet
- RandAugment for Image Classification for Improved Robustness
- Few-Shot learning with Reptile
- Object Detection with RetinaNet
- Semantic Image Clustering
- Semi-supervised image classification using contrastive pretraining with SimCLR
- Image similarity estimation using a Siamese Network with a contrastive loss
- Image similarity estimation using a Siamese Network with a triplet loss
- Image Super-Resolution using an Efficient Sub-Pixel CNN
- Supervised Contrastive Learning
- Image classification with Swin Transformers
- Video Classification with Transformers + Video Vision Transformer
- Visualizing what convnets learn
- Bidirectional LSTM on IMDB
- Character-level recurrent sequence-to-sequence model
- End-to-end Masked Language Modeling with BERT
- Multimodal entailment
- Named Entity Recognition using Transformers
- English-to-Spanish translation with a sequence-to-sequence Transformer
- Natural language image search with a Dual Encoder
- Semantic Similarity with BERT
- Text classification with Switch Transformer
- Text classification with Transformer
- Text Extraction with BERT
- Classification with Gated Residual and Variable Selection Networks
- Collaborative Filtering for Movie Recommendations
- Classification with Neural Decision Forests
- Imbalanced classification: credit card fraud detection
- A Transformer-based recommendation system
- Structured data learning with Wide, Deep, and Cross networks
- Timeseries anomaly detection using an Autoencoder
- Timeseries classification with a Transformer model
- Speaker Recognition
- Automatic Speech Recognition with Transformer
- Variational AutoEncoder
- DCGAN to generate face images
- WGAN-GP overriding Model.train_step
- Neural style transfer
- Deep Dream
- Conditional GAN
- CycleGAN
- Character-level text generation with LSTM
- PixelCNN
- Density estimation using Real NVP
- Face image generation with StyleGAN
- Text generation with a miniature GPT
- Vector-Quantized Variational Autoencoders
- WGAN-GP with R-GCN for the generation of small molecular graphs
- Actor Critic Method
- Deep Deterministic Policy Gradient (DDPG)
- Deep Q-Learning for Atari Breakout
- Proximal Policy Optimization
- Graph attention networks for node classification
- Node Classification with Graph Neural Networks
- Message-passing neural network for molecular property prediction
- Graph representation learning with node2vec