CV Papers
Minji Jung edited this page Sep 4, 2021
- Deeper Depth Prediction with Fully Convolutional Residual Networks (3DV 2016)
- Unsupervised Learning of Depth and Ego-Motion from Video (CVPR 2017)
- Unsupervised Monocular Depth Estimation with Left-Right Consistency (CVPR 2017)
- PlaneNet: Piece-wise Planar Reconstruction from a Single RGB Image (CVPR 2018)
- Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints (CVPR 2018)
- Learning the Depths of Moving People by Watching Frozen People (CVPR 2019)
- Unsupervised Monocular Depth and Ego-motion Learning with Structure and Semantics (CVPR 2019 Workshop on Visual Odometry & Computer Vision Applications Based on Location Clues)
- Depth from Videos in the Wild: Unsupervised Monocular Depth Learning from Unknown Cameras (ICCV 2019)
- Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos (AAAI 2019)
- Consistent Video Depth Estimation (SIGGRAPH 2020)
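Most of the unsupervised depth/ego-motion papers above share one training signal: a view-synthesis (photometric reprojection) loss, where predicted depth and relative pose map each target-frame pixel into the source frame. A minimal numpy sketch of that reprojection step — the intrinsics, the 4x4 pose convention, and all names here are illustrative assumptions, not code from any of the papers:

```python
import numpy as np

def reproject(depth, K, T_target_to_source):
    """Map each target pixel (u, v) into source-view pixel coordinates
    using a per-pixel depth map, intrinsics K, and a 4x4 relative pose."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)  # homogeneous pixels
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)             # back-project to 3D
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])            # homogeneous 3D points
    src = K @ (T_target_to_source @ cam_h)[:3]                      # project into source view
    src_uv = src[:2] / src[2:3]                                     # perspective divide
    return src_uv.reshape(2, h, w)

# Toy setup: constant depth, identity pose -> each pixel maps to itself.
K = np.array([[100., 0., 16.], [0., 100., 12.], [0., 0., 1.]])
depth = np.full((24, 32), 5.0)
uv = reproject(depth, K, np.eye(4))
```

During training, the source frame is sampled at these coordinates and the photometric difference against the target frame supervises both the depth and the pose network.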
- Incorporating Long-Range Consistency in CNN-Based Texture Generation (ICLR 2017)
- Deep Photo Style Transfer (CVPR 2017)
- Photorealistic Style Transfer with Screened Poisson Equation (BMVC 2017)
- A Closed-Form Solution to Photorealistic Image Stylization (ECCV 2018)
- Visual Attribute Transfer through Deep Image Analogy (ACM TOG 2017)
- The Contextual Loss for Image Transformation with Non-Aligned Data (ECCV 2018)
- PairedCycleGAN: Asymmetric Style Transfer for Applying and Removing Makeup (CVPR 2018)
- Improved Texture Networks: Maximizing Quality and Diversity in Feed-Forward Stylization and Texture Synthesis (CVPR 2017)
- StyleBank: An Explicit Representation for Neural Image Style Transfer (CVPR 2017)
- Avatar-Net: Multi-Scale Zero-Shot Style Transfer by Feature Decoration (CVPR 2018)
- Arbitrary Style Transfer with Style-Attentional Networks (CVPR 2019)
- Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization (ICCV 2017)
- Universal Style Transfer via Feature Transforms (NIPS 2017)
- Understanding Generalized Whitening and Coloring Transform for Universal Style Transfer (ICCV 2019)
- Multimodal Style Transfer via Graph Cuts (ICCV 2019)
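The AdaIN paper above ("Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization") reduces arbitrary style transfer to one feature-space operation: renormalize the content features to carry the per-channel mean and standard deviation of the style features. A minimal numpy sketch — the shapes and `eps` value are illustrative assumptions:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization over (C, H, W) feature maps:
    whiten content per channel, then apply the style channel statistics."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mu) / c_std + s_mu

rng = np.random.default_rng(0)
c = rng.normal(0.0, 1.0, (4, 8, 8))   # stand-in for VGG content features
s = rng.normal(3.0, 2.0, (4, 8, 8))   # stand-in for VGG style features
out = adain(c, s)                     # out now carries the style statistics
```

In the actual method this runs on encoder features and a learned decoder maps the result back to an image; the sketch only shows the statistic swap itself.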
- Is Image Super-Resolution Helpful for Other Vision Tasks? (WACV 2016): super-resolution (enhancement)
- Learning from Simulated and Unsupervised Images through Adversarial Training (CVPR 2017): SimGAN; refines synthesized data (e.g. rendered 3D eye images) to look more realistic
- StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation (CVPR 2017): multi-domain transfer
- Unsupervised Visual Representation Learning by Context Prediction (ICCV 2015)
- Context Encoders: Feature Learning by Inpainting (CVPR 2016)
- Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles (ECCV 2016)
  - unsupervised learning that splits an image into 9 patches and trains the network to recover their positions
- Deep Video Generation, Prediction and Completion of Human Action Sequences (CVPR 2018)
  - human action prediction: given the action frames of a video, generate the following frames
- A Style-Based Generator Architecture for Generative Adversarial Networks (CVPR 2019)
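The jigsaw pretext task noted above can be sketched in a few lines: cut an image into a 3x3 grid, shuffle the patches with a known permutation, and use the permutation index as the label the network must predict. The helper names here are our own; the paper additionally restricts itself to a fixed subset of permutations:

```python
import numpy as np

def to_patches(img):
    """Split an (H, W) image, H and W divisible by 3, into 9 tiles
    in row-major order."""
    h, w = img.shape[0] // 3, img.shape[1] // 3
    return [img[r*h:(r+1)*h, c*w:(c+1)*w] for r in range(3) for c in range(3)]

def shuffle_patches(patches, perm):
    """Reorder tiles; perm is the classification label for the network."""
    return [patches[i] for i in perm]

img = np.arange(36).reshape(6, 6)            # toy 6x6 "image"
perm = [8, 0, 3, 5, 1, 7, 2, 6, 4]           # ground-truth permutation
puzzle = shuffle_patches(to_patches(img), perm)
```

Applying the inverse permutation (`np.argsort(perm)`) to the shuffled tiles recovers the original layout, which is exactly the spatial reasoning the pretext task rewards.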
- Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation (CVPR 2014): R-CNN
- Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition (TPAMI 2015): SPPNet
- Fast R-CNN (ICCV 2015)
- Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks (NIPS 2015)
- AttentionNet: Aggregating Weak Directions for Accurate Object Detection (ICCV 2015)
- You Only Look Once: Unified, Real-Time Object Detection (CVPR 2016): YOLO; worth reading up through v4
- SSD: Single Shot MultiBox Detector (ECCV 2016)
- FlowNet: Learning Optical Flow with Convolutional Networks (ICCV 2015)
- FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks (CVPR 2017)
- CNN-Based Patch Matching for Optical Flow with Thresholded Hinge Embedding Loss (CVPR 2017)
- Optical Flow Estimation Using a Spatial Pyramid Network (CVPR 2017)
- A Lightweight Convolutional Neural Network for Optical Flow Estimation (CVPR 2018)
- PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume (CVPR 2018)
- Occlusion Aware Unsupervised Learning of Optical Flow (CVPR 2018)
- Iterative Residual Refinement for Joint Optical Flow and Occlusion Estimation (CVPR 2019)
- SelFlow: Self-Supervised Learning of Optical Flow (CVPR 2019)
- Unsupervised Learning of Multi-Frame Optical Flow with Occlusions (ECCV 2018)
- Conditional Prior Networks for Optical Flow (ECCV 2018)
- Semi-Supervised Learning for Optical Flow with Generative Adversarial Networks (NIPS 2017)
- Unsupervised Deep Learning for Optical Flow Estimation (AAAI 2017)
- UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss (AAAI 2018)
- Learning Optical Flow with Unlabeled Data Distillation (AAAI 2019)
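The unsupervised flow papers above (UnFlow, SelFlow, the occlusion-aware variants) all build on the same core: warp the second frame backward along the predicted flow and penalize the photometric difference against the first frame. A toy numpy sketch — nearest-neighbor sampling stands in for the bilinear sampling real implementations use, and all names are illustrative:

```python
import numpy as np

def warp(img2, flow):
    """Backward-warp img2 (H, W) by flow (2, H, W) holding (dx, dy)
    per pixel, with nearest-neighbor sampling and border clamping."""
    h, w = img2.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = np.clip(np.round(u + flow[0]).astype(int), 0, w - 1)
    y = np.clip(np.round(v + flow[1]).astype(int), 0, h - 1)
    return img2[y, x]

def photometric_loss(img1, img2, flow):
    """Mean absolute brightness error between frame 1 and warped frame 2."""
    return np.abs(img1 - warp(img2, flow)).mean()

# Toy pair: frame 2 is frame 1 shifted right by one pixel,
# so the true flow is (dx, dy) = (1, 0) everywhere.
img1 = np.random.default_rng(1).random((8, 8))
img2 = np.roll(img1, shift=1, axis=1)
true_flow = np.stack([np.ones((8, 8)), np.zeros((8, 8))])
```

The correct flow drives the loss toward zero (up to border pixels), while a wrong flow leaves a large residual — that gap is the training signal, with the occlusion papers additionally masking pixels where the warp is invalid.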
- Visual Saliency Detection Based on Multiscale Deep CNN Features (TIP 2016)
- Shallow and Deep Convolutional Networks for Saliency Prediction (CVPR 2016)
- GraB: Visual Saliency via Novel Graph Model and Background Priors (CVPR 2016)
- DHSNet: Deep Hierarchical Saliency Network for Salient Object Detection (CVPR 2016)
- Deep Unsupervised Saliency Detection: A Multiple Noisy Labeling Perspective (CVPR 2018)
- Pyramid Feature Attention Network for Saliency Detection (CVPR 2019)
- Multi-Source Weak Supervision for Saliency Detection (CVPR 2019)
- Pyramid Dilated Deeper ConvLSTM for Video Salient Object Detection (ECCV 2018)
- Learning Uncertain Convolutional Features for Accurate Saliency Detection (ICCV 2017)
- Joint Learning of Saliency Detection and Weakly Supervised Semantic Segmentation (ICCV 2019)
- Depth-Induced Multi-Scale Recurrent Attention Network for Saliency Detection (ICCV 2019)
- A Unified Multiple Graph Learning and Convolutional Network Model for Co-saliency Estimation (ACM Multimedia 2019)
- Show, Attend and Tell: Neural Image Caption Generation with Visual Attention (ICML 2015)
- Long-Term Recurrent Convolutional Networks for Visual Recognition and Description (CVPR 2015)
- DenseCap: Fully Convolutional Localization Networks for Dense Captioning (CVPR 2016, Fei-Fei)
- Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization (ICCV 2017)
- Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering (CVPR 2018)