
Paper Reading

This repository contains papers I have recently read (most of them from 2020-21). Each paper is highlighted with what I consider its key points. I will soon add a summary section for each paper, giving a summary to the best of my understanding.

Vision

  1. Y. Huang, Y. Sugano and Y. Sato, "Improving Action Segmentation via Graph-Based Temporal Reasoning," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 14021-14031, doi: 10.1109/CVPR42600.2020.01404.
  2. Liu, F., Tian, Y., Cordeiro, F., Belagiannis, V., Reid, I., & Carneiro, G. (2021). Noisy Label Learning for Large-scale Medical Image Classification. ArXiv, abs/2103.04053.
  3. Cordeiro, F., Sachdeva, R., Belagiannis, V., Reid, I., & Carneiro, G. (2021). LongReMix: Robust Learning with High Confidence Samples in a Noisy Label Environment. ArXiv, abs/2103.04173.
  4. Su, J., Maji, S., & Hariharan, B. (2020). When Does Self-supervision Improve Few-shot Learning? ECCV.
  5. Jiang, H., Liu, S., Wang, J., & Wang, X. (2021). Hand-Object Contact Consistency Reasoning for Human Grasps Generation. ArXiv, abs/2104.03304.
  6. Xu, J., & Wang, X. (2021). Rethinking Self-supervised Correspondence Learning: A Video Frame-level Similarity Perspective. ArXiv, abs/2103.17263.
  7. Chen, Z., et al. (2020). Shot in the Dark: Few-Shot Learning with No Base-Class Labels. ArXiv, abs/2010.02430.
  8. Zhang, M., et al. (2020). Personalized Federated Learning with First Order Model Optimization. ArXiv, abs/2012.08565.
  9. Chen, W., et al. (2021). Contrastive Syn-to-Real Generalization. ArXiv, abs/2104.02290. (See the contrastive-loss sketch after this list.)
  10. Liao, Y., et al. (2021). Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets. ArXiv, abs/2104.12690.
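Several of the entries above (4, 6, and 9) build on self-supervised or contrastive objectives. As a refresher, here is a minimal NumPy sketch of a generic InfoNCE-style contrastive loss for a single anchor; it is not the exact loss of any one paper, and the function name and temperature value are illustrative:

```python
import numpy as np

def info_nce_loss(z_anchor, z_positive, z_negatives, tau=0.1):
    """Generic InfoNCE-style contrastive loss for one anchor embedding."""
    def cos(a, b):
        # cosine similarity between two 1-D embeddings
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(z_anchor, z_positive) / tau)   # similarity to the positive view
    neg = sum(np.exp(cos(z_anchor, z_n) / tau)      # similarities to the negatives
              for z_n in z_negatives)
    # Minimizing this pulls the positive pair together and pushes negatives apart.
    return -np.log(pos / (pos + neg))
```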

NLP

  1. Vaswani, A., Shazeer, N.M., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., & Polosukhin, I. (2017). Attention is All you Need. ArXiv, abs/1706.03762. (See the attention sketch after this list.)
  2. Reddy, R.R., Yadav, V., Sultan, M.A., Franz, M., Castelli, V., Ji, H., & Sil, A. (2021). Towards Robust Neural Retrieval Models with Synthetic Pre-Training. ArXiv, abs/2104.07800.
  3. Hua, X., & Wang, L. (2020). PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation. EMNLP.
  4. Jin, X., et al. (2020). Gradient Based Memory Editing for Task-Free Continual Learning. ArXiv, abs/2006.15294.
  5. Deshpande, A., & Narasimhan, K. (2020). Guiding Attention for Self-Supervised Learning with Transformers. EMNLP.
  6. Ye, Q., et al. (2021). CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP. ArXiv, abs/2104.08835.
  7. Hu, W., et al. (2019). Overcoming Catastrophic Forgetting for Continual Learning via Model Adaptation. ICLR.
  8. Tan, B., et al. (2020). Progressive Generation of Long Text. ArXiv, abs/2006.15720.
  9. Sanh, V., et al. (2020). Learning from others' mistakes: Avoiding dataset biases without modeling them. ArXiv, abs/2012.01300.
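For reference, entry 1 introduces scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal NumPy sketch (unbatched, no masking; the shapes and softmax implementation are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -- Vaswani et al. (2017)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity logits
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of value vectors
```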

GAN

  1. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A.C., & Bengio, Y. (2014). Generative Adversarial Networks. ArXiv, abs/1406.2661. (See the objective after this list.)
  2. Radford, A., Metz, L., & Chintala, S. (2016). Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. CoRR, abs/1511.06434.
  3. Arjovsky, M., & Bottou, L. (2017). Towards Principled Methods for Training Generative Adversarial Networks. ArXiv, abs/1701.04862.
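As a reminder, the minimax objective from entry 1, which entries 2 and 3 build on and analyze:

$$
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
$$

The discriminator $D$ is trained to separate real samples $x$ from generated samples $G(z)$, while the generator $G$ is trained to fool it.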

GNN (Graph Neural Networks)

  1. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., & Yu, P.S. (2021). A Comprehensive Survey on Graph Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 32, 4-24. (See the layer sketch after this list.)
  2. Jin, W., Ma, Y., Liu, X., Tang, X., Wang, S., & Tang, J. (2020). Graph Structure Learning for Robust Graph Neural Networks. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
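As a concrete example of the convolutional GNNs covered by the survey in entry 1, here is a minimal NumPy sketch of one symmetric-normalized graph-convolution layer, H' = ReLU(D̂^{-1/2}(A + I)D̂^{-1/2} H W); the dense-matrix formulation is illustrative (real implementations use sparse ops):

```python
import numpy as np

def gcn_layer(A, H, W):
    # A: (n, n) adjacency, H: (n, d_in) node features, W: (d_in, d_out) weights
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D_hat^{-1/2}
    # Propagate features over normalized adjacency, then apply ReLU.
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```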

Optimization

  1. Kalofolias, V. (2016). How to Learn a Graph from Smooth Signals. AISTATS. (See the smoothness identity after this list.)
  2. Egilmez, H.E., Pavez, E., & Ortega, A. (2016). Graph Learning from Data under Structural and Laplacian Constraints. ArXiv, abs/1611.05181.
  3. E. Pavez, H. E. Egilmez and A. Ortega, "Learning Graphs With Monotone Topology Properties and Multiple Connected Components," in IEEE Transactions on Signal Processing, vol. 66, no. 9, pp. 2399-2413, May 2018, doi: 10.1109/TSP.2018.2813337.
  4. L. Zhao, Y. Wang, S. Kumar and D. P. Palomar, "Optimization Algorithms for Graph Laplacian Estimation via ADMM and MM," in IEEE Transactions on Signal Processing, vol. 67, no. 16, pp. 4231-4244, Aug. 2019, doi: 10.1109/TSP.2019.2925602.
  5. Y. Sun, P. Babu and D. P. Palomar, "Majorization-Minimization Algorithms in Signal Processing, Communications, and Machine Learning," in IEEE Transactions on Signal Processing, vol. 65, no. 3, pp. 794-816, Feb. 2017, doi: 10.1109/TSP.2016.2601299.
  6. Kumar, S., Ying, J., Cardoso, J.V., & Palomar, D. (2020). A Unified Framework for Structured Graph Learning via Spectral Constraints. J. Mach. Learn. Res., 21, 22:1-22:60.
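A common thread in these graph-learning entries is signal smoothness on a graph. For data $X \in \mathbb{R}^{n \times p}$ with rows $x_i$, weight matrix $W$, and combinatorial Laplacian $L = D - W$, smoothness is measured by the standard identity

$$
\operatorname{tr}(X^\top L X) = \frac{1}{2} \sum_{i,j} W_{ij}\,\lVert x_i - x_j \rVert_2^2 .
$$

My reading of the entry-1 (Kalofolias) formulation, up to constants: minimize this smoothness term plus a log-degree barrier that keeps every node connected and a Frobenius regularizer that controls sparsity,

$$
\min_{W \in \mathcal{W}} \; \lVert W \circ Z \rVert_{1,1} \;-\; \alpha\,\mathbf{1}^\top \log(W\mathbf{1}) \;+\; \frac{\beta}{2}\lVert W \rVert_F^2, \qquad Z_{ij} = \lVert x_i - x_j \rVert_2^2,
$$

where $\mathcal{W}$ is the set of nonnegative symmetric matrices with zero diagonal.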
