Few-Shot Papers

This repository collects the few-shot learning (FSL) papers covered in our FSL survey, published in ACM Computing Surveys (JCR Q1, CORE A*).

For convenience, we also link to the authors' public implementations where available.

We will update this list periodically as new FSL papers appear.

Citation

Please cite our paper if you find it helpful.

@article{wang2020generalizing,
  title={Generalizing from a few examples: A survey on few-shot learning},
  author={Wang, Yaqing and Yao, Quanming and Kwok, James T and Ni, Lionel M},
  journal={ACM Computing Surveys},
  volume={53},
  number={3},
  pages={1--34},
  year={2020},
  publisher={ACM New York, NY, USA}
}

Content

  1. Survey
  2. Data
  3. Model
    1. Multitask Learning
    2. Embedding/Metric Learning
    3. Learning with External Memory
    4. Generative Modeling
  4. Algorithm
    1. Refining Existing Parameters
    2. Refining Meta-learned Parameters
    3. Learning Search Steps
  5. Applications
    1. Computer Vision
    2. Robotics
    3. Natural Language Processing
    4. Knowledge Graph
    5. Acoustic Signal Processing
    6. Recommendation
    7. Others
  6. Theories
  7. Few-shot Learning and Zero-shot Learning
  8. Variants of Few-shot Learning
  9. Datasets/Benchmarks
  10. Software Library
Survey

  1. Generalizing from a few examples: A survey on few-shot learning, CSUR, 2020. Y. Wang, Q. Yao, J. T. Kwok, and L. M. Ni. paper arXiv

Data

  1. Learning from one example through shared densities on transforms, in CVPR, 2000. E. G. Miller, N. E. Matsakis, and P. A. Viola. paper

  2. Domain-adaptive discriminative one-shot learning of gestures, in ECCV, 2014. T. Pfister, J. Charles, and A. Zisserman. paper

  3. One-shot learning of scene locations via feature trajectory transfer, in CVPR, 2016. R. Kwitt, S. Hegenbart, and M. Niethammer. paper

  4. Low-shot visual recognition by shrinking and hallucinating features, in ICCV, 2017. B. Hariharan and R. Girshick. paper code

  5. Improving one-shot learning through fusing side information, arXiv preprint, 2017. Y.-H. Tsai and R. Salakhutdinov. paper

  6. Fast parameter adaptation for few-shot image captioning and visual question answering, in ACM MM, 2018. X. Dong, L. Zhu, D. Zhang, Y. Yang, and F. Wu. paper

  7. Exploit the unknown gradually: One-shot video-based person re-identification by stepwise learning, in CVPR, 2018. Y. Wu, Y. Lin, X. Dong, Y. Yan, W. Ouyang, and Y. Yang. paper

  8. Low-shot learning with large-scale diffusion, in CVPR, 2018. M. Douze, A. Szlam, B. Hariharan, and H. Jégou. paper

  9. Diverse few-shot text classification with multiple metrics, in NAACL-HLT, 2018. M. Yu, X. Guo, J. Yi, S. Chang, S. Potdar, Y. Cheng, G. Tesauro, H. Wang, and B. Zhou. paper code

  10. Delta-encoder: An effective sample synthesis method for few-shot object recognition, in NeurIPS, 2018. E. Schwartz, L. Karlinsky, J. Shtok, S. Harary, M. Marder, A. Kumar, R. Feris, R. Giryes, and A. Bronstein. paper

  11. Low-shot learning via covariance-preserving adversarial augmentation networks, in NeurIPS, 2018. H. Gao, Z. Shou, A. Zareian, H. Zhang, and S. Chang. paper

  12. Learning to self-train for semi-supervised few-shot classification, in NeurIPS, 2019. X. Li, Q. Sun, Y. Liu, S. Zheng, Q. Zhou, T.-S. Chua, and B. Schiele. paper

  13. Few-shot learning with global class representations, in ICCV, 2019. A. Li, T. Luo, T. Xiang, W. Huang, and L. Wang. paper

  14. AutoAugment: Learning augmentation policies from data, in CVPR, 2019. E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le. paper

  15. EDA: Easy data augmentation techniques for boosting performance on text classification tasks, in EMNLP and IJCNLP, 2019. J. Wei and K. Zou. paper

  16. LaSO: Label-set operations networks for multi-label few-shot learning, in CVPR, 2019. A. Alfassy, L. Karlinsky, A. Aides, J. Shtok, S. Harary, R. Feris, R. Giryes, and A. M. Bronstein. paper code

  17. Image deformation meta-networks for one-shot learning, in CVPR, 2019. Z. Chen, Y. Fu, Y.-X. Wang, L. Ma, W. Liu, and M. Hebert. paper code

  18. Spot and learn: A maximum-entropy patch sampler for few-shot image classification, in CVPR, 2019. W.-H. Chu, Y.-J. Li, J.-C. Chang, and Y.-C. F. Wang. paper

  19. Adversarial feature hallucination networks for few-shot learning, in CVPR, 2020. K. Li, Y. Zhang, K. Li, and Y. Fu. paper

  20. Instance credibility inference for few-shot learning, in CVPR, 2020. Y. Wang, C. Xu, C. Liu, L. Zhang, and Y. Fu. paper

  21. Diversity transfer network for few-shot learning, in AAAI, 2020. M. Chen, Y. Fang, X. Wang, H. Luo, Y. Geng, X. Zhang, C. Huang, W. Liu, and B. Wang. paper code

  22. Neural snowball for few-shot relation learning, in AAAI, 2020. T. Gao, X. Han, R. Xie, Z. Liu, F. Lin, L. Lin, and M. Sun. paper code

  23. Associative alignment for few-shot image classification, in ECCV, 2020. A. Afrasiyabi, J. Lalonde, and C. Gagné. paper code

  24. Information maximization for few-shot learning, in NeurIPS, 2020. M. Boudiaf, I. Ziko, J. Rony, J. Dolz, P. Piantanida, and I. B. Ayed. paper code

  25. Self-training for few-shot transfer across extreme task differences, in ICLR, 2021. C. P. Phoo, and B. Hariharan. paper

  26. Free lunch for few-shot learning: Distribution calibration, in ICLR, 2021. S. Yang, L. Liu, and M. Xu. paper code

  27. Parameterless transductive feature re-representation for few-shot learning, in ICML, 2021. W. Cui, and Y. Guo. paper

  28. Learning intact features by erasing-inpainting for few-shot classification, in AAAI, 2021. J. Li, Z. Wang, and X. Hu. paper

  29. Variational feature disentangling for fine-grained few-shot classification, in ICCV, 2021. J. Xu, H. Le, M. Huang, S. Athar, and D. Samaras. paper

  30. Coarsely-labeled data for better few-shot transfer, in ICCV, 2021. C. P. Phoo, and B. Hariharan. paper

  31. Pseudo-loss confidence metric for semi-supervised few-shot learning, in ICCV, 2021. K. Huang, J. Geng, W. Jiang, X. Deng, and Z. Xu. paper

  32. Iterative label cleaning for transductive and semi-supervised few-shot learning, in ICCV, 2021. M. Lazarou, T. Stathaki, and Y. Avrithis. paper

  33. Meta two-sample testing: Learning kernels for testing with limited data, in NeurIPS, 2021. F. Liu, W. Xu, J. Lu, and D. J. Sutherland. paper

  34. Dynamic distillation network for cross-domain few-shot recognition with unlabeled data, in NeurIPS, 2021. A. Islam, C.-F. Chen, R. Panda, L. Karlinsky, R. Feris, and R. Radke. paper

  35. Towards better understanding and better generalization of low-shot classification in histology images with contrastive learning, in ICLR, 2022. J. Yang, H. Chen, J. Yan, X. Chen, and J. Yao. paper code

  36. FlipDA: Effective and robust data augmentation for few-shot learning, in ACL, 2022. J. Zhou, Y. Zheng, J. Tang, L. Jian, and Z. Yang. paper code

  37. PromDA: Prompt-based data augmentation for low-resource NLU tasks, in ACL, 2022. Y. Wang, C. Xu, Q. Sun, H. Hu, C. Tao, X. Geng, and D. Jiang. paper code

  38. N-shot learning for augmenting task-oriented dialogue state tracking, in Findings of ACL, 2022. I. T. Aksu, Z. Liu, M. Kan, and N. F. Chen. paper

  39. Generating representative samples for few-shot classification, in CVPR, 2022. J. Xu, and H. Le. paper code

  40. Semi-supervised few-shot learning via multi-factor clustering, in CVPR, 2022. J. Ling, L. Liao, M. Yang, and J. Shuai. paper

  41. Information augmentation for few-shot node classification, in IJCAI, 2022. Z. Wu, P. Zhou, G. Wen, Y. Wan, J. Ma, D. Cheng, and X. Zhu. paper

  42. Improving task-specific generalization in few-shot learning via adaptive vicinal risk minimization, in NeurIPS, 2022. L.-K. Huang, and Y. Wei. paper

  43. An embarrassingly simple approach to semi-supervised few-shot learning, in NeurIPS, 2022. X.-S. Wei, H.-Y. Xu, F. Zhang, Y. Peng, and W. Zhou. paper

  44. FeLMi : Few shot learning with hard mixup, in NeurIPS, 2022. A. Roy, A. Shah, K. Shah, P. Dhar, A. Cherian, and R. Chellappa. paper code

  45. Understanding cross-domain few-shot learning based on domain similarity and few-shot difficulty, in NeurIPS, 2022. J. Oh, S. Kim, N. Ho, J.-H. Kim, H. Song, and S.-Y. Yun. paper code

  46. Label hallucination for few-shot classification, in AAAI, 2022. Y. Jian, and L. Torresani. paper code

  47. STUNT: Few-shot tabular learning with self-generated tasks from unlabeled tables, in ICLR, 2023. J. Nam, J. Tack, K. Lee, H. Lee, and J. Shin. paper code

  48. Unsupervised meta-learning via few-shot pseudo-supervised contrastive learning, in ICLR, 2023. H. Jang, H. Lee, and J. Shin. paper code

  49. Progressive mix-up for few-shot supervised multi-source domain transfer, in ICLR, 2023. R. Zhu, R. Zhu, X. Yu, and S. Li. paper code

  50. Cross-level distillation and feature denoising for cross-domain few-shot classification, in ICLR, 2023. H. Zheng, R. Wang, J. Liu, and A. Kanezaki. paper code

  51. Tuning language models as training data generators for augmentation-enhanced few-shot learning, in ICML, 2023. Y. Meng, M. Michalski, J. Huang, Y. Zhang, T. F. Abdelzaher, and J. Han. paper code

  52. Self-evolution learning for mixup: Enhance data augmentation on few-shot text classification tasks, in EMNLP, 2023. H. Zheng, Q. Zhong, L. Ding, Z. Tian, X. Niu, C. Wang, D. Li, and D. Tao. paper

  53. Effective data augmentation with diffusion models, in ICLR, 2024. B. Trabucco, K. Doherty, M. A. Gurinas, and R. Salakhutdinov. paper code
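Many of the text entries above (e.g. EDA, entry 15) expand a scarce training set with simple token-level edits rather than learned generators. A minimal sketch of two such operations, random swap and random deletion, assuming whitespace tokenization (function names are illustrative, not from any of the listed papers):

```python
import random

def random_swap(tokens, n_swaps, rng):
    """Swap two random token positions n_swaps times (EDA-style)."""
    tokens = tokens[:]  # copy so the original sentence is untouched
    for _ in range(n_swaps):
        i, j = rng.randrange(len(tokens)), rng.randrange(len(tokens))
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p, rng):
    """Drop each token independently with probability p, keeping at least one."""
    kept = [t for t in tokens if rng.random() > p]
    return kept or [rng.choice(tokens)]

rng = random.Random(0)
sent = "few shot learning needs more data".split()
print(" ".join(random_swap(sent, 1, rng)))
print(" ".join(random_deletion(sent, 0.3, rng)))
```

Each augmented copy keeps the original label, so a handful of labeled sentences can be multiplied into a larger training set at essentially no cost.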

Multitask Learning

  1. Multi-task transfer methods to improve one-shot learning for multimedia event detection, in BMVC, 2015. W. Yan, J. Yap, and G. Mori. paper

  2. Label efficient learning of transferable representations across domains and tasks, in NeurIPS, 2017. Z. Luo, Y. Zou, J. Hoffman, and L. Fei-Fei. paper

  3. Few-shot adversarial domain adaptation, in NeurIPS, 2017. S. Motiian, Q. Jones, S. Iranmanesh, and G. Doretto. paper

  4. One-shot unsupervised cross domain translation, in NeurIPS, 2018. S. Benaim and L. Wolf. paper

  5. Multi-content GAN for few-shot font style transfer, in CVPR, 2018. S. Azadi, M. Fisher, V. G. Kim, Z. Wang, E. Shechtman, and T. Darrell. paper code

  6. Feature space transfer for data augmentation, in CVPR, 2018. B. Liu, X. Wang, M. Dixit, R. Kwitt, and N. Vasconcelos. paper

  7. Fine-grained visual categorization using meta-learning optimization with sample selection of auxiliary data, in ECCV, 2018. Y. Zhang, H. Tang, and K. Jia. paper

  8. Few-shot charge prediction with discriminative legal attributes, in COLING, 2018. Z. Hu, X. Li, C. Tu, Z. Liu, and M. Sun. paper

  9. Boosting few-shot visual learning with self-supervision, in ICCV, 2019. S. Gidaris, A. Bursuc, N. Komodakis, P. Pérez, and M. Cord. paper

  10. When does self-supervision improve few-shot learning?, in ECCV, 2020. J. Su, S. Maji, and B. Hariharan. paper

  11. Pareto self-supervised training for few-shot learning, in CVPR, 2021. Z. Chen, J. Ge, H. Zhan, S. Huang, and D. Wang. paper

  12. Bridging multi-task learning and meta-learning: Towards efficient training and effective adaptation, in ICML, 2021. H. Wang, H. Zhao, and B. Li. paper code

  13. Task-level self-supervision for cross-domain few-shot learning, in AAAI, 2022. W. Yuan, Z. Zhang, C. Wang, H. Song, Y. Xie, and L. Ma. paper

  14. Improving few-shot generalization by exploring and exploiting auxiliary data, in NeurIPS, 2023. A. Albalak, C. Raffel, and W. Y. Wang. paper code
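The common thread in the multitask entries above is sharing parameters between a data-rich auxiliary task and the few-shot target task. A minimal sketch of hard parameter sharing (one shared encoder, one head per task); the task names, shapes, and weights are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hard parameter sharing: one shared encoder, one linear head per task.
W_shared = rng.normal(size=(8, 4))              # shared encoder weights
heads = {"few_shot": rng.normal(size=(4, 3)),   # 3-way few-shot target task
         "auxiliary": rng.normal(size=(4, 10))} # data-rich auxiliary task

def forward(x, task):
    h = np.tanh(x @ W_shared)  # representation shared across tasks
    return h @ heads[task]     # task-specific logits

x = rng.normal(size=(2, 8))
print(forward(x, "few_shot").shape)   # → (2, 3)
print(forward(x, "auxiliary").shape)  # → (2, 10)
```

Gradients from the auxiliary task update `W_shared`, so the few-shot head only has to fit its small, task-specific portion of the parameters.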

Embedding/Metric Learning

  1. Object classification from a single example utilizing class relevance metrics, in NeurIPS, 2005. M. Fink. paper

  2. Optimizing one-shot recognition with micro-set learning, in CVPR, 2010. K. D. Tang, M. F. Tappen, R. Sukthankar, and C. H. Lampert. paper

  3. Siamese neural networks for one-shot image recognition, ICML deep learning workshop, 2015. G. Koch, R. Zemel, and R. Salakhutdinov. paper

  4. Matching networks for one shot learning, in NeurIPS, 2016. O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra et al. paper

  5. Learning feed-forward one-shot learners, in NeurIPS, 2016. L. Bertinetto, J. F. Henriques, J. Valmadre, P. Torr, and A. Vedaldi. paper

  6. Few-shot learning through an information retrieval lens, in NeurIPS, 2017. E. Triantafillou, R. Zemel, and R. Urtasun. paper

  7. Prototypical networks for few-shot learning, in NeurIPS, 2017. J. Snell, K. Swersky, and R. S. Zemel. paper code

  8. Attentive recurrent comparators, in ICML, 2017. P. Shyam, S. Gupta, and A. Dukkipati. paper

  9. Learning algorithms for active learning, in ICML, 2017. P. Bachman, A. Sordoni, and A. Trischler. paper

  10. Active one-shot learning, arXiv preprint, 2017. M. Woodward and C. Finn. paper

  11. Structured set matching networks for one-shot part labeling, in CVPR, 2018. J. Choi, J. Krishnamurthy, A. Kembhavi, and A. Farhadi. paper

  12. Low-shot learning from imaginary data, in CVPR, 2018. Y.-X. Wang, R. Girshick, M. Hebert, and B. Hariharan. paper

  13. Learning to compare: Relation network for few-shot learning, in CVPR, 2018. F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales. paper code

  14. Dynamic conditional networks for few-shot learning, in ECCV, 2018. F. Zhao, J. Zhao, S. Yan, and J. Feng. paper code

  15. TADAM: Task dependent adaptive metric for improved few-shot learning, in NeurIPS, 2018. B. Oreshkin, P. R. López, and A. Lacoste. paper

  16. Meta-learning for semi-supervised few-shot classification, in ICLR, 2018. M. Ren, S. Ravi, E. Triantafillou, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle, and R. S. Zemel. paper code

  17. Few-shot learning with graph neural networks, in ICLR, 2018. V. G. Satorras and J. B. Estrach. paper code

  18. A simple neural attentive meta-learner, in ICLR, 2018. N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel. paper

  19. Meta-learning with differentiable closed-form solvers, in ICLR, 2019. L. Bertinetto, J. F. Henriques, P. Torr, and A. Vedaldi. paper

  20. Learning to propagate labels: Transductive propagation network for few-shot learning, in ICLR, 2019. Y. Liu, J. Lee, M. Park, S. Kim, E. Yang, S. Hwang, and Y. Yang. paper code

  21. Multi-level matching and aggregation network for few-shot relation classification, in ACL, 2019. Z.-X. Ye, and Z.-H. Ling. paper

  22. Induction networks for few-shot text classification, in EMNLP-IJCNLP, 2019. R. Geng, B. Li, Y. Li, X. Zhu, P. Jian, and J. Sun. paper

  23. Hierarchical attention prototypical networks for few-shot text classification, in EMNLP-IJCNLP, 2019. S. Sun, Q. Sun, K. Zhou, and T. Lv. paper

  24. Cross attention network for few-shot classification, in NeurIPS, 2019. R. Hou, H. Chang, B. Ma, S. Shan, and X. Chen. paper

  25. Fast and flexible multi-task classification using conditional neural adaptive processes, in NeurIPS, 2019. J. Requeima, J. Gordon, J. Bronskill, S. Nowozin, and R. E. Turner. paper code

  26. Hybrid attention-based prototypical networks for noisy few-shot relation classification, in AAAI, 2019. T. Gao, X. Han, Z. Liu, and M. Sun. paper code

  27. Attention-based multi-context guiding for few-shot semantic segmentation, in AAAI, 2019. T. Hu, P. Yang, C. Zhang, G. Yu, Y. Mu and C. G. M. Snoek. paper

  28. Distribution consistency based covariance metric networks for few-shot learning, in AAAI, 2019. W. Li, L. Wang, J. Xu, J. Huo, Y. Gao and J. Luo. paper

  29. A dual attention network with semantic embedding for few-shot learning, in AAAI, 2019. S. Yan, S. Zhang, and X. He. paper

  30. TapNet: Neural network augmented with task-adaptive projection for few-shot learning, in ICML, 2019. S. W. Yoon, J. Seo, and J. Moon. paper

  31. Prototype propagation networks (PPN) for weakly-supervised few-shot learning on category graph, in IJCAI, 2019. L. Liu, T. Zhou, G. Long, J. Jiang, L. Yao, C. Zhang. paper code

  32. Collect and select: Semantic alignment metric learning for few-shot learning, in ICCV, 2019. F. Hao, F. He, J. Cheng, L. Wang, J. Cao, and D. Tao. paper

  33. Transductive episodic-wise adaptive metric for few-shot learning, in ICCV, 2019. L. Qiao, Y. Shi, J. Li, Y. Wang, T. Huang, and Y. Tian. paper

  34. Few-shot learning with embedded class models and shot-free meta training, in ICCV, 2019. A. Ravichandran, R. Bhotika, and S. Soatto. paper

  35. PARN: Position-aware relation networks for few-shot learning, in ICCV, 2019. Z. Wu, Y. Li, L. Guo, and K. Jia. paper

  36. PANet: Few-shot image semantic segmentation with prototype alignment, in ICCV, 2019. K. Wang, J. H. Liew, Y. Zou, D. Zhou, and J. Feng. paper code

  37. RepMet: Representative-based metric learning for classification and few-shot object detection, in CVPR, 2019. L. Karlinsky, J. Shtok, S. Harary, E. Schwartz, A. Aides, R. Feris, R. Giryes, and A. M. Bronstein. paper code

  38. Edge-labeling graph neural network for few-shot learning, in CVPR, 2019. J. Kim, T. Kim, S. Kim, and C. D. Yoo. paper

  39. Finding task-relevant features for few-shot learning by category traversal, in CVPR, 2019. H. Li, D. Eigen, S. Dodge, M. Zeiler, and X. Wang. paper code

  40. Revisiting local descriptor based image-to-class measure for few-shot learning, in CVPR, 2019. W. Li, L. Wang, J. Xu, J. Huo, Y. Gao, and J. Luo. paper code

  41. TAFE-Net: Task-aware feature embeddings for low shot learning, in CVPR, 2019. X. Wang, F. Yu, R. Wang, T. Darrell, and J. E. Gonzalez. paper code

  42. Improved few-shot visual classification, in CVPR, 2020. P. Bateni, R. Goyal, V. Masrani, F. Wood, and L. Sigal. paper

  43. Boosting few-shot learning with adaptive margin loss, in CVPR, 2020. A. Li, W. Huang, X. Lan, J. Feng, Z. Li, and L. Wang. paper

  44. Adaptive subspaces for few-shot learning, in CVPR, 2020. C. Simon, P. Koniusz, R. Nock, and M. Harandi. paper

  45. DPGN: Distribution propagation graph network for few-shot learning, in CVPR, 2020. L. Yang, L. Li, Z. Zhang, X. Zhou, E. Zhou, and Y. Liu. paper

  46. Few-shot learning via embedding adaptation with set-to-set functions, in CVPR, 2020. H.-J. Ye, H. Hu, D.-C. Zhan, and F. Sha. paper code

  47. DeepEMD: Few-shot image classification with differentiable earth mover's distance and structured classifiers, in CVPR, 2020. C. Zhang, Y. Cai, G. Lin, and C. Shen. paper code

  48. Few-shot text classification with distributional signatures, in ICLR, 2020. Y. Bao, M. Wu, S. Chang, and R. Barzilay. paper code

  49. Learning task-aware local representations for few-shot learning, in IJCAI, 2020. C. Dong, W. Li, J. Huo, Z. Gu, and Y. Gao. paper

  50. SimPropNet: Improved similarity propagation for few-shot image segmentation, in IJCAI, 2020. S. Gairola, M. Hemani, A. Chopra, and B. Krishnamurthy. paper

  51. Asymmetric distribution measure for few-shot learning, in IJCAI, 2020. W. Li, L. Wang, J. Huo, Y. Shi, Y. Gao, and J. Luo. paper

  52. Transductive relation-propagation network for few-shot learning, in IJCAI, 2020. Y. Ma, S. Bai, S. An, W. Liu, A. Liu, X. Zhen, and X. Liu. paper

  53. Weakly supervised few-shot object segmentation using co-attention with visual and semantic embeddings, in IJCAI, 2020. M. Siam, N. Doraiswamy, B. N. Oreshkin, H. Yao, and M. Jägersand. paper

  54. Few-shot learning on graphs via super-classes based on graph spectral measures, in ICLR, 2020. J. Chauhan, D. Nathani, and M. Kaul. paper

  55. SGAP-Net: Semantic-guided attentive prototypes network for few-shot human-object interaction recognition, in AAAI, 2020. Z. Ji, X. Liu, Y. Pang, and X. Li. paper

  56. One-shot image classification by learning to restore prototypes, in AAAI, 2020. W. Xue, and W. Wang. paper

  57. Negative margin matters: Understanding margin in few-shot classification, in ECCV, 2020. B. Liu, Y. Cao, Y. Lin, Q. Li, Z. Zhang, M. Long, and H. Hu. paper code

  58. Prototype rectification for few-shot learning, in ECCV, 2020. J. Liu, L. Song, and Y. Qin. paper

  59. Rethinking few-shot image classification: A good embedding is all you need?, in ECCV, 2020. Y. Tian, Y. Wang, D. Krishnan, J. B. Tenenbaum, and P. Isola. paper code

  60. SEN: A novel feature normalization dissimilarity measure for prototypical few-shot learning networks, in ECCV, 2020. V. N. Nguyen, S. Løkse, K. Wickstrøm, M. Kampffmeyer, D. Roverso, and R. Jenssen. paper

  61. TAFSSL: Task-adaptive feature sub-space learning for few-shot classification, in ECCV, 2020. M. Lichtenstein, P. Sattigeri, R. Feris, R. Giryes, and L. Karlinsky. paper

  62. Attentive prototype few-shot learning with capsule network-based embedding, in ECCV, 2020. F. Wu, J. S.Smith, W. Lu, C. Pang, and B. Zhang. paper

  63. Embedding propagation: Smoother manifold for few-shot classification, in ECCV, 2020. P. Rodríguez, I. Laradji, A. Drouin, and A. Lacoste. paper code

  64. Laplacian regularized few-shot learning, in ICML, 2020. I. M. Ziko, J. Dolz, E. Granger, and I. B. Ayed. paper code

  65. TAdaNet: Task-adaptive network for graph-enriched meta-learning, in KDD, 2020. Q. Suo, J. Chou, W. Zhong, and A. Zhang. paper

  66. Concept learners for few-shot learning, in ICLR, 2021. K. Cao, M. Brbic, and J. Leskovec. paper

  67. Reinforced attention for few-shot learning and beyond, in CVPR, 2021. J. Hong, P. Fang, W. Li, T. Zhang, C. Simon, M. Harandi, and L. Petersson. paper

  68. Mutual CRF-GNN for few-shot learning, in CVPR, 2021. S. Tang, D. Chen, L. Bai, K. Liu, Y. Ge, and W. Ouyang. paper

  69. Few-shot classification with feature map reconstruction networks, in CVPR, 2021. D. Wertheimer, L. Tang, and B. Hariharan. paper code

  70. ECKPN: Explicit class knowledge propagation network for transductive few-shot learning, in CVPR, 2021. C. Chen, X. Yang, C. Xu, X. Huang, and Z. Ma. paper

  71. Exploring complementary strengths of invariant and equivariant representations for few-shot learning, in CVPR, 2021. M. N. Rizve, S. Khan, F. S. Khan, and M. Shah. paper

  72. Rethinking class relations: Absolute-relative supervised and unsupervised few-shot learning, in CVPR, 2021. H. Zhang, P. Koniusz, S. Jian, H. Li, and P. H. S. Torr. paper

  73. Unsupervised embedding adaptation via early-stage feature reconstruction for few-shot classification, in ICML, 2021. D. H. Lee, and S. Chung. paper code

  74. Learning a few-shot embedding model with contrastive learning, in AAAI, 2021. C. Liu, Y. Fu, C. Xu, S. Yang, J. Li, C. Wang, and L. Zhang. paper

  75. Looking wider for better adaptive representation in few-shot learning, in AAAI, 2021. J. Zhao, Y. Yang, X. Lin, J. Yang, and L. He. paper

  76. Tailoring embedding function to heterogeneous few-shot tasks by global and local feature adaptors, in AAAI, 2021. S. Lu, H. Ye, and D.-C. Zhan. paper

  77. Knowledge guided metric learning for few-shot text classification, in NAACL-HLT, 2021. D. Sui, Y. Chen, B. Mao, D. Qiu, K. Liu, and J. Zhao. paper

  78. Mixture-based feature space learning for few-shot image classification, in ICCV, 2021. A. Afrasiyabi, J. Lalonde, and C. Gagné. paper

  79. Z-score normalization, hubness, and few-shot learning, in ICCV, 2021. N. Fei, Y. Gao, Z. Lu, and T. Xiang. paper

  80. Relational embedding for few-shot classification, in ICCV, 2021. D. Kang, H. Kwon, J. Min, and M. Cho. paper code

  81. Transductive few-shot classification on the oblique manifold, in ICCV, 2021. G. Qi, H. Yu, Z. Lu, and S. Li. paper code

  82. Curvature generation in curved spaces for few-shot learning, in ICCV, 2021. Z. Gao, Y. Wu, Y. Jia, and M. Harandi. paper

  83. On episodes, prototypical networks, and few-shot learning, in NeurIPS, 2021. S. Laenen, and L. Bertinetto. paper

  84. Few-shot learning as cluster-induced voronoi diagrams: A geometric approach, in ICLR, 2022. C. Ma, Z. Huang, M. Gao, and J. Xu. paper code

  85. Few-shot learning with siamese networks and label tuning, in ACL, 2022. T. Müller, G. Pérez-Torró, and M. Franco-Salvador. paper code

  86. Learning to affiliate: Mutual centralized learning for few-shot classification, in CVPR, 2022. Y. Liu, W. Zhang, C. Xiang, T. Zheng, D. Cai, and X. He. paper

  87. Matching feature sets for few-shot image classification, in CVPR, 2022. A. Afrasiyabi, H. Larochelle, J. Lalonde, and C. Gagné. paper code

  88. Joint distribution matters: Deep Brownian distance covariance for few-shot classification, in CVPR, 2022. J. Xie, F. Long, J. Lv, Q. Wang, and P. Li. paper

  89. CAD: Co-adapting discriminative features for improved few-shot classification, in CVPR, 2022. P. Chikontwe, S. Kim, and S. H. Park. paper

  90. Ranking distance calibration for cross-domain few-shot learning, in CVPR, 2022. P. Li, S. Gong, C. Wang, and Y. Fu. paper

  91. EASE: Unsupervised discriminant subspace learning for transductive few-shot learning, in CVPR, 2022. H. Zhu, and P. Koniusz. paper code

  92. Cross-domain few-shot learning with task-specific adapters, in CVPR, 2022. W. Li, X. Liu, and H. Bilen. paper code

  93. Hyperbolic knowledge transfer with class hierarchy for few-shot learning, in IJCAI, 2022. B. Zhang, H. Jiang, S. Feng, X. Li, Y. Ye, and R. Ye. paper

  94. Better embedding and more shots for few-shot learning, in IJCAI, 2022. Z. Chi, Z. Wang, M. Yang, W. Guo, and X. Xu. paper

  95. A closer look at prototype classifier for few-shot image classification, in NeurIPS, 2022. M. Hou, and I. Sato. paper

  96. Rethinking generalization in few-shot classification, in NeurIPS, 2022. M. Hiller, R. Ma, M. Harandi, and T. Drummond. paper code

  97. DMN4: Few-shot learning via discriminative mutual nearest neighbor neural network, in AAAI, 2022. Y. Liu, T. Zheng, J. Song, D. Cai, and X. He. paper

  98. Hybrid graph neural networks for few-shot learning, in AAAI, 2022. T. Yu, S. He, Y.-Z. Song, and T. Xiang. paper code

  99. Adaptive Poincaré point-to-set distance for few-shot classification, in AAAI, 2022. R. Ma, P. Fang, T. Drummond, and M. Harandi. paper

  100. Hubs and hyperspheres: Reducing hubness and improving transductive few-shot learning with hyperspherical embeddings, in CVPR, 2023. D. J. Trosten, R. Chakraborty, S. Løkse, K. K. Wickstrøm, R. Jenssen, and M. C. Kampffmeyer. paper code

  101. Revisiting prototypical network for cross domain few-shot learning, in CVPR, 2023. F. Zhou, P. Wang, L. Zhang, W. Wei, and Y. Zhang. paper code

  102. Transductive few-shot learning with prototype-based label propagation by iterative graph refinement, in CVPR, 2023. H. Zhu, and P. Koniusz. paper code

  103. Few-shot classification via ensemble learning with multi-order statistics, in IJCAI, 2023. S. Yang, F. Liu, D. Chen, and J. Zhou. paper

  104. Few-sample feature selection via feature manifold learning, in ICML, 2023. D. Cohen, T. Shnitzer, Y. Kluger, and R. Talmon. paper code

  105. Interval bound interpolation for few-shot learning with few tasks, in ICML, 2023. S. Datta, S. S. Mullick, A. Chakrabarty, and S. Das. paper code

  106. A closer look at few-shot classification again, in ICML, 2023. X. Luo, H. Wu, J. Zhang, L. Gao, J. Xu, and J. Song. paper code

  107. TART: Improved few-shot text classification using task-adaptive reference transformation, in ACL, 2023. S. Lei, X. Zhang, J. He, F. Chen, and C.-T. Lu. paper code

  108. Class-aware patch embedding adaptation for few-shot image classification, in ICCV, 2023. F. Hao, F. He, L. Liu, F. Wu, D. Tao, and J. Cheng. paper code

  109. Frequency guidance matters in few-shot learning, in ICCV, 2023. H. Cheng, S. Yang, J. T. Zhou, L. Guo, and B. Wen. paper

  110. Prototypes-oriented transductive few-shot learning with conditional transport, in ICCV, 2023. L. Tian, J. Feng, X. Chai, W. Chen, L. Wang, X. Liu, and B. Chen. paper code

  111. Understanding few-shot learning: Measuring task relatedness and adaptation difficulty via attributes, in NeurIPS, 2023. M. Hu, H. Chang, Z. Guo, B. Ma, S. Shan, and X. Chen. paper code

  112. DiffKendall: A novel approach for few-shot learning with differentiable Kendall's rank correlation, in NeurIPS, 2023. K. Zheng, H. Zhang, and W. Huang. paper

  113. Compositional prototypical networks for few-shot classification, in AAAI, 2023. Q. Lyu, and W. Wang. paper code

  114. FoPro: Few-shot guided robust webly-supervised prototypical learning, in AAAI, 2023. Y. Qin, X. Chen, C. Chen, Y. Shen, B. Ren, Y. Gu, J. Yang, and C. Shen. paper code

  115. SpatialFormer: Semantic and target aware attentions for few-shot learning, in AAAI, 2023. J. Lai, S. Yang, W. Wu, T. Wu, G. Jiang, X. Wang, J. Liu, B.-B. Gao, W. Zhang, Y. Xie, and C. Wang. paper

  116. RankDNN: Learning to rank for few-shot learning, in AAAI, 2023. Q. Guo, H. Gong, X. Wei, Y. Fu, Y. Yu, W. Zhang, and W. Ge. paper code

  117. Boosting few-shot text classification via distribution estimation, in AAAI, 2023. H. Liu, F. Zhang, X. Zhang, S. Zhao, F. Ma, X.-M. Wu, H. Chen, H. Yu, and X. Zhang. paper

  118. Feature distribution fitting with direction-driven weighting for few-shot images classification, in AAAI, 2023. X. Wei, W. Du, H. Wan, and W. Min. paper

  119. Exploring tuning characteristics of ventral stream's neurons for few-shot image classification, in AAAI, 2023. L. Dong, W. Zhai, and Z.-J. Zha. paper

  120. RAPL: A relation-aware prototype learning approach for few-shot document-level relation extraction, in EMNLP, 2023. S. Meng, X. Hu, A. Liu, S. Li, F. Ma, Y. Yang, and L. Wen. paper code

  121. BECLR: Batch enhanced contrastive few-shot learning, in ICLR, 2024. S. Poulakakis-Daktylidis, and H. Jamali-Rad. paper code
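Many methods in this section refine the prototype idea of prototypical networks (entry 7): embed the support set, average per class, and classify queries by the nearest prototype. A minimal NumPy sketch of that episodic classification rule, with toy 2-D embeddings standing in for a learned encoder:

```python
import numpy as np

def prototypes(support_emb, support_lbl, n_way):
    """Mean embedding per class over the support set."""
    return np.stack([support_emb[support_lbl == c].mean(axis=0)
                     for c in range(n_way)])

def classify(query_emb, protos):
    """Assign each query to the nearest prototype
    (squared Euclidean distance, as in prototypical networks)."""
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Toy 2-way 2-shot episode with 2-D embeddings.
sup = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
lbl = np.array([0, 0, 1, 1])
protos = prototypes(sup, lbl, n_way=2)
print(classify(np.array([[0., 0.5], [5., 5.5]]), protos))  # → [0 1]
```

The variants listed above mostly change the embedding function, the distance, or the prototype estimate, while this inference rule stays the same.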

Learning with External Memory

  1. Meta-learning with memory-augmented neural networks, in ICML, 2016. A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. paper

  2. Few-shot object recognition from machine-labeled web images, in CVPR, 2017. Z. Xu, L. Zhu, and Y. Yang. paper

  3. Learning to remember rare events, in ICLR, 2017. Ł. Kaiser, O. Nachum, A. Roy, and S. Bengio. paper

  4. Meta networks, in ICML, 2017. T. Munkhdalai and H. Yu. paper

  5. Memory matching networks for one-shot image recognition, in CVPR, 2018. Q. Cai, Y. Pan, T. Yao, C. Yan, and T. Mei. paper

  6. Compound memory networks for few-shot video classification, in ECCV, 2018. L. Zhu and Y. Yang. paper

  7. Memory, show the way: Memory based few shot word representation learning, in EMNLP, 2018. J. Sun, S. Wang, and C. Zong. paper

  8. Rapid adaptation with conditionally shifted neurons, in ICML, 2018. T. Munkhdalai, X. Yuan, S. Mehri, and A. Trischler. paper

  9. Adaptive posterior learning: Few-shot learning with a surprise-based memory module, in ICLR, 2019. T. Ramalho and M. Garnelo. paper code

  10. Coloring with limited data: Few-shot colorization via memory augmented networks, in CVPR, 2019. S. Yoo, H. Bahng, S. Chung, J. Lee, J. Chang, and J. Choo. paper

  11. ACMM: Aligned cross-modal memory for few-shot image and sentence matching, in ICCV, 2019. Y. Huang, and L. Wang. paper

  12. Dynamic memory induction networks for few-shot text classification, in ACL, 2020. R. Geng, B. Li, Y. Li, J. Sun, and X. Zhu. paper

  13. Few-shot visual learning with contextual memory and fine-grained calibration, in IJCAI, 2020. Y. Ma, W. Liu, S. Bai, Q. Zhang, A. Liu, W. Chen, and X. Liu. paper

  14. Learn from concepts: Towards the purified memory for few-shot learning, in IJCAI, 2021. X. Liu, X. Tian, S. Lin, Y. Qu, L. Ma, W. Yuan, Z. Zhang, and Y. Xie. paper

  15. Prototype memory and attention mechanisms for few shot image generation, in ICLR, 2022. T. Li, Z. Li, A. Luo, H. Rockwell, A. B. Farimani, and T. S. Lee. paper code

  16. Hierarchical variational memory for few-shot learning across domains, in ICLR, 2022. Y. Du, X. Zhen, L. Shao, and C. G. M. Snoek. paper code

  17. Remember the difference: Cross-domain few-shot semantic segmentation via meta-memory transfer, in CVPR, 2022. W. Wang, L. Duan, Y. Wang, Q. En, J. Fan, and Z. Zhang. paper

  18. Consistent prototype learning for few-shot continual relation extraction, in ACL, 2023. X. Chen, H. Wu, and X. Shi. paper code

  19. Few-shot generation via recalling brain-inspired episodic-semantic memory, in NeurIPS, 2023. Z. Duan, Z. Lv, C. Wang, B. Chen, B. An, and M. Zhou. paper
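A common thread in the memory-based methods above is a key-value store read by soft attention: support embeddings are written as keys, their labels as values, and a query is classified by a similarity-weighted vote over memory. A minimal sketch of this read/write cycle (illustrative only; `write_memory`, `read_memory`, and the toy embeddings are ours, not any single paper's method):

```python
import math

def _norm(v):
    return math.sqrt(sum(x * x for x in v))

def write_memory(embeddings, labels):
    # Keys are the support embeddings; values are their class labels.
    return list(embeddings), list(labels)

def read_memory(query, keys, values, num_classes, temperature=1.0):
    # Softmax over cosine similarities to the stored keys, then
    # accumulate the attention mass per class label.
    sims = [sum(q * k for q, k in zip(query, key)) / (_norm(query) * _norm(key))
            for key in keys]
    exps = [math.exp(s / temperature) for s in sims]
    total = sum(exps)
    probs = [0.0] * num_classes
    for weight, label in zip(exps, values):
        probs[label] += weight / total
    return probs

keys, values = write_memory([(1.0, 0.0), (0.0, 1.0)], [0, 1])
probs = read_memory((0.9, 0.1), keys, values, num_classes=2)
```

Real systems replace the raw embeddings with learned encoders and add write/forget rules, but the attention-based read is the shared core.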

Generative Modeling

  1. One-shot learning of object categories, TPAMI, 2006. L. Fei-Fei, R. Fergus, and P. Perona. paper

  2. Learning to learn with compound HD models, in NeurIPS, 2011. A. Torralba, J. B. Tenenbaum, and R. R. Salakhutdinov. paper

  3. One-shot learning with a hierarchical nonparametric bayesian model, in ICML Workshop on Unsupervised and Transfer Learning, 2012. R. Salakhutdinov, J. Tenenbaum, and A. Torralba. paper

  4. Human-level concept learning through probabilistic program induction, Science, 2015. B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. paper

  5. One-shot generalization in deep generative models, in ICML, 2016. D. Rezende, I. Danihelka, K. Gregor, and D. Wierstra. paper

  6. One-shot video object segmentation, in CVPR, 2017. S. Caelles, K.-K. Maninis, J. Pont-Tuset, L. Leal-Taixé, D. Cremers, and L. Van Gool. paper

  7. Towards a neural statistician, in ICLR, 2017. H. Edwards and A. Storkey. paper

  8. Extending a parser to distant domains using a few dozen partially annotated examples, in ACL, 2018. V. Joshi, M. Peters, and M. Hopkins. paper

  9. MetaGAN: An adversarial approach to few-shot learning, in NeurIPS, 2018. R. Zhang, T. Che, Z. Ghahramani, Y. Bengio, and Y. Song. paper

  10. Few-shot autoregressive density estimation: Towards learning to learn distributions, in ICLR, 2018. S. Reed, Y. Chen, T. Paine, A. van den Oord, S. M. A. Eslami, D. Rezende, O. Vinyals, and N. de Freitas. paper

  11. The variational homoencoder: Learning to learn high capacity generative models from few examples, in UAI, 2018. L. B. Hewitt, M. I. Nye, A. Gane, T. Jaakkola, and J. B. Tenenbaum. paper

  12. Meta-learning probabilistic inference for prediction, in ICLR, 2019. J. Gordon, J. Bronskill, M. Bauer, S. Nowozin, and R. Turner. paper

  13. Variational prototyping-encoder: One-shot learning with prototypical images, in CVPR, 2019. J. Kim, T.-H. Oh, S. Lee, F. Pan, and I. S. Kweon. paper code

  14. Variational few-shot learning, in ICCV, 2019. J. Zhang, C. Zhao, B. Ni, M. Xu, and X. Yang. paper

  15. Infinite mixture prototypes for few-shot learning, in ICML, 2019. K. Allen, E. Shelhamer, H. Shin, and J. Tenenbaum. paper

  16. Dual variational generation for low shot heterogeneous face recognition, in NeurIPS, 2019. C. Fu, X. Wu, Y. Hu, H. Huang, and R. He. paper

  17. Bayesian meta sampling for fast uncertainty adaptation, in ICLR, 2020. Z. Wang, Y. Zhao, P. Yu, R. Zhang, and C. Chen. paper

  18. Empirical Bayes transductive meta-learning with synthetic gradients, in ICLR, 2020. S. X. Hu, P. G. Moreno, Y. Xiao, X. Shen, G. Obozinski, N. D. Lawrence, and A. C. Damianou. paper

  19. Few-shot relation extraction via bayesian meta-learning on relation graphs, in ICML, 2020. M. Qu, T. Gao, L. A. C. Xhonneux, and J. Tang. paper code

  20. Interventional few-shot learning, in NeurIPS, 2020. Z. Yue, H. Zhang, Q. Sun, and X. Hua. paper code

  21. Bayesian few-shot classification with one-vs-each pólya-gamma augmented gaussian processes, in ICLR, 2021. J. Snell, and R. Zemel. paper

  22. Few-shot Bayesian optimization with deep kernel surrogates, in ICLR, 2021. M. Wistuba, and J. Grabocka. paper

  23. A hierarchical transformation-discriminating generative model for few shot anomaly detection, in ICCV, 2021. S. Sheynin, S. Benaim, and L. Wolf. paper

  24. Reinforced few-shot acquisition function learning for Bayesian optimization, in NeurIPS, 2021. B. Hsieh, P. Hsieh, and X. Liu. paper

  25. GanOrCon: Are generative models useful for few-shot segmentation?, in CVPR, 2022. O. Saha, Z. Cheng, and S. Maji. paper

  26. Few shot generative model adaption via relaxed spatial structural alignment, in CVPR, 2022. J. Xiao, L. Li, C. Wang, Z. Zha, and Q. Huang. paper

  27. SCHA-VAE: Hierarchical context aggregation for few-shot generation, in ICML, 2022. G. Giannone, and O. Winther. paper code

  28. Diversity vs. Recognizability: Human-like generalization in one-shot generative models, in NeurIPS, 2022. V. Boutin, L. Singhal, X. Thomas, and T. Serre. paper code

  29. Generalized one-shot domain adaptation of generative adversarial networks, in NeurIPS, 2022. Z. Zhang, Y. Liu, C. Han, T. Guo, T. Yao, and T. Mei. paper code

  30. Towards diverse and faithful one-shot adaption of generative adversarial networks, in NeurIPS, 2022. Y. Zhang, M. Yao, Y. Wei, Z. Ji, J. Bai, and W. Zuo. paper code

  31. Few-shot cross-domain image generation via inference-time latent-code learning, in ICLR, 2023. A. K. Mondal, P. Tiwary, P. Singla, and P. AP. paper code

  32. Adaptive IMLE for few-shot pretraining-free generative modelling, in ICML, 2023. M. Aghabozorgi, S. Peng, and K. Li. paper code

  33. Diversity-enhancing generative network for few-shot hypothesis adaptation, in ICML, 2023. R. Dong, F. Liu, H. Chi, T. Liu, M. Gong, G. Niu, M. Sugiyama, and B. Han. paper

  34. MetaModulation: Learning variational feature hierarchies for few-shot learning with fewer tasks, in ICML, 2023. W. Sun, Y. Du, X. Zhen, F. Wang, L. Wang, and C. G. M. Snoek. paper code

  35. Revisiting logistic-softmax likelihood in bayesian meta-learning for few-shot classification, in NeurIPS, 2023. T. Ke, H. Cao, Z. Ling, and F. Zhou. paper code

  36. Human-like few-shot learning via bayesian reasoning over natural language, in NeurIPS, 2023. K. Ellis. paper code

  37. CMVAE: Causal meta VAE for unsupervised meta-learning, in AAAI, 2023. G. Qi, and H. Yu. paper code

  38. Progressive few-shot adaptation of generative model with align-free spatial correlation, in AAAI, 2023. J. Moon, H. Kim, and J.-P. Heo. paper
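At their simplest, many of the generative approaches above amount to estimating a class-conditional density p(x | y) from a handful of examples and classifying by Bayes' rule. A toy sketch with one shared spherical Gaussian per class (our illustration under those assumptions, not a specific paper's model):

```python
def fit_class_means(support):
    # support: {label: [feature vectors]} -> one estimated mean per class.
    means = {}
    for label, xs in support.items():
        dim = len(xs[0])
        means[label] = [sum(x[d] for x in xs) / len(xs) for d in range(dim)]
    return means

def classify(query, means, sigma=1.0):
    # With equal priors and a shared spherical covariance, Bayes' rule
    # reduces to log p(x | y) proportional to -||x - mu_y||^2 / (2 * sigma^2).
    scores = {label: -sum((q - m) ** 2 for q, m in zip(query, mu)) / (2 * sigma ** 2)
              for label, mu in means.items()}
    return max(scores, key=scores.get)

means = fit_class_means({0: [(0.0, 0.1), (0.2, -0.1)], 1: [(1.0, 1.1), (0.8, 0.9)]})
pred = classify((0.1, 0.0), means)
```

The papers in this section replace this toy density with far richer models (hierarchical Bayes, VAEs, GANs, Gaussian processes), but the generative route to classification is the same.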

Refining Existing Parameters

  1. Cross-generalization: Learning novel classes from a single example by feature replacement, in CVPR, 2005. E. Bart and S. Ullman. paper

  2. One-shot adaptation of supervised deep convolutional models, in ICLR, 2013. J. Hoffman, E. Tzeng, J. Donahue, Y. Jia, K. Saenko, and T. Darrell. paper

  3. Learning to learn: Model regression networks for easy small sample learning, in ECCV, 2016. Y.-X. Wang and M. Hebert. paper

  4. Learning from small sample sets by combining unsupervised meta-training with CNNs, in NeurIPS, 2016. Y.-X. Wang and M. Hebert. paper

  5. Efficient k-shot learning with regularized deep networks, in AAAI, 2018. D. Yoo, H. Fan, V. N. Boddeti, and K. M. Kitani. paper

  6. CLEAR: Cumulative learning for one-shot one-class image recognition, in CVPR, 2018. J. Kozerawski and M. Turk. paper

  7. Learning structure and strength of CNN filters for small sample size training, in CVPR, 2018. R. Keshari, M. Vatsa, R. Singh, and A. Noore. paper

  8. Dynamic few-shot visual learning without forgetting, in CVPR, 2018. S. Gidaris and N. Komodakis. paper code

  9. Low-shot learning with imprinted weights, in CVPR, 2018. H. Qi, M. Brown, and D. G. Lowe. paper

  10. Neural voice cloning with a few samples, in NeurIPS, 2018. S. Arik, J. Chen, K. Peng, W. Ping, and Y. Zhou. paper

  11. Text classification with few examples using controlled generalization, in NAACL-HLT, 2019. A. Mahabal, J. Baldridge, B. K. Ayan, V. Perot, and D. Roth. paper

  12. Low shot box correction for weakly supervised object detection, in IJCAI, 2019. T. Pan, B. Wang, G. Ding, J. Han, and J. Yong. paper

  13. Diversity with cooperation: Ensemble methods for few-shot classification, in ICCV, 2019. N. Dvornik, C. Schmid, and J. Mairal. paper

  14. Few-shot image recognition with knowledge transfer, in ICCV, 2019. Z. Peng, Z. Li, J. Zhang, Y. Li, G.-J. Qi, and J. Tang. paper

  15. Generating classification weights with gnn denoising autoencoders for few-shot learning, in CVPR, 2019. S. Gidaris, and N. Komodakis. paper code

  16. Dense classification and implanting for few-shot learning, in CVPR, 2019. Y. Lifchitz, Y. Avrithis, S. Picard, and A. Bursuc. paper

  17. Few-shot adaptive faster R-CNN, in CVPR, 2019. T. Wang, X. Zhang, L. Yuan, and J. Feng. paper

  18. TransMatch: A transfer-learning scheme for semi-supervised few-shot learning, in CVPR, 2020. Z. Yu, L. Chen, Z. Cheng, and J. Luo. paper

  19. Learning to select base classes for few-shot classification, in CVPR, 2020. L. Zhou, P. Cui, X. Jia, S. Yang, and Q. Tian. paper

  20. Few-shot NLG with pre-trained language model, in ACL, 2020. Z. Chen, H. Eavani, W. Chen, Y. Liu, and W. Y. Wang. paper code

  21. Span-ConveRT: Few-shot span extraction for dialog with pretrained conversational representations, in ACL, 2020. S. Coope, T. Farghly, D. Gerz, I. Vulic, and M. Henderson. paper

  22. Structural supervision improves few-shot learning and syntactic generalization in neural language models, in EMNLP, 2020. E. Wilcox, P. Qian, R. Futrell, R. Kohita, R. Levy, and M. Ballesteros. paper code

  23. A baseline for few-shot image classification, in ICLR, 2020. G. S. Dhillon, P. Chaudhari, A. Ravichandran, and S. Soatto. paper

  24. Cross-domain few-shot classification via learned feature-wise transformation, in ICLR, 2020. H. Tseng, H. Lee, J. Huang, and M. Yang. paper code

  25. Graph few-shot learning via knowledge transfer, in AAAI, 2020. H. Yao, C. Zhang, Y. Wei, M. Jiang, S. Wang, J. Huang, N. V. Chawla, and Z. Li. paper

  26. Context-Transformer: Tackling object confusion for few-shot detection, in AAAI, 2020. Z. Yang, Y. Wang, X. Chen, J. Liu, and Y. Qiao. paper

  27. A broader study of cross-domain few-shot learning, in ECCV, 2020. Y. Guo, N. C. Codella, L. Karlinsky, J. V. Codella, J. R. Smith, K. Saenko, T. Rosing, and R. Feris. paper code

  28. Selecting relevant features from a multi-domain representation for few-shot classification, in ECCV, 2020. N. Dvornik, C. Schmid, and J. Mairal. paper code

  29. Prototype completion with primitive knowledge for few-shot learning, in CVPR, 2021. B. Zhang, X. Li, Y. Ye, Z. Huang, and L. Zhang. paper code

  30. Partial is better than all: Revisiting fine-tuning strategy for few-shot learning, in AAAI, 2021. Z. Shen, Z. Liu, J. Qin, M. Savvides, and K.-T. Cheng. paper

  31. PTN: A poisson transfer network for semi-supervised few-shot learning, in AAAI, 2021. H. Huang, J. Zhang, J. Zhang, Q. Wu, and C. Xu. paper

  32. A universal representation transformer layer for few-shot image classification, in ICLR, 2021. L. Liu, W. L. Hamilton, G. Long, J. Jiang, and H. Larochelle. paper

  33. Making pre-trained language models better few-shot learners, in ACL-IJCNLP, 2021. T. Gao, A. Fisch, and D. Chen. paper code

  34. Self-supervised network evolution for few-shot classification, in IJCAI, 2021. X. Tang, Z. Teng, B. Zhang, and J. Fan. paper

  35. Calibrate before use: Improving few-shot performance of language models, in ICML, 2021. Z. Zhao, E. Wallace, S. Feng, D. Klein, and S. Singh. paper code

  36. Language models are few-shot learners, in NeurIPS, 2020. T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. paper

  37. It's not just size that matters: Small language models are also few-shot learners, in NAACL-HLT, 2021. T. Schick, and H. Schütze. paper code

  38. Self-training improves pre-training for few-shot learning in task-oriented dialog systems, in EMNLP, 2021. F. Mi, W. Zhou, L. Kong, F. Cai, M. Huang, and B. Faltings. paper

  39. Few-shot intent detection via contrastive pre-training and fine-tuning, in EMNLP, 2021. J. Zhang, T. Bui, S. Yoon, X. Chen, Z. Liu, C. Xia, Q. H. Tran, W. Chang, and P. S. Yu. paper code

  40. Avoiding inference heuristics in few-shot prompt-based finetuning, in EMNLP, 2021. P. A. Utama, N. S. Moosavi, V. Sanh, and I. Gurevych. paper code

  41. Constrained language models yield few-shot semantic parsers, in EMNLP, 2021. R. Shin, C. H. Lin, S. Thomson, C. Chen, S. Roy, E. A. Platanios, A. Pauls, D. Klein, J. Eisner, and B. V. Durme. paper code

  42. Revisiting self-training for few-shot learning of language model, in EMNLP, 2021. Y. Chen, Y. Zhang, C. Zhang, G. Lee, R. Cheng, and H. Li. paper code

  43. Language models are few-shot butlers, in EMNLP, 2021. V. Micheli, and F. Fleuret. paper code

  44. FewshotQA: A simple framework for few-shot learning of question answering tasks using pre-trained text-to-text models, in EMNLP, 2021. R. Chada, and P. Natarajan. paper

  45. TransPrompt: Towards an automatic transferable prompting framework for few-shot text classification, in EMNLP, 2021. C. Wang, J. Wang, M. Qiu, J. Huang, and M. Gao. paper

  46. Meta distant transfer learning for pre-trained language models, in EMNLP, 2021. C. Wang, H. Pan, M. Qiu, J. Huang, F. Yang, and Y. Zhang. paper

  47. STraTA: Self-training with task augmentation for better few-shot learning, in EMNLP, 2021. T. Vu, M. Luong, Q. V. Le, G. Simon, and M. Iyyer. paper code

  48. Few-shot image classification: Just use a library of pre-trained feature extractors and a simple classifier, in ICCV, 2021. A. Chowdhury, M. Jiang, S. Chaudhuri, and C. Jermaine. paper code

  49. On the importance of distractors for few-shot classification, in ICCV, 2021. R. Das, Y. Wang, and J. M. F. Moura. paper code

  50. A multi-mode modulator for multi-domain few-shot classification, in ICCV, 2021. Y. Liu, J. Lee, L. Zhu, L. Chen, H. Shi, and Y. Yang. paper

  51. Universal representation learning from multiple domains for few-shot classification, in ICCV, 2021. W. Li, X. Liu, and H. Bilen. paper code

  52. Boosting the generalization capability in cross-domain few-shot learning via noise-enhanced supervised autoencoder, in ICCV, 2021. H. Liang, Q. Zhang, P. Dai, and J. Lu. paper

  53. How fine-tuning allows for effective meta-learning, in NeurIPS, 2021. K. Chua, Q. Lei, and J. D. Lee. paper

  54. Multimodal few-shot learning with frozen language models, in NeurIPS, 2021. M. Tsimpoukelli, J. Menick, S. Cabi, S. M. A. Eslami, O. Vinyals, and F. Hill. paper

  55. Grad2Task: Improved few-shot text classification using gradients for task representation, in NeurIPS, 2021. J. Wang, K. Wang, F. Rudzicz, and M. Brudno. paper

  56. True few-shot learning with language models, in NeurIPS, 2021. E. Perez, D. Kiela, and K. Cho. paper

  57. POODLE: Improving few-shot learning via penalizing out-of-distribution samples, in NeurIPS, 2021. D. Le, K. Nguyen, Q. Tran, R. Nguyen, and B. Hua. paper

  58. TOHAN: A one-step approach towards few-shot hypothesis adaptation, in NeurIPS, 2021. H. Chi, F. Liu, W. Yang, L. Lan, T. Liu, B. Han, W. Cheung, and J. Kwok. paper

  59. Task affinity with maximum bipartite matching in few-shot learning, in ICLR, 2022. C. P. Le, J. Dong, M. Soltani, and V. Tarokh. paper

  60. Differentiable prompt makes pre-trained language models better few-shot learners, in ICLR, 2022. N. Zhang, L. Li, X. Chen, S. Deng, Z. Bi, C. Tan, F. Huang, and H. Chen. paper code

  61. ConFeSS: A framework for single source cross-domain few-shot learning, in ICLR, 2022. D. Das, S. Yun, and F. Porikli. paper

  62. Switch to generalize: Domain-switch learning for cross-domain few-shot classification, in ICLR, 2022. Z. Hu, Y. Sun, and Y. Yang. paper

  63. LM-BFF-MS: Improving few-shot fine-tuning of language models based on multiple soft demonstration memory, in ACL, 2022. E. Park, D. H. Jeon, S. Kim, I. Kang, and S. Na. paper code

  64. Meta-learning via language model in-context tuning, in ACL, 2022. Y. Chen, R. Zhong, S. Zha, G. Karypis, and H. He. paper code

  65. Few-shot tabular data enrichment using fine-tuned transformer architectures, in ACL, 2022. A. Harari, and G. Katz. paper

  66. Noisy channel language model prompting for few-shot text classification, in ACL, 2022. S. Min, M. Lewis, H. Hajishirzi, and L. Zettlemoyer. paper code

  67. Prompt for extraction? PAIE: Prompting argument interaction for event argument extraction, in ACL, 2022. Y. Ma, Z. Wang, Y. Cao, M. Li, M. Chen, K. Wang, and J. Shao. paper code

  68. Are prompt-based models clueless?, in ACL, 2022. P. Kavumba, R. Takahashi, and Y. Oda. paper

  69. Prototypical verbalizer for prompt-based few-shot tuning, in ACL, 2022. G. Cui, S. Hu, N. Ding, L. Huang, and Z. Liu. paper code

  70. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity, in ACL, 2022. Y. Lu, M. Bartolo, A. Moore, S. Riedel, and P. Stenetorp. paper

  71. PPT: Pre-trained prompt tuning for few-shot learning, in ACL, 2022. Y. Gu, X. Han, Z. Liu, and M. Huang. paper code

  72. ASCM: An answer space clustered prompting method without answer engineering, in Findings of ACL, 2022. Z. Wang, Y. Yang, Z. Xi, B. Ma, L. Wang, R. Dong, and A. Anwar. paper code

  73. Exploiting language model prompts using similarity measures: A case study on the word-in-context task, in ACL, 2022. M. Tabasi, K. Rezaee, and M. T. Pilehvar. paper

  74. P-Tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks, in ACL, 2022. X. Liu, K. Ji, Y. Fu, W. Tam, Z. Du, Z. Yang, and J. Tang. paper

  75. Cutting down on prompts and parameters: Simple few-shot learning with language models, in Findings of ACL, 2022. R. L. Logan IV, I. Balazevic, E. Wallace, F. Petroni, S. Singh, and S. Riedel. paper code

  76. Prompt-free and efficient few-shot learning with language models, in ACL, 2022. R. K. Mahabadi, L. Zettlemoyer, J. Henderson, L. Mathias, M. Saeidi, V. Stoyanov, and M. Yazdani. paper code

  77. Pre-training to match for unified low-shot relation extraction, in ACL, 2022. F. Liu, H. Lin, X. Han, B. Cao, and L. Sun. paper code

  78. Dual context-guided continuous prompt tuning for few-shot learning, in Findings of ACL, 2022. J. Zhou, L. Tian, H. Yu, Z. Xiao, H. Su, and J. Zhou. paper

  79. Cluster & tune: Boost cold start performance in text classification, in ACL, 2022. E. Shnarch, A. Gera, A. Halfon, L. Dankin, L. Choshen, R. Aharonov, and N. Slonim. paper code

  80. Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference, in CVPR, 2022. S. X. Hu, D. Li, J. Stühmer, M. Kim, and T. M. Hospedales. paper code

  81. HyperTransformer: Model generation for supervised and semi-supervised few-shot learning, in ICML, 2022. A. Zhmoginov, M. Sandler, and M. Vladymyrov. paper code

  82. Prompting ELECTRA: Few-shot learning with discriminative pre-trained models, in EMNLP, 2022. M. Xia, M. Artetxe, J. Du, D. Chen, and V. Stoyanov. paper code

  83. Continual training of language models for few-shot learning, in EMNLP, 2022. Z. Ke, H. Lin, Y. Shao, H. Xu, L. Shu, and B. Liu. paper code

  84. GPS: Genetic prompt search for efficient few-shot learning, in EMNLP, 2022. H. Xu, Y. Chen, Y. Du, N. Shao, Y. Wang, H. Li, and Z. Yang. paper code

  85. On measuring the intrinsic few-shot hardness of datasets, in EMNLP, 2022. X. Zhao, S. Murty, and C. D. Manning. paper code

  86. AMAL: Meta knowledge-driven few-shot adapter learning, in EMNLP, 2022. S. K. Hong, and T. Y. Jang. paper

  87. Flamingo: A visual language model for few-shot learning, in NeurIPS, 2022. J.-B. Alayrac, J. Donahue, P. Luc, A. Miech, I. Barr, Y. Hasson, K. Lenc, A. Mensch, K. Millican, M. Reynolds, R. Ring, E. Rutherford, S. Cabi, T. Han, Z. Gong, S. Samangooei, M. Monteiro, J. Menick, S. Borgeaud, A. Brock, A. Nematzadeh, S. Sharifzadeh, M. Binkowski, R. Barreira, O. Vinyals, A. Zisserman, and K. Simonyan. paper

  88. Language models with image descriptors are strong few-shot video-language learners, in NeurIPS, 2022. Z. Wang, M. Li, R. Xu, L. Zhou, J. Lei, X. Lin, S. Wang, Z. Yang, C. Zhu, D. Hoiem, S.-F. Chang, M. Bansal, and H. Ji. paper code

  89. Singular value fine-tuning: Few-shot segmentation requires few-parameters fine-tuning, in NeurIPS, 2022. Y. Sun, Q. Chen, X. He, J. Wang, H. Feng, J. Han, E. Ding, J. Cheng, Z. Li, and J. Wang. paper code

  90. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning, in NeurIPS, 2022. H. Liu, D. Tam, M. Mohammed, J. Mohta, T. Huang, M. Bansal, and C. Raffel. paper code

  91. Powering finetuning in few-shot learning: Domain-agnostic bias reduction with selected sampling, in AAAI, 2022. R. Tao, H. Zhang, Y. Zheng, and M. Savvides. paper

  92. Selection-Inference: Exploiting large language models for interpretable logical reasoning, in ICLR, 2023. A. Creswell, M. Shanahan, and I. Higgins. paper

  93. Revisit finetuning strategy for few-shot learning to transfer the embeddings, in ICLR, 2023. H. Wang, T. Yue, X. Ye, Z. He, B. Li, and Y. Li. paper code

  94. Model ensemble instead of prompt fusion: A sample-specific knowledge transfer method for few-shot prompt tuning, in ICLR, 2023. X. Peng, C. Xing, P. K. Choubey, C.-S. Wu, and C. Xiong. paper

  95. Bidirectional language models are also few-shot learners, in ICLR, 2023. A. Patel, B. Li, M. S. Rasooli, N. Constant, C. Raffel, and C. Callison-Burch. paper

  96. Prototypical calibration for few-shot learning of language models, in ICLR, 2023. Z. Han, Y. Hao, L. Dong, Y. Sun, and F. Wei. paper code

  97. Prompt, generate, then cache: Cascade of foundation models makes strong few-shot learners, in CVPR, 2023. R. Zhang, X. Hu, B. Li, S. Huang, H. Deng, Y. Qiao, P. Gao, and H. Li. paper code

  98. Supervised masked knowledge distillation for few-shot transformers, in CVPR, 2023. H. Lin, G. Han, J. Ma, S. Huang, X. Lin, and S.-F. Chang. paper code

  99. Boosting transductive few-shot fine-tuning with margin-based uncertainty weighting and probability regularization, in CVPR, 2023. R. Tao, H. Chen, and M. Savvides. paper

  100. Hint-Aug: Drawing hints from foundation vision transformers towards boosted few-shot parameter-efficient tuning, in CVPR, 2023. Z. Yu, S. Wu, Y. Fu, S. Zhang, and Y. C. Lin. paper code

  101. ProD: Prompting-to-disentangle domain knowledge for cross-domain few-shot image classification, in CVPR, 2023. T. Ma, Y. Sun, Z. Yang, and Y. Yang. paper

  102. Few-shot learning with visual distribution calibration and cross-modal distribution alignment, in CVPR, 2023. R. Wang, H. Zheng, X. Duan, J. Liu, Y. Lu, T. Wang, S. Xu, and B. Zhang. paper code

  103. MetricPrompt: Prompting model as a relevance metric for few-shot text classification, in KDD, 2023. H. Dong, W. Zhang, and W. Che. paper code

  104. Efficient training of language models using few-shot learning, in ICML, 2023. S. J. Reddi, S. Miryoosefi, S. Karp, S. Krishnan, S. Kale, S. Kim, and S. Kumar. paper

  105. Multitask pre-training of modular prompt for chinese few-shot learning, in ACL, 2023. T. Sun, Z. He, Q. Zhu, X. Qiu, and X. Huang. paper code

  106. Cold-start data selection for better few-shot language model fine-tuning: A prompt-based uncertainty propagation approach, in ACL, 2023. Y. Yu, R. Zhang, R. Xu, J. Zhang, J. Shen, and C. Zhang. paper code

  107. Instruction induction: From few examples to natural language task descriptions, in ACL, 2023. O. Honovich, U. Shaham, S. R. Bowman, and O. Levy. paper code

  108. Few-shot adaptation works with unpredictable data, in ACL, 2023. J. S. Chan, M. Pieler, J. Jao, J. Scheurer, and E. Perez. paper

  109. Hierarchical verbalizer for few-shot hierarchical text classification, in ACL, 2023. K. Ji, Y. Lian, J. Gao, and B. Wang. paper code

  110. Black box few-shot adaptation for vision-language models, in ICCV, 2023. Y. Ouali, A. Bulat, B. Matinez, and G. Tzimiropoulos. paper code

  111. Read-only prompt optimization for vision-language few-shot learning, in ICCV, 2023. D. Lee, S. Song, J. Suh, J. Choi, S. Lee, and H. J. Kim. paper code

  112. Not all features matter: Enhancing few-shot CLIP with adaptive prior refinement, in ICCV, 2023. X. Zhu, R. Zhang, B. He, A. Zhou, D. Wang, B. Zhao, and P. Gao. paper code

  113. One-shot generative domain adaptation, in ICCV, 2023. C. Yang, Y. Shen, Z. Zhang, Y. Xu, J. Zhu, Z. Wu, and B. Zhou. paper code

  114. Smoothness similarity regularization for few-shot GAN adaptation, in ICCV, 2023. V. Sushko, R. Wang, and J. Gall. paper

  115. Task-aware adaptive learning for cross-domain few-shot learning, in ICCV, 2023. Y. Guo, R. Du, Y. Dong, T. Hospedales, Y. Song, and Z. Ma. paper code

  116. Defending pre-trained language models as few-shot learners against backdoor attacks, in NeurIPS, 2023. Z. Xi, T. Du, C. Li, R. Pang, S. Ji, J. Chen, F. Ma, and T. Wang. paper code

  117. FD-Align: Feature discrimination alignment for fine-tuning pre-trained models in few-shot learning, in NeurIPS, 2023. K. Song, H. Ma, B. Zou, H. Zhang, and W. Huang. paper code

  118. Fairness-guided few-shot prompting for large language models, in NeurIPS, 2023. H. Ma, C. Zhang, Y. Bian, L. Liu, Z. Zhang, P. Zhao, S. Zhang, H. Fu, Q. Hu, and B. Wu. paper

  119. Meta-Adapter: An online few-shot learner for vision-language model, in NeurIPS, 2023. C. Cheng, L. Song, R. Xue, H. Wang, H. Sun, Y. Ge, and Y. Shan. paper

  120. Language models can improve event prediction by few-shot abductive reasoning, in NeurIPS, 2023. X. Shi, S. Xue, K. Wang, F. Zhou, J. Y. Zhang, J. Zhou, C. Tan, and H. Mei. paper code

  121. ExPT: Synthetic pretraining for few-shot experimental design, in NeurIPS, 2023. T. Nguyen, S. Agrawal, and A. Grover. paper code

  122. LoCoOp: Few-shot out-of-distribution detection via prompt learning, in NeurIPS, 2023. A. Miyai, Q. Yu, G. Irie, and K. Aizawa. paper code

  123. Embroid: Unsupervised prediction smoothing can improve few-shot classification, in NeurIPS, 2023. N. Guha, M. F. Chen, K. Bhatia, A. Mirhoseini, F. Sala, and C. Re. paper

  124. Domain re-modulation for few-shot generative domain adaptation, in NeurIPS, 2023. Y. Wu, Z. Li, C. Wang, H. Zheng, S. Zhao, B. Li, and D. Tao. paper code

  125. Focus your attention when few-shot classification, in NeurIPS, 2023. H. Wang, S. Jie, and Z. Deng. paper code

  126. The effect of diversity in meta-learning, in AAAI, 2023. R. Kumar, T. Deleu, and Y. Bengio. paper code

  127. FEditNet: Few-shot editing of latent semantics in GAN spaces, in AAAI, 2023. M. Xia, Y. Shu, Y. Wang, Y.-K. Lai, Q. Li, P. Wan, Z. Wang, and Y.-J. Liu. paper code

  128. Better generalized few-shot learning even without base data, in AAAI, 2023. S.-W. Kim, and D.-W. Choi. paper code

  129. Prompt-augmented linear probing: Scaling beyond the limit of few-shot in-context learners, in AAAI, 2023. H. Cho, H. J. Kim, J. Kim, S.-W. Lee, S.-g. Lee, K. M. Yoo, and T. Kim. paper

  130. Anchoring fine-tuning of sentence transformer with semantic label information for efficient truly few-shot classification, in EMNLP, 2023. A. Pauli, L. Derczynski, and I. Assent. paper code

  131. Skill-based few-shot selection for in-context learning, in EMNLP, 2023. S. An, B. Zhou, Z. Lin, Q. Fu, B. Chen, N. Zheng, W. Chen, and J.-G. Lou. paper

  132. Transductive learning for textual few-shot classification in API-based embedding models, in EMNLP, 2023. P. Colombo, V. Pellegrain, M. Boudiaf, M. Tami, V. Storchan, I. Ayed, and P. Piantanida. paper

  133. AdaSent: Efficient domain-adapted sentence embeddings for few-shot classification, in EMNLP, 2023. Y. Huang, K. Wang, S. Dutta, R. Patel, G. Glavaš, and I. Gurevych. paper code

  134. A hard-to-beat baseline for training-free CLIP-based adaptation, in ICLR, 2024. Z. Wang, J. Liang, L. Sheng, R. He, Z. Wang, and T. Tan. paper

  135. Group preference optimization: Few-shot alignment of large language models, in ICLR, 2024. S. Zhao, J. Dang, and A. Grover. paper

  136. Consistency-guided prompt learning for vision-language models, in ICLR, 2024. S. Roy, and A. Etemad. paper code

  137. BayesPrompt: Prompting large-scale pre-trained language models on few-shot inference via debiased domain abstraction, in ICLR, 2024. J. Li, F. Song, Y. Jin, W. Qiang, C. Zheng, F. Sun, and H. Xiong. paper

  138. Neural fine-tuning search for few-shot learning, in ICLR, 2024. P. Eustratiadis, Ł. Dudziak, D. Li, and T. Hospedales. paper

  139. DePT: Decomposed prompt tuning for parameter-efficient fine-tuning, in ICLR, 2024. Z. Shi, and A. Lipani. paper code

  140. Few-shot hybrid domain adaptation of image generator, in ICLR, 2024. H. Li, Y. Liu, L. Xia, Y. Lin, T. Zheng, Z. Yang, W. Wang, X. Zhong, X. Ren, and X. He. paper
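A recurring recipe in this section is to keep a pre-trained encoder frozen and refine only a small set of parameters, often just a linear head, on the few-shot support set. A minimal sketch of such a linear probe, assuming the encoder's features are precomputed (function names and toy data are ours):

```python
import math

def linear_probe(features, labels, lr=0.5, steps=200):
    # Freeze the pretrained encoder; train only a binary logistic head
    # on the few-shot support features (assumed already extracted).
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(steps):
        for x, y in zip(features, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-score))
            g = p - y  # gradient of the logistic loss w.r.t. the score
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

w, b = linear_probe([(0.0, 0.2), (0.1, 0.0), (1.0, 0.9), (0.9, 1.1)], [0, 0, 1, 1])
```

Many entries above elaborate exactly this step: which parameters to unfreeze, how to regularize them, and how to calibrate the head when only a few labels are available.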

Refining Meta-learned Parameters
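The methods below build on MAML-style gradient-based meta-learning (entry 1): an inner loop adapts a shared initialization on each task's support set, and an outer loop updates that initialization with the query-set loss of the adapted models. A deliberately tiny first-order sketch on scalar linear regression (our toy, not the papers' implementations):

```python
def loss_grad(w, data):
    # Mean-squared-error gradient for a scalar linear model y = w * x.
    xs, ys = data
    return sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def maml_step(w, tasks, inner_lr=0.05, outer_lr=0.05):
    # First-order MAML: adapt a copy of w on each support set, then
    # update the meta-parameter with the adapted models' query gradients.
    meta_grad = 0.0
    for support, query in tasks:
        w_adapted = w - inner_lr * loss_grad(w, support)  # inner loop (1 step)
        meta_grad += loss_grad(w_adapted, query)          # first-order approx.
    return w - outer_lr * meta_grad / len(tasks)

def make_task(slope):
    # Toy regression task: recover `slope` from a few (x, slope * x) pairs.
    support = ([1.0, 2.0], [slope * 1.0, slope * 2.0])
    query = ([3.0], [slope * 3.0])
    return support, query

tasks = [make_task(2.0), make_task(3.0)]
w = 0.0
for _ in range(100):
    w = maml_step(w, tasks)
# The meta-parameter settles between the task optima, one gradient
# step away from fitting either task well.
```

The variants in this section change what is meta-learned (metrics, subspaces, priors, augmentations) while keeping this adapt-then-update structure.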

  1. Model-agnostic meta-learning for fast adaptation of deep networks, in ICML, 2017. C. Finn, P. Abbeel, and S. Levine. paper

  2. Bayesian model-agnostic meta-learning, in NeurIPS, 2018. J. Yoon, T. Kim, O. Dia, S. Kim, Y. Bengio, and S. Ahn. paper

  3. Probabilistic model-agnostic meta-learning, in NeurIPS, 2018. C. Finn, K. Xu, and S. Levine. paper

  4. Gradient-based meta-learning with learned layerwise metric and subspace, in ICML, 2018. Y. Lee and S. Choi. paper

  5. Recasting gradient-based meta-learning as hierarchical Bayes, in ICLR, 2018. E. Grant, C. Finn, S. Levine, T. Darrell, and T. Griffiths. paper

  6. Few-shot human motion prediction via meta-learning, in ECCV, 2018. L.-Y. Gui, Y.-X. Wang, D. Ramanan, and J. Moura. paper

  7. The effects of negative adaptation in model-agnostic meta-learning, arXiv preprint, 2018. T. Deleu and Y. Bengio. paper

  8. Unsupervised meta-learning for few-shot image classification, in NeurIPS, 2019. S. Khodadadeh, L. Bölöni, and M. Shah. paper

  9. Amortized bayesian meta-learning, in ICLR, 2019. S. Ravi and A. Beatson. paper

  10. Meta-learning with latent embedding optimization, in ICLR, 2019. A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell. paper code

  11. LGM-Net: Learning to generate matching networks for few-shot learning, in ICML, 2019. H. Li, W. Dong, X. Mei, C. Ma, F. Huang, and B.-G. Hu. paper code

  12. Meta R-CNN: Towards general solver for instance-level low-shot learning, in ICCV, 2019. X. Yan, Z. Chen, A. Xu, X. Wang, X. Liang, and L. Lin. paper

  13. Task agnostic meta-learning for few-shot learning, in CVPR, 2019. M. A. Jamal, and G.-J. Qi. paper

  14. Meta-transfer learning for few-shot learning, in CVPR, 2019. Q. Sun, Y. Liu, T.-S. Chua, and B. Schiele. paper code

  15. Meta-learning of neural architectures for few-shot learning, in CVPR, 2020. T. Elsken, B. Staffler, J. H. Metzen, and F. Hutter. paper

  16. Attentive weights generation for few shot learning via information maximization, in CVPR, 2020. Y. Guo, and N.-M. Cheung. paper

  17. Few-shot open-set recognition using meta-learning, in CVPR, 2020. B. Liu, H. Kang, H. Li, G. Hua, and N. Vasconcelos. paper

  18. Incremental few-shot object detection, in CVPR, 2020. J.-M. Perez-Rua, X. Zhu, T. M. Hospedales, and T. Xiang. paper

  19. Automated relational meta-learning, in ICLR, 2020. H. Yao, X. Wu, Z. Tao, Y. Li, B. Ding, R. Li, and Z. Li. paper

  20. Meta-learning with warped gradient descent, in ICLR, 2020. S. Flennerhag, A. A. Rusu, R. Pascanu, F. Visin, H. Yin, and R. Hadsell. paper

  21. Meta-learning without memorization, in ICLR, 2020. M. Yin, G. Tucker, M. Zhou, S. Levine, and C. Finn. paper

  22. ES-MAML: Simple Hessian-free meta learning, in ICLR, 2020. X. Song, W. Gao, Y. Yang, K. Choromanski, A. Pacchiano, and Y. Tang. paper

  23. Self-supervised tuning for few-shot segmentation, in IJCAI, 2020. K. Zhu, W. Zhai, and Y. Cao. paper

  24. Multi-attention meta learning for few-shot fine-grained image recognition, in IJCAI, 2020. Y. Zhu, C. Liu, and S. Jiang. paper

  25. An ensemble of epoch-wise empirical Bayes for few-shot learning, in ECCV, 2020. Y. Liu, B. Schiele, and Q. Sun. paper code

  26. Incremental few-shot meta-learning via indirect discriminant alignment, in ECCV, 2020. Q. Liu, O. Majumder, A. Achille, A. Ravichandran, R. Bhotika, and S. Soatto. paper

  27. Model-agnostic boundary-adversarial sampling for test-time generalization in few-shot learning, in ECCV, 2020. J. Kim, H. Kim, and G. Kim. paper code

  28. Bayesian meta-learning for the few-shot setting via deep kernels, in NeurIPS, 2020. M. Patacchiola, J. Turner, E. J. Crowley, M. O'Boyle, and A. J. Storkey. paper code

  29. OOD-MAML: Meta-learning for few-shot out-of-distribution detection and classification, in NeurIPS, 2020. T. Jeong, and H. Kim. paper code

  30. Unraveling meta-learning: Understanding feature representations for few-shot tasks, in ICML, 2020. M. Goldblum, S. Reich, L. Fowl, R. Ni, V. Cherepanova, and T. Goldstein. paper code

  31. Node classification on graphs with few-shot novel labels via meta transformed network embedding, in NeurIPS, 2020. L. Lan, P. Wang, X. Du, K. Song, J. Tao, and X. Guan. paper

  32. Adversarially robust few-shot learning: A meta-learning approach, in NeurIPS, 2020. M. Goldblum, L. Fowl, and T. Goldstein. paper code

  33. BOIL: Towards representation change for few-shot learning, in ICLR, 2021. J. Oh, H. Yoo, C. Kim, and S. Yun. paper code

  34. Few-shot open-set recognition by transformation consistency, in CVPR, 2021. M. Jeong, S. Choi, and C. Kim. paper

  35. Improving generalization in meta-learning via task augmentation, in ICML, 2021. H. Yao, L. Huang, L. Zhang, Y. Wei, L. Tian, J. Zou, J. Huang, and Z. Li. paper

  36. A representation learning perspective on the importance of train-validation splitting in meta-learning, in ICML, 2021. N. Saunshi, A. Gupta, and W. Hu. paper code

  37. Data augmentation for meta-learning, in ICML, 2021. R. Ni, M. Goldblum, A. Sharaf, K. Kong, and T. Goldstein. paper code

  38. Task cooperation for semi-supervised few-shot learning, in AAAI, 2021. H. Ye, X. Li, and D.-C. Zhan. paper

  39. Conditional self-supervised learning for few-shot classification, in IJCAI, 2021. Y. An, H. Xue, X. Zhao, and L. Zhang. paper

  40. Cross-domain few-shot classification via adversarial task augmentation, in IJCAI, 2021. H. Wang, and Z.-H. Deng. paper code

  41. DReCa: A general task augmentation strategy for few-shot natural language inference, in NAACL-HLT, 2021. S. Murty, T. Hashimoto, and C. D. Manning. paper

  42. MetaXL: Meta representation transformation for low-resource cross-lingual learning, in NAACL-HLT, 2021. M. Xia, G. Zheng, S. Mukherjee, M. Shokouhi, G. Neubig, and A. H. Awadallah. paper code

  43. Meta-learning with task-adaptive loss function for few-shot learning, in ICCV, 2021. S. Baik, J. Choi, H. Kim, D. Cho, J. Min, and K. M. Lee. paper code

  44. Meta-Baseline: Exploring simple meta-learning for few-shot learning, in ICCV, 2021. Y. Chen, Z. Liu, H. Xu, T. Darrell, and X. Wang. paper

  45. A lazy approach to long-horizon gradient-based meta-learning, in ICCV, 2021. M. A. Jamal, L. Wang, and B. Gong. paper

  46. Task-aware part mining network for few-shot learning, in ICCV, 2021. J. Wu, T. Zhang, Y. Zhang, and F. Wu. paper

  47. Binocular mutual learning for improving few-shot classification, in ICCV, 2021. Z. Zhou, X. Qiu, J. Xie, J. Wu, and C. Zhang. paper code

  48. Meta-learning with an adaptive task scheduler, in NeurIPS, 2021. H. Yao, Y. Wang, Y. Wei, P. Zhao, M. Mahdavi, D. Lian, and C. Finn. paper

  49. Memory efficient meta-learning with large images, in NeurIPS, 2021. J. Bronskill, D. Massiceti, M. Patacchiola, K. Hofmann, S. Nowozin, and R. Turner. paper

  50. EvoGrad: Efficient gradient-based meta-learning and hyperparameter optimization, in NeurIPS, 2021. O. Bohdal, Y. Yang, and T. Hospedales. paper

  51. Towards enabling meta-learning from target models, in NeurIPS, 2021. S. Lu, H. Ye, L. Gan, and D. Zhan. paper

  52. The role of global labels in few-shot classification and how to infer them, in NeurIPS, 2021. R. Wang, M. Pontil, and C. Ciliberto. paper

  53. How to train your MAML to excel in few-shot classification, in ICLR, 2022. H. Ye, and W. Chao. paper code

  54. Meta-learning with fewer tasks through task interpolation, in ICLR, 2022. H. Yao, L. Zhang, and C. Finn. paper code

  55. Continuous-time meta-learning with forward mode differentiation, in ICLR, 2022. T. Deleu, D. Kanaa, L. Feng, G. Kerg, Y. Bengio, G. Lajoie, and P. Bacon. paper

  56. Bootstrapped meta-learning, in ICLR, 2022. S. Flennerhag, Y. Schroecker, T. Zahavy, H. v. Hasselt, D. Silver, and S. Singh. paper

  57. Learning prototype-oriented set representations for meta-learning, in ICLR, 2022. D. d. Guo, L. Tian, M. Zhang, M. Zhou, and H. Zha. paper

  58. Dynamic kernel selection for improved generalization and memory efficiency in meta-learning, in CVPR, 2022. A. Chavan, R. Tiwari, U. Bamba, and D. K. Gupta. paper code

  59. What matters for meta-learning vision regression tasks?, in CVPR, 2022. N. Gao, H. Ziesche, N. A. Vien, M. Volpp, and G. Neumann. paper code

  60. Multidimensional belief quantification for label-efficient meta-learning, in CVPR, 2022. D. S. Pandey, and Q. Yu. paper

  61. Few-shot node classification on attributed networks with graph meta-learning, in SIGIR, 2022. Y. Liu, M. Li, X. Li, F. Giunchiglia, X. Feng, and R. Guan. paper

  62. The role of deconfounding in meta-learning, in ICML, 2022. Y. Jiang, Z. Chen, K. Kuang, L. Yuan, X. Ye, Z. Wang, F. Wu, and Y. Wei. paper

  63. Stochastic deep networks with linear competing units for model-agnostic meta-learning, in ICML, 2022. K. Kalais, and S. Chatzis. paper code

  64. Efficient variance reduction for meta-learning, in ICML, 2022. H. Yang, and J. T. Kwok. paper

  65. Subspace learning for effective meta-learning, in ICML, 2022. W. Jiang, J. Kwok, and Y. Zhang. paper

  66. Robust meta-learning with sampling noise and label noise via Eigen-Reptile, in ICML, 2022. D. Chen, L. Wu, S. Tang, X. Yun, B. Long, and Y. Zhuang. paper code

  67. Attentional meta-learners for few-shot polythetic classification, in ICML, 2022. B. J. Day, R. V. Torné, N. Simidjievski, and P. Lió. paper code

  68. PLATINUM: Semi-supervised model agnostic meta-learning using submodular mutual information, in ICML, 2022. C. Li, S. Kothawade, F. Chen, and R. K. Iyer. paper code

  69. Finding meta winning ticket to train your MAML, in KDD, 2022. D. Gao, Y. Xie, Z. Zhou, Z. Wang, Y. Li, and B. Ding. paper

  70. p-Meta: Towards on-device deep model adaptation, in KDD, 2022. Z. Qu, Z. Zhou, Y. Tong, and L. Thiele. paper

  71. FAITH: Few-shot graph classification with hierarchical task graphs, in IJCAI, 2022. S. Wang, Y. Dong, X. Huang, C. Chen, and J. Li. paper code

  72. Meta-learning fast weight language models, in EMNLP, 2022. K. Clark, K. Guu, M.-W. Chang, P. Pasupat, G. Hinton, and M. Norouzi. paper

  73. Understanding benign overfitting in gradient-based meta learning, in NeurIPS, 2022. L. Chen, S. Lu, and T. Chen. paper

  74. Meta-learning with self-improving momentum target, in NeurIPS, 2022. J. Tack, J. Park, H. Lee, J. Lee, and J. Shin. paper

  75. Adversarial task up-sampling for meta-learning, in NeurIPS, 2022. Y. Wu, L.-K. Huang, and Y. Wei. paper

  76. PAC prediction sets for meta-learning, in NeurIPS, 2022. S. Park, E. Dobriban, I. Lee, and O. Bastani. paper

  77. A contrastive rule for meta-learning, in NeurIPS, 2022. N. Zucchet, S. Schug, J. V. Oswald, D. Zhao, and J. Sacramento. paper code

  78. On enforcing better conditioned meta-learning for rapid few-shot adaptation, in NeurIPS, 2022. M. Hiller, M. Harandi, and T. Drummond. paper

  79. Conditional meta-learning of linear representations, in NeurIPS, 2022. G. Denevi, M. Pontil, and C. Ciliberto. paper

  80. Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks, in NeurIPS, 2022. D. Chijiwa, S. Yamaguchi, A. Kumagai, and Y. Ida. paper code

  81. MetaNODE: Prototype optimization as a neural ODE for few-shot learning, in AAAI, 2022. B. Zhang, X. Li, S. Feng, Y. Ye, and R. Ye. paper

  82. A nested bi-level optimization framework for robust few shot learning, in AAAI, 2022. K. Killamsetty, C. Li, C. Zhao, F. Chen, and R. K. Iyer. paper

  83. Enhancing meta learning via multi-objective soft improvement functions, in ICLR, 2023. R. Yu, W. Chen, X. Wang, and J. Kwok. paper

  84. Understanding train-validation split in meta-learning with neural networks, in ICLR, 2023. X. Zuo, Z. Chen, H. Yao, Y. Cao, and Q. Gu. paper

  85. Bi-level meta-learning for few-shot domain generalization, in CVPR, 2023. X. Qin, X. Song, and S. Jiang. paper

  86. SHOT: Suppressing the hessian along the optimization trajectory for gradient-based meta-learning, in NeurIPS, 2023. J. Lee, J. Yoo, and N. Kwak. paper code

  87. Meta-AdaM: A meta-learned adaptive optimizer with momentum for few-shot learning, in NeurIPS, 2023. S. Sun, and H. Gao. paper

  88. ESPT: A self-supervised episodic spatial pretext task for improving few-shot learning, in AAAI, 2023. Y. Rong, X. Lu, Z. Sun, Y. Chen, and S. Xiong. paper code

  89. Scalable Bayesian meta-learning through generalized implicit gradients, in AAAI, 2023. Y. Zhang, B. Li, S. Gao, and G. B. Giannakis. paper code

  90. A hierarchical Bayesian model for few-shot meta learning, in ICLR, 2024. M. Kim, and T. Hospedales. paper

  91. First-order ANIL provably learns representations despite overparametrisation, in ICLR, 2024. O. K. Yüksel, E. Boursier, and N. Flammarion. paper

  92. Meta-learning priors using unrolled proximal neural networks, in ICLR, 2024. Y. Zhang, and G. B. Giannakis. paper

Learning Search Steps

  1. Optimization as a model for few-shot learning, in ICLR, 2017. S. Ravi and H. Larochelle. paper code

  2. Meta Navigator: Search for a good adaptation policy for few-shot learning, in ICCV, 2021. C. Zhang, H. Ding, G. Lin, R. Li, C. Wang, and C. Shen. paper

Applications

Computer Vision

  1. Learning robust visual-semantic embeddings, in CVPR, 2017. Y.-H. Tsai, L.-K. Huang, and R. Salakhutdinov. paper

  2. One-shot action localization by learning sequence matching network, in CVPR, 2018. H. Yang, X. He, and F. Porikli. paper

  3. Incremental few-shot learning for pedestrian attribute recognition, in EMNLP, 2018. L. Xiang, X. Jin, G. Ding, J. Han, and L. Li. paper

  4. Few-shot video-to-video synthesis, in NeurIPS, 2019. T.-C. Wang, M.-Y. Liu, A. Tao, G. Liu, J. Kautz, and B. Catanzaro. paper code

  5. Few-shot object detection via feature reweighting, in ICCV, 2019. B. Kang, Z. Liu, X. Wang, F. Yu, J. Feng, and T. Darrell. paper code

  6. Few-shot unsupervised image-to-image translation, in ICCV, 2019. M.-Y. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, and J. Kautz. paper code

  7. Feature weighting and boosting for few-shot segmentation, in ICCV, 2019. K. Nguyen, and S. Todorovic. paper

  8. Few-shot adaptive gaze estimation, in ICCV, 2019. S. Park, S. D. Mello, P. Molchanov, U. Iqbal, O. Hilliges, and J. Kautz. paper

  9. AMP: Adaptive masked proxies for few-shot segmentation, in ICCV, 2019. M. Siam, B. N. Oreshkin, and M. Jagersand. paper code

  10. Few-shot generalization for single-image 3D reconstruction via priors, in ICCV, 2019. B. Wallace, and B. Hariharan. paper

  11. Few-shot adversarial learning of realistic neural talking head models, in ICCV, 2019. E. Zakharov, A. Shysheya, E. Burkov, and V. Lempitsky. paper code

  12. Pyramid graph networks with connection attentions for region-based one-shot semantic segmentation, in ICCV, 2019. C. Zhang, G. Lin, F. Liu, J. Guo, Q. Wu, and R. Yao. paper

  13. Time-conditioned action anticipation in one shot, in CVPR, 2019. Q. Ke, M. Fritz, and B. Schiele. paper

  14. Few-shot learning with localization in realistic settings, in CVPR, 2019. D. Wertheimer, and B. Hariharan. paper code

  15. Improving few-shot user-specific gaze adaptation via gaze redirection synthesis, in CVPR, 2019. Y. Yu, G. Liu, and J.-M. Odobez. paper

  16. CANet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning, in CVPR, 2019. C. Zhang, G. Lin, F. Liu, R. Yao, and C. Shen. paper code

  17. Multi-level semantic feature augmentation for one-shot learning, in TIP, 2019. Z. Chen, Y. Fu, Y. Zhang, Y.-G. Jiang, X. Xue, and L. Sigal. paper code

  18. 3FabRec: Fast few-shot face alignment by reconstruction, in CVPR, 2020. B. Browatzki, and C. Wallraven. paper

  19. Few-shot video classification via temporal alignment, in CVPR, 2020. K. Cao, J. Ji, Z. Cao, C.-Y. Chang, and J. C. Niebles. paper

  20. One-shot adversarial attacks on visual tracking with dual attention, in CVPR, 2020. X. Chen, X. Yan, F. Zheng, Y. Jiang, S.-T. Xia, Y. Zhao, and R. Ji. paper

  21. FGN: Fully guided network for few-shot instance segmentation, in CVPR, 2020. Z. Fan, J.-G. Yu, Z. Liang, J. Ou, C. Gao, G.-S. Xia, and Y. Li. paper

  22. CRNet: Cross-reference networks for few-shot segmentation, in CVPR, 2020. W. Liu, C. Zhang, G. Lin, and F. Liu. paper

  23. Revisiting pose-normalization for fine-grained few-shot recognition, in CVPR, 2020. L. Tang, D. Wertheimer, and B. Hariharan. paper

  24. Few-shot learning of part-specific probability space for 3D shape segmentation, in CVPR, 2020. L. Wang, X. Li, and Y. Fang. paper

  25. Semi-supervised learning for few-shot image-to-image translation, in CVPR, 2020. Y. Wang, S. Khan, A. Gonzalez-Garcia, J. van de Weijer, and F. S. Khan. paper

  26. Multi-domain learning for accurate and few-shot color constancy, in CVPR, 2020. J. Xiao, S. Gu, and L. Zhang. paper

  27. One-shot domain adaptation for face generation, in CVPR, 2020. C. Yang, and S.-N. Lim. paper

  28. MetaPix: Few-shot video retargeting, in ICLR, 2020. J. Lee, D. Ramanan, and R. Girdhar. paper

  29. Few-shot human motion prediction via learning novel motion dynamics, in IJCAI, 2020. C. Zang, M. Pei, and Y. Kong. paper

  30. Shaping visual representations with language for few-shot classification, in ACL, 2020. J. Mu, P. Liang, and N. D. Goodman. paper

  31. MarioNETte: Few-shot face reenactment preserving identity of unseen targets, in AAAI, 2020. S. Ha, M. Kersner, B. Kim, S. Seo, and D. Kim. paper

  32. One-shot learning for long-tail visual relation detection, in AAAI, 2020. W. Wang, M. Wang, S. Wang, G. Long, L. Yao, G. Qi, and Y. Chen. paper code

  33. Differentiable meta-learning model for few-shot semantic segmentation, in AAAI, 2020. P. Tian, Z. Wu, L. Qi, L. Wang, Y. Shi, and Y. Gao. paper

  34. Part-aware prototype network for few-shot semantic segmentation, in ECCV, 2020. Y. Liu, X. Zhang, S. Zhang, and X. He. paper code

  35. Prototype mixture models for few-shot semantic segmentation, in ECCV, 2020. B. Yang, C. Liu, B. Li, J. Jiao, and Q. Ye. paper code

  36. Few-shot action recognition with permutation-invariant attention, in ECCV, 2020. H. Zhang, L. Zhang, X. Qi, H. Li, P. H. S. Torr, and P. Koniusz. paper

  37. Few-shot compositional font generation with dual memory, in ECCV, 2020. J. Cha, S. Chun, G. Lee, B. Lee, S. Kim, and H. Lee. paper code

  38. Few-shot object detection and viewpoint estimation for objects in the wild, in ECCV, 2020. Y. Xiao, and R. Marlet. paper

  39. Few-shot scene-adaptive anomaly detection, in ECCV, 2020. Y. Lu, F. Yu, M. K. K. Reddy, and Y. Wang. paper code

  40. Few-shot semantic segmentation with democratic attention networks, in ECCV, 2020. H. Wang, X. Zhang, Y. Hu, Y. Yang, X. Cao, and X. Zhen. paper

  41. Few-shot single-view 3-D object reconstruction with compositional priors, in ECCV, 2020. M. Michalkiewicz, S. Parisot, S. Tsogkas, M. Baktashmotlagh, A. Eriksson, and E. Belilovsky. paper

  42. COCO-FUNIT: Few-shot unsupervised image translation with a content conditioned style encoder, in ECCV, 2020. K. Saito, K. Saenko, and M. Liu. paper code

  43. Multi-scale positive sample refinement for few-shot object detection, in ECCV, 2020. J. Wu, S. Liu, D. Huang, and Y. Wang. paper code

  44. Large-scale few-shot learning via multi-modal knowledge discovery, in ECCV, 2020. S. Wang, J. Yue, J. Liu, Q. Tian, and M. Wang. paper

  45. Graph convolutional networks for learning with few clean and many noisy labels, in ECCV, 2020. A. Iscen, G. Tolias, Y. Avrithis, O. Chum, and C. Schmid. paper

  46. Self-supervised few-shot learning on point clouds, in NeurIPS, 2020. C. Sharma, and M. Kaul. paper code

  47. Restoring negative information in few-shot object detection, in NeurIPS, 2020. Y. Yang, F. Wei, M. Shi, and G. Li. paper code

  48. Few-shot image generation with elastic weight consolidation, in NeurIPS, 2020. Y. Li, R. Zhang, J. Lu, and E. Shechtman. paper

  49. Few-shot visual reasoning with meta-analogical contrastive learning, in NeurIPS, 2020. Y. Kim, J. Shin, E. Yang, and S. J. Hwang. paper

  50. CrossTransformers: spatially-aware few-shot transfer, in NeurIPS, 2020. C. Doersch, A. Gupta, and A. Zisserman. paper

  51. Make one-shot video object segmentation efficient again, in NeurIPS, 2020. T. Meinhardt, and L. Leal-Taixé. paper code

  52. Frustratingly simple few-shot object detection, in ICML, 2020. X. Wang, T. E. Huang, J. Gonzalez, T. Darrell, and F. Yu. paper code

  53. Adversarial style mining for one-shot unsupervised domain adaptation, in NeurIPS, 2020. Y. Luo, P. Liu, T. Guan, J. Yu, and Y. Yang. paper code

  54. Disentangling 3D prototypical networks for few-shot concept learning, in ICLR, 2021. M. Prabhudesai, S. Lal, D. Patil, H. Tung, A. W. Harley, and K. Fragkiadaki. paper

  55. Learning normal dynamics in videos with meta prototype network, in CVPR, 2021. H. Lv, C. Chen, Z. Cui, C. Xu, Y. Li, and J. Yang. paper code

  56. Learning dynamic alignment via meta-filter for few-shot learning, in CVPR, 2021. C. Xu, Y. Fu, C. Liu, C. Wang, J. Li, F. Huang, L. Zhang, and X. Xue. paper

  57. Delving deep into many-to-many attention for few-shot video object segmentation, in CVPR, 2021. H. Chen, H. Wu, N. Zhao, S. Ren, and S. He. paper code

  58. Adaptive prototype learning and allocation for few-shot segmentation, in CVPR, 2021. G. Li, V. Jampani, L. Sevilla-Lara, D. Sun, J. Kim, and J. Kim. paper code

  59. FAPIS: A few-shot anchor-free part-based instance segmenter, in CVPR, 2021. K. Nguyen, and S. Todorovic. paper

  60. FSCE: Few-shot object detection via contrastive proposal encoding, in CVPR, 2021. B. Sun, B. Li, S. Cai, Y. Yuan, and C. Zhang. paper code

  61. Few-shot 3D point cloud semantic segmentation, in CVPR, 2021. N. Zhao, T. Chua, and G. H. Lee. paper code

  62. Generalized few-shot object detection without forgetting, in CVPR, 2021. Z. Fan, Y. Ma, Z. Li, and J. Sun. paper

  63. Few-shot human motion transfer by personalized geometry and texture modeling, in CVPR, 2021. Z. Huang, X. Han, J. Xu, and T. Zhang. paper code

  64. Labeled from unlabeled: Exploiting unlabeled data for few-shot deep HDR deghosting, in CVPR, 2021. K. R. Prabhakar, G. Senthil, S. Agrawal, R. V. Babu, and R. K. S. S. Gorthi. paper

  65. Few-shot transformation of common actions into time and space, in CVPR, 2021. P. Yang, P. Mettes, and C. G. M. Snoek. paper code

  66. Temporal-relational CrossTransformers for few-shot action recognition, in CVPR, 2021. T. Perrett, A. Masullo, T. Burghardt, M. Mirmehdi, and D. Damen. paper

  67. pixelNeRF: Neural radiance fields from one or few images, in CVPR, 2021. A. Yu, V. Ye, M. Tancik, and A. Kanazawa. paper code

  68. Hallucination improves few-shot object detection, in CVPR, 2021. W. Zhang, and Y. Wang. paper

  69. Few-shot object detection via classification refinement and distractor retreatment, in CVPR, 2021. Y. Li, H. Zhu, Y. Cheng, W. Wang, C. S. Teo, C. Xiang, P. Vadakkepat, and T. H. Lee. paper

  70. Dense relation distillation with context-aware aggregation for few-shot object detection, in CVPR, 2021. H. Hu, S. Bai, A. Li, J. Cui, and L. Wang. paper code

  71. Few-shot segmentation without meta-learning: A good transductive inference is all you need?, in CVPR, 2021. M. Boudiaf, H. Kervadec, Z. I. Masud, P. Piantanida, I. B. Ayed, and J. Dolz. paper code

  72. Few-shot image generation via cross-domain correspondence, in CVPR, 2021. U. Ojha, Y. Li, J. Lu, A. A. Efros, Y. J. Lee, E. Shechtman, and R. Zhang. paper

  73. Self-guided and cross-guided learning for few-shot segmentation, in CVPR, 2021. B. Zhang, J. Xiao, and T. Qin. paper code

  74. Anti-aliasing semantic reconstruction for few-shot semantic segmentation, in CVPR, 2021. B. Liu, Y. Ding, J. Jiao, X. Ji, and Q. Ye. paper

  75. Beyond max-margin: Class margin equilibrium for few-shot object detection, in CVPR, 2021. B. Li, B. Yang, C. Liu, F. Liu, R. Ji, and Q. Ye. paper code

  76. Incremental few-shot instance segmentation, in CVPR, 2021. D. A. Ganea, B. Boom, and R. Poppe. paper code

  77. Scale-aware graph neural network for few-shot semantic segmentation, in CVPR, 2021. G. Xie, J. Liu, H. Xiong, and L. Shao. paper

  78. Semantic relation reasoning for shot-stable few-shot object detection, in CVPR, 2021. C. Zhu, F. Chen, U. Ahmed, Z. Shen, and M. Savvides. paper

  79. Accurate few-shot object detection with support-query mutual guidance and hybrid loss, in CVPR, 2021. L. Zhang, S. Zhou, J. Guan, and J. Zhang. paper

  80. Transformation invariant few-shot object detection, in CVPR, 2021. A. Li, and Z. Li. paper

  81. MetaHTR: Towards writer-adaptive handwritten text recognition, in CVPR, 2021. A. K. Bhunia, S. Ghose, A. Kumar, P. N. Chowdhury, A. Sain, and Y. Song. paper

  82. What if we only use real datasets for scene text recognition? Toward scene text recognition with fewer labels, in CVPR, 2021. J. Baek, Y. Matsui, and K. Aizawa. paper code

  83. Few-shot font generation with localized style representations and factorization, in AAAI, 2021. S. Park, S. Chun, J. Cha, B. Lee, and H. Shim. paper code

  84. Attributes-guided and pure-visual attention alignment for few-shot recognition, in AAAI, 2021. S. Huang, M. Zhang, Y. Kang, and D. Wang. paper code

  85. One-shot face reenactment using appearance adaptive normalization, in AAAI, 2021. G. Yao, Y. Yuan, T. Shao, S. Li, S. Liu, Y. Liu, M. Wang, and K. Zhou. paper

  86. FL-MSRE: A few-shot learning based approach to multimodal social relation extraction, in AAAI, 2021. H. Wan, M. Zhang, J. Du, Z. Huang, Y. Yang, and J. Z. Pan. paper code

  87. StarNet: Towards weakly supervised few-shot object detection, in AAAI, 2021. L. Karlinsky, J. Shtok, A. Alfassy, M. Lichtenstein, S. Harary, E. Schwartz, S. Doveh, P. Sattigeri, R. Feris, A. Bronstein, and R. Giryes. paper code

  88. Progressive one-shot human parsing, in AAAI, 2021. H. He, J. Zhang, B. Thuraisingham, and D. Tao. paper code

  89. Knowledge is power: Hierarchical-knowledge embedded meta-learning for visual reasoning in artistic domains, in KDD, 2021. W. Zheng, L. Yan, C. Gou, and F.-Y. Wang. paper

  90. MEDA: Meta-learning with data augmentation for few-shot text classification, in IJCAI, 2021. P. Sun, Y. Ouyang, W. Zhang, and X.-Y. Dai. paper

  91. Learning implicit temporal alignment for few-shot video classification, in IJCAI, 2021. S. Zhang, J. Zhou, and X. He. paper code

  92. Few-shot neural human performance rendering from sparse RGBD videos, in IJCAI, 2021. A. Pang, X. Chen, H. Luo, M. Wu, J. Yu, and L. Xu. paper

  93. Uncertainty-aware few-shot image classification, in IJCAI, 2021. Z. Zhang, C. Lan, W. Zeng, Z. Chen, and S. Chan. paper

  94. Few-shot learning with part discovery and augmentation from unlabeled images, in IJCAI, 2021. W. Chen, C. Si, W. Wang, L. Wang, Z. Wang, and T. Tan. paper

  95. Few-shot partial-label learning, in IJCAI, 2021. Y. Zhao, G. Yu, L. Liu, Z. Yan, L. Cui, and C. Domeniconi. paper

  96. One-shot affordance detection, in IJCAI, 2021. H. Luo, W. Zhai, J. Zhang, Y. Cao, and D. Tao. paper

  97. DeFRCN: Decoupled faster R-CNN for few-shot object detection, in ICCV, 2021. L. Qiao, Y. Zhao, Z. Li, X. Qiu, J. Wu, and C. Zhang. paper

  98. Learning meta-class memory for few-shot semantic segmentation, in ICCV, 2021. Z. Wu, X. Shi, G. Lin, and J. Cai. paper

  99. UVStyle-Net: Unsupervised few-shot learning of 3D style similarity measure for B-Reps, in ICCV, 2021. P. Meltzer, H. Shayani, A. Khasahmadi, P. K. Jayaraman, A. Sanghi, and J. Lambourne. paper

  100. LoFGAN: Fusing local representations for few-shot image generation, in ICCV, 2021. Z. Gu, W. Li, J. Huo, L. Wang, and Y. Gao. paper

  101. H3D-Net: Few-shot high-fidelity 3D head reconstruction, in ICCV, 2021. E. Ramon, G. Triginer, J. Escur, A. Pumarola, J. Garcia, X. Giró-i-Nieto, and F. Moreno-Noguer. paper

  102. Learned spatial representations for few-shot talking-head synthesis, in ICCV, 2021. M. Meshry, S. Suri, L. S. Davis, and A. Shrivastava. paper

  103. Putting NeRF on a diet: Semantically consistent few-shot view synthesis, in ICCV, 2021. A. Jain, M. Tancik, and P. Abbeel. paper

  104. Hypercorrelation squeeze for few-shot segmentation, in ICCV, 2021. J. Min, D. Kang, and M. Cho. paper code

  105. Few-shot semantic segmentation with cyclic memory network, in ICCV, 2021. G. Xie, H. Xiong, J. Liu, Y. Yao, and L. Shao. paper

  106. Simpler is better: Few-shot semantic segmentation with classifier weight transformer, in ICCV, 2021. Z. Lu, S. He, X. Zhu, L. Zhang, Y. Song, and T. Xiang. paper code

  107. Unsupervised few-shot action recognition via action-appearance aligned meta-adaptation, in ICCV, 2021. J. Patravali, G. Mittal, Y. Yu, F. Li, and M. Chen. paper

  108. Multiple heads are better than one: few-shot font generation with multiple localized experts, in ICCV, 2021. S. Park, S. Chun, J. Cha, B. Lee, and H. Shim. paper code

  109. Mining latent classes for few-shot segmentation, in ICCV, 2021. L. Yang, W. Zhuo, L. Qi, Y. Shi, and Y. Gao. paper code

  110. Partner-assisted learning for few-shot image classification, in ICCV, 2021. J. Ma, H. Xie, G. Han, S. Chang, A. Galstyan, and W. Abd-Almageed. paper

  111. Hierarchical graph attention network for few-shot visual-semantic learning, in ICCV, 2021. C. Yin, K. Wu, Z. Che, B. Jiang, Z. Xu, and J. Tang. paper

  112. Video pose distillation for few-shot, fine-grained sports action recognition, in ICCV, 2021. J. Hong, M. Fisher, M. Gharbi, and K. Fatahalian. paper

  113. Universal-prototype enhancing for few-shot object detection, in ICCV, 2021. A. Wu, Y. Han, L. Zhu, and Y. Yang. paper code

  114. Query adaptive few-shot object detection with heterogeneous graph convolutional networks, in ICCV, 2021. G. Han, Y. He, S. Huang, J. Ma, and S. Chang. paper

  115. Few-shot visual relationship co-localization, in ICCV, 2021. R. Teotia, V. Mishra, M. Maheshwari, and A. Mishra. paper code

  116. Shallow Bayesian meta learning for real-world few-shot recognition, in ICCV, 2021. X. Zhang, D. Meng, H. Gouk, and T. M. Hospedales. paper code

  117. Super-resolving cross-domain face miniatures by peeking at one-shot exemplar, in ICCV, 2021. P. Li, X. Yu, and Y. Yang. paper

  118. Few-shot segmentation via cycle-consistent transformer, in NeurIPS, 2021. G. Zhang, G. Kang, Y. Yang, and Y. Wei. paper

  119. Generalized and discriminative few-shot object detection via SVD-dictionary enhancement, in NeurIPS, 2021. A. Wu, S. Zhao, C. Deng, and W. Liu. paper

  120. Re-ranking for image retrieval and transductive few-shot classification, in NeurIPS, 2021. X. Shen, Y. Xiao, S. Hu, O. Sbai, and M. Aubry. paper

  121. Neural view synthesis and matching for semi-supervised few-shot learning of 3D pose, in NeurIPS, 2021. A. Wang, S. Mei, A. L. Yuille, and A. Kortylewski. paper

  122. MetaAvatar: Learning animatable clothed human models from few depth images, in NeurIPS, 2021. S. Wang, M. Mihajlovic, Q. Ma, A. Geiger, and S. Tang. paper

  123. Few-shot object detection via association and discrimination, in NeurIPS, 2021. Y. Cao, J. Wang, Y. Jin, T. Wu, K. Chen, Z. Liu, and D. Lin. paper

  124. Rectifying the shortcut learning of background for few-shot learning, in NeurIPS, 2021. X. Luo, L. Wei, L. Wen, J. Yang, L. Xie, Z. Xu, and Q. Tian. paper

  125. D2C: Diffusion-decoding models for few-shot conditional generation, in NeurIPS, 2021. A. Sinha, J. Song, C. Meng, and S. Ermon. paper

  126. Few-shot backdoor attacks on visual object tracking, in ICLR, 2022. Y. Li, H. Zhong, X. Ma, Y. Jiang, and S. Xia. paper code

  127. Temporal alignment prediction for supervised representation learning and few-shot sequence classification, in ICLR, 2022. B. Su, and J. Wen. paper code

  128. Learning non-target knowledge for few-shot semantic segmentation, in CVPR, 2022. Y. Liu, N. Liu, Q. Cao, X. Yao, J. Han, and L. Shao. paper

  129. Learning what not to segment: A new perspective on few-shot segmentation, in CVPR, 2022. C. Lang, G. Cheng, B. Tu, and J. Han. paper code

  130. Few-shot keypoint detection with uncertainty learning for unseen species, in CVPR, 2022. C. Lu, and P. Koniusz. paper

  131. XMP-Font: Self-supervised cross-modality pre-training for few-shot font generation, in CVPR, 2022. W. Liu, F. Liu, F. Ding, Q. He, and Z. Yi. paper

  132. Spatio-temporal relation modeling for few-shot action recognition, in CVPR, 2022. A. Thatipelli, S. Narayan, S. Khan, R. M. Anwer, F. S. Khan, and B. Ghanem. paper code

  133. Attribute group editing for reliable few-shot image generation, in CVPR, 2022. G. Ding, X. Han, S. Wang, S. Wu, X. Jin, D. Tu, and Q. Huang. paper code

  134. Few-shot backdoor defense using Shapley estimation, in CVPR, 2022. J. Guan, Z. Tu, R. He, and D. Tao. paper

  135. Hybrid relation guided set matching for few-shot action recognition, in CVPR, 2022. X. Wang, S. Zhang, Z. Qing, M. Tang, Z. Zuo, C. Gao, R. Jin, and N. Sang. paper code

  136. Label, verify, correct: A simple few shot object detection method, in CVPR, 2022. P. Kaul, W. Xie, and A. Zisserman. paper

  137. InfoNeRF: Ray entropy minimization for few-shot neural volume rendering, in CVPR, 2022. M. Kim, S. Seo, and B. Han. paper

  138. A closer look at few-shot image generation, in CVPR, 2022. Y. Zhao, H. Ding, H. Huang, and N. Cheung. paper code

  139. Motion-modulated temporal fragment alignment network for few-shot action recognition, in CVPR, 2022. J. Wu, T. Zhang, Z. Zhang, F. Wu, and Y. Zhang. paper

  140. Kernelized few-shot object detection with efficient integral aggregation, in CVPR, 2022. S. Zhang, L. Wang, N. Murray, and P. Koniusz. paper code

  141. FS6D: Few-shot 6D pose estimation of novel objects, in CVPR, 2022. Y. He, Y. Wang, H. Fan, J. Sun, and Q. Chen. paper

  142. Look closer to supervise better: One-shot font generation via component-based discriminator, in CVPR, 2022. Y. Kong, C. Luo, W. Ma, Q. Zhu, S. Zhu, N. Yuan, and L. Jin. paper

  143. Generalized few-shot semantic segmentation, in CVPR, 2022. Z. Tian, X. Lai, L. Jiang, S. Liu, M. Shu, H. Zhao, and J. Jia. paper code

  144. Dynamic prototype convolution network for few-shot semantic segmentation, in CVPR, 2022. J. Liu, Y. Bao, G. Xie, H. Xiong, J. Sonke, and E. Gavves. paper

  145. OSOP: A multi-stage one shot object pose estimation framework, in CVPR, 2022. I. Shugurov, F. Li, B. Busam, and S. Ilic. paper

  146. Semantic-aligned fusion transformer for one-shot object detection, in CVPR, 2022. Y. Zhao, X. Guo, and Y. Lu. paper

  147. OnePose: One-shot object pose estimation without CAD models, in CVPR, 2022. J. Sun, Z. Wang, S. Zhang, X. He, H. Zhao, G. Zhang, and X. Zhou. paper code

  148. Few-shot object detection with fully cross-transformer, in CVPR, 2022. G. Han, J. Ma, S. Huang, L. Chen, and S. Chang. paper

  149. Learning to memorize feature hallucination for one-shot image generation, in CVPR, 2022. Y. Xie, Y. Fu, Y. Tai, Y. Cao, J. Zhu, and C. Wang. paper

  150. Few-shot font generation by learning fine-grained local styles, in CVPR, 2022. L. Tang, Y. Cai, J. Liu, Z. Hong, M. Gong, M. Fan, J. Han, J. Liu, E. Ding, and J. Wang. paper

  151. Balanced and hierarchical relation learning for one-shot object detection, in CVPR, 2022. H. Yang, S. Cai, H. Sheng, B. Deng, J. Huang, X. Hua, Y. Tang, and Y. Zhang. paper

  152. Few-shot head swapping in the wild, in CVPR, 2022. C. Shu, H. Wu, H. Zhou, J. Liu, Z. Hong, C. Ding, J. Han, J. Liu, E. Ding, and J. Wang. paper

  153. Integrative few-shot learning for classification and segmentation, in CVPR, 2022. D. Kang, and M. Cho. paper

  154. Attribute surrogates learning and spectral tokens pooling in transformers for few-shot learning, in CVPR, 2022. Y. He, W. Liang, D. Zhao, H. Zhou, W. Ge, Y. Yu, and W. Zhang. paper code

  155. Task discrepancy maximization for fine-grained few-shot classification, in CVPR, 2022. S. Lee, W. Moon, and J. Heo. paper

  156. Channel importance matters in few-shot image classification, in ICML, 2022. X. Luo, J. Xu, and Z. Xu. paper

  157. Long-short term cross-transformer in compressed domain for few-shot video classification, in IJCAI, 2022. W. Luo, Y. Liu, B. Li, W. Hu, Y. Miao, and Y. Li. paper

  158. HifiHead: One-shot high fidelity neural head synthesis with 3D control, in IJCAI, 2022. F. Zhu, J. Zhu, W. Chu, Y. Tai, Z. Xie, X. Huang, and C. Wang. paper code

  159. Iterative few-shot semantic segmentation from image label text, in IJCAI, 2022. H. Wang, L. Liu, W. Zhang, J. Zhang, Z. Gan, Y. Wang, C. Wang, and H. Wang. paper code

  160. Beyond the prototype: Divide-and-conquer proxies for few-shot segmentation, in IJCAI, 2022. C. Lang, B. Tu, G. Cheng, and J. Han. paper code

  161. CATrans: Context and affinity transformer for few-shot segmentation, in IJCAI, 2022. S. Zhang, T. Wu, S. Wu, and G. Guo. paper

  162. Masked feature generation network for few-shot learning, in IJCAI, 2022. Y. Yu, D. Zhang, and Z. Ji. paper

  163. Decoupling classifier for boosting few-shot object detection and instance segmentation, in NeurIPS, 2022. B.-B. Gao, X. Chen, Z. Huang, C. Nie, J. Liu, J. Lai, G. Jiang, X. Wang, and C. Wang. paper code

  164. Searching for better spatio-temporal alignment in few-shot action recognition, in NeurIPS, 2022. Y. Cao, X. Su, Q. Tang, S. You, X. Lu, and C. Xu. paper

  165. Feature-proxy transformer for few-shot segmentation, in NeurIPS, 2022. J.-W. Zhang, Y. Sun, Y. Yang, and W. Chen. paper code

  166. Intermediate prototype mining transformer for few-shot semantic segmentation, in NeurIPS, 2022. Y. Liu, N. Liu, X. Yao, and J. Han. paper code

  167. OnePose++: Keypoint-free one-shot object pose estimation without CAD models, in NeurIPS, 2022. X. He, J. Sun, Y. Wang, D. Huang, H. Bao, and X. Zhou. paper code

  168. Mask matching transformer for few-shot segmentation, in NeurIPS, 2022. S. Jiao, G. Zhang, S. Navasardyan, L. Chen, Y. Zhao, Y. Wei, and H. Shi. paper code

  169. Learning dense object descriptors from multiple views for low-shot category generalization, in NeurIPS, 2022. S. Stojanov, N. A. Thai, Z. Huang, and J. M. Rehg. paper code

  170. Pose adaptive dual mixup for few-shot single-view 3D reconstruction, in AAAI, 2022. T. Y. Cheng, H.-R. Yang, N. Trigoni, H.-T. Chen, and T.-L. Liu. paper

  171. Meta faster R-CNN: Towards accurate few-shot object detection with attentive feature alignment, in AAAI, 2022. G. Han, S. Huang, J. Ma, Y. He, and S.-F. Chang. paper code

  172. TA2N: Two-stage action alignment network for few-shot action recognition, in AAAI, 2022. S. Li, H. Liu, R. Qian, Y. Li, J. See, M. Fei, X. Yu, and W. Lin. paper

  173. Learning from the target: Dual prototype network for few shot semantic segmentation, in AAAI, 2022. B. Mao, X. Zhang, L. Wang, Q. Zhang, S. Xiang, and C. Pan. paper

  174. OA-FSUI2IT: A novel few-shot cross domain object detection framework with object-aware few-shot unsupervised image-to-image translation, in AAAI, 2022. L. Zhao, Y. Meng, and L. Xu. paper

  175. When facial expression recognition meets few-shot learning: A joint and alternate learning framework, in AAAI, 2022. X. Zou, Y. Yan, J.-H. Xue, S. Chen, and H. Wang. paper

  176. Dual attention networks for few-shot fine-grained recognition, in AAAI, 2022. S.-L. Xu, F. Zhang, X.-S. Wei, and J. Wang. paper

  177. Inferring prototypes for multi-label few-shot image classification with word vector guided attention, in AAAI, 2022. K. Yan, C. Zhang, J. Hou, P. Wang, Z. Bouraoui, S. Jameel, and S. Schockaert. paper

  178. Analogy-forming transformers for few-shot 3D parsing, in ICLR, 2023. N. Gkanatsios, M. Singh, Z. Fang, S. Tulsiani, and K. Fragkiadaki. paper code

  179. Suppressing the heterogeneity: A strong feature extractor for few-shot segmentation, in ICLR, 2023. Z. Hu, Y. Sun, and Y. Yang. paper

  180. Universal few-shot learning of dense prediction tasks with visual token matching, in ICLR, 2023. D. Kim, J. Kim, S. Cho, C. Luo, and S. Hong. paper code

  181. Meta learning to bridge vision and language models for multimodal few-shot learning, in ICLR, 2023. I. Najdenkoska, X. Zhen, and M. Worring. paper code

  182. Few-shot semantic image synthesis with class affinity transfer, in CVPR, 2023. M. Careil, J. Verbeek, and S. Lathuilière. paper

  183. Semantic prompt for few-shot image recognition, in CVPR, 2023. W. Chen, C. Si, Z. Zhang, L. Wang, Z. Wang, and T. Tan. paper

  184. ViewNet: A novel projection-based backbone with view pooling for few-shot point cloud classification, in CVPR, 2023. J. Chen, M. Yang, and S. Velipasalar. paper

  185. Meta-tuning loss functions and data augmentation for few-shot object detection, in CVPR, 2023. B. Demirel, O. B. Baran, and R. G. Cinbis. paper

  186. Few-shot geometry-aware keypoint localization, in CVPR, 2023. X. He, G. Bharaj, D. Ferman, H. Rhodin, and P. Garrido. paper code

  187. AsyFOD: An asymmetric adaptation paradigm for few-shot domain adaptive object detection, in CVPR, 2023. Y. Gao, K.-Y. Lin, J. Yan, Y. Wang, and W.-S. Zheng. paper code

  188. NIFF: Alleviating forgetting in generalized few-shot object detection via neural instance feature forging, in CVPR, 2023. K. Guirguis, J. Meier, G. Eskandar, M. Kayser, B. Yang, and J. Beyerer. paper

  189. A strong baseline for generalized few-shot semantic segmentation, in CVPR, 2023. S. Hajimiri, M. Boudiaf, I. B. Ayed, and J. Dolz. paper code

  190. StyleAdv: Meta style adversarial training for cross-domain few-shot learning, in CVPR, 2023. Y. Fu, Y. Xie, Y. Fu, and Y.-G. Jiang. paper code

  191. BlendFields: Few-shot example-driven facial modeling, in CVPR, 2023. K. Kania, S. J. Garbin, A. Tagliasacchi, V. Estellers, K. M. Yi, J. Valentin, T. Trzcinski, and M. Kowalski. paper

  192. Learning orthogonal prototypes for generalized few-shot semantic segmentation, in CVPR, 2023. S. Liu, Y. Zhang, Z. Qiu, H. Xie, Y. Zhang, and T. Yao. paper

  193. DiGeo: Discriminative geometry-aware learning for generalized few-shot object detection, in CVPR, 2023. J. Ma, Y. Niu, J. Xu, S. Huang, G. Han, and S.-F. Chang. paper code

  194. Hierarchical dense correlation distillation for few-shot segmentation, in CVPR, 2023. B. Peng, Z. Tian, X. Wu, C. Wang, S. Liu, J. Su, and J. Jia. paper code

  195. Rethinking the correlation in few-shot segmentation: A buoys view, in CVPR, 2023. Y. Wang, R. Sun, and T. Zhang. paper

  196. CF-Font: Content fusion for few-shot font generation, in CVPR, 2023. C. Wang, M. Zhou, T. Ge, Y. Jiang, H. Bao, and W. Xu. paper code

  197. MoLo: Motion-augmented long-short contrastive learning for few-shot action recognition, in CVPR, 2023. X. Wang, S. Zhang, Z. Qing, C. Gao, Y. Zhang, D. Zhao, and N. Sang. paper code

  198. Active exploration of multimodal complementarity for few-shot action recognition, in CVPR, 2023. Y. Wanyan, X. Yang, C. Chen, and C. Xu. paper

  199. Generating features with increased crop-related diversity for few-shot object detection, in CVPR, 2023. J. Xu, H. Le, and D. Samaras. paper

  200. SMAE: Few-shot learning for HDR deghosting with saturation-aware masked autoencoders, in CVPR, 2023. Q. Yan, S. Zhang, W. Chen, H. Tang, Y. Zhu, J. Sun, L. V. Gool, and Y. Zhang. paper

  201. MIANet: Aggregating unbiased instance and general information for few-shot semantic segmentation, in CVPR, 2023. Y. Yang, Q. Chen, Y. Feng, and T. Huang. paper code

  202. FreeNeRF: Improving few-shot neural rendering with free frequency regularization, in CVPR, 2023. J. Yang, M. Pavone, and Y. Wang. paper code

  203. Exploring incompatible knowledge transfer in few-shot image generation, in CVPR, 2023. Y. Zhao, C. Du, M. Abdollahzadeh, T. Pang, M. Lin, S. Yan, and N.-M. Cheung. paper code

  204. Where is my spot? few-shot image generation via latent subspace optimization, in CVPR, 2023. C. Zheng, B. Liu, H. Zhang, X. Xu, and S. He. paper code

  205. Distilling self-supervised vision transformers for weakly-supervised few-shot classification & segmentation, in CVPR, 2023. D. Kang, P. Koniusz, M. Cho, and N. Murray. paper

  206. FGNet: Towards filling the intra-class and inter-class gaps for few-shot segmentation, in IJCAI, 2023. Y. Zhang, W. Yang, and S. Wang. paper code

  207. Clustered-patch element connection for few-shot learning, in IJCAI, 2023. J. Lai, S. Yang, J. Zhou, W. Wu, X. Chen, J. Liu, B.-B. Gao, and C. Wang. paper

  208. GeCoNeRF: Few-shot neural radiance fields via geometric consistency, in ICML, 2023. M. Kwak, J. Song, and S. Kim. paper code

  209. Few-shot common action localization via cross-attentional fusion of context and temporal dynamics, in ICCV, 2023. J. Lee, M. Jain, and S. Yun. paper

  210. StyleDomain: Efficient and lightweight parameterizations of StyleGAN for one-shot and few-shot domain adaptation, in ICCV, 2023. A. Alanov, V. Titov, M. Nakhodnov, and D. Vetrov. paper code

  211. FlipNeRF: Flipped reflection rays for few-shot novel view synthesis, in ICCV, 2023. S. Seo, Y. Chang, and N. Kwak. paper

  212. Few-shot physically-aware articulated mesh generation via hierarchical deformation, in ICCV, 2023. X. Liu, B. Wang, H. Wang, and L. Yi. paper code

  213. SparseNeRF: Distilling depth ranking for few-shot novel view synthesis, in ICCV, 2023. G. Wang, Z. Chen, C. C. Loy, and Z. Liu. paper code

  214. Few-shot video classification via representation fusion and promotion learning, in ICCV, 2023. H. Xia, K. Li, M. R. Min, and Z. Ding. paper

  215. Augmenting and aligning snippets for few-shot video domain adaptation, in ICCV, 2023. Y. Xu, J. Yang, Y. Zhou, Z. Chen, M. Wu, and X. Li. paper code

  216. One-shot recognition of any material anywhere using contrastive learning with physics-based rendering, in ICCV, 2023. M. S. Drehwald, S. Eppel, J. Li, H. Hao, and A. Aspuru-Guzik. paper code

  217. FS-DETR: Few-shot detection transformer with prompting and without re-training, in ICCV, 2023. A. Bulat, R. Guerrero, B. Martinez, and G. Tzimiropoulos. paper

  218. Confidence-based visual dispersal for few-shot unsupervised domain adaptation, in ICCV, 2023. Y. Xiong, H. Chen, Z. Lin, S. Zhao, and G. Ding. paper code

  219. CDFSL-V: Cross-domain few-shot learning for videos, in ICCV, 2023. S. Samarasinghe, M. N. Rizve, N. Kardan, and M. Shah. paper code

  220. Generalized few-shot point cloud segmentation via geometric words, in ICCV, 2023. Y. Xu, C. Hu, N. Zhao, and G. H. Lee. paper code

  221. Invariant training 2D-3D joint hard samples for few-shot point cloud recognition, in ICCV, 2023. X. Yi, J. Deng, Q. Sun, X. Hua, J. Lim, and H. Zhang. paper code

  222. CIRI: Curricular inactivation for residue-aware one-shot video inpainting, in ICCV, 2023. W. Zheng, C. Xu, X. Xu, W. Liu, and S. He. paper code

  223. σ-Adaptive decoupled prototype for few-shot object detection, in ICCV, 2023. J. Du, S. Zhang, Q. Chen, H. Le, Y. Sun, Y. Ni, J. Wang, B. He, and J. Wang. paper

  224. Parallel attention interaction network for few-shot skeleton-based action recognition, in ICCV, 2023. X. Liu, S. Zhou, L. Wang, and G. Hua. paper

  225. Robust one-shot face video re-enactment using hybrid latent spaces of StyleGAN2, in ICCV, 2023. T. Oorloff, and Y. Yacoob. paper code

  226. Informative data mining for one-shot cross-domain semantic segmentation, in ICCV, 2023. Y. Wang, J. Liang, J. Xiao, S. Mei, Y. Yang, and Z. Zhang. paper code

  227. The Euclidean space is evil: Hyperbolic attribute editing for few-shot image generation, in ICCV, 2023. L. Li, Y. Zhang, and S. Wang. paper code

  228. Few shot font generation via transferring similarity guided global style and quantization local style, in ICCV, 2023. W. Pan, A. Zhu, X. Zhou, B. K. Iwana, and S. Li. paper code

  229. Boosting few-shot action recognition with graph-guided hybrid matching, in ICCV, 2023. J. Xing, M. Wang, Y. Ruan, B. Chen, Y. Guo, B. Mu, G. Dai, J. Wang, and Y. Liu. paper code

  230. MSI: Maximize support-set information for few-shot segmentation, in ICCV, 2023. S. Moon, S. S. Sohn, H. Zhou, S. Yoon, V. Pavlovic, M. H. Khan, and M. Kapadia. paper code

  231. FastRecon: Few-shot industrial anomaly detection via fast feature reconstruction, in ICCV, 2023. Z. Fang, X. Wang, H. Li, J. Liu, Q. Hu, and J. Xiao. paper code

  232. Self-calibrated cross attention network for few-shot segmentation, in ICCV, 2023. Q. Xu, W. Zhao, G. Lin, and C. Long. paper code

  233. Multi-grained temporal prototype learning for few-shot video object segmentation, in ICCV, 2023. N. Liu, K. Nan, W. Zhao, Y. Liu, X. Yao, S. Khan, H. Cholakkal, R. M. Anwer, J. Han, and F. S. Khan. paper code

  234. HyperReenact: One-shot reenactment via jointly learning to refine and retarget faces, in ICCV, 2023. S. Bounareli, C. Tzelepis, V. Argyriou, I. Patras, and G. Tzimiropoulos. paper code

  235. General image-to-image translation with one-shot image guidance, in ICCV, 2023. B. Cheng, Z. Liu, Y. Peng, and Y. Lin. paper code

  236. ActorsNeRF: Animatable few-shot human rendering with generalizable NeRFs, in ICCV, 2023. J. Mu, S. Sang, N. Vasconcelos, and X. Wang. paper code

  237. One-shot implicit animatable avatars with model-based priors, in ICCV, 2023. Y. Huang, H. Yi, W. Liu, H. Wang, B. Wu, W. Wang, B. Lin, D. Zhang, and D. Cai. paper code

  238. Preface: A data-driven volumetric prior for few-shot ultra high-resolution face synthesis, in ICCV, 2023. M. C. Bühler, K. Sarkar, T. Shah, G. Li, D. Wang, L. Helminger, S. Orts-Escolano, D. Lagun, O. Hilliges, T. Beeler, and A. Meka. paper

  239. DINAR: Diffusion inpainting of neural textures for one-shot human avatars, in ICCV, 2023. D. Svitov, D. Gudkov, R. Bashirov, and V. Lempitsky. paper

  240. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation, in ICCV, 2023. J. Z. Wu, Y. Ge, X. Wang, S. W. Lei, Y. Gu, Y. Shi, W. Hsu, Y. Shan, X. Qie, and M. Z. Shou. paper code

  241. Phasic content fusing diffusion model with directional distribution consistency for few-shot model adaption, in ICCV, 2023. T. Hu, J. Zhang, L. Liu, R. Yi, S. Kou, H. Zhu, X. Chen, Y. Wang, C. Wang, and L. Ma. paper code

  242. Prototypical variational autoencoder for 3D few-shot object detection, in NeurIPS, 2023. W. Tang, B. Yang, X. Li, Y. Liu, P. Heng, and C. Fu. paper

  243. Generalizable one-shot 3D neural head avatar, in NeurIPS, 2023. X. Li, S. D. Mello, S. Liu, K. Nagano, U. Iqbal, and J. Kautz. paper code

  244. Focus on query: Adversarial mining transformer for few-shot segmentation, in NeurIPS, 2023. Y. Wang, N. Luo, and T. Zhang. paper code

  245. Bi-directional feature reconstruction network for fine-grained few-shot image classification, in AAAI, 2023. J. Wu, D. Chang, A. Sain, X. Li, Z. Ma, J. Cao, J. Guo, and Y.-Z. Song. paper code

  246. Revisiting the spatial and temporal modeling for few-shot action recognition, in AAAI, 2023. J. Xing, M. Wang, Y. Liu, and B. Mu. paper

  247. Disentangle and remerge: Interventional knowledge distillation for few-shot object detection from a conditional causal perspective, in AAAI, 2023. J. Li, Y. Zhang, W. Qiang, L. Si, C. Jiao, X. Hu, C. Zheng, and F. Sun. paper code

  248. Breaking immutable: Information-coupled prototype elaboration for few-shot object detection, in AAAI, 2023. X. Lu, W. Diao, Y. Mao, J. Li, P. Wang, X. Sun, and K. Fu. paper code

  249. Few-shot object detection via variational feature aggregation, in AAAI, 2023. J. Han, Y. Ren, J. Ding, K. Yan, and G.-S. Xia. paper

  250. Few-shot 3D point cloud semantic segmentation via stratified class-specific attention based transformer network, in AAAI, 2023. C. Zhang, Z. Wu, X. Wu, Z. Zhao, and S. Wang. paper code

  251. Few-shot composition learning for image retrieval with prompt tuning, in AAAI, 2023. J. Wu, R. Wang, H. Zhao, R. Zhang, C. Lu, S. Li, and R. Henao. paper

  252. Real3D-Portrait: One-shot realistic 3D talking portrait synthesis, in ICLR, 2024. Z. Ye, T. Zhong, Y. Ren, J. Yang, W. Li, J. Huang, Z. Jiang, J. He, R. Huang, J. Liu, C. Zhang, X. Yin, Z. Ma, and Z. Zhao. paper code

  253. Personalize segment anything model with one shot, in ICLR, 2024. R. Zhang, Z. Jiang, Z. Guo, S. Yan, J. Pan, H. Dong, Y. Qiao, P. Gao, and H. Li. paper

  254. Matcher: Segment anything with one shot using all-purpose feature matching, in ICLR, 2024. Y. Liu, M. Zhu, H. Li, H. Chen, X. Wang, and C. Shen. paper

  255. SparseDFF: Sparse-view feature distillation for one-shot dexterous manipulation, in ICLR, 2024. Q. Wang, H. Zhang, C. Deng, Y. You, H. Dong, Y. Zhu, and L. Guibas. paper

Robotics

  1. Towards one shot learning by imitation for humanoid robots, in ICRA, 2010. Y. Wu and Y. Demiris. paper

  2. Learning manipulation actions from a few demonstrations, in ICRA, 2013. N. Abdo, H. Kretzschmar, L. Spinello, and C. Stachniss. paper

  3. Learning assistive strategies from a few user-robot interactions: Model-based reinforcement learning approach, in ICRA, 2016. M. Hamaya, T. Matsubara, T. Noda, T. Teramae, and J. Morimoto. paper

  4. One-shot imitation learning, in NeurIPS, 2017. Y. Duan, M. Andrychowicz, B. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba. paper

  5. Meta-learning language-guided policy learning, in ICLR, 2019. J. D. Co-Reyes, A. Gupta, S. Sanjeev, N. Altieri, J. DeNero, P. Abbeel, and S. Levine. paper

  6. Meta reinforcement learning with autonomous inference of subtask dependencies, in ICLR, 2020. S. Sohn, H. Woo, J. Choi, and H. Lee. paper

  7. Watch, try, learn: Meta-learning from demonstrations and rewards, in ICLR, 2020. A. Zhou, E. Jang, D. Kappler, A. Herzog, M. Khansari, P. Wohlhart, Y. Bai, M. Kalakrishnan, S. Levine, and C. Finn. paper

  8. Few-shot Bayesian imitation learning with logical program policies, in AAAI, 2020. T. Silver, K. R. Allen, A. K. Lew, L. P. Kaelbling, and J. Tenenbaum. paper

  9. One solution is not all you need: Few-shot extrapolation via structured MaxEnt RL, in NeurIPS, 2020. S. Kumar, A. Kumar, S. Levine, and C. Finn. paper

  10. Bowtie networks: Generative modeling for joint few-shot recognition and novel-view synthesis, in ICLR, 2021. Z. Bao, Y. Wang, and M. Hebert. paper

  11. Demonstration-conditioned reinforcement learning for few-shot imitation, in ICML, 2021. C. R. Dance, J. Perez, and T. Cachet. paper

  12. Hierarchical few-shot imitation with skill transition models, in ICLR, 2022. K. Hakhamaneshi, R. Zhao, A. Zhan, P. Abbeel, and M. Laskin. paper

  13. Prompting decision transformer for few-shot policy generalization, in ICML, 2022. M. Xu, Y. Shen, S. Zhang, Y. Lu, D. Zhao, J. B. Tenenbaum, and C. Gan. paper code

  14. Stage conscious attention network (SCAN): A demonstration-conditioned policy for few-shot imitation, in AAAI, 2022. J.-F. Yeh, C.-M. Chung, H.-T. Su, Y.-T. Chen, and W. H. Hsu. paper

  15. Online prototype alignment for few-shot policy transfer, in ICML, 2023. Q. Yi, R. Zhang, S. Peng, J. Guo, Y. Gao, K. Yuan, R. Chen, S. Lan, X. Hu, Z. Du, X. Zhang, Q. Guo, and Y. Chen. paper code

  16. LLM-planner: Few-shot grounded planning for embodied agents with large language models, in ICCV, 2023. C. H. Song, J. Wu, C. Washington, B. M. Sadler, W. Chao, and Y. Su. paper code

  17. Where2Explore: Few-shot affordance learning for unseen novel categories of articulated objects, in NeurIPS, 2023. C. Ning, R. Wu, H. Lu, K. Mo, and H. Dong. paper

  18. Skill machines: Temporal logic skill composition in reinforcement learning, in ICLR, 2024. G. N. Tasse, D. Jarvis, S. James, and B. Rosman. paper

Natural Language Processing

  1. High-risk learning: Acquiring new word vectors from tiny data, in EMNLP, 2017. A. Herbelot and M. Baroni. paper

  2. Few-shot representation learning for out-of-vocabulary words, in ACL, 2019. Z. Hu, T. Chen, K.-W. Chang, and Y. Sun. paper

  3. Learning to customize model structures for few-shot dialogue generation tasks, in ACL, 2020. Y. Song, Z. Liu, W. Bi, R. Yan, and M. Zhang. paper

  4. Few-shot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network, in ACL, 2020. Y. Hou, W. Che, Y. Lai, Z. Zhou, Y. Liu, H. Liu, and T. Liu. paper

  5. Meta-reinforced multi-domain state generator for dialogue systems, in ACL, 2020. Y. Huang, J. Feng, M. Hu, X. Wu, X. Du, and S. Ma. paper

  6. Universal natural language processing with limited annotations: Try few-shot textual entailment as a start, in EMNLP, 2020. W. Yin, N. F. Rajani, D. Radev, R. Socher, and C. Xiong. paper code

  7. Simple and effective few-shot named entity recognition with structured nearest neighbor learning, in EMNLP, 2020. Y. Yang, and A. Katiyar. paper code

  8. Discriminative nearest neighbor few-shot intent detection by transferring natural language inference, in EMNLP, 2020. J. Zhang, K. Hashimoto, W. Liu, C. Wu, Y. Wan, P. Yu, R. Socher, and C. Xiong. paper code

  9. Few-shot learning for opinion summarization, in EMNLP, 2020. A. Bražinskas, M. Lapata, and I. Titov. paper code

  10. Few-shot complex knowledge base question answering via meta reinforcement learning, in EMNLP, 2020. Y. Hua, Y. Li, G. Haffari, G. Qi, and T. Wu. paper code

  11. Self-supervised meta-learning for few-shot natural language classification tasks, in EMNLP, 2020. T. Bansal, R. Jha, T. Munkhdalai, and A. McCallum. paper code

  12. Uncertainty-aware self-training for few-shot text classification, in NeurIPS, 2020. S. Mukherjee, and A. Awadallah. paper code

  13. Learning to extrapolate knowledge: Transductive few-shot out-of-graph link prediction, in NeurIPS, 2020. J. Baek, D. B. Lee, and S. J. Hwang. paper code

  14. MetaNER: Named entity recognition with meta-learning, in WWW, 2020. J. Li, S. Shang, and L. Shao. paper

  15. Conditionally adaptive multi-task learning: Improving transfer learning in NLP using fewer parameters & less data, in ICLR, 2021. J. Pilault, A. E. hattami, and C. Pal. paper code

  16. Revisiting few-sample BERT fine-tuning, in ICLR, 2021. T. Zhang, F. Wu, A. Katiyar, K. Q. Weinberger, and Y. Artzi. paper code

  17. Few-shot conversational dense retrieval, in SIGIR, 2021. S. Yu, Z. Liu, C. Xiong, T. Feng, and Z. Liu. paper code

  18. Few-shot language coordination by modeling theory of mind, in ICML, 2021. H. Zhu, G. Neubig, and Y. Bisk. paper code

  19. KEML: A knowledge-enriched meta-learning framework for lexical relation classification, in AAAI, 2021. C. Wang, M. Qiu, J. Huang, and X. He. paper

  20. Few-shot learning for multi-label intent detection, in AAAI, 2021. Y. Hou, Y. Lai, Y. Wu, W. Che, and T. Liu. paper code

  21. SALNet: Semi-supervised few-shot text classification with attention-based lexicon construction, in AAAI, 2021. J.-H. Lee, S.-K. Ko, and Y.-S. Han. paper

  22. Learning from my friends: Few-shot personalized conversation systems via social networks, in AAAI, 2021. Z. Tian, W. Bi, Z. Zhang, D. Lee, Y. Song, and N. L. Zhang. paper code

  23. Relative and absolute location embedding for few-shot node classification on graph, in AAAI, 2021. Z. Liu, Y. Fang, C. Liu, and S. C. H. Hoi. paper

  24. Few-shot question answering by pretraining span selection, in ACL-IJCNLP, 2021. O. Ram, Y. Kirstain, J. Berant, A. Globerson, and O. Levy. paper code

  25. A closer look at few-shot crosslingual transfer: The choice of shots matters, in ACL-IJCNLP, 2021. M. Zhao, Y. Zhu, E. Shareghi, I. Vulic, R. Reichart, A. Korhonen, and H. Schütze. paper code

  26. Learning from miscellaneous other-class words for few-shot named entity recognition, in ACL-IJCNLP, 2021. M. Tong, S. Wang, B. Xu, Y. Cao, M. Liu, L. Hou, and J. Li. paper code

  27. Distinct label representations for few-shot text classification, in ACL-IJCNLP, 2021. S. Ohashi, J. Takayama, T. Kajiwara, and Y. Arase. paper code

  28. Entity concept-enhanced few-shot relation extraction, in ACL-IJCNLP, 2021. S. Yang, Y. Zhang, G. Niu, Q. Zhao, and S. Pu. paper code

  29. On training instance selection for few-shot neural text generation, in ACL-IJCNLP, 2021. E. Chang, X. Shen, H.-S. Yeh, and V. Demberg. paper code

  30. Unsupervised neural machine translation for low-resource domains via meta-learning, in ACL-IJCNLP, 2021. C. Park, Y. Tae, T. Kim, S. Yang, M. A. Khan, L. Park, and J. Choo. paper code

  31. Meta-learning with variational semantic memory for word sense disambiguation, in ACL-IJCNLP, 2021. Y. Du, N. Holla, X. Zhen, C. Snoek, and E. Shutova. paper code

  32. Multi-label few-shot learning for aspect category detection, in ACL-IJCNLP, 2021. M. Hu, S. Zhao, H. Guo, C. Xue, H. Gao, T. Gao, R. Cheng, and Z. Su. paper

  33. TextSETTR: Few-shot text style extraction and tunable targeted restyling, in ACL-IJCNLP, 2021. P. Riley, N. Constant, M. Guo, G. Kumar, D. Uthus, and Z. Parekh. paper

  34. Few-shot text ranking with meta adapted synthetic weak supervision, in ACL-IJCNLP, 2021. S. Sun, Y. Qian, Z. Liu, C. Xiong, K. Zhang, J. Bao, Z. Liu, and P. Bennett. paper code

  35. PROTAUGMENT: Intent detection meta-learning through unsupervised diverse paraphrasing, in ACL-IJCNLP, 2021. T. Dopierre, C. Gravier, and W. Logerais. paper code

  36. AUGNLG: Few-shot natural language generation using self-trained data augmentation, in ACL-IJCNLP, 2021. X. Xu, G. Wang, Y.-B. Kim, and S. Lee. paper code

  37. Meta self-training for few-shot neural sequence labeling, in KDD, 2021. Y. Wang, S. Mukherjee, H. Chu, Y. Tu, M. Wu, J. Gao, and A. H. Awadallah. paper code

  38. Knowledge-enhanced domain adaptation in few-shot relation classification, in KDD, 2021. J. Zhang, J. Zhu, Y. Yang, W. Shi, C. Zhang, and H. Wang. paper code

  39. Few-shot text classification with triplet networks, data augmentation, and curriculum learning, in NAACL-HLT, 2021. J. Wei, C. Huang, S. Vosoughi, Y. Cheng, and S. Xu. paper code

  40. Few-shot intent classification and slot filling with retrieved examples, in NAACL-HLT, 2021. D. Yu, L. He, Y. Zhang, X. Du, P. Pasupat, and Q. Li. paper

  41. Non-parametric few-shot learning for word sense disambiguation, in NAACL-HLT, 2021. H. Chen, M. Xia, and D. Chen. paper code

  42. Towards few-shot fact-checking via perplexity, in NAACL-HLT, 2021. N. Lee, Y. Bang, A. Madotto, and P. Fung. paper

  43. ConVEx: Data-efficient and few-shot slot labeling, in NAACL-HLT, 2021. M. Henderson, and I. Vulic. paper

  44. Few-shot text generation with natural language instructions, in EMNLP, 2021. T. Schick, and H. Schütze. paper

  45. Towards realistic few-shot relation extraction, in EMNLP, 2021. S. Brody, S. Wu, and A. Benton. paper code

  46. Few-shot emotion recognition in conversation with sequential prototypical networks, in EMNLP, 2021. G. Guibon, M. Labeau, H. Flamein, L. Lefeuvre, and C. Clavel. paper code

  47. Learning prototype representations across few-shot tasks for event detection, in EMNLP, 2021. V. Lai, F. Dernoncourt, and T. H. Nguyen. paper

  48. Exploring task difficulty for few-shot relation extraction, in EMNLP, 2021. J. Han, B. Cheng, and W. Lu. paper code

  49. Honey or poison? Solving the trigger curse in few-shot event detection via causal intervention, in EMNLP, 2021. J. Chen, H. Lin, X. Han, and L. Sun. paper code

  50. Nearest neighbour few-shot learning for cross-lingual classification, in EMNLP, 2021. M. S. Bari, B. Haider, and S. Mansour. paper

  51. Knowledge-aware meta-learning for low-resource text classification, in EMNLP, 2021. H. Yao, Y. Wu, M. Al-Shedivat, and E. P. Xing. paper code

  52. Few-shot named entity recognition: An empirical baseline study, in EMNLP, 2021. J. Huang, C. Li, K. Subudhi, D. Jose, S. Balakrishnan, W. Chen, B. Peng, J. Gao, and J. Han. paper

  53. MetaTS: Meta teacher-student network for multilingual sequence labeling with minimal supervision, in EMNLP, 2021. Z. Li, D. Zhang, T. Cao, Y. Wei, Y. Song, and B. Yin. paper

  54. Meta-LMTC: Meta-learning for large-scale multi-label text classification, in EMNLP, 2021. R. Wang, X. Su, S. Long, X. Dai, S. Huang, and J. Chen. paper

  55. Ontology-enhanced prompt-tuning for few-shot learning, in WWW, 2022. H. Ye, N. Zhang, S. Deng, X. Chen, H. Chen, F. Xiong, X. Chen, and H. Chen. paper

  56. EICO: Improving few-shot text classification via explicit and implicit consistency regularization, in Findings of ACL, 2022. L. Zhao, and C. Yao. paper

  57. Dialogue summaries as dialogue states (DS2), template-guided summarization for few-shot dialogue state tracking, in Findings of ACL, 2022. J. Shin, H. Yu, H. Moon, A. Madotto, and J. Park. paper code

  58. A few-shot semantic parser for wizard-of-oz dialogues with the precise thingtalk representation, in Findings of ACL, 2022. G. Campagna, S. J. Semnani, R. Kearns, L. J. K. Sato, S. Xu, and M. S. Lam. paper

  59. Multi-stage prompting for knowledgeable dialogue generation, in Findings of ACL, 2022. Z. Liu, M. Patwary, R. Prenger, S. Prabhumoye, W. Ping, M. Shoeybi, and B. Catanzaro. paper code

  60. Few-shot named entity recognition with self-describing networks, in ACL, 2022. J. Chen, Q. Liu, H. Lin, X. Han, and L. Sun. paper code

  61. CLIP models are few-shot learners: Empirical studies on VQA and visual entailment, in ACL, 2022. H. Song, L. Dong, W. Zhang, T. Liu, and F. Wei. paper

  62. CONTaiNER: Few-shot named entity recognition via contrastive learning, in ACL, 2022. S. S. S. Das, A. Katiyar, R. J. Passonneau, and R. Zhang. paper code

  63. Few-shot controllable style transfer for low-resource multilingual settings, in ACL, 2022. K. Krishna, D. Nathani, X. Garcia, B. Samanta, and P. Talukdar. paper

  64. Label semantic aware pre-training for few-shot text classification, in ACL, 2022. A. Mueller, J. Krone, S. Romeo, S. Mansour, E. Mansimov, Y. Zhang, and D. Roth. paper

  65. Inverse is better! Fast and accurate prompt for few-shot slot tagging, in Findings of ACL, 2022. Y. Hou, C. Chen, X. Luo, B. Li, and W. Che. paper

  66. Label semantics for few shot named entity recognition, in Findings of ACL, 2022. J. Ma, M. Ballesteros, S. Doss, R. Anubhai, S. Mallya, Y. Al-Onaizan, and D. Roth. paper

  67. Hierarchical recurrent aggregative generation for few-shot NLG, in Findings of ACL, 2022. G. Zhou, G. Lampouras, and I. Iacobacci. paper

  68. Towards few-shot entity recognition in document images: A label-aware sequence-to-sequence framework, in Findings of ACL, 2022. Z. Wang, and J. Shang. paper

  69. A good prompt is worth millions of parameters: Low-resource prompt-based learning for vision-language models, in ACL, 2022. W. Jin, Y. Cheng, Y. Shen, W. Chen, and X. Ren. paper code

  70. Generated knowledge prompting for commonsense reasoning, in ACL, 2022. J. Liu, A. Liu, X. Lu, S. Welleck, P. West, R. L. Bras, Y. Choi, and H. Hajishirzi. paper code

  71. End-to-end modeling via information tree for one-shot natural language spatial video grounding, in ACL, 2022. M. Li, T. Wang, H. Zhang, S. Zhang, Z. Zhao, J. Miao, W. Zhang, W. Tan, J. Wang, P. Wang, S. Pu, and F. Wu. paper

  72. Leveraging task transferability to meta-learning for clinical section classification with limited data, in ACL, 2022. Z. Chen, J. Kim, R. Bhakta, and M. Y. Sir. paper

  73. Improving meta-learning for low-resource text classification and generation via memory imitation, in ACL, 2022. Y. Zhao, Z. Tian, H. Yao, Y. Zheng, D. Lee, Y. Song, J. Sun, and N. L. Zhang. paper

  74. A simple yet effective relation information guided approach for few-shot relation extraction, in Findings of ACL, 2022. Y. Liu, J. Hu, X. Wan, and T. Chang. paper code

  75. Decomposed meta-learning for few-shot named entity recognition, in Findings of ACL, 2022. T. Ma, H. Jiang, Q. Wu, T. Zhao, and C. Lin. paper code

  76. Meta-learning for fast cross-lingual adaptation in dependency parsing, in ACL, 2022. A. Langedijk, V. Dankers, P. Lippe, S. Bos, B. C. Guevara, H. Yannakoudakis, and E. Shutova. paper code

  77. Enhancing cross-lingual natural language inference by prompt-learning from cross-lingual templates, in ACL, 2022. K. Qi, H. Wan, J. Du, and H. Chen. paper code

  78. Few-shot stance detection via target-aware prompt distillation, in SIGIR, 2022. Y. Jiang, J. Gao, H. Shen, and X. Cheng. paper code

  79. Relation-guided few-shot relational triple extraction, in SIGIR, 2022. X. Cong, J. Sheng, S. Cui, B. Yu, T. Liu, and B. Wang. paper

  80. Curriculum contrastive context denoising for few-shot conversational dense retrieval, in SIGIR, 2022. K. Mao, Z. Dou, and H. Qian. paper code

  81. Few-shot subgoal planning with language models, in NAACL, 2022. L. Logeswaran, Y. Fu, M. Lee, and H. Lee. paper

  82. Template-free prompt tuning for few-shot NER, in NAACL, 2022. R. Ma, X. Zhou, T. Gui, Y. Tan, L. Li, Q. Zhang, and X. Huang. paper code

  83. Few-shot document-level relation extraction, in NAACL, 2022. N. Popovic, and M. Färber. paper code

  84. An enhanced span-based decomposition method for few-shot sequence labeling, in NAACL, 2022. P. Wang, R. Xu, T. Liu, Q. Zhou, Y. Cao, B. Chang, and Z. Sui. paper code

  85. Automatic multi-label prompting: Simple and interpretable few-shot classification, in NAACL, 2022. H. Wang, C. Xu, and J. McAuley. paper code

  86. On the effect of pretraining corpora on in-context few-shot learning by a large-scale language model, in NAACL, 2022. S. Shin, S.-W. Lee, H. Ahn, S. Kim, H. Kim, B. Kim, K. Cho, G. Lee, W. Park, J.-W. Ha, and N. Sung. paper

  87. MGIMN: Multi-grained interactive matching network for few-shot text classification, in NAACL, 2022. J. Zhang, M. Maimaiti, G. Xing, Y. Zheng, and J. Zhang. paper

  88. On the economics of multilingual few-shot learning: Modeling the cost-performance trade-offs of machine translated and manual data, in NAACL, 2022. K. Ahuja, M. Choudhury, and S. Dandapat. paper code

  89. OmniTab: Pretraining with natural and synthetic data for few-shot table-based question answering, in NAACL, 2022. Z. Jiang, Y. Mao, P. He, G. Neubig, and W. Chen. paper code

  90. Fine-tuning pre-trained language models for few-shot intent detection: Supervised pre-training and isotropization, in NAACL, 2022. H. Zhang, H. Liang, Y. Zhang, L.-M. Zhan, X.-M. Wu, X. Lu, and A. Y. Lam. paper code

  91. Embedding hallucination for few-shot language fine-tuning, in NAACL, 2022. Y. Jian, C. Gao, and S. Vosoughi. paper code

  92. Few-shot semantic parsing with language models trained on code, in NAACL, 2022. R. Shin, and B. V. Durme. paper

  93. LEA: Meta knowledge-driven self-attentive document embedding for few-shot text classification, in NAACL, 2022. S. Hong, and T. Y. Jang. paper

  94. Contrastive learning for prompt-based few-shot language learners, in NAACL, 2022. Y. Jian, C. Gao, and S. Vosoughi. paper code

  95. Learn from relation information: Towards prototype representation rectification for few-shot relation extraction, in NAACL, 2022. Y. Liu, J. Hu, X. Wan, and T.-H. Chang. paper code

  96. Efficient few-shot fine-tuning for opinion summarization, in NAACL, 2022. A. Brazinskas, R. Nallapati, M. Bansal, and M. Dreyer. paper code

  97. Improving few-shot image classification using machine- and user-generated natural language descriptions, in NAACL, 2022. K. Nishida, K. Nishida, and S. Nishioka. paper

  98. RGL: A simple yet effective relation graph augmented prompt-based tuning approach for few-shot learning, in NAACL, 2022. Y. Wang, X. Tian, H. Xiong, Y. Li, Z. Chen, S. Guo, and D. Dou. paper code

  99. “Diversity and uncertainty in moderation” are the key to data selection for multilingual few-shot transfer, in NAACL, 2022. S. Kumar, S. Dandapat, and M. Choudhury. paper

  100. A generative language model for few-shot aspect-based sentiment analysis, in NAACL, 2022. E. Hosseini-Asl, W. Liu, and C. Xiong. paper code

  101. Improving few-shot relation classification by prototypical representation learning with definition text, in NAACL, 2022. Z. Li, Y. Zhang, J.-Y. Nie, and D. Li. paper

  102. Few-shot self-rationalization with natural language prompts, in NAACL, 2022. A. Marasovic, I. Beltagy, D. Downey, and M. E. Peters. paper code

  103. How to translate your samples and choose your shots? Analyzing translate-train & few-shot cross-lingual transfer, in NAACL, 2022. I. Jundi, and G. Lapesa. paper code

  104. LMTurk: Few-shot learners as crowdsourcing workers in a language-model-as-a-service framework, in NAACL, 2022. M. Zhao, F. Mi, Y. Wang, M. Li, X. Jiang, Q. Liu, and H. Schuetze. paper code

  105. LiST: Lite prompted self-training makes efficient few-shot learners, in NAACL, 2022. Y. Wang, S. Mukherjee, X. Liu, J. Gao, A. H. Awadallah, and J. Gao. paper code

  106. Improving in-context few-shot learning via self-supervised training, in NAACL, 2022. M. Chen, J. Du, R. Pasunuru, T. Mihaylov, S. Iyer, V. Stoyanov, and Z. Kozareva. paper

  107. Por qué não utiliser alla språk? mixed training with gradient optimization in few-shot cross-lingual transfer, in NAACL, 2022. H. Xu, and K. Murray. paper code

  108. On the effectiveness of sentence encoding for intent detection meta-learning, in NAACL, 2022. T. Ma, Q. Wu, Z. Yu, T. Zhao, and C.-Y. Lin. paper code

  109. Few-shot fine-grained entity typing with automatic label interpretation and instance generation, in KDD, 2022. J. Huang, Y. Meng, and J. Han. paper code

  110. Label-enhanced prototypical network with contrastive learning for multi-label few-shot aspect category detection, in KDD, 2022. H. Liu, F. Zhang, X. Zhang, S. Zhao, J. Sun, H. Yu, and X. Zhang. paper

  111. Task-adaptive few-shot node classification, in KDD, 2022. S. Wang, K. Ding, C. Zhang, C. Chen, and J. Li. paper code

  112. Diversity features enhanced prototypical network for few-shot intent detection, in IJCAI, 2022. F. Yang, X. Zhou, Y. Wang, A. Atawulla, and R. Bi. paper

  113. Function-words adaptively enhanced attention networks for few-shot inverse relation classification, in IJCAI, 2022. C. Dou, S. Wu, X. Zhang, Z. Feng, and K. Wang. paper code

  114. Curriculum-based self-training makes better few-shot learners for data-to-text generation, in IJCAI, 2022. P. Ke, H. Ji, Z. Yang, Y. Huang, J. Feng, X. Zhu, and M. Huang. paper code

  115. Graph-based model generation for few-shot relation extraction, in EMNLP, 2022. W. Li, and T. Qian. paper code

  116. Prompt-based meta-learning for few-shot text classification, in EMNLP, 2022. H. Zhang, X. Zhang, H. Huang, and L. Yu. paper code

  117. Language models of code are few-shot commonsense learners, in EMNLP, 2022. A. Madaan, S. Zhou, U. Alon, Y. Yang, and G. Neubig. paper code

  118. Large language models are few-shot clinical information extractors, in EMNLP, 2022. M. Agrawal, S. Hegselmann, H. Lang, Y. Kim, and D. Sontag. paper

  119. ToKen: Task decomposition and knowledge infusion for few-shot hate speech detection, in EMNLP, 2022. B. AlKhamissi, F. Ladhak, S. Iyer, V. Stoyanov, Z. Kozareva, X. Li, P. Fung, L. Mathias, A. Celikyilmaz, and M. Diab. paper

  120. Exploiting domain-slot related keywords description for few-shot cross-domain dialogue state tracking, in EMNLP, 2022. Q. Gao, G. Dong, Y. Mou, L. Wang, C. Zeng, D. Guo, M. Sun, and W. Xu. paper

  121. KECP: Knowledge enhanced contrastive prompting for few-shot extractive question answering, in EMNLP, 2022. J. Wang, C. Wang, M. Qiu, Q. Shi, H. Wang, J. Huang, and M. Gao. paper code

  122. SpanProto: A two-stage span-based prototypical network for few-shot named entity recognition, in EMNLP, 2022. J. Wang, C. Wang, C. Tan, M. Qiu, S. Huang, J. Huang, and M. Gao. paper code

  123. Few-shot query-focused summarization with prefix-merging, in EMNLP, 2022. R. Yuan, Z. Wang, Z. Cao, and W. Li. paper

  124. Incorporating relevance feedback for information-seeking retrieval using few-shot document re-ranking, in EMNLP, 2022. T. Baumgartner, L. F. R. Ribeiro, N. Reimers, and I. Gurevych. paper code

  125. Few-shot learning with multilingual generative language models, in EMNLP, 2022. X. V. Lin, T. Mihaylov, M. Artetxe, T. Wang, S. Chen, D. Simig, M. Ott, N. Goyal, S. Bhosale, J. Du, R. Pasunuru, S. Shleifer, P. S. Koura, V. Chaudhary, B. O'Horo, J. Wang, L. Zettlemoyer, Z. Kozareva, M. Diab, V. Stoyanov, and X. Li. paper code

  126. Don't stop fine-tuning: On training regimes for few-shot cross-lingual transfer with multilingual language models, in EMNLP, 2022. F. D. Schmidt, I. Vulic, and G. Glavas. paper code

  127. Better few-shot relation extraction with label prompt dropout, in EMNLP, 2022. P. Zhang, and W. Lu. paper code

  128. A dual prompt learning framework for few-shot dialogue state tracking, in WWW, 2023. Y. Yang, W. Lei, P. Huang, J. Cao, J. Li, and T.-S. Chua. paper code

  129. MetaTroll: Few-shot detection of state-sponsored trolls with transformer adapters, in WWW, 2023. L. Tian, X. Zhang, and J. H. Lau. paper code

  130. ContrastNet: A contrastive learning framework for few-shot text classification, in AAAI, 2022. J. Chen, R. Zhang, Y. Mao, and J. Xu. paper

  131. Few-shot cross-lingual stance detection with sentiment-based pre-training, in AAAI, 2022. M. Hardalov, A. Arora, P. Nakov, and I. Augenstein. paper

  132. ALP: Data augmentation using lexicalized PCFGs for few-shot text classification, in AAAI, 2022. H. H. Kim, D. Woo, S. J. Oh, J.-W. Cha, and Y.-S. Han. paper

  133. CINS: Comprehensive instruction for few-shot learning in task-oriented dialog systems, in AAAI, 2022. F. Mi, Y. Wang, and Y. Li. paper

  134. An empirical study of GPT-3 for few-shot knowledge-based VQA, in AAAI, 2022. Z. Yang, Z. Gan, J. Wang, X. Hu, Y. Lu, Z. Liu, and L. Wang. paper

  135. PROMPTAGATOR: Few-shot dense retrieval from 8 examples, in ICLR, 2023. Z. Dai, V. Y. Zhao, J. Ma, Y. Luan, J. Ni, J. Lu, A. Bakalov, K. Guu, K. Hall, and M.-W. Chang. paper

  136. QAID: Question answering inspired few-shot intent detection, in ICLR, 2023. A. Yehudai, M. Vetzler, Y. Mass, K. Lazar, D. Cohen, and B. Carmeli. paper

  137. CLUR: Uncertainty estimation for few-shot text classification with contrastive learning, in KDD, 2023. J. He, X. Zhang, S. Lei, A. Alhamadani, F. Chen, B. Xiao, and C.-T. Lu. paper code

  138. Learning few-shot sample-set operations for noisy multi-label aspect category detection, in IJCAI, 2023. S. Zhao, W. Chen, and T. Wang. paper

  139. Few-shot document-level event argument extraction, in ACL, 2023. X. Yang, Y. Lu, and L. R. Petzold. paper code

  140. FLamE: Few-shot learning from natural language explanations, in ACL, 2023. Y. Zhou, Y. Zhang, and C. Tan. paper

  141. MetaAdapt: Domain adaptive few-shot misinformation detection via meta learning, in ACL, 2023. Z. Yue, H. Zeng, Y. Zhang, L. Shang, and D. Wang. paper code

  142. Code4Struct: Code generation for few-shot event structure prediction, in ACL, 2023. X. Wang, S. Li, and H. Ji. paper code

  143. MANNER: A variational memory-augmented model for cross domain few-shot named entity recognition, in ACL, 2023. J. Fang, X. Wang, Z. Meng, P. Xie, F. Huang, and Y. Jiang. paper code

  144. Dual class knowledge propagation network for multi-label few-shot intent detection, in ACL, 2023. F. Zhang, W. Chen, F. Ding, and T. Wang. paper

  145. Few-shot event detection: An empirical study and a unified view, in ACL, 2023. Y. Ma, Z. Wang, Y. Cao, and A. Sun. paper code

  146. CodeIE: Large code generation models are better few-shot information extractors, in ACL, 2023. P. Li, T. Sun, Q. Tang, H. Yan, Y. Wu, X. Huang, and X. Qiu. paper code

  147. Few-shot data-to-text generation via unified representation and multi-source learning, in ACL, 2023. A. H. Li, M. Shang, E. Spiliopoulou, J. Ma, P. Ng, Z. Wang, B. Min, W. Y. Wang, K. R. McKeown, V. Castelli, D. Roth, and B. Xiang. paper

  148. Few-shot in-context learning on knowledge base question answering, in ACL, 2023. T. Li, X. Ma, A. Zhuang, Y. Gu, Y. Su, and W. Chen. paper code

  149. Linguistic representations for fewer-shot relation extraction across domains, in ACL, 2023. S. Gururaja, R. Dutt, T. Liao, and C. P. Rosé. paper code

  150. Few-shot reranking for multi-hop QA via language model prompting, in ACL, 2023. M. Khalifa, L. Logeswaran, M. Lee, H. Lee, and L. Wang. paper code

  151. A domain-transfer meta task design paradigm for few-shot slot tagging, in AAAI, 2023. F. Yang, X. Zhou, Y. Yang, B. Ma, R. Dong, and A. Atawulla. paper

  152. Revisiting sparse retrieval for few-shot entity linking, in EMNLP, 2023. Y. Chen, Z. Xu, B. Hu, and M. Zhang. paper code

  153. Vicinal risk minimization for few-shot cross-lingual transfer in abusive language detection, in EMNLP, 2023. G. D. l. P. Sarracén, P. Rosso, R. Litschko, G. Glavaš, and S. Ponzetto. paper

  154. Hypernetwork-based decoupling to improve model generalization for few-shot relation extraction, in EMNLP, 2023. L. Zhang, C. Zhou, F. Meng, J. Su, Y. Chen, and J. Zhou. paper code

  155. Towards low-resource automatic program repair with meta-learning and pretrained language models, in EMNLP, 2023. W. Wang, Y. Wang, S. Hoi, and S. Joty. paper code

  156. Few-shot detection of machine-generated text using style representations, in ICLR, 2024. R. A. R. Soto, K. Koch, A. Khan, B. Y. Chen, M. Bishop, and N. Andrews. paper

Knowledge Graph

  1. MetaEXP: Interactive explanation and exploration of large knowledge graphs, in WWW, 2018. F. Behrens, S. Bischoff, P. Ladenburger, J. Rückin, L. Seidel, F. Stolp, M. Vaichenker, A. Ziegler, D. Mottin, F. Aghaei, E. Müller, M. Preusse, N. Müller, and M. Hunger. paper code

  2. Meta relational learning for few-shot link prediction in knowledge graphs, in EMNLP-IJCNLP, 2019. M. Chen, W. Zhang, W. Zhang, Q. Chen, and H. Chen. paper

  3. Adapting meta knowledge graph information for multi-hop reasoning over few-shot relations, in EMNLP-IJCNLP, 2019. X. Lv, Y. Gu, X. Han, L. Hou, J. Li, and Z. Liu. paper

  4. Knowledge graph transfer network for few-shot recognition, in AAAI, 2020. R. Chen, T. Chen, X. Hui, H. Wu, G. Li, and L. Lin. paper

  5. Few-shot knowledge graph completion, in AAAI, 2020. C. Zhang, H. Yao, C. Huang, M. Jiang, Z. Li, and N. V. Chawla. paper

  6. Adaptive attentional network for few-shot knowledge graph completion, in EMNLP, 2020. J. Sheng, S. Guo, Z. Chen, J. Yue, L. Wang, T. Liu, and H. Xu. paper code

  7. Relational learning with gated and attentive neighbor aggregator for few-shot knowledge graph completion, in SIGIR, 2021. G. Niu, Y. Li, C. Tang, R. Geng, J. Dai, Q. Liu, H. Wang, J. Sun, F. Huang, and L. Si. paper

  8. Learning inter-entity-interaction for few-shot knowledge graph completion, in EMNLP, 2022. Y. Li, K. Yu, X. Huang, and Y. Zhang. paper code

  9. Meta-learning based knowledge extrapolation for temporal knowledge graph, in WWW, 2023. Z. Chen, C. Xu, F. Su, Z. Huang, and Y. Dou. paper

  10. Learning to sample and aggregate: Few-shot reasoning over temporal knowledge graphs, in NeurIPS, 2022. R. Wang, Z. Li, D. Sun, S. Liu, J. Li, B. Yin, and T. Abdelzaher. paper

  11. Few-shot relational reasoning via connection subgraph pretraining, in NeurIPS, 2022. Q. Huang, H. Ren, and J. Leskovec. paper code

  12. Hierarchical relational learning for few-shot knowledge graph completion, in ICLR, 2023. H. Wu, J. Yin, B. Rajaratnam, and J. Guo. paper code

  13. The unreasonable effectiveness of few-shot learning for machine translation, in ICML, 2023. X. Garcia, Y. Bansal, C. Cherry, G. F. Foster, M. Krikun, M. Johnson, and O. Firat. paper

  14. Prompting large language models with chain-of-thought for few-shot knowledge base question generation, in EMNLP, 2023. Y. Liang, J. Wang, H. Zhu, L. Wang, W. Qian, and Y. Lan. paper

Acoustic Signal Processing

  1. One-shot learning of generative speech concepts, in CogSci, 2014. B. Lake, C.-Y. Lee, J. Glass, and J. Tenenbaum. paper

  2. Machine speech chain with one-shot speaker adaptation, in INTERSPEECH, 2018. A. Tjandra, S. Sakti, and S. Nakamura. paper

  3. Investigation of using disentangled and interpretable representations for one-shot cross-lingual voice conversion, in INTERSPEECH, 2018. S. H. Mohammadi, and T. Kim. paper

  4. Few-shot audio classification with attentional graph neural networks, in INTERSPEECH, 2019. S. Zhang, Y. Qin, K. Sun, and Y. Lin. paper

  5. One-shot voice conversion with disentangled representations by leveraging phonetic posteriorgrams, in INTERSPEECH, 2019. S. H. Mohammadi, and T. Kim. paper

  6. One-shot voice conversion with global speaker embeddings, in INTERSPEECH, 2019. H. Lu, Z. Wu, D. Dai, R. Li, S. Kang, J. Jia, and H. Meng. paper

  7. One-shot voice conversion by separating speaker and content representations with instance normalization, in INTERSPEECH, 2019. J.-C. Chou, and H.-Y. Lee. paper

  8. Audio2Head: Audio-driven one-shot talking-head generation with natural head motion, in IJCAI, 2021. S. Wang, L. Li, Y. Ding, C. Fan, and X. Yu. paper

  9. Few-shot low-resource knowledge graph completion with multi-view task representation generation, in KDD, 2023. S. Pei, Z. Kou, Q. Zhang, and X. Zhang. paper code

  10. Normalizing flow-based neural process for few-shot knowledge graph completion, in SIGIR, 2023. L. Luo, Y.-F. Li, G. Haffari, and S. Pan. paper code

Recommendation

  1. A meta-learning perspective on cold-start recommendations for items, in NeurIPS, 2017. M. Vartak, A. Thiagarajan, C. Miranda, J. Bratman, and H. Larochelle. paper

  2. MeLU: Meta-learned user preference estimator for cold-start recommendation, in KDD, 2019. H. Lee, J. Im, S. Jang, H. Cho, and S. Chung. paper code

  3. Sequential scenario-specific meta learner for online recommendation, in KDD, 2019. Z. Du, X. Wang, H. Yang, J. Zhou, and J. Tang. paper code

  4. Few-shot learning for new user recommendation in location-based social networks, in WWW, 2020. R. Li, X. Wu, X. Chen, and W. Wang. paper

  5. MAMO: Memory-augmented meta-optimization for cold-start recommendation, in KDD, 2020. M. Dong, F. Yuan, L. Yao, X. Xu, and L. Zhu. paper code

  6. Meta-learning on heterogeneous information networks for cold-start recommendation, in KDD, 2020. Y. Lu, Y. Fang, and C. Shi. paper code

  7. MetaSelector: Meta-learning for recommendation with user-level adaptive model selection, in WWW, 2020. M. Luo, F. Chen, P. Cheng, Z. Dong, X. He, J. Feng, and Z. Li. paper

  8. Fast adaptation for cold-start collaborative filtering with meta-learning, in ICDM, 2020. T. Wei, Z. Wu, R. Li, Z. Hu, F. Feng, X. H. Sun, and W. Wang. paper

  9. Preference-adaptive meta-learning for cold-start recommendation, in IJCAI, 2021. L. Wang, B. Jin, Z. Huang, H. Zhao, D. Lian, Q. Liu, and E. Chen. paper

  10. Meta-learning helps personalized product search, in WWW, 2022. B. Wu, Z. Meng, Q. Zhang, and S. Liang. paper

  11. Alleviating cold-start problem in CTR prediction with a variational embedding learning framework, in WWW, 2022. X. Xu, C. Yang, Q. Yu, Z. Fang, J. Wang, C. Fan, Y. He, C. Peng, Z. Lin, and J. Shao. paper

  12. PNMTA: A pretrained network modulation and task adaptation approach for user cold-start recommendation, in WWW, 2022. H. Pang, F. Giunchiglia, X. Li, R. Guan, and X. Feng. paper

  13. Few-shot news recommendation via cross-lingual transfer, in WWW, 2023. T. Guo, L. Yu, B. Shihada, and X. Zhang. paper code

  14. ColdNAS: Search to modulate for user cold-start recommendation, in WWW, 2023. S. Wu, Y. Wang, Q. Jing, D. Dong, D. Dou, and Q. Yao. paper code

  15. Contrastive collaborative filtering for cold-start item recommendation, in WWW, 2023. Z. Zhou, L. Zhang, and N. Yang. paper code

  16. Bootstrapping contrastive learning enhanced music cold-start matching, in WWW, 2023. X. Zhao, Y. Zhang, Q. Xiao, Y. Ren, and Y. Yang. paper

  17. A dynamic meta-learning model for time-sensitive cold-start recommendations, in AAAI, 2022. K. P. Neupane, E. Zheng, Y. Kong, and Q. Yu. paper

  18. SMINet: State-aware multi-aspect interests representation network for cold-start users recommendation, in AAAI, 2022. W. Tao, Y. Li, L. Li, Z. Chen, H. Wen, P. Chen, T. Liang, and Q. Lu. paper code

  19. Multimodality helps unimodality: Cross-modal few-shot learning with multimodal models, in CVPR, 2023. Z. Lin, S. Yu, Z. Kuang, D. Pathak, and D. Ramanan. paper code

  20. M2EU: Meta learning for cold-start recommendation via enhancing user preference estimation, in SIGIR, 2023. Z. Wu, and X. Zhou. paper code

  21. TAML: Time-aware meta learning for cold-start problem in news recommendation, in SIGIR, 2023. J. Li, Y. Zhang, X. Lin, X. Yang, G. Zhou, L. Li, H. Chen, and J. Zhou. paper

  22. Uncertainty-aware consistency learning for cold-start item recommendation, in SIGIR, 2023. T. Liu, C. Gao, Z. Wang, D. Li, J. Hao, D. Jin, and Y. Li. paper

  23. DCBT: A simple but effective way for unified warm and cold recommendation, in SIGIR, 2023. J. Yang, L. Zhang, Y. He, K. Ding, Z. Huan, X. Zhang, and L. Mo. paper

  24. A preference learning decoupling framework for user cold-start recommendation, in SIGIR, 2023. C. Wang, Y. Zhu, A. Sun, Z. Wang, and K. Wang. paper

  25. Aligning distillation for cold-start item recommendation, in SIGIR, 2023. F. Huang, Z. Wang, X. Huang, Y. Qian, Z. Li, and H. Chen. paper code

Others

  1. Low data drug discovery with one-shot learning, ACS Central Science, 2017. H. Altae-Tran, B. Ramsundar, A. S. Pappu, and V. Pande. paper

  2. SMASH: One-shot model architecture search through hypernetworks, in ICLR, 2018. A. Brock, T. Lim, J. Ritchie, and N. Weston. paper

  3. SPARC: Self-paced network representation for few-shot rare category characterization, in KDD, 2018. D. Zhou, J. He, H. Yang, and W. Fan. paper

  4. MetaPred: Meta-learning for clinical risk prediction with limited patient electronic health records, in KDD, 2019. X. S. Zhang, F. Tang, H. H. Dodge, J. Zhou, and F. Wang. paper code

  5. AffinityNet: Semi-supervised few-shot learning for disease type prediction, in AAAI, 2019. T. Ma, and A. Zhang. paper

  6. Data augmentation using learned transformations for one-shot medical image segmentation, in CVPR, 2019. A. Zhao, G. Balakrishnan, F. Durand, J. V. Guttag, and A. V. Dalca. paper

  7. Learning from multiple cities: A meta-learning approach for spatial-temporal prediction, in WWW, 2019. H. Yao, Y. Liu, Y. Wei, X. Tang, and Z. Li. paper code

  8. Federated meta-learning for fraudulent credit card detection, in IJCAI, 2020. W. Zheng, L. Yan, C. Gou, and F. Wang. paper

  9. Differentially private meta-learning, in ICLR, 2020. J. Li, M. Khodak, S. Caldas, and A. Talwalkar. paper

  10. Towards fast adaptation of neural architectures with meta learning, in ICLR, 2020. D. Lian, Y. Zheng, Y. Xu, Y. Lu, L. Lin, P. Zhao, J. Huang, and S. Gao. paper

  11. LT-Net: Label transfer by learning reversible voxel-wise correspondence for one-shot medical image segmentation, in CVPR, 2020. S. Wang, S. Cao, D. Wei, R. Wang, K. Ma, L. Wang, D. Meng, and Y. Zheng. paper

  12. Few-shot pill recognition, in CVPR, 2020. S. Ling, A. Pastor, J. Li, Z. Che, J. Wang, J. Kim, and P. L. Callet. paper

  13. Self-supervision with superpixels: Training few-shot medical image segmentation without annotation, in ECCV, 2020. C. Ouyang, C. Biffi, C. Chen, T. Kart, H. Qiu, and D. Rueckert. paper code

  14. Deep complementary joint model for complex scene registration and few-shot segmentation on medical images, in ECCV, 2020. Y. He, T. Li, G. Yang, Y. Kong, Y. Chen, H. Shu, J. Coatrieux, J. Dillenseger, and S. Li. paper

  15. Using optimal embeddings to learn new intents with few examples: An application in the insurance domain, in KDD, 2020. S. Acharya, and G. Fung. paper

  16. Meta-learning for query conceptualization at web scale, in KDD, 2020. F. X. Han, D. Niu, H. Chen, W. Guo, S. Yan, and B. Long. paper

  17. Few-sample and adversarial representation learning for continual stream mining, in WWW, 2020. Z. Wang, Y. Wang, Y. Lin, E. Delord, and L. Khan. paper

  18. Few-shot graph learning for molecular property prediction, in WWW, 2021. Z. Guo, C. Zhang, W. Yu, J. Herr, O. Wiest, M. Jiang, and N. V. Chawla. paper code

  19. Taxonomy-aware learning for few-shot event detection, in WWW, 2021. J. Zheng, F. Cai, W. Chen, W. Lei, and H. Chen. paper

  20. Learning from graph propagation via ordinal distillation for one-shot automated essay scoring, in WWW, 2021. Z. Jiang, M. Liu, Y. Yin, H. Yu, Z. Cheng, and Q. Gu. paper

  21. Few-shot network anomaly detection via cross-network meta-learning, in WWW, 2021. K. Ding, Q. Zhou, H. Tong, and H. Liu. paper

  22. Few-shot knowledge validation using rules, in WWW, 2021. M. Loster, D. Mottin, P. Papotti, J. Ehmüller, B. Feldmann, and F. Naumann. paper

  23. Graph learning regularization and transfer learning for few-shot event detection, in SIGIR, 2021. V. D. Lai, M. V. Nguyen, T. H. Nguyen, and F. Dernoncourt. paper code

  24. Graph-evolving meta-learning for low-resource medical dialogue generation, in AAAI, 2021. S. Lin, P. Zhou, X. Liang, J. Tang, R. Zhao, Z. Chen, and L. Lin. paper

  25. Modeling the probabilistic distribution of unlabeled data for one-shot medical image segmentation, in AAAI, 2021. Y. Ding, X. Yu, and Y. Yang. paper code

  26. Progressive network grafting for few-shot knowledge distillation, in AAAI, 2021. C. Shen, X. Wang, Y. Yin, J. Song, S. Luo, and M. Song. paper code

  27. Curriculum meta-learning for next POI recommendation, in KDD, 2021. Y. Chen, X. Wang, M. Fan, J. Huang, S. Yang, and W. Zhu. paper code

  28. MFNP: A meta-optimized model for few-shot next POI recommendation, in IJCAI, 2021. H. Sun, J. Xu, K. Zheng, P. Zhao, P. Chao, and X. Zhou. paper

  29. Physics-aware spatiotemporal modules with auxiliary tasks for meta-learning, in IJCAI, 2021. S. Seo, C. Meng, S. Rambhatla, and Y. Liu. paper

  30. Recurrent mask refinement for few-shot medical image segmentation, in ICCV, 2021. H. Tang, X. Liu, S. Sun, X. Yan, and X. Xie. paper code

  31. Property-aware relation networks for few-shot molecular property prediction, in NeurIPS, 2021. Y. Wang, A. Abuduweili, Q. Yao, and D. Dou. paper code

  32. Few-shot data-driven algorithms for low rank approximation, in NeurIPS, 2021. P. Indyk, T. Wagner, and D. Woodruff. paper

  33. Non-Gaussian Gaussian processes for few-shot regression, in NeurIPS, 2021. M. Sendera, J. Tabor, A. Nowak, A. Bedychaj, M. Patacchiola, T. Trzcinski, P. Spurek, and M. Zieba. paper

  34. HELP: Hardware-adaptive efficient latency prediction for NAS via meta-learning, in NeurIPS, 2021. H. Lee, S. Lee, S. Chong, and S. J. Hwang. paper

  35. Learning to learn dense Gaussian processes for few-shot learning, in NeurIPS, 2021. Z. Wang, Z. Miao, X. Zhen, and Q. Qiu. paper

  36. A meta-learning based stress category detection framework on social media, in WWW, 2022. X. Wang, L. Cao, H. Zhang, L. Feng, Y. Ding, and N. Li. paper

  37. Which images to label for few-shot medical landmark detection?, in CVPR, 2022. Q. Quan, Q. Yao, J. Li, and S. K. Zhou. paper

  38. Recognizing medical search query intent by few-shot learning, in SIGIR, 2022. Y. Wang, S. Wang, Y. Li, and D. Dou. paper code

  39. MetaCare++: Meta-learning with hierarchical subtyping for cold-start diagnosis prediction in healthcare data, in SIGIR, 2022. Y. Tan, C. Yang, X. Wei, C. Chen, W. Liu, L. Li, J. Zhou, and X. Zheng. paper

  40. Spatio-temporal graph few-shot learning with cross-city knowledge transfer, in KDD, 2022. B. Lu, X. Gan, W. Zhang, H. Yao, L. Fu, and X. Wang. paper code

  41. Few-shot learning for trajectory-based mobile game cheating detection, in KDD, 2022. Y. Su, D. Yao, X. Chu, W. Li, J. Bi, S. Zhao, R. Wu, S. Zhang, J. Tao, and H. Deng. paper code

  42. Improving few-shot text-to-SQL with meta self-training via column specificity, in IJCAI, 2022. X. Guo, Y. Chen, G. Qi, T. Wu, and H. Xu. paper code

  43. Graph few-shot learning with task-specific structures, in NeurIPS, 2022. S. Wang, C. Chen, and J. Li. paper code

  44. Meta-learning dynamics forecasting using task inference, in NeurIPS, 2022. R. Wang, R. Walters, and R. Yu. paper code

  45. Rapid model architecture adaption for meta-learning, in NeurIPS, 2022. Y. Zhao, X. Gao, I. Shumailov, N. Fusi, and R. D. Mullins. paper

  46. Cross-domain few-shot graph classification, in AAAI, 2022. K. Hassani. paper

  47. Meta propagation networks for graph few-shot semi-supervised learning, in AAAI, 2022. K. Ding, J. Wang, J. Caverlee, and H. Liu. paper code

  48. Pushing the limits of few-shot anomaly detection in industry vision: GraphCore, in ICLR, 2023. G. Xie, J. Wang, J. Liu, F. Zheng, and Y. Jin. paper

  49. Context-enriched molecule representations improve few-shot drug discovery, in ICLR, 2023. J. Schimunek, P. Seidl, L. Friedrich, D. Kuhn, F. Rippmann, S. Hochreiter, and G. Klambauer. paper code

  50. Sequential latent variable models for few-shot high-dimensional time-series forecasting, in ICLR, 2023. X. Jiang, R. Missel, Z. Li, and L. Wang. paper code

  51. Transfer NAS with meta-learned Bayesian surrogates, in ICLR, 2023. G. Shala, T. Elsken, F. Hutter, and J. Grabocka. paper code

  52. Few-shot domain adaptation for end-to-end communication, in ICLR, 2023. J. Raghuram, Y. Zeng, D. Garcia, R. Ruiz, S. Jha, J. Widmer, and S. Banerjee. paper code

  53. Rethinking few-shot medical segmentation: A vector quantization view, in CVPR, 2023. S. Huang, T. Xu, N. Shen, F. Mu, and J. Li. paper

  54. Virtual node tuning for few-shot node classification, in KDD, 2023. Z. Tan, R. Guo, K. Ding, and H. Liu. paper

  55. Contrastive meta-learning for few-shot node classification, in KDD, 2023. S. Wang, Z. Tan, H. Liu, and J. Li. paper code

  56. Task-equivariant graph few-shot learning, in KDD, 2023. S. Kim, J. Lee, N. Lee, W. Kim, S. Choi, and C. Park. paper code

  57. Leveraging transferable knowledge concept graph embedding for cold-start cognitive diagnosis, in SIGIR, 2023. W. Gao, H. Wang, Q. Liu, F. Wang, X. Lin, L. Yue, Z. Zhang, R. Lv, and S. Wang. paper code

  58. Dual meta-learning with longitudinally consistent regularization for one-shot brain tissue segmentation across the human lifespan, in ICCV, 2023. Y. Sun, F. Wang, J. Shu, H. Wang, L. Wang, D. Meng, and C. Lian. paper code

  59. The rise of AI language pathologists: Exploring two-level prompt learning for few-shot weakly-supervised whole slide image classification, in NeurIPS, 2023. L. Qu, X. Luo, K. Fu, M. Wang, and Z. Song. paper code

  60. Robust one-shot segmentation of brain tissues via image-aligned style transformation, in AAAI, 2023. J. Lv, X. Zeng, S. Wang, R. Duan, Z. Wang, and Q. Li. paper code

  61. Supervised contrastive few-shot learning for high-frequency time series, in AAAI, 2023. X. Chen, C. Ge, M. Wang, and J. Wang. paper

  62. Cross-domain few-shot graph classification with a reinforced task coordinator, in AAAI, 2023. Q. Zhang, S. Pei, Q. Yang, C. Zhang, N. V. Chawla, and X. Zhang. paper

  63. Robust graph meta-learning via manifold calibration with proxy subgraphs, in AAAI, 2023. Z. Wang, L. Cao, W. Lin, M. Jiang, and K. C. Tan. paper

  64. Few-shot defect image generation via defect-aware feature manipulation, in AAAI, 2023. Y. Duan, Y. Hong, L. Niu, and L. Zhang. paper code

  65. Multi-label few-shot ICD coding as autoregressive generation with prompt, in AAAI, 2023. Z. Yang, S. Kwon, Z. Yao, and H. Yu. paper code

  66. Enhancing small medical learners with privacy-preserving contextual prompting, in ICLR, 2024. X. Zhang, S. Li, X. Yang, C. Tian, Y. Qin, and L. R. Petzold. paper code

  67. TEST: Text prototype aligned embedding to activate LLM's ability for time series, in ICLR, 2024. C. Sun, Y. Li, H. Li, and S. Hong. paper

Theories

  1. Learning to learn around a common mean, in NeurIPS, 2018. G. Denevi, C. Ciliberto, D. Stamos, and M. Pontil. paper

  2. Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm, in ICLR, 2018. C. Finn and S. Levine. paper

  3. A theoretical analysis of the number of shots in few-shot learning, in ICLR, 2020. T. Cao, M. T. Law, and S. Fidler. paper

  4. Rapid learning or feature reuse? Towards understanding the effectiveness of MAML, in ICLR, 2020. A. Raghu, M. Raghu, S. Bengio, and O. Vinyals. paper

  5. Robust meta-learning for mixed linear regression with small batches, in NeurIPS, 2020. W. Kong, R. Somani, S. Kakade, and S. Oh. paper

  6. One-shot distributed ridge regression in high dimensions, in ICML, 2020. Y. Sheng, and E. Dobriban. paper

  7. Bridging the gap between practice and PAC-Bayes theory in few-shot meta-learning, in NeurIPS, 2021. N. Ding, X. Chen, T. Levinboim, S. Goodman, and R. Soricut. paper

  8. Generalization bounds for meta-learning: An information-theoretic analysis, in NeurIPS, 2021. Q. Chen, C. Shui, and M. Marchand. paper

  9. Generalization bounds for meta-learning via PAC-Bayes and uniform stability, in NeurIPS, 2021. A. Farid, and A. Majumdar. paper

  10. Unraveling model-agnostic meta-learning via the adaptation learning rate, in ICLR, 2022. Y. Zou, F. Liu, and Q. Li. paper

  11. On the importance of Firth bias reduction in few-shot classification, in ICLR, 2022. S. Ghaffari, E. Saleh, D. Forsyth, and Y. Wang. paper code

  12. Global convergence of MAML and theory-inspired neural architecture search for few-shot learning, in CVPR, 2022. H. Wang, Y. Wang, R. Sun, and B. Li. paper

  13. Smoothed embeddings for certified few-shot learning, in NeurIPS, 2022. M. Pautov, O. Kuznetsova, N. Tursynbek, A. Petiushko, and I. Oseledets. paper code

  14. Towards few-shot adaptation of foundation models via multitask finetuning, in ICLR, 2024. Z. Xu, Z. Shi, J. Wei, F. Mu, Y. Li, and Y. Liang. paper code

## Few-shot Learning and Zero-shot Learning

  1. Label-embedding for attribute-based classification, in CVPR, 2013. Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid. paper

  2. A unified semantic embedding: Relating taxonomies and attributes, in NeurIPS, 2014. S. J. Hwang and L. Sigal. paper

  3. Multi-attention network for one shot learning, in CVPR, 2017. P. Wang, L. Liu, C. Shen, Z. Huang, A. van den Hengel, and H. T. Shen. paper

  4. Few-shot and zero-shot multi-label learning for structured label spaces, in EMNLP, 2018. A. Rios and R. Kavuluru. paper

  5. Learning compositional representations for few-shot recognition, in ICCV, 2019. P. Tokmakov, Y.-X. Wang, and M. Hebert. paper code

  6. Large-scale few-shot learning: Knowledge transfer with class hierarchy, in CVPR, 2019. A. Li, T. Luo, Z. Lu, T. Xiang, and L. Wang. paper

  7. Generalized zero- and few-shot learning via aligned variational autoencoders, in CVPR, 2019. E. Schonfeld, S. Ebrahimi, S. Sinha, T. Darrell, and Z. Akata. paper code

  8. F-VAEGAN-D2: A feature generating framework for any-shot learning, in CVPR, 2019. Y. Xian, S. Sharma, B. Schiele, and Z. Akata. paper

  9. TGG: Transferable graph generation for zero-shot and few-shot learning, in ACM MM, 2019. C. Zhang, X. Lyu, and Z. Tang. paper

  10. Adaptive cross-modal few-shot learning, in NeurIPS, 2019. C. Xing, N. Rostamzadeh, B. N. Oreshkin, and P. O. Pinheiro. paper

  11. Learning meta model for zero- and few-shot face anti-spoofing, in AAAI, 2020. Y. Qin, C. Zhao, X. Zhu, Z. Wang, Z. Yu, T. Fu, F. Zhou, J. Shi, and Z. Lei. paper

  12. RD-GAN: Few/Zero-shot Chinese character style transfer via radical decomposition and rendering, in ECCV, 2020. Y. Huang, M. He, L. Jin, and Y. Wang. paper

  13. An empirical study on large-scale multi-label text classification including few and zero-shot labels, in EMNLP, 2020. I. Chalkidis, M. Fergadiotis, S. Kotitsas, P. Malakasiotis, N. Aletras, and I. Androutsopoulos. paper

  14. Multi-label few/zero-shot learning with knowledge aggregated from multiple label graphs, in EMNLP, 2020. J. Lu, L. Du, M. Liu, and J. Dipnall. paper

  15. Emergent complexity and zero-shot transfer via unsupervised environment design, in NeurIPS, 2020. M. Dennis, N. Jaques, E. Vinitsky, A. Bayen, S. Russell, A. Critch, and S. Levine. paper

  16. Learning graphs for knowledge transfer with limited labels, in CVPR, 2021. P. Ghosh, N. Saini, L. S. Davis, and A. Shrivastava. paper

  17. Improving zero and few-shot abstractive summarization with intermediate fine-tuning and data augmentation, in NAACL-HLT, 2021. A. R. Fabbri, S. Han, H. Li, H. Li, M. Ghazvininejad, S. R. Joty, D. R. Radev, and Y. Mehdad. paper

  18. SEQZERO: Few-shot compositional semantic parsing with sequential prompts and zero-shot models, in NAACL, 2022. J. Yang, H. Jiang, Q. Yin, D. Zhang, B. Yin, and D. Yang. paper code

  19. Label verbalization and entailment for effective zero and few-shot relation extraction, in EMNLP, 2021. O. Sainz, O. L. d. Lacalle, G. Labaka, A. Barrena, and E. Agirre. paper code

  20. An empirical investigation of word alignment supervision for zero-shot multilingual neural machine translation, in EMNLP, 2021. A. Raganato, R. Vázquez, M. Creutz, and J. Tiedemann. paper

  21. Bridge to target domain by prototypical contrastive learning and label confusion: Re-explore zero-shot learning for slot filling, in EMNLP, 2021. L. Wang, X. Li, J. Liu, K. He, Y. Yan, and W. Xu. paper code

  22. A label-aware BERT attention network for zero-shot multi-intent detection in spoken language understanding, in EMNLP, 2021. T. Wu, R. Su, and B. Juang. paper

  23. Zero-shot dialogue disentanglement by self-supervised entangled response selection, in EMNLP, 2021. T. Chi, and A. I. Rudnicky. paper code

  24. Robust retrieval augmented generation for zero-shot slot filling, in EMNLP, 2021. M. R. Glass, G. Rossiello, M. F. M. Chowdhury, and A. Gliozzo. paper code

  25. Everything is all it takes: A multipronged strategy for zero-shot cross-lingual information extraction, in EMNLP, 2021. M. Yarmohammadi, S. Wu, M. Marone, H. Xu, S. Ebner, G. Qin, Y. Chen, J. Guo, C. Harman, K. Murray, A. S. White, M. Dredze, and B. V. Durme. paper code

  26. An empirical study on multiple information sources for zero-shot fine-grained entity typing, in EMNLP, 2021. Y. Chen, H. Jiang, L. Liu, S. Shi, C. Fan, M. Yang, and R. Xu. paper

  27. Zero-shot dialogue state tracking via cross-task transfer, in EMNLP, 2021. Z. Lin, B. Liu, A. Madotto, S. Moon, Z. Zhou, P. Crook, Z. Wang, Z. Yu, E. Cho, R. Subba, and P. Fung. paper code

  28. Finetuned language models are zero-shot learners, in ICLR, 2022. J. Wei, M. Bosma, V. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le. paper code

  29. Zero-shot stance detection via contrastive learning, in WWW, 2022. B. Liang, Z. Chen, L. Gui, Y. He, M. Yang, and R. Xu. paper code

  30. Reframing instructional prompts to GPTk's language, in Findings of ACL, 2022. D. Khashabi, C. Baral, Y. Choi, and H. Hajishirzi. paper

  31. JointCL: A joint contrastive learning framework for zero-shot stance detection, in ACL, 2022. B. Liang, Q. Zhu, X. Li, M. Yang, L. Gui, Y. He, and R. Xu. paper code

  32. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification, in ACL, 2022. S. Hu, N. Ding, H. Wang, Z. Liu, J. Wang, J. Li, W. Wu, and M. Sun. paper code

  33. Uni-Perceiver: Pre-training unified architecture for generic perception for zero-shot and few-shot tasks, in CVPR, 2022. X. Zhu, J. Zhu, H. Li, X. Wu, H. Li, X. Wang, and J. Dai. paper

  34. Enhancing zero-shot stance detection via targeted background knowledge, in SIGIR, 2022. Q. Zhu, B. Liang, J. Sun, J. Du, L. Zhou, and R. Xu. paper

  35. Textual entailment for event argument extraction: Zero- and few-shot with multi-source learning, in NAACL, 2022. O. Sainz, I. Gonzalez-Dios, O. L. d. Lacalle, B. Min, and E. Agirre. paper code

  36. Extreme zero-shot learning for extreme text classification, in NAACL, 2022. Y. Xiong, W.-C. Chang, C.-J. Hsieh, H.-F. Yu, and I. S. Dhillon. paper code

  37. Domain-oriented prefix-tuning: Towards efficient and generalizable fine-tuning for zero-shot dialogue summarization, in NAACL, 2022. L. Zhao, F. Zheng, W. Zeng, K. He, W. Xu, H. Jiang, W. Wu, and Y. Wu. paper code

  38. Nearest neighbor zero-shot inference, in EMNLP, 2022. W. Shi, J. Michael, S. Gururangan, and L. Zettlemoyer. paper code

  39. Continued pretraining for better zero- and few-shot promptability, in EMNLP, 2022. Z. Wu, R. L. L. IV, P. Walsh, A. Bhagia, D. Groeneveld, S. Singh, and I. Beltagy. paper code

  40. InstructDial: Improving zero and few-shot generalization in dialogue through instruction tuning, in EMNLP, 2022. P. Gupta, C. Jiao, Y.-T. Yeh, S. Mehri, M. Eskenazi, and J. P. Bigham. paper code

  41. Prompt-and-Rerank: A method for zero-shot and few-shot arbitrary textual style transfer with small language models, in EMNLP, 2022. M. Suzgun, L. Melas-Kyriazi, and D. Jurafsky. paper code

  42. Learning instructions with unlabeled data for zero-shot cross-task generalization, in EMNLP, 2022. Y. Gu, P. Ke, X. Zhu, and M. Huang. paper code

  43. Zero-shot cross-lingual transfer of prompt-based tuning with a unified multilingual prompt, in EMNLP, 2022. L. Huang, S. Ma, D. Zhang, F. Wei, and H. Wang. paper code

  44. Finetune like you pretrain: Improved finetuning of zero-shot vision models, in CVPR, 2023. S. Goyal, A. Kumar, S. Garg, Z. Kolter, and A. Raghunathan. paper code

  45. WinCLIP: Zero-/few-shot anomaly classification and segmentation, in CVPR, 2023. J. Jeong, Y. Zou, T. Kim, D. Zhang, A. Ravichandran, and O. Dabeer. paper

  46. SemSup-XC: Semantic supervision for zero and few-shot extreme classification, in ICML, 2023. P. Aggarwal, A. Deshpande, and K. R. Narasimhan. paper code

  47. Zero- and few-shot event detection via prompt-based meta learning, in ACL, 2023. Z. Yue, H. Zeng, M. Lan, H. Ji, and D. Wang. paper code

  48. HINT: Hypernetwork instruction tuning for efficient zero- and few-shot generalisation, in ACL, 2023. H. Ivison, A. Bhagia, Y. Wang, H. Hajishirzi, and M. E. Peters. paper code

  49. What does the failure to reason with "respectively" in zero/few-shot settings tell us about language models?, in ACL, 2023. R. Cui, S. Lee, D. Hershcovich, and A. Søgaard. paper code

  50. Pre-training intent-aware encoders for zero- and few-shot intent classification, in EMNLP, 2023. M. Sung, J. Gung, E. Mansimov, N. Pappas, R. Shu, S. Romeo, Y. Zhang, and V. Castelli. paper

  51. ZGUL: Zero-shot generalization to unseen languages using multi-source ensembling of language adapters, in EMNLP, 2023. V. Rathore, R. Dhingra, P. Singla, and Mausam. paper code

  52. Adaptive end-to-end metric learning for zero-shot cross-domain slot filling, in EMNLP, 2023. Y. Shi, L. Wu, and M. Shao. paper code

  53. Empirical study of zero-shot NER with ChatGPT, in EMNLP, 2023. T. Xie, Q. Li, J. Zhang, Y. Zhang, Z. Liu, and H. Wang. paper code

  54. Learning to describe for predicting zero-shot drug-drug interactions, in EMNLP, 2023. F. Zhu, Y. Zhang, L. Chen, B. Qin, and R. Xu. paper code

  55. The benefits of label-description training for zero-shot text classification, in EMNLP, 2023. L. Gao, D. Ghosh, and K. Gimpel. paper code

  56. Gen-Z: Generative zero-shot text classification with contextualized label descriptions, in ICLR, 2024. S. Kumar, C. Y. Park, and Y. Tsvetkov. paper

  57. Evaluating the zero-shot robustness of instruction-tuned language models, in ICLR, 2024. J. Sun, C. Shaib, and B. C. Wallace. paper

  58. Boosting prompting mechanisms for zero-shot speech synthesis, in ICLR, 2024. Z. Jiang, J. Liu, Y. Ren, J. He, Z. Ye, S. Ji, Q. Yang, C. Zhang, P. Wei, C. Wang, X. Yin, Z. Ma, and Z. Zhao. paper

  59. Zero and few-shot semantic parsing with ambiguous inputs, in ICLR, 2024. E. Stengel-Eskin, K. Rawlins, and B. V. Durme. paper

  60. Uni3D: Exploring unified 3D representation at scale, in ICLR, 2024. J. Zhou, J. Wang, B. Ma, Y.-S. Liu, T. Huang, and X. Wang. paper

## Variants of Few-shot Learning

  1. Continuous adaptation via meta-learning in nonstationary and competitive environments, in ICLR, 2018. M. Al-Shedivat, T. Bansal, Y. Burda, I. Sutskever, I. Mordatch, and P. Abbeel. paper

  2. Deep online learning via meta-learning: Continual adaptation for model-based RL, in ICLR, 2018. A. Nagabandi, C. Finn, and S. Levine. paper

  3. Incremental few-shot learning with attention attractor networks, in NeurIPS, 2019. M. Ren, R. Liao, E. Fetaya, and R. S. Zemel. paper code

  4. Bidirectional one-shot unsupervised domain mapping, in ICCV, 2019. T. Cohen, and L. Wolf. paper

  5. XtarNet: Learning to extract task-adaptive representation for incremental few-shot learning, in ICML, 2020. S. W. Yoon, D. Kim, J. Seo, and J. Moon. paper code

  6. Few-shot class-incremental learning, in CVPR, 2020. X. Tao, X. Hong, X. Chang, S. Dong, X. Wei, and Y. Gong. paper

  7. Wandering within a world: Online contextualized few-shot learning, in ICLR, 2021. M. Ren, M. L. Iuzzolino, M. C. Mozer, and R. Zemel. paper

  8. Repurposing pretrained models for robust out-of-domain few-shot learning, in ICLR, 2021. N. Kwon, H. Na, G. Huang, and S. Lacoste-Julien. paper code

  9. Prototypical cross-domain self-supervised learning for few-shot unsupervised domain adaptation, in CVPR, 2021. X. Yue, Z. Zheng, S. Zhang, Y. Gao, T. Darrell, K. Keutzer, and A. S. Vincentelli. paper

  10. Self-promoted prototype refinement for few-shot class-incremental learning, in CVPR, 2021. K. Zhu, Y. Cao, W. Zhai, J. Cheng, and Z. Zha. paper

  11. Semantic-aware knowledge distillation for few-shot class-incremental learning, in CVPR, 2021. A. Cheraghian, S. Rahman, P. Fang, S. K. Roy, L. Petersson, and M. Harandi. paper

  12. Few-shot incremental learning with continually evolved classifiers, in CVPR, 2021. C. Zhang, N. Song, G. Lin, Y. Zheng, P. Pan, and Y. Xu. paper

  13. Learning a universal template for few-shot dataset generalization, in ICML, 2021. E. Triantafillou, H. Larochelle, R. Zemel, and V. Dumoulin. paper

  14. GP-Tree: A gaussian process classifier for few-shot incremental learning, in ICML, 2021. I. Achituve, A. Navon, Y. Yemini, G. Chechik, and E. Fetaya. paper code

  15. Addressing catastrophic forgetting in few-shot problems, in ICML, 2021. P. Yap, H. Ritter, and D. Barber. paper code

  16. Few-shot conformal prediction with auxiliary tasks, in ICML, 2021. A. Fisch, T. Schuster, T. Jaakkola, and R. Barzilay. paper code

  17. Few-shot lifelong learning, in AAAI, 2021. P. Mazumder, P. Singh, and P. Rai. paper

  18. Few-shot class-incremental learning via relation knowledge distillation, in AAAI, 2021. S. Dong, X. Hong, X. Tao, X. Chang, X. Wei, and Y. Gong. paper

  19. Few-shot one-class classification via meta-learning, in AAAI, 2021. A. Frikha, D. Krompass, H. Koepken, and V. Tresp. paper code

  20. Practical one-shot federated learning for cross-silo setting, in IJCAI, 2021. Q. Li, B. He, and D. Song. paper code

  21. Incremental few-shot text classification with multi-round new classes: Formulation, dataset and system, in NAACL-HLT, 2021. C. Xia, W. Yin, Y. Feng, and P. S. Yu. paper

  22. Continual few-shot learning for text classification, in EMNLP, 2021. R. Pasunuru, V. Stoyanov, and M. Bansal. paper code

  23. Self-training with few-shot rationalization, in EMNLP, 2021. M. M. Bhat, A. Sordoni, and S. Mukherjee. paper

  24. Diverse distributions of self-supervised tasks for meta-learning in NLP, in EMNLP, 2021. T. Bansal, K. P. Gunasekaran, T. Wang, T. Munkhdalai, and A. McCallum. paper

  25. Generalized and incremental few-shot learning by explicit learning and calibration without forgetting, in ICCV, 2021. A. Kukleva, H. Kuehne, and B. Schiele. paper

  26. Meta learning on a sequence of imbalanced domains with difficulty awareness, in ICCV, 2021. Z. Wang, T. Duan, L. Fang, Q. Suo, and M. Gao. paper code

  27. Synthesized feature based few-shot class-incremental learning on a mixture of subspaces, in ICCV, 2021. A. Cheraghian, S. Rahman, S. Ramasinghe, P. Fang, C. Simon, L. Petersson, and M. Harandi. paper

  28. Few-shot and continual learning with attentive independent mechanisms, in ICCV, 2021. E. Lee, C. Huang, and C. Lee. paper code

  29. Low-shot validation: Active importance sampling for estimating classifier performance on rare categories, in ICCV, 2021. F. Poms, V. Sarukkai, R. T. Mullapudi, N. S. Sohoni, W. R. Mark, D. Ramanan, and K. Fatahalian. paper

  30. Overcoming catastrophic forgetting in incremental few-shot learning by finding flat minima, in NeurIPS, 2021. G. Shi, J. Chen, W. Zhang, L. Zhan, and X. Wu. paper

  31. Variational continual Bayesian meta-learning, in NeurIPS, 2021. Q. Zhang, J. Fang, Z. Meng, S. Liang, and E. Yilmaz. paper

  32. LFPT5: A unified framework for lifelong few-shot language learning based on prompt tuning of T5, in ICLR, 2022. C. Qin, and S. Joty. paper code

  33. Subspace regularizers for few-shot class incremental learning, in ICLR, 2022. A. F. Akyürek, E. Akyürek, D. Wijaya, and J. Andreas. paper code

  34. Meta discovery: Learning to discover novel classes given very limited data, in ICLR, 2022. H. Chi, F. Liu, W. Yang, L. Lan, T. Liu, B. Han, G. Niu, M. Zhou, and M. Sugiyama. paper

  35. Topological transduction for hybrid few-shot learning, in WWW, 2022. J. Chen, and A. Zhang. paper

  36. Continual few-shot relation learning via embedding space regularization and data augmentation, in ACL, 2022. C. Qin, and S. Joty. paper code

  37. Few-shot class-incremental learning for named entity recognition, in ACL, 2022. R. Wang, T. Yu, H. Zhao, S. Kim, S. Mitra, R. Zhang, and R. Henao. paper

  38. Task-adaptive negative envision for few-shot open-set recognition, in CVPR, 2022. S. Huang, J. Ma, G. Han, and S. Chang. paper code

  39. Forward compatible few-shot class-incremental learning, in CVPR, 2022. D. Zhou, F. Wang, H. Ye, L. Ma, S. Pu, and D. Zhan. paper code

  40. Sylph: A hypernetwork framework for incremental few-shot object detection, in CVPR, 2022. L. Yin, J. M. Perez-Rua, and K. J. Liang. paper

  41. Constrained few-shot class-incremental learning, in CVPR, 2022. M. Hersche, G. Karunaratne, G. Cherubini, L. Benini, A. Sebastian, and A. Rahimi. paper

  42. iFS-RCNN: An incremental few-shot instance segmenter, in CVPR, 2022. K. Nguyen, and S. Todorovic. paper

  43. MetaFSCIL: A meta-learning approach for few-shot class incremental learning, in CVPR, 2022. Z. Chi, L. Gu, H. Liu, Y. Wang, Y. Yu, and J. Tang. paper

  44. Few-shot incremental learning for label-to-image translation, in CVPR, 2022. P. Chen, Y. Zhang, Z. Li, and L. Sun. paper

  45. Revisiting learnable affines for batch norm in few-shot transfer learning, in CVPR, 2022. M. Yazdanpanah, A. A. Rahman, M. Chaudhary, C. Desrosiers, M. Havaei, E. Belilovsky, and S. E. Kahou. paper

  46. Few-shot learning with noisy labels, in CVPR, 2022. K. J. Liang, S. B. Rangrej, V. Petrovic, and T. Hassner. paper

  47. Improving adversarially robust few-shot image classification with generalizable representations, in CVPR, 2022. J. Dong, Y. Wang, J. Lai, and X. Xie. paper

  48. Geometer: Graph few-shot class-incremental learning via prototype representation, in KDD, 2022. B. Lu, X. Gan, L. Yang, W. Zhang, L. Fu, and X. Wang. paper code

  49. Few-shot heterogeneous graph learning via cross-domain knowledge transfer, in KDD, 2022. Q. Zhang, X. Wu, Q. Yang, C. Zhang, and X. Zhang. paper

  50. Few-shot adaptation of pre-trained networks for domain shift, in IJCAI, 2022. W. Zhang, L. Shen, W. Zhang, and C.-S. Foo. paper

  51. MemREIN: Rein the domain shift for cross-domain few-shot learning, in IJCAI, 2022. Y. Xu, L. Wang, Y. Wang, C. Qin, Y. Zhang, and Y. Fu. paper

  52. Continual few-shot learning with transformer adaptation and knowledge regularization, in WWW, 2023. X. Wang, Y. Liu, J. Fan, W. Wen, H. Xue, and W. Zhu. paper

  53. DENSE: Data-free one-shot federated learning, in NeurIPS, 2022. J. Zhang, C. Chen, B. Li, L. Lyu, S. Wu, S. Ding, C. Shen, and C. Wu. paper

  54. Towards practical few-shot query sets: Transductive minimum description length inference, in NeurIPS, 2022. S. T. Martin, M. Boudiaf, E. Chouzenoux, J.-C. Pesquet, and I. B. Ayed. paper code

  55. Task-level differentially private meta learning, in NeurIPS, 2022. X. Zhou, and R. Bassily. paper code

  56. FiT: Parameter efficient few-shot transfer learning for personalized and federated image classification, in ICLR, 2023. A. Shysheya, J. F. Bronskill, M. Patacchiola, S. Nowozin, and R. E. Turner. paper code

  57. Towards addressing label skews in one-shot federated learning, in ICLR, 2023. Y. Diao, Q. Li, and B. He. paper code

  58. Data-free one-shot federated learning under very high statistical heterogeneity, in ICLR, 2023. C. E. Heinbaugh, E. Luz-Ricca, and H. Shao. paper code

  59. Contrastive meta-learning for partially observable few-shot learning, in ICLR, 2023. A. Jelley, A. Storkey, A. Antoniou, and S. Devlin. paper code

  60. On the soft-subnetwork for few-shot class incremental learning, in ICLR, 2023. H. Kang, J. Yoon, S. R. H. Madjid, S. J. Hwang, and C. D. Yoo. paper [code](https://github.com/ihaeyong/SoftNet-FSCIL)

  61. Warping the space: Weight space rotation for class-incremental few-shot learning, in ICLR, 2023. D.-Y. Kim, D.-J. Han, J. Seo, and J. Moon. paper code

  62. Neural collapse inspired feature-classifier alignment for few-shot class-incremental learning, in ICLR, 2023. Y. Yang, H. Yuan, X. Li, Z. Lin, P. Torr, and D. Tao. paper code

  63. Learning with fantasy: Semantic-aware virtual contrastive constraint for few-shot class-incremental learning, in CVPR, 2023. Z. Song, Y. Zhao, Y. Shi, P. Peng, L. Yuan, and Y. Tian. paper code

  64. Few-shot class-incremental learning via class-aware bilateral distillation, in CVPR, 2023. L. Zhao, J. Lu, Y. Xu, Z. Cheng, D. Guo, Y. Niu, and X. Fang. paper code

  65. GKEAL: Gaussian kernel embedded analytic learning for few-shot class incremental task, in CVPR, 2023. H. Zhuang, Z. Weng, R. He, Z. Lin, and Z. Zeng. paper

  66. Glocal energy-based learning for few-shot open-set recognition, in CVPR, 2023. H. Wang, G. Pang, P. Wang, L. Zhang, W. Wei, and Y. Zhang. paper code

  67. Open-set likelihood maximization for few-shot learning, in CVPR, 2023. M. Boudiaf, E. Bennequin, M. Tami, A. Toubhans, P. Piantanida, C. Hudelot, and I. B. Ayed. paper code

  68. Federated few-shot learning, in KDD, 2023. S. Wang, X. Fu, K. Ding, C. Chen, H. Chen, and J. Li. paper code

  69. LFS-GAN: Lifelong few-shot image generation, in ICCV, 2023. J. Seo, J. Kang, and G. Park. paper code

  70. Domain adaptive few-shot open-set learning, in ICCV, 2023. D. Pal, D. More, S. Bhargav, D. Tamboli, V. Aggarwal, and B. Banerjee. paper code

  71. Few-shot continual infomax learning, in ICCV, 2023. Z. Gu, C. Xu, J. Yang, and Z. Cui. paper

  72. Prototypical kernel learning and open-set foreground perception for generalized few-shot semantic segmentation, in ICCV, 2023. K. Huang, F. Wang, Y. Xi, and Y. Gao. paper

  73. DETA: Denoised task adaptation for few-shot learning, in ICCV, 2023. J. Zhang, L. Gao, X. Luo, H. Shen, and J. Song. paper code

  74. Few-shot class-incremental learning via training-free prototype calibration, in NeurIPS, 2023. Q. Wang, D. Zhou, Y. Zhang, D. Zhan, and H. Ye. paper code

  75. Alignment with human representations supports robust few-shot learning, in NeurIPS, 2023. I. Sucholutsky, and T. L. Griffiths. paper

  76. Incremental-DETR: Incremental few-shot object detection via self-supervised learning, in AAAI, 2023. N. Dong, Y. Zhang, M. Ding, and G. H. Lee. paper code

  77. Bayesian cross-modal alignment learning for few-shot out-of-distribution generalization, in AAAI, 2023. L. Zhu, X. Wang, C. Zhou, and N. Ye. paper code

  78. High-level semantic feature matters few-shot unsupervised domain adaptation, in AAAI, 2023. L. Yu, W. Yang, S. Huang, L. Wang, and M. Yang. paper

  79. Enhancing one-shot federated learning through data and ensemble co-boosting, in ICLR, 2024. R. Dai, Y. Zhang, A. Li, T. Liu, X. Yang, and B. Han. paper

## Datasets/Benchmarks

  1. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation, in EMNLP, 2018. X. Han, H. Zhu, P. Yu, Z. Wang, Y. Yao, Z. Liu, and M. Sun. paper code

  2. Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning, arXiv preprint, 2019. T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. paper code

  3. The Omniglot challenge: A 3-year progress report, in Current Opinion in Behavioral Sciences, 2019. B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. paper code

  4. FewRel 2.0: Towards more challenging few-shot relation classification, in EMNLP-IJCNLP, 2019. T. Gao, X. Han, H. Zhu, Z. Liu, P. Li, M. Sun, and J. Zhou. paper code

  5. META-DATASET: A dataset of datasets for learning to learn from few examples, in ICLR, 2020. E. Triantafillou, T. Zhu, V. Dumoulin, P. Lamblin, U. Evci, K. Xu, R. Goroshin, C. Gelada, K. Swersky, P. Manzagol, and H. Larochelle. paper code

  6. Few-shot object detection with attention-rpn and multi-relation detector, in CVPR, 2020. Q. Fan, W. Zhuo, C.-K. Tang, Y.-W. Tai. paper code

  7. FSS-1000: A 1000-class dataset for few-shot segmentation, in CVPR, 2020. X. Li, T. Wei, Y. P. Chen, Y.-W. Tai, and C.-K. Tang. paper code

  8. Impact of base dataset design on few-shot image classification, in ECCV, 2020. O. Sbai, C. Couprie, and M. Aubry. paper code

  9. A unified few-shot classification benchmark to compare transfer and meta learning approaches, in NeurIPS, 2021. V. Dumoulin, N. Houlsby, U. Evci, X. Zhai, R. Goroshin, S. Gelly, and H. Larochelle. paper

  10. Few-shot learning evaluation in natural language understanding, in NeurIPS, 2021. S. Mukherjee, X. Liu, G. Zheng, S. Hosseini, H. Cheng, G. Yang, C. Meek, A. H. Awadallah, and J. Gao. paper code

  11. FS-Mol: A few-shot learning dataset of molecules, in NeurIPS, 2021. M. Stanley, J. Bronskill, K. Maziarz, H. Misztela, J. Lanini, M. H. S. Segler, N. Schneider, and M. Brockschmidt. paper code

  12. RAFT: A real-world few-shot text classification benchmark, in NeurIPS, 2021. N. Alex, E. Lifland, L. Tunstall, A. Thakur, P. Maham, C. J. Riedel, E. Hine, C. Ashurst, P. Sedille, A. Carlier, M. Noetel, and A. Stuhlmüller. paper code

  13. A large-scale benchmark for few-shot program induction and synthesis, in ICML, 2021. F. Alet, J. Lopez-Contreras, J. Koppel, M. Nye, A. Solar-Lezama, T. Lozano-Perez, L. Kaelbling, and J. Tenenbaum. paper code

  14. FEW-NERD: A few-shot named entity recognition dataset, in ACL-IJCNLP, 2021. N. Ding, G. Xu, Y. Chen, X. Wang, X. Han, P. Xie, H. Zheng, and Z. Liu. paper code

  15. CrossFit: A few-shot learning challenge for cross-task generalization in NLP, in EMNLP, 2021. Q. Ye, B. Y. Lin, and X. Ren. paper code

  16. ORBIT: A real-world few-shot dataset for teachable object recognition, in ICCV, 2021. D. Massiceti, L. Zintgraf, J. Bronskill, L. Theodorou, M. T. Harris, E. Cutrell, C. Morrison, K. Hofmann, and S. Stumpf. paper code

  17. FLEX: Unifying evaluation for few-shot NLP, in NeurIPS, 2021. J. Bragg, A. Cohan, K. Lo, and I. Beltagy. paper

  18. Two sides of meta-learning evaluation: In vs. out of distribution, in NeurIPS, 2021. A. Setlur, O. Li, and V. Smith. paper

  19. Realistic evaluation of transductive few-shot learning, in NeurIPS, 2021. O. Veilleux, M. Boudiaf, P. Piantanida, and I. B. Ayed. paper

  20. Meta-Album: Multi-domain meta-dataset for few-shot image classification, in NeurIPS, 2022. I. Ullah, D. Carrión-Ojeda, S. Escalera, I. Guyon, M. Huisman, F. Mohr, J. N. v. Rijn, H. Sun, J. Vanschoren, and P. A. Vu. paper code

  21. Geoclidean: Few-shot generalization in euclidean geometry, in NeurIPS, 2022. J. Hsu, J. Wu, and N. D. Goodman. paper code

  22. FewNLU: Benchmarking state-of-the-art methods for few-shot natural language understanding, in ACL, 2022. Y. Zheng, J. Zhou, Y. Qian, M. Ding, C. Liao, L. Jian, R. Salakhutdinov, J. Tang, S. Ruder, and Z. Yang. paper code

  23. Bongard-HOI: Benchmarking few-shot visual reasoning for human-object interactions, in CVPR, 2022. H. Jiang, X. Ma, W. Nie, Z. Yu, Y. Zhu, and A. Anandkumar. paper code

  24. Hard-Meta-Dataset++: Towards understanding few-shot performance on difficult tasks, in ICLR, 2023. S. Basu, M. Stanley, J. F. Bronskill, S. Feizi, and D. Massiceti. paper

  25. MEWL: Few-shot multimodal word learning with referential uncertainty, in ICML, 2023. G. Jiang, M. Xu, S. Xin, W. Liang, Y. Peng, C. Zhang, and Y. Zhu. paper code

  26. UNISUMM and SUMMZOO: Unified model and diverse benchmark for few-shot summarization, in ACL, 2023. Y. Chen, Y. Liu, R. Xu, Z. Yang, C. Zhu, M. Zeng, and Y. Zhang. paper code

  27. EHRSHOT: An EHR benchmark for few-shot evaluation of foundation models, in NeurIPS, 2023. M. Wornow, R. Thapa, E. Steinberg, J. A. Fries, and N. Shah. paper code

  28. CORE: A few-shot company relation classification dataset for robust domain adaptation, in EMNLP, 2023. P. Borchert, J. D. Weerdt, K. Coussement, A. D. Caigny, and M.-F. Moens. paper code

  29. The CoT collection: Improving zero-shot and few-shot learning of language models via chain-of-thought fine-tuning, in EMNLP, 2023. S. Kim, S. Joo, D. Kim, J. Jang, S. Ye, J. Shin, and M. Seo. paper code

  30. JASMINE: Arabic GPT models for few-shot learning, in EMNLP, 2023. E. M. B. Nagoudi, M. Abdul-Mageed, A. Elmadany, A. Inciarte, and M. T. I. Khondaker. paper

  31. Fine-tuned LLMs know more, hallucinate less with few-shot sequence-to-sequence semantic parsing over Wikidata, in EMNLP, 2023. S. Xu, S. Liu, T. Culhane, E. Pertseva, M.-H. Wu, S. Semnani, and M. Lam. paper code

  32. MetaCoCo: A new few-shot classification benchmark with spurious correlation, in ICLR, 2024. M. Zhang, H. Li, F. Wu, and K. Kuang. paper

  33. Bongard-OpenWorld: Few-shot reasoning for free-form visual concepts in the real world, in ICLR, 2024. R. Wu, X. Ma, Q. Li, Z. Zhang, W. Wang, S.-C. Zhu, and Y. Wang. paper

## Software Library

  1. PaddleFSL, a library for few-shot learning written in PaddlePaddle. link

  2. Torchmeta, a library for few-shot learning & meta-learning written in PyTorch. link

  3. learn2learn, a library for meta-learning written in PyTorch. link

  4. keras-fsl, a library for few-shot learning written in Tensorflow. link
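All of the libraries above are built around the episodic N-way K-shot protocol used throughout this paper list. As a minimal, library-free illustration of that protocol, the following sketch samples an episode from synthetic toy data and classifies query points by nearest class prototype (the data, function names, and episode sizes are illustrative, not taken from any listed library):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_episode(features, labels, n_way=3, k_shot=2, q_queries=2):
    """Sample an N-way K-shot episode: per class, K support and Q query examples."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for episode_label, c in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == c))
        support_x.append(features[idx[:k_shot]])
        support_y += [episode_label] * k_shot
        query_x.append(features[idx[k_shot:k_shot + q_queries]])
        query_y += [episode_label] * q_queries
    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))

def prototype_predict(support_x, support_y, query_x):
    """Assign each query to the class whose support centroid (prototype) is nearest."""
    prototypes = np.stack([support_x[support_y == c].mean(axis=0)
                           for c in np.unique(support_y)])
    sq_dists = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return sq_dists.argmin(axis=1)

# Toy data: 5 classes as well-separated 2-D Gaussian clusters.
labels = np.repeat(np.arange(5), 20)
features = rng.normal(scale=0.1, size=(100, 2)) + labels[:, None] * 5.0

sx, sy, qx, qy = make_episode(features, labels)
accuracy = (prototype_predict(sx, sy, qx) == qy).mean()
print(accuracy)
```

Meta-learning libraries such as Torchmeta and learn2learn wrap this same episode-sampling loop around real datasets and learned embedding networks; the nearest-prototype step here stands in for whatever classifier a given method builds from the support set.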