- Hannah Kirk, Bertie Vidgen, and Scott Hale. 2022. Is More Data Better? Re-thinking the Importance of Efficiency in Abusive Language Detection with Transformers-Based Active Learning. In Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022), pages 52–61, Gyeongju, Republic of Korea. Association for Computational Linguistics.
- Julius Gonsior, Christian Falkenberg, Silvio Magino, Anja Reusch, Maik Thiele, and Wolfgang Lehner. 2022. To Softmax, or not to Softmax: that is the question when applying Active Learning for Transformer Models. ArXiv, abs/2210.03005.
- Julia Romberg and Tobias Escher. 2022. Automated Topic Categorisation of Citizens’ Contributions: Reducing Manual Labelling Efforts Through Active Learning. In EGOV 2022: Electronic Government, pages 369–385, Cham. Springer International Publishing.
- Christopher Schröder, Andreas Niekler, and Martin Potthast. 2022. Revisiting Uncertainty-based Query Strategies for Active Learning with Transformers. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2194–2203.