I am Maciej Chrabąszcz, a PhD student at Warsaw University of Technology (WUT) and an AI Safety researcher at NASK - National Research Institute.
My AI Safety research centers on developing efficient techniques for enhancing model safety and detecting anomalous behavior. I focus in particular on approaches that leverage the internal latent representations of generative models.
If you would like to discuss my research or have any inquiries, feel free to contact me at maciej.chrabaszcz.dokt@pw.edu.pl or maciej.chrabaszcz@nask.pl.
Mamba Goes HoME: Hierarchical Soft Mixture-of-Experts for 3D Medical Image Segmentation
Szymon Płotka, Gizem Mert, Maciej Chrabąszcz, Ewa Szczurek, Arkadiusz Sitek
NeurIPS 2025
PLLuM-Align: Polish Preference Dataset for Large Language Model Alignment
Karolina Seweryn, Anna Kołos, Agnieszka Karlińska, Katarzyna Lorenc, Katarzyna Dziewulska, Maciej Chrabąszcz, Aleksandra Krasnodębska, Paula Betscher, Zofia Cieślińska, Katarzyna Kowol, Julia Moska, Dawid Motyka, Paweł Walkowiak, Bartosz Żuk, Arkadiusz Janz
EMNLP 2025 Main
Rainbow-Teaming for the Polish Language: A Reproducibility Study
Aleksandra Krasnodębska, Maciej Chrabąszcz, Wojciech Kusa
TrustNLP 2025
Maybe I Should Not Answer That, but... Do LLMs Understand The Safety of Their Inputs?
Maciej Chrabąszcz, Filip Szatkowski, Bartosz Wójcik, Jan Dubiński, Tomasz Trzciński
ICLR 2025 Workshop Building Trust in LLMs and LLM Applications
Aggregated Attributions for Explanatory Analysis of 3D Segmentation Models
Maciej Chrabąszcz*, Hubert Baniecki*, Piotr Komorowski, Szymon Płotka, Przemysław Biecek
WACV 2025 Oral Presentation
Swin SMT: Global Sequential Modeling in 3D Medical Image Segmentation
Szymon Płotka*, Maciej Chrabąszcz*, Przemysław Biecek
MICCAI 2024 Oral Presentation
Be Careful When Evaluating Explanations Regarding Ground Truth
Hubert Baniecki*, Maciej Chrabąszcz*, Andreas Holzinger, Bastian Pfeifer, Anna Saranti, Przemysław Biecek
arXiv 2023 Preprint


