22 changes: 22 additions & 0 deletions _posts/papers/2025-01-01-10.48550-arXiv.2501.10387.md
@@ -0,0 +1,22 @@
---
title: 'Online Influence Campaigns: Strategies and Vulnerabilities'
venue: arXiv.org
names: Andreea Musulan, Veronica Xia, Ethan Kosak-Hine, Tom Gibbs, Vidya Sujaya, Reihaneh
  Rabbany, J. Godbout, Kellin Pelrine
tags:
- arXiv.org
link: https://doi.org/10.48550/arXiv.2501.10387
author: Andreea Musulan
categories: Publications

---

*{{ page.names }}*

**{{ page.venue }}**

{% include display-publication-links.html pub=page %}

## Abstract

None
22 changes: 22 additions & 0 deletions _posts/papers/2025-02-21-2502.15210.md
@@ -0,0 +1,22 @@
---
title: 'PairBench: A Systematic Framework for Selecting Reliable Judge VLMs'
venue: ''
names: Aarash Feizi, Sai Rajeswar, Adriana Romero-Soriano, Reihaneh Rabbany, Spandana
  Gella, Valentina Zantedeschi, Joao Monteiro
tags:
- ''
link: https://arxiv.org/abs/2502.15210
author: Aarash Feizi
categories: Publications

---

*{{ page.names }}*

**{{ page.venue }}**

{% include display-publication-links.html pub=page %}

## Abstract

As large vision language models (VLMs) are increasingly used as automated evaluators, understanding their ability to effectively compare data pairs as instructed in the prompt becomes essential. To address this, we present PairBench, a low-cost framework that systematically evaluates VLMs as customizable similarity tools across various modalities and scenarios. Through PairBench, we introduce four metrics that represent key desiderata of similarity scores: alignment with human annotations, consistency for data pairs irrespective of their order, smoothness of similarity distributions, and controllability through prompting. Our analysis demonstrates that no model, whether closed- or open-source, is superior on all metrics; the optimal choice depends on an auto evaluator's desired behavior (e.g., a smooth vs. a sharp judge), highlighting risks of widespread adoption of VLMs as evaluators without thorough assessment. For instance, the majority of VLMs struggle with maintaining symmetric similarity scores regardless of order. Additionally, our results show that the performance of VLMs on the metrics in PairBench closely correlates with popular benchmarks, showcasing its predictive power in ranking models.
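The abstract notes that most VLM judges fail to give the same similarity score when a pair is presented in reverse order. As a rough illustration of that "consistency irrespective of order" desideratum (not PairBench's actual metric or code), the sketch below measures an order-symmetry gap for an arbitrary pairwise judge; `order_symmetry_gap` and `toy_judge` are hypothetical names introduced here for the example.

```python
# Illustrative sketch only -- not the PairBench implementation.
# Measures how far a pairwise judge deviates from order symmetry:
# a symmetric judge gives score(a, b) == score(b, a) for every pair.
from typing import Callable, Sequence, Tuple


def order_symmetry_gap(
    score: Callable[[str, str], float],  # hypothetical judge returning similarity in [0, 1]
    pairs: Sequence[Tuple[str, str]],
) -> float:
    """Mean absolute difference between score(a, b) and score(b, a)."""
    gaps = [abs(score(a, b) - score(b, a)) for a, b in pairs]
    return sum(gaps) / len(gaps) if gaps else 0.0


if __name__ == "__main__":
    # Toy judge standing in for a VLM: token-overlap (Jaccard) similarity,
    # which is symmetric by construction, so the gap below is 0.0.
    def toy_judge(a: str, b: str) -> float:
        ta, tb = set(a.split()), set(b.split())
        return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

    pairs = [("a red cube", "a crimson cube"), ("two dogs", "two cats")]
    print(f"order-symmetry gap: {order_symmetry_gap(toy_judge, pairs):.3f}")
```

Plugging a real VLM judge in for `toy_judge` (scoring each pair in both orders) would flag the asymmetry the abstract describes; a gap near zero indicates an order-consistent judge.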
2 changes: 2 additions & 0 deletions records/semantic_paper_ids_ignored.json
@@ -100,6 +100,7 @@
"493b9ad05b6baba4298ac8533268273aa187039d",
"49d2ca68962595e53283b049ca2e11a81fd681f4",
"49ebaefd64b48d071bfb0b0c5b1ec4df306f1a35",
"4a25ac06c7536aac81c009a35641fdf410d594df",
"4aed16d2d4266ceaa9d7d7e1e2ce4636e40f82f9",
"4c6f53097829872734aa11de5ba6788fd992ce50",
"4dc005ea288c50d57222122903edf87f21689781",
@@ -260,6 +261,7 @@
"bcb40efaf8296033fc9a45e80f556226cf6b6a11",
"bcb651d73447d96be58db5fac6fb13324842b351",
"be249c78272e69f4f4d90ac5392fa0f2ce1b8621",
"be839b75205330562737f3337b77cdaf35969222",
"be9baccab3b2625e728bf8cf7cc9f717cae7103e",
"c25245af4128a115a1056f7aa82d1cd0f883652f",
"c2fe18041e08ba360f21240e17a15f7b140660e9",