Commit 81654ad

committed
edit : minor fixes applied.
1 parent 5303591 commit 81654ad

File tree

1 file changed (+19 −21 lines changed)


_pages/about.md

Lines changed: 19 additions & 21 deletions
@@ -8,7 +8,7 @@ redirect_from:
   - /about.html
 ---
 
-Hey stranger, this is Sadra. I’m currently studying my PhD in Computer Science at USC, working at the intersection of Human-Computer Interaction (HCI) and Natural Language Processing (NLP)i.e., making LLMs better friends of humans.
+Hey stranger, this is Sadra. I’m currently studying my PhD in Computer Science at USC, working at the intersection of Human-Computer Interaction (HCI) and Natural Language Processing (NLP), i.e., making LLMs better friends of humans.
 My current research focuses on helping people make better decisions using LLMs.
 On the side (and honestly, all the time), I build and maintain scientific software tools with a great team of open-source enthusiasts.
 I’m always looking for ways to make technology and science more accessible, and fun—believing that open-source software is an ideal contribution to scientific communities that value transparency and reproducibility.
@@ -17,9 +17,7 @@ I'm always curious to meet new people and hear about their journeys, so shoot me
 
 
 <details>
-<summary>**CS PhD @ USC ✌️**</summary>
-Peek a boo!
-</details>
+<summary><b>CS PhD @ USC ✌️</b></summary>
 
 The main problem I'm trying to solve is the integration of AI systems into human workflows—specifically, answering the question: "What is the core part of a task that AI cannot do, and how can AI assist humans in doing that?"
 Helping humans tackle the hardest parts of their jobs—with AI as a consultant—is the overarching meta-goal of my current research.
@@ -30,34 +28,34 @@ To address this, I've explored several domains where large language models (LLMs
 
 I'm currently in my second year and looking forward to exploring more domains to develop a taxonomy of these challenges and a framework that identifies the right interaction patterns and integration points for AI.
 Throughout this journey, I've had the great opportunity to work with the Adaptive Computing Experience (ACE) Lab (Souti Chattopadhyay’s lab @ GCS) and [CUTE LAB NAME] (Jonathan May’s lab @ ISI).
+
 You can find my publications below:
 
-<details>
-<summary>[ICSE25] *Trust dynamics in AI-assisted development: Definitions, factors, and implications,* **Sadra Sabouri**, Philipp Eibl, Xinyi Zhou, Morteza Ziyadi, Nenad Medvidovic, Lars Lindemann, Souti Chattopadhyay</summary>
-<a href="https://www.amazon.science/publications/trust-dynamics-in-ai-assisted-development-definitions-factors-and-implications" style="text-decoration: none;"><div style="display: inline-block;padding: 6px 12px;background-color: #007BFF;color: white;border-radius: 4px;font-size: 14px;text-align: center;cursor: pointer;">PDF</div></a>
-
-Software developers increasingly rely on AI code generation utilities. To ensure that “good” code is accepted into the code base and “bad” code is rejected, developers must know when to trust an AI suggestion. Understanding how developers build this intuition is crucial to enhancing developer-AI collaborative programming. In this paper, we seek to understand how developers (1) define and (2) evaluate the trustworthiness of a code suggestion and (3) how trust evolves when using AI code assistants. To answer these questions, we conducted a mixed method study consisting of an in-depth exploratory survey with (n= 29) developers followed by an observation study (n= 10). We found that comprehensibility and perceived correctness were the most frequently used factors to evaluate code suggestion trustworthiness. However, the gap in developers’ definition and evaluation of trust points to a lack of support for evaluating trustworthy code in real-time. We also found that developers often alter their trust decisions, keeping only 52% of original suggestions. Based on these findings, we extracted four guidelines to enhance developer-AI interactions. We validated the guidelines through a survey with (n= 7) domain experts and survey members (n= 8). We discuss the validated guidelines, how to apply them, and tools to help adopt them.
-</details>
-<details>
-<summary>[ACL25] *ELI-Why: Evaluating the Pedagogical Utility of Language Model Explanations,* Brihi Joshi, Keyu He, Sahana Ramnath, **Sadra Sabouri**, Kaitlyn Zhou, Souti Chattopadhyay, Swabha Swayamdipta, Xiang Ren</summary>
-<span class="link-block"><a href="https://arxiv.org/pdf/2506.14200" class="external-link button is-normal is-rounded is-dark"><span class="icon"><i class="fas fa-file-pdf"></i></span><span>Paper</span></a>
-<span class="link-block"><a href="https://github.com/INK-USC/ELI-Why" class="external-link button is-normal is-rounded is-dark"><span class="icon"><i class="fab fa-github"></i></span><span>Code</span></a></span>
-<span class="link-block"><a href="https://huggingface.co/collections/INK-USC/eli-why-6849086c86556f7a2dd7c686" class="external-link button is-normal is-rounded is-dark"><span class="icon"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg" alt="Hugging Face" style="height: 1em;"></span><span>Data</span></a></span>
-
-Language models today are widely used in education, yet their ability to tailor responses for learners with varied informational needs and knowledge backgrounds remains under-explored. To this end, we introduce ELI-Why, a benchmark of 13.4K "Why" questions to evaluate the pedagogical capabilities of language models. We then conduct two extensive human studies to assess the utility of language model-generated explanatory answers (explanations) on our benchmark, tailored to three distinct educational grades: elementary, high-school and graduate school. In our first study, human raters assume the role of an "educator" to assess model explanations' fit to different educational grades. We find that GPT-4-generated explanations match their intended educational background only 50% of the time, compared to 79% for lay human-curated explanations. In our second study, human raters assume the role of a learner to assess if an explanation fits their own informational needs. Across all educational backgrounds, users deemed GPT-4-generated explanations 20% less suited on average to their informational needs, when compared to explanations curated by lay people. Additionally, automated evaluation metrics reveal that explanations generated across different language model families for different informational needs remain indistinguishable in their grade-level, limiting their pedagogical effectiveness.
-</details>
+<details>
+<summary>[ICSE25] <i>Trust dynamics in AI-assisted development: Definitions, factors, and implications,</i> <b>Sadra Sabouri</b>, Philipp Eibl, Xinyi Zhou, Morteza Ziyadi, Nenad Medvidovic, Lars Lindemann, Souti Chattopadhyay</summary>
+<a href="https://www.amazon.science/publications/trust-dynamics-in-ai-assisted-development-definitions-factors-and-implications" style="text-decoration: none;"><div style="display: inline-block;padding: 6px 12px;background-color: #007BFF;color: white;border-radius: 4px;font-size: 14px;text-align: center;cursor: pointer;">Paper</div></a>
+</br>
+We investigate how developers define, evaluate, and evolve trust in AI-generated code suggestions through a mixed-method study involving surveys and observations. We found that while comprehensibility and perceived correctness are key to trust decisions, developers often revise their choices, accepting only 52% of AI suggestions, highlighting the need for better real-time support and offering four validated guidelines to improve developer-AI collaboration.
+</details>
+<details>
+<summary>[ACL25] <i>ELI-Why: Evaluating the Pedagogical Utility of Language Model Explanations,</i> Brihi Joshi, Keyu He, Sahana Ramnath, <b>Sadra Sabouri</b>, Kaitlyn Zhou, Souti Chattopadhyay, Swabha Swayamdipta, Xiang Ren</summary>
+<a href="https://arxiv.org/pdf/2506.14200" style="text-decoration: none;"><div style="display: inline-block;padding: 6px 12px;background-color: #007BFF;color: white;border-radius: 4px;font-size: 14px;text-align: center;cursor: pointer;">Paper</div></a>
+<a href="https://github.com/INK-USC/ELI-Why" style="text-decoration: none;"><div style="display: inline-block;padding: 6px 12px;background-color: #007BFF;color: white;border-radius: 4px;font-size: 14px;text-align: center;cursor: pointer;">Code</div></a>
+<a href="https://huggingface.co/collections/INK-USC/eli-why-6849086c86556f7a2dd7c686" style="text-decoration: none;"><div style="display: inline-block;padding: 6px 12px;background-color: #007BFF;color: white;border-radius: 4px;font-size: 14px;text-align: center;cursor: pointer;">Data</div></a>
+</br>
+We investigate how well language models adapt explanations to learners with varying educational backgrounds using ELI-Why, a benchmark of 13.4K "Why" questions. Through two human studies, we found that GPT-4 explanations align with intended grade levels only 50% of the time and are rated 20% less suitable for learners’ needs compared to layperson-curated responses, revealing limitations in their pedagogical adaptability.
+</details>
 
 Always happy to chat, collaborate, or just hear what you're working on; feel free to reach out!
 
-
 <!-- <details>
 <summary>**Open World Developer 🌐**</summary>
 In my free time, I become an open-source software developer! I'm an advocate for collaboration and shared knowledge. You'll find more about my open-source activities on my GitHub profile. Following that, I co-founded [OpenSciLab](https://openscilab.com/) as a community for open science.
-
+-->
 </details>
 
 ### News
 
 Jan 2025: My paper [Trust dynamics in AI-assisted development: Definitions, factors, and implications](https://www.amazon.science/publications/trust-dynamics-in-ai-assisted-development-definitions-factors-and-implications) got accepted into the International Conference on Software Engineering (ICSE) 2025. I will present my work remotely in early May.
 
-Sep 2024: I was awarded a [Trelis AI Grant](https://trelis.com/trelis-ai-grants/) for developing a RESTful API for PyCM, enhancing accessibility to machine learning statistical post-processing tools. -->
+Sep 2024: I was awarded a [Trelis AI Grant](https://trelis.com/trelis-ai-grants/) for developing a RESTful API for PyCM, enhancing accessibility to machine learning statistical post-processing tools.

0 commit comments