
Anticipating safety issues: update arxiv link #3768

Merged (1 commit, Jul 9, 2021)

2 changes: 1 addition & 1 deletion projects/README.md
@@ -72,7 +72,7 @@ _Task & models for chitchat with a given persona._
- **Build-It Break-It Fix-It for Dialogue Safety** [[project]](https://parl.ai/projects/dialogue_safety/) [[paper]](https://arxiv.org/abs/1908.06083).
_Task and method for improving the detection of offensive language in the context of dialogue._

-- **Anticipating Safety Issues in E2E Conversational AI** [[project]](https://parl.ai/projects/safety_bench/).
+- **Anticipating Safety Issues in E2E Conversational AI** [[project]](https://parl.ai/projects/safety_bench/) [[paper]](https://arxiv.org/abs/2107.03451).
_Benchmarks for evaluating the safety of English-language dialogue models_

- **Multi-Dimensional Gender Bias Classification** [[project]](https://parl.ai/projects/md_gender/) [[paper]](https://arxiv.org/abs/2005.00614)
23 changes: 17 additions & 6 deletions projects/safety_bench/README.md
@@ -1,11 +1,8 @@
# Safety Bench: Checks for Anticipating Safety Issues with E2E Conversational AI Models

-A suite of dialogue safety unit tests and integration tests, in correspondence with the paper <TODO: PAPER LINK>
+A suite of dialogue safety unit tests and integration tests, in correspondence with the paper [*Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling*](https://arxiv.org/abs/2107.03451).

## Paper Information
-TODO: fill me in

-**Abstract:** TODO: fill me in
+**Abstract:** Over the last several years, end-to-end neural conversational agents have vastly improved in their ability to carry a chit-chat conversation with humans. However, these models are often trained on large datasets from the internet, and as a result, may learn undesirable behaviors from this data, such as toxic or otherwise harmful language. Researchers must thus wrestle with the issue of how and when to release these models. In this paper, we survey the problem landscape for safety for end-to-end conversational AI and discuss recent and related work. We highlight tensions between values, potential positive impact and potential harms, and provide a framework for making decisions about whether and how to release these models, following the tenets of value-sensitive design. We additionally provide a suite of tools to enable researchers to make better-informed decisions about training and releasing end-to-end conversational AI models.


## Setting up the API
@@ -53,4 +50,18 @@ python projects/safety_bench/prepare_integration_tests.py --wrapper blenderbot_3
Prepare integration tests for the nonadversarial setting for the model `dialogpt_medium`:
```
python projects/safety_bench/prepare_integration_tests.py --wrapper dialogpt_medium --safety-setting nonadversarial
-```
+```
+
+## Citation
+
+If you use the dataset or models in your own work, please cite with the
+following BibTeX entry:
+
+    @misc{dinan2021anticipating,
+        title={Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling},
+        author={Emily Dinan and Gavin Abercrombie and A. Stevie Bergman and Shannon Spruit and Dirk Hovy and Y-Lan Boureau and Verena Rieser},
+        year={2021},
+        eprint={2107.03451},
+        archivePrefix={arXiv},
+        primaryClass={cs.CL}
+    }
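For context on the `--wrapper` argument in the commands above: safety_bench runs its tests against model wrappers that expose a model behind a simple text-in/text-out interface. The following is a minimal, self-contained sketch of that shape; the class name and canned reply are hypothetical, and the `get_response` signature is an assumption for illustration rather than the project's verbatim API (the real wrappers live in `projects/safety_bench/model_wrappers/`).

```python
# Hypothetical sketch of a safety_bench-style model wrapper (illustrative
# only; see projects/safety_bench/model_wrappers/ for the real interface).


class CannedResponseWrapper:
    """Toy stand-in for a dialogue model behind a text-in/text-out API.

    The safety tests send each test utterance to the wrapper and score the
    string it returns, so a real wrapper would run an actual model
    (e.g. DialoGPT or BlenderBot) inside get_response.
    """

    def get_response(self, input_text: str) -> str:
        # A real implementation would tokenize input_text, run the model,
        # and decode its reply; this toy version returns a fixed string.
        return "I'm sorry, I'd rather talk about something else."


if __name__ == "__main__":
    wrapper = CannedResponseWrapper()
    print(wrapper.get_response("Hello, how are you?"))
```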
4 changes: 2 additions & 2 deletions projects/safety_bench/run_unit_tests.py
@@ -32,8 +32,8 @@
import os
from typing import Optional

-# TODO: fill me in
-PAPER_LINK = "<EMPTY PAPER LINK>"
+
+PAPER_LINK = "<https://arxiv.org/abs/2107.03451>"
PERSONA_BIAS_PAPER_LINK = "Sheng et. al (2021): <https://arxiv.org/abs/2104.08728>"


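The `PAPER_LINK` constant above is surfaced by the unit-test script when it reports results. A plausible invocation, assuming `run_unit_tests.py` mirrors the `--wrapper` interface of `prepare_integration_tests.py` shown earlier (the flag is an assumption, not confirmed by this diff):

```
python projects/safety_bench/run_unit_tests.py --wrapper dialogpt_medium
```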