
DetectLLM: Leveraging Log Rank Information for Zero-Shot Detection of Machine-Generated Text

Abstract

With the rapid progress of large language models (LLMs) and the huge volume of text they generate, it has become increasingly impractical to manually distinguish machine-generated from human-written text. Given the growing use of LLMs in social media and education, we are prompted to develop methods for detecting machine-generated text, preventing malicious usage such as plagiarism, misinformation, and propaganda. Previous work has studied several zero-shot methods, which require no training data. These methods achieve good performance, but there is still much room for improvement. In this paper, we introduce two novel zero-shot methods for detecting machine-generated text by leveraging log-rank information. One, DetectLLM-LRR, is fast and efficient; the other, DetectLLM-NPR, is more accurate but slower due to the need for perturbations. Our experiments on three datasets and seven language models show that our proposed methods improve over the state of the art by 3.9 and 1.75 AUROC points absolute. Moreover, DetectLLM-NPR needs fewer perturbations than previous work to achieve the same level of performance, which makes it more practical for real-world use. We also investigate the efficiency-performance trade-off between these two measures based on user preference, and we provide intuition for using them effectively in practice.

Motivation

The Log-Likelihood Log-Rank Ratio (LRR) and the Normalized Log-Rank Perturbation (NPR) are both discriminative features for classifying machine-generated vs. human-written text. LRR is faster to compute and thus more efficient, while NPR achieves better performance. A sketch of how each score could be computed is given below.
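To make LRR concrete, here is a minimal sketch of how it could be computed with a causal LM from Hugging Face `transformers`. This is our illustration, not the repo's code: the helper name `log_p_and_log_rank` and the choice of `gpt2` as the scoring model are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def log_p_and_log_rank(text, model, tokenizer):
    # Hypothetical helper (not from the DetectLLM codebase): score a passage
    # with the target LM and return the per-token log-probability and
    # log-rank of each observed token.
    ids = tokenizer(text, return_tensors="pt").input_ids
    logits = model(ids).logits[:, :-1]   # predict token i+1 from its prefix
    targets = ids[:, 1:]                 # the tokens actually observed
    log_probs = torch.log_softmax(logits, dim=-1)
    token_log_p = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # rank of each observed token in the vocabulary (1 = most likely)
    ranks = (log_probs > token_log_p.unsqueeze(-1)).sum(dim=-1) + 1
    return token_log_p, torch.log(ranks.float())

def lrr(text, model, tokenizer):
    # Log-Likelihood Log-Rank Ratio: (absolute) average log-likelihood over
    # average log-rank; it tends to be larger for machine-generated text.
    log_p, log_r = log_p_and_log_rank(text, model, tokenizer)
    return -log_p.mean().item() / log_r.mean().item()

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
print(lrr("The quick brown fox jumps over the lazy dog.", model, tokenizer))
```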

Baseline zero-shot methods

The baseline methods are as follows.

| Baseline zero-shot method | Description |
| --- | --- |
| $\log p(x)$ | a passage with a high average log probability is more likely to have been generated by the target LLM |
| Rank | a passage with a smaller average rank is more likely to have been generated by the target LLM |
| Log-Rank | a passage with a smaller average observed log rank is more likely to have been generated by the target LLM |
| Entropy | machine-generated text has higher entropy |
| DetectGPT | machine-generated text has more negative log probability curvature |
| LRR (ours) | LRR is generally larger for machine-generated text, which can be used to distinguish machine-generated from human-written text |
| NPR (ours) | both machine-generated and human-written texts are negatively affected by small perturbations, i.e., the log rank score increases after perturbation; however, machine-generated text is more susceptible, so its log rank increases more, which yields a higher NPR score for machine-generated text |
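Under the same assumptions as the LRR sketch above, NPR can be sketched as follows. The `perturb` argument stands for a mask-and-refill function (the paper uses T5 variants); a hypothetical sketch of it appears under "Different Perturbation Functions" below.

```python
def npr(text, model, tokenizer, perturb, n_perturbations=10):
    # Normalized Log-Rank Perturbation (sketch): the average log-rank of
    # the perturbed texts, normalized by the log-rank of the original text.
    _, log_r = log_p_and_log_rank(text, model, tokenizer)
    original = log_r.mean().item()
    perturbed = [
        log_p_and_log_rank(perturb(text), model, tokenizer)[1].mean().item()
        for _ in range(n_perturbations)
    ]
    # Machine-generated text is hurt more by small perturbations, so its
    # log-rank rises more relative to the original, giving a higher score.
    return (sum(perturbed) / len(perturbed)) / original
```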

LLMs used in our experiments

We test our methods on data generated by the following seven models.

| LLM | Hugging Face link |
| --- | --- |
| GPT2-xl | Link |
| GPT-Neo-2.7B | Link |
| OPT-2.7B | Link |
| OPT-13B | Link |
| GPT-j-6B | Link |
| Llama-13b | Link |
| NeoX-20B | Link |

Main result

Since perturbation-based methods (DetectGPT, NPR (ours)) achieve superior performance but are 50-100 times slower, for a fair comparison we evaluate perturbation-based methods and perturbation-free methods ($\log p(x)$, Rank, Log-Rank, etc.) separately.

Comparing NPR to DetectGPT

Different Number of Perturbations

Different Perturbation Functions
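As a rough illustration of a perturbation function, the sketch below masks a fraction of the words and refills them with T5. It is simplified (real implementations, following DetectGPT, mask short spans rather than single words and batch the mask filling); `t5-base` and the 15% mask rate are assumptions here.

```python
import random
from transformers import T5ForConditionalGeneration, T5Tokenizer

t5_tokenizer = T5Tokenizer.from_pretrained("t5-base")
t5_model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

def perturb(text, mask_frac=0.15):
    # Replace a few random words with T5 sentinel tokens.
    words = text.split()
    n_masks = max(1, int(mask_frac * len(words)))
    positions = sorted(random.sample(range(len(words)), n_masks))
    for i, pos in enumerate(positions):
        words[pos] = f"<extra_id_{i}>"
    masked = " ".join(words)

    # Ask T5 to fill the masked spans.
    input_ids = t5_tokenizer(masked, return_tensors="pt").input_ids
    output = t5_model.generate(input_ids, max_new_tokens=3 * n_masks, do_sample=True)
    fills = t5_tokenizer.decode(output[0], skip_special_tokens=False)
    fills = fills.replace("<pad>", "").replace("</s>", "")

    # Splice each generated span back in place of its sentinel.
    for i in range(n_masks):
        span = fills.split(f"<extra_id_{i}>")[-1].split("<extra_id_")[0].strip()
        masked = masked.replace(f"<extra_id_{i}>", span, 1)
    return masked
```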

Different Temperature

Efficiency analysis

Computational time

Computational time (seconds) for different zero-shot methods on different LLMs (averaged over 10 reruns)

Strategy for choosing a zero-shot method

  • T5-small and T5-base are not good candidates for perturbation functions.

Using T5-base or T5-small as the perturbation function performs worse than LRR even with 50 to 100 perturbations, which suggests that LRR can be at least 50 to 100 times faster while still outperforming perturbation-based methods. So, if the user can only afford T5-small or T5-base as the perturbation function, they should choose LRR without hesitation, since it achieves both better efficiency and better performance.

  • Cost-effectiveness of more perturbations and larger perturbation functions.

(1) To achieve the same performance as LRR, we generally need fewer than 10 perturbations when using T5-3b as the perturbation function. This estimate can help us choose between NPR and LRR on a validation set (see the sketch after this list): set the number of perturbations to 10; if LRR outperforms NPR, we suggest using LRR; otherwise, NPR is the better option.

(2) To achieve the same performance, using T5-large requires more than twice as many perturbations as using T5-3b, while each perturbation with T5-3b takes less than twice as long as with T5-large. Thus, using a larger perturbation function such as T5-3b is more efficient than using a smaller one such as T5-large; the only concern is memory.
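The decision rule in (1) could look like the following sketch, which builds on the `lrr`, `npr`, and `perturb` sketches above: score a labeled validation set with both methods and keep the cheaper one unless NPR already wins at 10 perturbations. The variable names and the use of scikit-learn's `roc_auc_score` are our assumptions.

```python
from sklearn.metrics import roc_auc_score

def choose_detector(val_texts, val_labels, model, tokenizer, perturb):
    # val_labels: 1 = machine-generated, 0 = human-written (placeholders)
    lrr_scores = [lrr(t, model, tokenizer) for t in val_texts]
    npr_scores = [npr(t, model, tokenizer, perturb, n_perturbations=10)
                  for t in val_texts]
    lrr_auroc = roc_auc_score(val_labels, lrr_scores)
    npr_auroc = roc_auc_score(val_labels, npr_scores)
    # If LRR already matches NPR at 10 perturbations, it is also far
    # cheaper at inference time, so prefer it.
    return "LRR" if lrr_auroc >= npr_auroc else "NPR"
```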

Create environment and run experiments

The data will be generated while running main.py. We use three datasets: XSum, SQuAD, and WritingPrompts, containing news articles, Wikipedia paragraphs, and prompted stories, respectively.

```bash
conda create --name DetectLLM python=3.8
conda activate DetectLLM
pip install -r requirements.txt

bash run.sh  # run the bash file
```

Acknowledgements

Citation

Please cite us if you use our code.

```bibtex
@article{su2023detectllm,
  title={DetectLLM: Leveraging Log Rank Information for Zero-Shot Detection of Machine-Generated Text},
  author={Su, Jinyan and Zhuo, Terry Yue and Wang, Di and Nakov, Preslav},
  journal={arXiv preprint arXiv:2306.05540},
  year={2023}
}
```
