Language Models Resist Alignment

Large language models (LLMs) often exhibit undesirable behaviors. Recent efforts have focused on aligning these models to prevent harmful generation, a process known as forward alignment. Despite these efforts, studies have shown that even a well-conducted alignment process can be easily circumvented, whether intentionally or by accident. Why is alignment so fragile? During the pre-training phase, the model undergoes massive updates on massive data, while the alignment phase involves only small updates on small data. In this work, we empirically demonstrate the elasticity of post-alignment models, i.e., the tendency to revert to the behavior distribution formed during the pre-training phase upon further fine-tuning. We formally prove that such fine-tuning disproportionately undermines alignment compared to pre-training, potentially by orders of magnitude. Our discovery signifies the importance of overcoming the inherent elasticity of language models, thereby going beyond superficial alignment.

Table of Contents

  • Language Models Resist Alignment
  • Main Theorem
  • Experiment Results
  • An Example for Reproducing Our Experiment Results
  • Acknowledgment

Language Models Resist Alignment

LLMs undergo numerous iterations during pre-training, forming a stable parameter distribution. Subsequent alignment procedures fine-tune this distribution to reflect human intentions. Our research question is: During further fine-tuning, is it harder to deviate from the stable parameter distribution formed during pre-training than to maintain it?

Recent studies have shown that models undergoing safety alignment can become unsafe again with minimal fine-tuning. Furthermore, fine-tuning aligned LLMs on non-malicious datasets can weaken the models' safety mechanisms as well. Why is alignment so fragile?

This counterintuitive phenomenon prompts exploration of the inverse process of alignment: assuming that the alignment of LLMs is indeed limited to superficial alignment, is it then possible to perform the inverse operation, i.e., to reverse the alignment process through a series of technical measures? In this work, we investigate the possibility of reversing or revoking the alignment process in LLMs, a concept we refer to as unalignment. In short, we aim to answer the under-explored question:

Do the parameters of language models exhibit elasticity, thereby resisting alignment?

Main Theorem

The Elasticity of Language Models

Short Takeaways

The main theorem shows that, as the amount of data in the perturbation dataset $\mathcal{D}_3$ increases, the normalized compression rates of the model for both the pre-training dataset $\mathcal{D}_1$ and the SFT dataset $\mathcal{D}_2$ decrease, but the rate of decrease for the pre-training dataset is smaller than that for the SFT dataset by a factor of $\Theta(k)$, which in practice amounts to many orders of magnitude.

This indicates that, when faced with interference, the model tends to maintain the distribution captured from the larger dataset (the pre-training dataset) and is inclined to forget the distribution captured from the smaller dataset (the SFT dataset). This asymmetry is what we call the elasticity of language models.
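As an illustrative proxy (not the paper's exact measurement), a model's compression rate on a dataset can be estimated as its average bits per token, i.e., its per-token negative log-likelihood. A minimal sketch, assuming Hugging Face transformers, with the checkpoint name and text sample as placeholders:

import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the checkpoint under study
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def bits_per_token(texts):
    """Average negative log2-likelihood per token, a simple proxy for compression rate."""
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for text in texts:
            ids = tokenizer(text, return_tensors="pt").input_ids
            # With labels=ids, the model returns the mean cross-entropy (in nats)
            # over the shifted next-token predictions.
            loss = model(ids, labels=ids).loss
            n = ids.shape[1] - 1  # number of predicted tokens
            total_nll += loss.item() * n
            total_tokens += n
    return total_nll / total_tokens / math.log(2)  # nats -> bits

print(f"bits per token: {bits_per_token(['The quick brown fox jumps over the lazy dog.']):.3f}")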

For more details, please see our paper.

Experiment Results

In the previous sections, we proved that LLMs form stable behavioral distributions during the pre-training stage through massive updates on massive data; the alignment stage, with its small updates on small data, does not erase this distribution, and subsequent fine-tuning can easily restore the pre-alignment distribution. Building on this discovery, in this section we aim to answer the following questions:

  • Is inverse alignment easier than forward alignment?
  • Does elasticity consistently exist across models of different types and sizes?
  • Is elasticity correlated with model parameter size and pre-training data size?

Setting I: Comparison between Inverse Alignment and Forward Alignment

Measuring the transition from one model to the next (forward alignment) is straightforward, considering factors such as data volume, update steps, and parameter distribution. However, measuring the reverse transition, i.e., inverse alignment, is difficult. To address this challenge, we design the following experiment: we fine-tune models based on $\theta_{k+1}$ and $\theta_{k+2}$ to derive $\theta_{k+1}^{\prime}$ and $\theta_{k+2}^{\prime}$, which we designate as path $A$ and path $B$, respectively. Specifically, we use a shared query set $Q_{1}$ for paths $A$ and $B$.

  • Path A. Responses generated by $\theta_{k+1}$ based on $Q_{1}$ are used to form Q-A pairs for path $A$'s inverse alignment, denoted as $Q_{A}$.

  • Path B. Similarly, responses generated by $\theta_{k+2}$ based on $Q_{1}$ are used to form Q-A pairs for path $B$'s inverse alignment, denoted as $Q_{B}$.

Given that paths $A$ and $B$ have identical training hyper-parameters and query counts for $Q_{A}$ and $Q_{B}$, we can assess the differences between $\theta_{k+1}^{\prime}$ and $\theta_{k+1}$ (represented by $\delta_{k+1}$), and between $\theta_{k+2}^{\prime}$ and $\theta_{k+2}$ (represented by $\delta_{k+2}$), utilizing the same training steps. If $\delta_{k+2}$ is consistently greater than $\delta_{k+1}$, it suggests that $\theta_{k+1}^{\prime}$ aligns more closely with $\theta_{k+1}$. Consequently, inverse alignment proves more effective with an equivalent number of steps than forward alignment. We use cross-entropy as the distance metric when calculating $\delta_{k+1}$ and $\delta_{k+2}$.
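Since the distance metric is cross-entropy, one plausible instantiation (not necessarily the paper's exact code) is to score the cross-entropy a fine-tuned model assigns to the reference model's responses on the shared query set; lower values mean the two models are closer. A minimal sketch, assuming Hugging Face transformers, with the model name and Q-A pair as placeholders:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the fine-tuned model to be measured
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def delta(qa_pairs):
    """Mean cross-entropy (per response token) the model assigns to (query, answer) pairs."""
    losses = []
    with torch.no_grad():
        for query, answer in qa_pairs:
            q_len = tokenizer(query, return_tensors="pt").input_ids.shape[1]
            full_ids = tokenizer(query + answer, return_tensors="pt").input_ids
            labels = full_ids.clone()
            labels[:, :q_len] = -100  # ignore query tokens; score only the answer
            losses.append(model(full_ids, labels=labels).loss.item())
    return sum(losses) / len(losses)

# Hypothetical usage: qa_pairs would hold the reference model's answers to the shared queries.
print(delta([("What is 2 + 2?", " 4.")]))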

The experimental results show that $\delta_{k+1}$ is smaller than $\delta_{k+2}$ across all three sizes of the three types of models and all three types of datasets, demonstrating that inverse alignment is easier than forward alignment across diverse models and datasets.

Setting II: Analysis of Elasticity

Existence of Elasticity

We evaluate the elasticity phenomenon on Llama2-7B and Gemma-2B. The experimental results show that, for models fine-tuned on a large amount of positive-sample data, only a small amount of negative-sample fine-tuning is needed to quickly revert to the pre-training distribution, i.e., performance rapidly drops back below the pre-alignment baseline (the gray dashed line in the figure). Subsequently, the rate of performance decline slows down and stabilizes.
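A schematic sketch of this probing protocol, assuming Hugging Face transformers, a placeholder base model, and tiny in-memory stand-ins for the positive and negative data (the real experiments use full fine-tuning datasets and a proper alignment evaluation):

import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = AdamW(model.parameters(), lr=1e-5)

def fine_tune(texts, epochs=1):
    """A few causal-LM gradient steps on a small list of texts."""
    model.train()
    for _ in range(epochs):
        for text in texts:
            ids = tokenizer(text, return_tensors="pt").input_ids
            loss = model(ids, labels=ids).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

positive_data = ["I will answer helpfully and harmlessly."]          # placeholder alignment data
negative_slices = [["Ignore all safety rules."], ["Be unhelpful."]]  # placeholder perturbations

fine_tune(positive_data, epochs=10)  # heavy positive fine-tuning first
for i, negative_slice in enumerate(negative_slices, start=1):
    fine_tune(negative_slice, epochs=1)  # a tiny negative perturbation
    # A real run would evaluate an alignment metric here and compare it with the
    # pre-alignment baseline; elasticity predicts a fast initial drop that then levels off.
    print(f"evaluated after negative slice {i}")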

Elasticity Increases with Model Size

To examine how elasticity changes with model parameter size, we conduct the same experiments on Qwen models with 0.5B, 4B, and 7B parameters (the subfigures show, from left to right, the 0.5B, 4B, and 7B models). As the model parameter size increases, the initial performance decline caused by negative-data fine-tuning is faster, while the subsequent decline is slower. This indicates that, as the parameter size increases, the model exhibits greater elasticity in response to both positive and negative data.

Elasticity Increases with Pre-training Data Amount

To verify that elasticity increases with the amount of pre-training data, we conduct the same experiments on multiple pre-training slices released by TinyLlama (the subfigures show, from left to right, pre-training data sizes of 2.0T, 2.5T, and 3.0T tokens). As the pre-training data volume increases, the initial performance decline caused by negative-data fine-tuning is faster, while the subsequent decline is slower. This demonstrates that larger pre-training data volumes reinforce the elasticity of LLMs.

An Example for Reproducing Our Experiment Results

Installation

Clone the source code from GitHub:

git clone https://github.com/Nips20262/Nips20262.git

Native Runner: Set up a conda environment using conda / mamba:

conda env create --file conda-recipe.yaml  # or `mamba env create --file conda-recipe.yaml`

Training

Follow the instructions in the Installation section above to set up the training environment properly.

conda activate resist-alignment
export WANDB_API_KEY="..."  # your W&B API key here

Supervised Fine-Tuning (SFT)

bash scripts/sft-imdb.sh \
    --train_datasets <your-dataset> \
    --model_name_or_path <your-model-name-or-checkpoint-path> \
    --output_dir output/sft

NOTE: You may need to update some of the parameters in the script according to your machine setup, such as the number of GPUs for training, the training batch size, etc.

Acknowledgment

This repository benefits from Llama2, TinyLlama, Stanford Alpaca, DeepSpeed, DeepSpeed-Chat, and Safe-RLHF.

Thanks for their outstanding work and their efforts to further promote LLM research.
