This repository contains the code for the paper "LLM-VA: Resolving the Jailbreak-Overrefusal Trade-off via Vector Alignment" (ACL 2026 Main Conference).
Set up the environment:

```bash
conda create -n llmva python=3.12.8 -y
conda activate llmva
pip install -r requirements.txt
```

Note: `flash-attn==2.8.2` is not installed via `requirements.txt`; install it separately following the Flash-Attn installation instructions (typically `pip install flash-attn==2.8.2 --no-build-isolation`).
Start the server:

```bash
python src/server_answer.py
```

In another terminal, run the client (use `CUDA_VISIBLE_DEVICES` to specify which GPUs to use):

```bash
python src/run/llmva_run.py
```
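For example, a run pinned to two GPUs might look like the sketch below (the GPU indices are illustrative):

```shell
# Expose only GPUs 0 and 1 to CUDA; other GPUs stay hidden from the process.
export CUDA_VISIBLE_DEVICES=0,1
# The client process then sees exactly these devices:
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
```

Equivalently, prefix the command directly: `CUDA_VISIBLE_DEVICES=0,1 python src/run/llmva_run.py`.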