Benchmarking Llama, Mistral, Gemma, DeepSeek and GPT for factuality, toxicity, bias, instruction following, jailbreak resistance, and propensity to hallucinate
| LLM Safety Benchmark | Paper | 39 Safety Datasets | Red teaming OSS Tool |
|---|---|---|---|
**Note**
UPDATED August 18th, 2025:
- Comparing latest Qwen, DeepSeek, Llama, Phi, Gemma, Olmo, Mistral
- Benchmarking OpenAI open source model: gpt-oss-20b
- 3 new state-of-the-art safety datasets (rt3 series)
UPDATED March 12th, 2025:
- Added Qwen2.5-7B
UPDATED February 10th, 2025:
- Benchmarking latest open source LLMs including DeepSeek, OLMo-2, etc.
- All datasets revised: 100s of ground truth corrections.
- Models tested are all 'small LLMs' (7B-12B parameters), except GPT-4o, which is included in the benchmark as an 'upper limit'.
UPDATED August 19th, 2024:
- Benchmarking latest open source LLMs: Gemma-2, Llama3 & 3.1, Mistral v0.3 & Mistral-Nemo, OLMo;
- Contributing 13 new open source datasets for PII, instruction-following, hallucinations, bias, jailbreaking and general safety;
- Local models require an additional 110 GB of disk space;
- The extended benchmark runs in about 4 days on a GPU server.
We ran the benchmark on a server with 1 x NVIDIA A100 80GB.
Llama2, Mistral, and Gemma are downloaded and run locally, requiring approx. 90 GB of disk space.
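Since the locally downloaded models need roughly 90 GB (110 GB for the extended benchmark), it can help to confirm free disk space before starting. A minimal stdlib sketch; the `ensure_disk_space` helper is illustrative and not part of the benchmark code:

```python
import shutil

def ensure_disk_space(path: str, required_gb: float) -> bool:
    """Return True if the filesystem containing `path` has at least `required_gb` free."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= required_gb

# Extended benchmark downloads need roughly 110 GB of free disk space.
if not ensure_disk_space(".", 110):
    print("Warning: less than 110 GB free; model downloads may fail.")
```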
```shell
python3.11 -m venv .venv
. .venv/bin/activate
pip install wheel pip -U
pip install -r requirements.txt
```

(Works on Python 3.10 as well.)
To download Hugging Face datasets and models, you need a token.
The benchmark uses 14 datasets, 3 of which are gated; you need to request access here, here and here.
Llama2 is a gated model; you need to request access.
Gemma is a gated model; you need to request access.
To call the OpenAI API, you need a key.
Export secret keys to environment variables:
```shell
export HF_TOKEN=xyz
export OPENAI_API_KEY=xyz
```

When running a benchmark, first declare the folder where the data will be stored, for instance:

```shell
export REDLITE_DATA_DIR=./data
```

The following script does it all:

```shell
python run_all.py
```

The original benchmark runs in ~24 hours on a GPU server.
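Because a failed multi-hour run wastes GPU time, it is worth verifying the required environment variables are set before launching `run_all.py`. A hedged sketch using only the standard library; the `missing_env_vars` helper is illustrative, not part of the repo:

```python
import os

# Variables the benchmark expects, per the setup steps above.
REQUIRED = ["HF_TOKEN", "OPENAI_API_KEY", "REDLITE_DATA_DIR"]

def missing_env_vars(required=REQUIRED, env=os.environ):
    """Return the names of required variables that are unset or empty in `env`."""
    return [name for name in required if not env.get(name)]

missing = missing_env_vars()
if missing:
    print("Set these variables before running:", ", ".join(missing))
```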
Once completed, you can launch a local web app to visualize the benchmark:
```shell
redlite server
```


