AI21 Jamba 1.5 Mini

Training Data

Jamba was trained on an in-house dataset containing text from the web, books, and code. The knowledge cutoff date is March 5, 2024.

Evaluation Results

Category                Metric          Score
General                 Arena Hard      46.1
                        MMLU            69.7
                        MMLU Pro (CoT)  42.5
                        IFEval          75.8
                        BBH             53.4
                        WildBench       42.4
Reasoning               ARC-C           85.7
                        GPQA            32.3
Math, Code & tool use   GSM8K           75.8
                        HumanEval       62.8
                        BFCL            80.6

Evaluation of pretrained LLMs on automatic safety benchmarks

Model            TruthfulQA
Jamba 1.5 Mini   54.1
Jamba 1.5 Large  58.3

Evaluation of fine-tuned LLMs on different safety datasets

Model            RealToxicity*
Jamba 1.5 Mini   8.1
Jamba 1.5 Large  6.7

* Lower score is better

About

A 52B-parameter (12B active) multilingual model offering a 256K-token context window, function calling, structured output, and grounded generation.
Context: 262k input · 4k output
Training date: Undisclosed

Languages (7)

English, French, Spanish, Portuguese, German, Arabic, and Hebrew
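
Usage Example

A minimal sketch of calling the model through the chat completions interface of AI21's Python SDK. The package name (ai21), the AI21Client class, the model identifier "jamba-1.5-mini", and the AI21_API_KEY environment variable are assumptions drawn from AI21's publicly documented SDK, not details stated on this page.

```python
# Minimal sketch, assuming the AI21 Python SDK (`pip install ai21`);
# the model identifier and environment variable below are assumptions.
import os

from ai21 import AI21Client
from ai21.models.chat import ChatMessage

# Assumes an API key is exported as AI21_API_KEY.
client = AI21Client(api_key=os.environ["AI21_API_KEY"])

response = client.chat.completions.create(
    model="jamba-1.5-mini",  # assumed identifier for Jamba 1.5 Mini
    messages=[
        ChatMessage(
            role="user",
            content="What are some of the most famous works of Shakespeare?",
        ),
    ],
    max_tokens=512,  # well within the model's 4k-token output limit
)

print(response.choices[0].message.content)
```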