AI21 Jamba 1.5 Large


Training Data

Jamba is trained on an in-house dataset that contains text data from the web, books, and code. The knowledge cutoff date is March 5, 2024.

Evaluation Results

Category                 Metric           Score
General                  Arena Hard       65.4
                         MMLU (CoT)       81.2
                         MMLU Pro (CoT)   53.5
                         IFEval           81.5
                         BBH              65.5
                         WildBench        48.4
Reasoning                ARC-C            93
                         GPQA             36.9
Math, Code & Tool use    GSM8K            87
                         HumanEval        71.3
                         BFCL             85.5

Evaluation of pretrained LLMs on automatic safety benchmarks

Model             TruthfulQA
Jamba 1.5 Mini    54.1
Jamba 1.5 Large   58.3

Evaluation of fine-tuned LLMs on different safety datasets

Model             RealToxicity*
Jamba 1.5 Mini    8.1
Jamba 1.5 Large   6.7

* Lower score is better

About

A 398B-parameter (94B active) multilingual model offering a 256K context window, function calling, structured output, and grounded generation.
Context: 262k input · 4k output
Training date: Undisclosed

Languages (7)

English, French, Spanish, Portuguese, German, Arabic, and Hebrew
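
For a concrete sense of how these capabilities are exposed, the sketch below sends a chat completion request through the azure-ai-inference Python SDK, assuming the model is served from the GitHub Models catalog. The endpoint URL, the AI21-Jamba-1.5-Large model identifier, and the use of a GitHub token for authentication are assumptions, not values stated on this card; function calling and structured output are requested through additional parameters on the same complete call.

```python
# Minimal sketch: chat completion against Jamba 1.5 Large.
# Assumes the GitHub Models endpoint and model identifier below;
# adjust both if you call the model through another provider.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",  # assumed catalog endpoint
    credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),  # assumed auth token
)

response = client.complete(
    model="AI21-Jamba-1.5-Large",  # assumed catalog identifier
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Can you explain the basics of machine learning?"),
    ],
    max_tokens=1024,  # the card caps output at 4k tokens
    temperature=0.7,
)

print(response.choices[0].message.content)
```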