AI21 Jamba 1.5 Large
Jamba is trained on an in-house dataset that contains text data from the web, books, and code. The knowledge cutoff date is March 5, 2024.
| Category | Metric | Score |
| --- | --- | --- |
| General | Arena Hard | 65.4 |
| | MMLU (CoT) | 81.2 |
| | MMLU Pro (CoT) | 53.5 |
| | IFEval | 81.5 |
| | BBH | 65.5 |
| | WildBench | 48.4 |
| Reasoning | ARC-C | 93 |
| | GPQA | 36.9 |
| Math, Code & Tool use | GSM8K | 87 |
| | HumanEval | 71.3 |
| | BFCL | 85.5 |
| Model | TruthfulQA |
| --- | --- |
| Jamba 1.5 Mini | 54.1 |
| Jamba 1.5 Large | 58.3 |
| Model | RealToxicity* |
| --- | --- |
| Jamba 1.5 Mini | 8.1 |
| Jamba 1.5 Large | 6.7 |

\* Lower score is better.
About
A 398B-parameter (94B active) multilingual model offering a 256K context window, function calling, structured output, and grounded generation. A minimal usage sketch follows the model details below.
Context
262k input · 4k output
Training date
Undisclosed
Rate limit tier
Provider support
Languages
English, French, Spanish, Portuguese, German, Arabic, and Hebrew
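
The sketch below shows one way to send a chat request to Jamba 1.5 Large through an OpenAI-compatible chat completions endpoint. The base URL, model identifier, and credential environment variable are assumptions, not values from this page; substitute whatever your hosting provider documents.

```python
# Minimal sketch, assuming an OpenAI-compatible chat completions endpoint
# serving Jamba 1.5 Large. Endpoint URL, model name, and the credential
# environment variable are placeholders -- replace them with your provider's values.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://models.inference.ai.azure.com",  # assumed endpoint
    api_key=os.environ["API_KEY"],                     # assumed credential variable
)

response = client.chat.completions.create(
    model="AI21-Jamba-1.5-Large",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the Jamba architecture in two sentences."},
    ],
    max_tokens=512,  # stays well under the 4k output limit noted above
)

print(response.choices[0].message.content)
```

The long input context (262K tokens) means large documents can usually be passed in a single request, while the 4K output cap bounds how long each generated reply can be.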