AI21 Jamba 1.5 Mini
Jamba is trained on an in-house dataset that contains text data from the web, books, and code. The knowledge cutoff date is March 5, 2024.
| Category | Metric | Score |
|---|---|---|
| General | Arena Hard | 46.1 |
| General | MMLU | 69.7 |
| General | MMLU Pro (CoT) | 42.5 |
| General | IFEval | 75.8 |
| General | BBH | 53.4 |
| General | WildBench | 42.4 |
| Reasoning | ARC-C | 85.7 |
| Reasoning | GPQA | 32.3 |
| Math, Code & tool use | GSM8K | 75.8 |
| Math, Code & tool use | HumanEval | 62.8 |
| Math, Code & tool use | BFCL | 80.6 |
| Model | TruthfulQA |
|---|---|
| Jamba 1.5 Mini | 54.1 |
| Jamba 1.5 Large | 58.3 |
| Model | RealToxicity* |
|---|---|
| Jamba 1.5 Mini | 8.1 |
| Jamba 1.5 Large | 6.7 |
* Lower score is better.
About
A 52B-parameter (12B active) multilingual model offering a 256K-token context window, function calling, structured output, and grounded generation; a minimal usage sketch appears at the end of this section.
Context
262k tokens input · 4k tokens output
Training date
Undisclosed
Languages
(7) English, French, Spanish, Portuguese, German, Arabic, and Hebrew
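The model is served through OpenAI-style chat completions endpoints on its hosting providers. The sketch below shows one way to call it with the azure-ai-inference Python package; the endpoint URL, model identifier, and token environment variable are assumptions that vary by provider, not values taken from this card.

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Assumed endpoint and credential source; substitute your provider's values.
client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",
    credential=AzureKeyCredential(os.environ["API_TOKEN"]),
)

response = client.complete(
    model="AI21-Jamba-1.5-Mini",  # assumed catalog identifier for this model
    messages=[
        SystemMessage(content="You are a concise assistant."),
        UserMessage(content="Summarize the key terms of this contract clause."),
    ],
    temperature=0.7,
    max_tokens=1024,  # output is capped at 4k tokens for this model
)

print(response.choices[0].message.content)
```

Inputs up to the 262k-token window can be passed the same way; only the generated output is limited to 4k tokens.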