Implement gpt2 (BPE) GGUF tokenizer conversion #397

Merged Jun 10, 2024 · 24 commits · changes shown from 3 commits
38 changes: 33 additions & 5 deletions mistralrs-core/src/pipeline/gguf_tokenizer.rs
@@ -1,10 +1,10 @@
use std::sync::atomic::Ordering;
use std::{collections::HashMap, sync::atomic::Ordering};

use anyhow::Result;
use candle_core::quantized::gguf_file::Content;
use tokenizers::{
decoders::{self, byte_fallback::ByteFallback, fuse::Fuse, strip::Strip},
models::unigram::Unigram,
models::{bpe::BpeBuilder, unigram::Unigram},
normalizers::{self, Prepend, Replace},
AddedToken, DecoderWrapper, ModelWrapper, NormalizerWrapper, Tokenizer,
};
@@ -77,7 +77,7 @@

let bos_str = tokens[bos as usize].clone();
let eos_str = tokens[eos as usize].clone();
let unk_str;
let mut unk_str = None;

let (tokenizer, ty) = match model.as_str() {
"llama" | "replit" => {
@@ -92,7 +92,7 @@

// Unigram (sentencepiece) default UNK is 0
let unk = unk.map(|x| x as usize).unwrap_or(0);
unk_str = tokens[unk].clone();
unk_str = Some(tokens[unk].clone());

let unigram = Unigram::from(vocab, Some(unk), true).map_err(anyhow::Error::msg)?;
let mut tokenizer = Tokenizer::new(ModelWrapper::Unigram(unigram));
@@ -113,6 +113,34 @@

(tokenizer, "unigram")
}
"gpt2" => {
// This is a `bpe` tokenizer
let merges = merges
.as_ref()
.expect("Expect `tokenizer.ggml.merges` for `llama` unigram tokeizer.")
.into_iter()

GitHub Actions / Clippy, check failure on line 121: this `.into_iter()` call is equivalent to `.iter()` and will not consume the `Vec`
.map(|merges| {
let res = merges.splitn(2, ' ').collect::<Vec<_>>();
(res[0].to_string(), res[1].to_string())
})
.collect::<Vec<_>>();
let mut vocab = HashMap::new();
for (i, token) in tokens.iter().enumerate() {
vocab.insert(token.clone(), i as u32);

GitHub Actions / Clippy, check failure on line 129: casting `usize` to `u32` may truncate the value on targets with 64-bit wide pointers
}

let bpe = BpeBuilder::new()
.vocab_and_merges(vocab, merges)
.build()
.map_err(anyhow::Error::msg)?;
let mut tokenizer = Tokenizer::new(ModelWrapper::BPE(bpe));
tokenizer.with_decoder(decoders::byte_level::ByteLevel::new(true, true, true));

tokenizer.add_special_tokens(&[AddedToken::from(tokens[bos as usize].clone(), true)]);
tokenizer.add_special_tokens(&[AddedToken::from(tokens[eos as usize].clone(), true)]);
Contributor (@polarathene) commented:

Just curious: BPE has support for setting unk in its builder variant; is it not relevant for some reason? (I know very little about these things)

Owner Author (@EricLBuehler) replied on Jun 8, 2024:

@polarathene the GGUF file I am testing with (QuantFactory/Meta-Llama-3-8B-Instruct-GGUF) does not have a unk token in the metadata, so I left it out here.

Contributor (@polarathene) replied:

Sure, but what about when a file does have one? I assume that's possible, since the tokenizer builder for BPE does support setting unk. There is no check for this, so a provided unk token would just be ignored and introduce a bug?

Owner Author (@EricLBuehler) replied:

Yes, I wasn't sure whether it is guaranteed to be absent, but just in case I added 8d4dba5 and 763241e.
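
For reference, a minimal sketch of how the optional unk token could be wired into the BPE builder when `tokenizer.ggml.unknown_token_id` is present in the metadata. Variable names mirror the surrounding diff; this is an illustration of the `BpeBuilder::unk_token` setter the reviewer mentions, not necessarily the exact change in the commits above:

```rust
// `unk` is the optional `tokenizer.ggml.unknown_token_id` read from the GGUF
// metadata earlier in this function; `tokens` is the vocabulary list.
let mut builder = BpeBuilder::new().vocab_and_merges(vocab, merges);
if let Some(unk_id) = unk {
    let unk_token = tokens[unk_id as usize].clone();
    unk_str = Some(unk_token.clone());
    // Register a fallback token for out-of-vocabulary input instead of
    // silently ignoring the metadata entry.
    builder = builder.unk_token(unk_token);
}
let bpe = builder.build().map_err(anyhow::Error::msg)?;
```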


(tokenizer, "bpe")
}
other => {
anyhow::bail!("Tokenizer model `{other}` not supported.");
}
@@ -132,7 +160,7 @@
tokenizer,
bos: Some(bos_str),
eos: Some(eos_str),
unk: Some(unk_str),
unk: unk_str,
})
}

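The two Clippy failures flagged in the diff are mechanical to resolve. A minimal sketch of what the adjusted merge parsing and vocab loop could look like, assuming the same `merges` and `tokens` values from the GGUF metadata (an illustration, not necessarily the fix made in later commits):

```rust
// Borrow the merges rather than consuming them: `.iter()` instead of the
// flagged `.into_iter()` on a reference.
let merges = merges
    .as_ref()
    .expect("Expected `tokenizer.ggml.merges` for the `gpt2` BPE tokenizer.")
    .iter()
    .map(|merge| {
        let mut parts = merge.splitn(2, ' ');
        (
            parts.next().unwrap_or_default().to_string(),
            parts.next().unwrap_or_default().to_string(),
        )
    })
    .collect::<Vec<_>>();

// Checked conversion: a vocabulary larger than `u32::MAX` fails loudly with
// an error instead of being silently truncated by `i as u32`.
let mut vocab = HashMap::new();
for (i, token) in tokens.iter().enumerate() {
    vocab.insert(token.clone(), u32::try_from(i)?);
}
```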