
Adding needed Tokenizer's APIs #7047

Merged: 7 commits merged into dotnet:main on Mar 7, 2024

Conversation

@tarekgh (Member, Author) commented Mar 4, 2024

Fixes #7043

The change here adds the following Tokenizer APIs (a rough usage sketch follows the list):

  • Enable the creation of Tiktoken tokenizer with streaming capability to avoid on-demand downloading of vocabulary files.
  • Introduce an API to facilitate encoding up to a specified maximum token count.
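The sketch below shows roughly how these two additions could be used together. It is illustrative only: the member names and parameters shown (CreateTiktokenForModel, maxTokenCount, processedText, textLength) are assumptions based on the description above, not the exact signatures introduced by this PR.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.ML.Tokenizers;

// Illustrative only; names and signatures below are assumptions, not the exact new APIs.

// 1) Create a Tiktoken tokenizer from a vocabulary stream that is already available,
//    so nothing has to be downloaded on demand.
using Stream vocabStream = File.OpenRead("cl100k_base.tiktoken");
Tokenizer tokenizer = /* hypothetical factory */ Tokenizer.CreateTiktokenForModel("gpt-4", vocabStream);

// 2) Encode up to a maximum token count and learn how much of the (possibly normalized)
//    text those tokens cover.
IReadOnlyList<int> ids = /* hypothetical overload */ tokenizer.EncodeToIds(
    "a long prompt that may exceed the token budget...",
    maxTokenCount: 100,
    out string processedText,
    out int textLength);

Console.WriteLine($"Produced {ids.Count} ids covering {textLength} characters of '{processedText}'.");
```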

codecov bot commented Mar 5, 2024

Codecov Report

Attention: Patch coverage is 85.09804%, with 38 lines in your changes missing coverage. Please review.

Project coverage is 68.82%. Comparing base (164fde0) to head (1d89506).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #7047      +/-   ##
==========================================
+ Coverage   68.81%   68.82%   +0.01%     
==========================================
  Files        1255     1255              
  Lines      250248   250358     +110     
  Branches    25533    25550      +17     
==========================================
+ Hits       172197   172304     +107     
+ Misses      71442    71441       -1     
- Partials     6609     6613       +4     
Flag         Coverage Δ
Debug        68.82% <85.09%> (+0.01%) ⬆️
production   63.25% <79.78%> (+<0.01%) ⬆️
test         88.51% <100.00%> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown.

Files                                                     Coverage Δ
src/Microsoft.ML.Tokenizers/Model/BPE.cs                  63.81% <ø> (+1.94%) ⬆️
...rc/Microsoft.ML.Tokenizers/Model/EnglishRoberta.cs     79.63% <ø> (ø)
src/Microsoft.ML.Tokenizers/Model/Model.cs                38.46% <ø> (ø)
...rc/Microsoft.ML.Tokenizers/PreTokenizer/Roberta.cs     57.14% <100.00%> (ø)
test/Microsoft.ML.Tokenizers.Tests/BpeTests.cs            86.93% <100.00%> (+0.06%) ⬆️
...crosoft.ML.Tokenizers.Tests/EnglishRobertaTests.cs     95.71% <100.00%> (+0.03%) ⬆️
test/Microsoft.ML.Tokenizers.Tests/TitokenTests.cs        100.00% <100.00%> (ø)
...st/Microsoft.ML.Tokenizers.Tests/TokenizerTests.cs     100.00% <100.00%> (ø)
src/Microsoft.ML.Tokenizers/Tokenizer.cs                  84.41% <95.00%> (-3.96%) ⬇️
src/Microsoft.ML.Tokenizers/Model/Tiktoken.cs             67.72% <77.84%> (+5.55%) ⬆️

... and 9 files with indirect coverage changes

// '⭐' U+2B50 is mapped to IDs [2928, 99834] in the Tiktoken model.
// In other words, the character '⭐' with UTF-8 bytes 0xE2, 0xAD, 0x90 will be mapped by Tiktoken as follows: 0xE2 to [2928]
// and 0xAD, 0x90 to [99834]. Decoding 2928 and 99834 individually won't reconstruct the original UTF-16 string '⭐' U+2B50;
// decoding all IDs together is required to get the expected result.
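As a standalone illustration of the byte-level point made in this comment (using only System.Text.Encoding rather than the tokenizer API): the three UTF-8 bytes of '⭐' form a valid character only when decoded together, which is why the two token IDs must also be decoded together.

```csharp
using System;
using System.Text;

byte[] utf8 = Encoding.UTF8.GetBytes("⭐");              // { 0xE2, 0xAD, 0x90 }
Console.WriteLine(BitConverter.ToString(utf8));          // E2-AD-90

// Decoding the byte groups separately (mirroring the two token IDs) yields
// replacement characters, because each group is an incomplete UTF-8 sequence.
Console.WriteLine(Encoding.UTF8.GetString(utf8, 0, 1));  // "�"
Console.WriteLine(Encoding.UTF8.GetString(utf8, 1, 2));  // "��"

// Decoding all of the bytes together reconstructs the original character.
Console.WriteLine(Encoding.UTF8.GetString(utf8) == "\u2B50"); // True
```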
Member:

I could imagine someone wanting an API like IEnumerable<byte> Decode(IEnumerable<ids> ids, ...). Presumably if that was desired we could always add it in the future.

@tarekgh (Member, Author) commented Mar 5, 2024

@stephentoub I have addressed all the feedback; please let me know if you have anything more. Thanks!

@@ -233,7 +311,7 @@ private static (Dictionary<StringSpanOrdinalKey, int>?, Dictionary<int, string>?
/// <param name="text">The text to encode.</param>
/// <param name="isSpecialToken">Indicate if the token is a special token.</param>
Member:

Existing: what token does this refer to? The only other thing specified is text

@tarekgh (Member, Author), Mar 5, 2024:

The token is the string word; it can be any word like dog, or it can be a special token like <|endoftext|>. I can change "Indicate if the token is a special token." to "Indicate if the text represents a special token." or similar.

Member:

Is that the parameter text, and does that text need to represent a single token? Or does it refer to all tokens within text?

@tarekgh (Member, Author):

The text can represent multiple tokens, or it can represent one special token.

Member:

Ok, so how does isSpecialToken apply to `text` in the case where it is multiple tokens?

@tarekgh (Member, Author), Mar 6, 2024:

It is a flag telling whether the input text represents a special token, so the encoder can treat it differently. Here is an example of how this is used (a fuller illustrative sketch follows the snippet):

if (isSpecialToken && _specialTokensEncoder is not null)
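A minimal self-contained sketch of that pattern, for readers following along. The class below is invented for illustration; only the branch on _specialTokensEncoder mirrors the snippet above, and nothing here is the actual Tiktoken implementation.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch only; not the actual Microsoft.ML.Tokenizers implementation.
public sealed class SpecialTokenAwareEncoder
{
    private readonly Dictionary<string, int>? _specialTokensEncoder;
    private readonly Func<string, IReadOnlyList<int>> _encodeOrdinaryText;

    public SpecialTokenAwareEncoder(
        Dictionary<string, int>? specialTokensEncoder,
        Func<string, IReadOnlyList<int>> encodeOrdinaryText)
    {
        _specialTokensEncoder = specialTokensEncoder;
        _encodeOrdinaryText = encodeOrdinaryText;
    }

    public IReadOnlyList<int> EncodeToIds(string text, bool isSpecialToken = false)
    {
        if (isSpecialToken && _specialTokensEncoder is not null)
        {
            // The entire text is expected to be exactly one special token, e.g. "<|endoftext|>".
            return _specialTokensEncoder.TryGetValue(text, out int id)
                ? new[] { id }
                : Array.Empty<int>();
        }

        // Otherwise the text is treated as ordinary content and may yield many token IDs.
        return _encodeOrdinaryText(text);
    }
}
```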

Member:

Ok, so we should probably refer to text in the docs.

Here's my attempt:

Indicates if the <paramref name="text"/> in its entirety is a special token. This method will throw if <paramref name="isSpecialToken"/> is `true` and the specified <paramref name="text"/> is not a special token.

Similarly, it looks like Count and EncodeToIds just return default values, ignoring the text if it's not a special token, so they could get a slightly different version of this.


return _specialTokensEncoder.TryGetValue(text, out _) ? 1 : 0;

I raised this issue because this parameter confused me when adopting the tokenizer.

Member:

I'm curious as to the use case for setting isSpecialToken to true...?

Member:

Seems like the only scenario is when someone wants to pass in a single special token string and get the value for that. If we're making them specify a parameter to this API to do that they might as well just call a different API to do it and avoid the confusing parameter on this API.

I wonder what happens if someone specifies a special token string but forgets to set isSpecialToken?

Member:

@tarekgh and I talked about this offline - we'll update the docs for these methods in a separate PR and discuss this during API review.

Two review threads on src/Microsoft.ML.Tokenizers/Tokenizer.cs were marked outdated and resolved.
/// <exception cref="ArgumentNullException">The input text is null.</exception>
/// <exception cref="ArgumentOutOfRangeException">The maximum token count must be greater than 0.</exception>
/// <remarks>
/// If the tokenizer has a normalizer, the returned text will be the normalized text. Otherwise the returned text will be the input text.
Member:

Is it ever the case that someone might have pre-normalized text and want a method that doesn't do this normalization?

@tarekgh (Member, Author), Mar 5, 2024:

Yes, when creating the tokenizer you have the option to provide a normalizer object. If you don't, the tokenizer will not do any normalization before processing the text.
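A small sketch of what that option looks like in code. The constructor shapes shown (Bpe taking vocab/merges file paths; Tokenizer taking a model plus optional pre-tokenizer and normalizer) are assumptions based on this conversation and the surrounding diff context, so treat this as illustrative rather than exact API.

```csharp
using Microsoft.ML.Tokenizers;

// Assumed constructor shapes; file names are placeholders.
Bpe model = new Bpe("vocab.json", "merges.txt");

// No normalizer supplied: the text is encoded exactly as given, and the returned
// text described in the <remarks> above is simply the input text.
Tokenizer plain = new Tokenizer(model);

// With a normalizer (any Normalizer-derived instance), the tokenizer normalizes the
// text first and reports the normalized text back to the caller:
// Tokenizer normalizing = new Tokenizer(model, preTokenizer: null, normalizer: myNormalizer);
```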

Member:

Just imagining that someone might want to call the normalizer once, then tell this method that they've already done the normalization and avoid double-normalization / allocation. My understanding of the use case for this API is to do a minimal amount of work so I was just asking myself "is there anything else that I can imagine someone might not want this method to do?" and normalization was the only thing I could imagine.

@tarekgh (Member, Author):

A well-written normalizer will return the original text without a new allocation if there is no change from the original text. But I think the processing time will still be incurred.

@tarekgh (Member, Author):

Also, users can create a copy of the tokenizer without the normalizer, which can be used in such a scenario too.

Member:

Right, but I was more concerned with this API doing two things where someone might want it to do just one. It's the same feeling @stephentoub shared offline:

The normalization stuff sneaking in here still rubs me the wrong way

I don't feel like it needs to be solved now, but it may be a topic during API review.

@tarekgh (Member, Author):

Normalization is part of tokenization. It is optional for some scenarios but important for others. So the API is not really doing two things so much as communicating the encoding results, including the change in the text. Anyway, I am open to any better suggestion that helps us avoid confusion or get a cleaner API shape.

@@ -177,7 +177,7 @@ private Bpe(Stream vocabStream, Stream? mergesStream, string? unknownToken, stri
/// Encode a text string to a list of tokens.
/// </summary>
/// <param name="text">The text to encode.</param>
- /// <param name="isSpecialToken">Indicate if the token is a special token.</param>
+ /// <param name="isSpecialToken">Indicate if the text is a special token.</param>
/// <returns>The list of tokens generated from the text tokenization.</returns>
public override IReadOnlyList<Token> Encode(string text, bool isSpecialToken = false)
Member:

EncodeToIds takes a span but Encode takes a string. Is that ok? Are there places it'd be valuable for Encode to also take a span, or are there benefits to it being a string?

@tarekgh (Member, Author):

IIRC we chatted about that before. Encode needs to return tokens; if we pass a ReadOnlySpan here, we'll need to call ToString on it again. EncodeToIds doesn't need to return tokens, which is why we are okay using spans there. I am open to suggestions if we have better ideas.

Member:

Hmm. Is that because we expect this text to frequently be a single token?

@tarekgh (Member, Author):

Tokenizers that use a Regex to pre-tokenize produce smaller or partial words, and those can end up being a single token. So it depends on the pre-tokenization and the vocabulary of the tokenizer. You had another idea which we decided not to go with but could reconsider: passing both the string and the span. This can help in the future if we expose Span APIs on the Tokenizer class.
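To illustrate the first point in this thread (why a ReadOnlySpan<char> input buys little when string-valued tokens must be returned), here is a small standalone sketch; FakeEncode and its fixed-width splitting are invented purely for illustration and have nothing to do with real pre-tokenization.

```csharp
using System;
using System.Collections.Generic;

// Each returned token needs its own string, so a span input still forces a
// ToString() (an allocation) per token - the same cost a string input would incur.
static IReadOnlyList<string> FakeEncode(ReadOnlySpan<char> text)
{
    var tokens = new List<string>();
    for (int i = 0; i < text.Length; i += 4)
    {
        int len = Math.Min(4, text.Length - i);
        tokens.Add(text.Slice(i, len).ToString()); // materializes a string per token
    }
    return tokens;
}

Console.WriteLine(string.Join(" | ", FakeEncode("hello world")));
```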

@tarekgh merged commit bad8298 into dotnet:main on Mar 7, 2024
25 checks passed
@github-actions bot locked and limited conversation to collaborators on Apr 7, 2024
Development

Successfully merging this pull request may close these issues.

Add more Tokenizer functionality - Create without download sync/async ; Trim APIs
3 participants