How about something like this?
`open_clip/src/open_clip/factory.py` (line 73 at commit 3e3e8a0):
```python
def get_tokenizer(model_name):
    config = get_model_config(model_name)
    context_length = config['text_cfg'].get("context_length", 77)
    if 'hf_tokenizer_name' in config['text_cfg']:
        tokenizer = HFTokenizer(config['text_cfg']['hf_tokenizer_name'], context_length)
    else:
        tokenizer = lambda texts: tokenize(texts, context_length)
    return tokenizer
```
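As a self-contained illustration of the dispatch logic above, here is a minimal sketch using a stub model config and placeholder tokenizers instead of open_clip's real `get_model_config`, `HFTokenizer`, and `tokenize` (all the stub names and return values below are illustrative, not the library's actual API):

```python
# Stub configs mimicking open_clip model JSON: 'hf_tokenizer_name' selects the
# HuggingFace path, otherwise the default SimpleTokenizer path is used.
configs = {
    'ViT-B-32': {'text_cfg': {}},  # no context_length -> falls back to 77
    'roberta-ViT-B-32': {'text_cfg': {'hf_tokenizer_name': 'roberta-base',
                                      'context_length': 512}},
}

def get_tokenizer(model_name, model_configs):
    text_cfg = model_configs[model_name]['text_cfg']
    # CLIP's default context length of 77 when the config does not specify one.
    context_length = text_cfg.get('context_length', 77)
    if 'hf_tokenizer_name' in text_cfg:
        # Stand-in for HFTokenizer(...): just tags the path taken.
        return lambda texts: ('hf', text_cfg['hf_tokenizer_name'], context_length)
    # Stand-in for the default tokenize(texts, context_length).
    return lambda texts: ('simple', context_length)

print(get_tokenizer('ViT-B-32', configs)('a photo of a cat'))
# -> ('simple', 77)
print(get_tokenizer('roberta-ViT-B-32', configs)('a photo of a cat'))
# -> ('hf', 'roberta-base', 512)
```

The point of the factory is that callers no longer need to know which tokenizer a model expects; the per-model config drives the choice.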
Sure, why not.
Btw, I'm not sure this is urgent, so I've been putting it off. If people need it soon I can push it up the priority queue.
We noticed that some models that use relative positional embeddings (e.g. mT5) do not have a limited context length.
That's an edge case to take into account here.
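One way to accommodate that edge case is to let the config explicitly mark the context length as unbounded. This is only a sketch: the "`context_length: None` means no truncation" convention is an assumption for illustration, not current open_clip behavior.

```python
# Resolve a model's context length from its text_cfg dict (keys mirror the
# snippet above). An explicit None means the model (e.g. mT5-style relative
# positional embeddings) has no hard context limit, so callers should skip
# truncation; a missing key falls back to CLIP's default of 77.

def resolve_context_length(text_cfg):
    if 'context_length' in text_cfg and text_cfg['context_length'] is None:
        return None  # unbounded: no truncation
    return text_cfg.get('context_length', 77)

print(resolve_context_length({}))                          # 77
print(resolve_context_length({'context_length': 512}))     # 512
print(resolve_context_length({'context_length': None}))    # None
```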