Is your feature request related to a problem? Please describe.
The current candle-binding only supports BERT-style embedding models, which are usually limited to a sequence length of 512 tokens.
By contrast, EmbeddingGemma supports a 2K-token context, and the Qwen3 0.6B embedding model supports up to 32K tokens.
Describe the solution you'd like
We should add support for more embedding models (such as EmbeddingGemma and Qwen3 embedding models) in candle-binding, so that longer inputs can be embedded without truncation.
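As a rough illustration of what this could look like on the binding side, here is a minimal Rust sketch of dispatching over multiple embedding-model families with different maximum sequence lengths. The type `EmbeddingModelType` and the method `max_seq_len` are hypothetical names for this sketch, not part of the existing candle-binding API, and the loading of actual candle model weights is left out.

```rust
/// Hypothetical sketch: `EmbeddingModelType` and `max_seq_len` are
/// illustrative names, not existing candle-binding APIs.

/// Embedding model families the binding could support.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum EmbeddingModelType {
    /// Existing BERT-style encoders, capped at 512 tokens.
    Bert,
    /// EmbeddingGemma, with a 2K-token context window.
    EmbeddingGemma,
    /// Qwen3 0.6B embedding model, with up to a 32K-token context window.
    Qwen3Embedding0_6B,
}

impl EmbeddingModelType {
    /// Maximum input sequence length (in tokens) for each family.
    pub fn max_seq_len(&self) -> usize {
        match self {
            EmbeddingModelType::Bert => 512,
            EmbeddingModelType::EmbeddingGemma => 2_048,
            EmbeddingModelType::Qwen3Embedding0_6B => 32_768,
        }
    }
}

fn main() {
    // The binding could pick a truncation length based on the loaded
    // model family instead of hard-coding the BERT 512-token limit.
    for model in [
        EmbeddingModelType::Bert,
        EmbeddingModelType::EmbeddingGemma,
        EmbeddingModelType::Qwen3Embedding0_6B,
    ] {
        println!("{model:?}: max_seq_len = {}", model.max_seq_len());
    }
}
```

The point of the sketch is only that per-model sequence limits would need to flow through the binding's tokenization/truncation path once non-BERT embedding models are added.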