- Supports synchronous usage. No dependency on Tokio.
- Uses @pykeio/ort for performant ONNX inference.
- Uses @huggingface/tokenizers for fast encodings.
- Supports batch embeddings generation with parallelism using @rayon-rs/rayon.
The default model is Flag Embedding, which is top of the MTEB leaderboard.
- Python: fastembed
- Go: fastembed-go
- JavaScript: fastembed-js
- BAAI/bge-base-en-v1.5
- BAAI/bge-small-en-v1.5 - Default text embedding model
- BAAI/bge-large-en-v1.5
- BAAI/bge-small-zh-v1.5
- sentence-transformers/all-MiniLM-L6-v2
- sentence-transformers/all-MiniLM-L12-v2
- sentence-transformers/paraphrase-MiniLM-L12-v2
- sentence-transformers/paraphrase-multilingual-mpnet-base-v2
- nomic-ai/nomic-embed-text-v1
- nomic-ai/nomic-embed-text-v1.5
- intfloat/multilingual-e5-small
- intfloat/multilingual-e5-base
- intfloat/multilingual-e5-large
- mixedbread-ai/mxbai-embed-large-v1
- Alibaba-NLP/gte-base-en-v1.5
- Alibaba-NLP/gte-large-en-v1.5
- prithivida/Splade_PP_en_v1 - Default sparse embedding model
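To use one of the models above, set model_name on InitOptions (as in the usage example below). The crate can also enumerate its supported models at runtime; a minimal sketch, assuming TextEmbedding::list_supported_models() is available in your version:
use fastembed::TextEmbedding;
// Print the text embedding models bundled with this release of the crate.
// The exact fields on the returned model info may vary between versions.
for model_info in TextEmbedding::list_supported_models() {
    println!("{:?}", model_info);
}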
Run the following command in your project directory:
cargo add fastembed
Or add the following line to your Cargo.toml:
[dependencies]
fastembed = "3"
use fastembed::{TextEmbedding, InitOptions, EmbeddingModel};
// With default InitOptions
let model = TextEmbedding::try_new(Default::default())?;
// With custom InitOptions
let model = TextEmbedding::try_new(InitOptions {
model_name: EmbeddingModel::AllMiniLML6V2,
show_download_progress: true,
..Default::default()
})?;
let documents = vec![
"passage: Hello, World!",
"query: Hello, World!",
"passage: This is an example passage.",
// The prefix can be omitted, but including it is recommended
"fastembed-rs is licensed under Apache 2.0"
];
// Generate embeddings with the default batch size, 256
let embeddings = model.embed(documents, None)?;
println!("Embeddings length: {}", embeddings.len()); // -> Embeddings length: 4
println!("Embedding dimension: {}", embeddings[0].len()); // -> Embedding dimension: 384
use fastembed::{TextRerank, RerankInitOptions, RerankerModel};
let model = TextRerank::try_new(RerankInitOptions {
model_name: RerankerModel::BGERerankerBase,
show_download_progress: true,
..Default::default()
})
.unwrap();
let documents = vec![
"hi",
"The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear, is a bear species endemic to China.",
"panda is animal",
"i dont know",
"kind of mammal",
];
// Rerank with the default batch size
let results = model.rerank("what is panda?", documents, true, None).unwrap();
println!("Rerank result: {:?}", results);
Alternatively, raw .onnx files can be loaded through the UserDefinedEmbeddingModel struct (for "bring your own" text embedding models) using TextEmbedding::try_new_from_user_defined(...). Similarly, "bring your own" reranking models can be loaded using the UserDefinedRerankingModel struct and TextRerank::try_new_from_user_defined(...). For example:
macro_rules! local_model {
($folder:literal) => {
UserDefinedEmbeddingModel {
onnx_file: include_bytes!(concat!($folder, "/model.onnx")).to_vec(),
tokenizer_files: TokenizerFiles {
tokenizer_file: include_bytes!(concat!($folder, "/tokenizer.json")).to_vec(),
config_file: include_bytes!(concat!($folder, "/config.json")).to_vec(),
special_tokens_map_file: include_bytes!(concat!($folder, "/special_tokens_map.json")).to_vec(),
tokenizer_config_file: include_bytes!(concat!($folder, "/tokenizer_config.json")).to_vec(),
},
}
};
}
let user_def_model_data = local_model!("path/to/model");
let user_def_model = TextEmbedding::try_new_from_user_defined(user_def_model_data, Default::default()).unwrap();
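A local reranking model can be wired up the same way; a hedged sketch, assuming UserDefinedRerankingModel carries the same onnx_file and tokenizer_files fields as its embedding counterpart:
// Assumption: UserDefinedRerankingModel mirrors UserDefinedEmbeddingModel's fields;
// check the crate docs for the exact definition.
let reranker_model_data = UserDefinedRerankingModel {
    onnx_file: include_bytes!("path/to/reranker/model.onnx").to_vec(),
    tokenizer_files: TokenizerFiles {
        tokenizer_file: include_bytes!("path/to/reranker/tokenizer.json").to_vec(),
        config_file: include_bytes!("path/to/reranker/config.json").to_vec(),
        special_tokens_map_file: include_bytes!("path/to/reranker/special_tokens_map.json").to_vec(),
        tokenizer_config_file: include_bytes!("path/to/reranker/tokenizer_config.json").to_vec(),
    },
};
let reranker = TextRerank::try_new_from_user_defined(reranker_model_data, Default::default()).unwrap();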
It's important we justify the "fast" in FastEmbed. FastEmbed is fast, light, and accurate because:
- Quantized model weights
- ONNX Runtime, which allows for inference on CPU, GPU, and other dedicated runtimes
- No hidden dependencies via Hugging Face Transformers
- Accuracy better than OpenAI Ada-002
- Top of the embedding leaderboards, e.g. MTEB
Apache 2.0 © 2024