Converting to embeddings #3
If I understand correctly, you're looking for something like:

    let pipe = await pipeline('embeddings', 'bert-base-multilingual-cased');
    let features = await pipe('this is text');
    console.log(features); // of shape [1, 768]

Full example:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <script src="https://cdn.jsdelivr.net/npm/@xenova/transformers/dist/transformers.min.js"></script>
    </head>
    <body>
        <script>
            document.addEventListener('DOMContentLoaded', async () => {
                let pipe = await pipeline('embeddings', 'bert-base-multilingual-cased');
                let features = await pipe('this is text');
                console.log(features);
            });
        </script>
    </body>
    </html>

If this isn't what you're looking for, could you provide the corresponding Python code?
Wow. Thanks for your prompt response @xenova! The Python code can be found in Hugging Face's example:

    from transformers import BertTokenizer, TFBertModel

    tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
    model = TFBertModel.from_pretrained("bert-base-multilingual-cased")
    text = "Replace me by any text you'd like."
    encoded_input = tokenizer(text, return_tensors='tf')
    output = model(encoded_input)
Okay sure! Then the code is very similar:

    // use online model
    let model_path = 'https://huggingface.co/Xenova/transformers.js/resolve/main/quantized/bert-base-multilingual-cased/default'
    // or local model...
    // let model_path = './models/onnx/quantized/bert-base-multilingual-cased/default'
    let tokenizer = await AutoTokenizer.from_pretrained(model_path)
    let model = await AutoModel.from_pretrained(model_path)
    let text = "Replace me by any text you'd like."
    let encoded_input = await tokenizer(text)
    let output = await model(encoded_input)
    console.log(output)
    // output.last_hidden_state is probably what you want, which has dimensions: [1, 13, 768]

(Full output omitted.)

For memory-efficiency purposes, ONNX returns multi-dimensional arrays as a single, flat array together with the dimensions, so you may need to reshape the output yourself. That behaviour might change in the future though.
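To make the flat layout concrete, here is a minimal sketch of how you might index into such an output. It continues from the snippet above and assumes the returned tensor exposes a flat `data` array (e.g. a Float32Array) together with a `dims` array like `[1, 13, 768]`; the exact property names are an assumption and may differ between versions.

```js
// Hedged sketch: assumes `output.last_hidden_state` looks roughly like
// { data: Float32Array, dims: [batch, seqLen, hidden] } (property names may differ).
const { data, dims } = output.last_hidden_state;
const [batch, seqLen, hidden] = dims; // e.g. [1, 13, 768]

// Compute the flat offset of token `t` in batch item `b` and slice out its vector.
function tokenEmbedding(b, t) {
    const offset = (b * seqLen + t) * hidden;
    return data.slice(offset, offset + hidden); // Float32Array of length `hidden`
}

console.log(tokenEmbedding(0, 0).length); // 768
```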
Ok, let me give this a try and report back. My goal is to index text embeddings right within a Node.js app and send the 768-dimensional embedding to Elasticsearch. I'm inclined to use Python (create a FastAPI app that solely embeds incoming text using this tokenizer), but it'd be nice if I could keep my app primarily written in JS. If I use Python, then locally I'd use FastAPI, but in production I'd use an AWS Lambda function (with API Gateway) to serve this embedding task. Thoughts? Does it make sense to use Python in production, or can I still have a production-grade strategy by keeping it in JS, i.e. get the embeddings per your example and store the Float32Array as-is in Elasticsearch? Thanks!
Cool idea! However, if your app is intended to run entirely on a server, it might be best to use the Python library. The main goal of this project is to bring that functionality to the browser. That said, many users have found quantized ONNX models to have performance competitive with their PyTorch counterparts... but you would need to test to see which is best for your use case. And of course, the added benefit of running entirely in JS may be a good enough reason to use this library. Keep me updated though! I'd love to see how the library is used in real applications! 😄
(feel free to reopen if needed!)
Thanks @xenova!
Hello @xenova. Thanks again for your help! When I run your example in Node.js, I get the following error (alongside a lot of compiled JS):

    TypeError [ERR_WORKER_PATH]: The worker script or module filename must be an absolute path or a relative path starting with './' or '../'. Received "blob:nodedata:262343bf-f735-4bbc-b506-34b7fad27351"
        at new NodeError (node:internal/errors:393:5)
        at new Worker (node:internal/worker:165:15)
        at Object.yc (C:\Projects\iTestify\node_modules\onnxruntime-web\dist\ort-web.node.js:6:7890)
        at Object.Cc (C:\Projects\iTestify\node_modules\onnxruntime-web\dist\ort-web.node.js:6:7948)
        at lt (C:\Projects\iTestify\node_modules\onnxruntime-web\dist\ort-web.node.js:6:5690)
        at Et (C:\Projects\iTestify\node_modules\onnxruntime-web\dist\ort-web.node.js:6:9720)
        at wasm://wasm/025ff5d6:wasm-function[10917]:0x7f0507
        at wasm://wasm/025ff5d6:wasm-function[1580]:0xf3ecb
        at wasm://wasm/025ff5d6:wasm-function[2786]:0x1d31ee
        at wasm://wasm/025ff5d6:wasm-function[5903]:0x49713d {
      code: 'ERR_WORKER_PATH'
    }

Even so, it takes a little too long to output that error (I suppose it'll take the same time to output tokens).

    const tokenize = async () => {
        // use online model
        let model_path = 'https://huggingface.co/Xenova/transformers.js/resolve/main/quantized/bert-base-multilingual-cased/default'
        // or local model...
        // let model_path = './models/onnx/quantized/bert-base-multilingual-cased/default'
        let tokenizer = await AutoTokenizer.from_pretrained(model_path)
        let model = await AutoModel.from_pretrained(model_path)
        let text = "Replace me by any text you'd like."
        let encoded_input = await tokenizer(text)
        let output = await model(encoded_input)
        return output.last_hidden_state
    }
Hi again. Yes, this is a bug in ONNX Runtime. See here for more information: #4

TL;DR:

    // 1. Fix "ReferenceError: self is not defined" bug when running directly with node
    //    https://github.com/microsoft/onnxruntime/issues/13072
    global.self = global;

    const { pipeline, env } = require('@xenova/transformers')

    // 2. Disable spawning worker threads for testing.
    //    This is done by setting numThreads to 1
    env.onnx.wasm.numThreads = 1

    // 3. Continue as per usual:
    // ...
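Putting the workaround together with the earlier embedding snippet, a complete Node.js script might look roughly like this. This is a sketch only: it assumes the same model path and the v1-era @xenova/transformers exports (AutoTokenizer, AutoModel, env) used elsewhere in this thread.

```js
// Sketch: combines the Node workaround above with the earlier embedding example.
global.self = global; // work around "ReferenceError: self is not defined" in onnxruntime-web under Node

const { AutoTokenizer, AutoModel, env } = require('@xenova/transformers');
env.onnx.wasm.numThreads = 1; // avoid spawning worker threads (ERR_WORKER_PATH workaround)

const tokenize = async () => {
    let model_path = 'https://huggingface.co/Xenova/transformers.js/resolve/main/quantized/bert-base-multilingual-cased/default';
    let tokenizer = await AutoTokenizer.from_pretrained(model_path);
    let model = await AutoModel.from_pretrained(model_path);

    let encoded_input = await tokenizer("Replace me by any text you'd like.");
    let output = await model(encoded_input);
    return output.last_hidden_state; // flat data + dims, as described above
};

tokenize().then(embeddings => console.log(embeddings));
```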
Thanks @xenova! It works as you stated, thank you! Any tips for optimizing speed? Shall I 'download the model and place it in the ./models/onnx/quantized folder (or another location, provided you set env.localURL)', as in your comment here: #4? Thanks!
I made a utility function to help "reshape" the outputs (transformers.js/src/pipelines.js, lines 412 to 438 at cef0f51), so you can use that 👍 I haven't exposed the method from the module yet - I could add that in the next update, perhaps. In the meantime, you can just copy-paste the code :)
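For reference, here is a minimal sketch of what such a reshape helper might look like. It is not the exact code from pipelines.js; it simply rebuilds nested JS arrays from a flat data array plus a dims array, matching the flat-output layout described earlier in the thread.

```js
// Sketch of a reshape helper (not the exact pipelines.js implementation):
// turns a flat array plus a dims array (e.g. [1, 13, 768]) into nested arrays.
function reshape(data, dims) {
    if (dims.length === 1) {
        return Array.from(data);
    }
    const [first, ...rest] = dims;
    const stride = rest.reduce((a, b) => a * b, 1);
    const result = [];
    for (let i = 0; i < first; ++i) {
        result.push(reshape(data.slice(i * stride, (i + 1) * stride), rest));
    }
    return result;
}

// Usage, assuming the output shape described above:
// const nested = reshape(output.last_hidden_state.data, output.last_hidden_state.dims);
// nested[0][0].length === 768
```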
The biggest bottleneck will undoubtedly be the "redownloading" of the model each time you request. I haven't implemented local caching yet... but it should be as simple as downloading the model to some cache directory.
Definitely! If you're running locally, there's no good reason to have to download the model each time ;)
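Along those lines, a rough sketch of the local-model setup referenced above. The folder layout comes from the commented-out line in the earlier snippets, and env.localURL is only mentioned in passing in this thread, so its exact semantics here are an assumption.

```js
// Sketch: load the model from a local copy instead of fetching it from
// huggingface.co on every run. Folder layout and env.localURL are taken from
// earlier comments in this thread and may have changed in later versions.
global.self = global; // same Node workaround as above
const { AutoTokenizer, AutoModel, env } = require('@xenova/transformers');
env.onnx.wasm.numThreads = 1;

const loadLocal = async () => {
    // Local copy of the model files, as in the commented-out line above.
    const model_path = './models/onnx/quantized/bert-base-multilingual-cased/default';
    const tokenizer = await AutoTokenizer.from_pretrained(model_path);
    const model = await AutoModel.from_pretrained(model_path);
    return { tokenizer, model };
};
```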
Hello, is there new documentation on this? I am seeing that task = "embeddings" is no longer a possible option. The feature-extraction task doesn't seem to produce what I am expecting either, generating resulting arrays that are 2-5x larger than they should be.
Now you can use the 'feature-extraction' pipeline to generate embeddings!
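For anyone landing here later: in newer versions of Transformers.js, the feature-extraction pipeline returns per-token embeddings by default (which is likely why the arrays look several times larger than expected), and you can request a pooled, sentence-level vector instead. A short sketch; the model id 'Xenova/bert-base-multilingual-cased' is used here as an example and is an assumption, not something confirmed in this thread.

```js
// Sketch for the newer @xenova/transformers (v2+) API:
// mean pooling collapses per-token vectors into one sentence embedding.
import { pipeline } from '@xenova/transformers';

const extractor = await pipeline('feature-extraction', 'Xenova/bert-base-multilingual-cased');
const output = await extractor('this is text', { pooling: 'mean', normalize: true });

console.log(output.dims); // e.g. [1, 768]
```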
Hello team,
How do I simply output tokens / embeddings from a model like "bert-base-multilingual-cased" using this library?
Thanks.