TypeError: Failed to fetch dynamically imported module #748
Comments
I get the same issue, even when trying to pull in from a local checkout of onnx:

```js
import { env, AutoModelForCausalLM, AutoTokenizer } from '@xenova/transformers';

env.backends.onnx.wasm.wasmPaths = '/onnxruntime-web/';
env.allowRemoteModels = false;
env.allowLocalModels = true;

const model_id = '../model';
const tokenizer = await AutoTokenizer.from_pretrained(model_id, {
    legacy: true
});
```

I have copied the contents of …
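For reference, making the local runtime resolvable usually comes down to serving onnxruntime-web's dist files at the path that `wasmPaths` points to. A minimal sketch, assuming a Vite-style `public/` directory (the script name and folder layout are my assumptions, not part of the example app):

```js
// copy-ort.mjs — hypothetical helper script: copy onnxruntime-web's runtime
// files into public/ so they are served at '/onnxruntime-web/' (the value
// assigned to env.backends.onnx.wasm.wasmPaths above).
import { cpSync, mkdirSync } from 'node:fs';

mkdirSync('public/onnxruntime-web', { recursive: true });
cpSync('node_modules/onnxruntime-web/dist', 'public/onnxruntime-web', {
  recursive: true,
});
```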
This is because the demo uses an unreleased version of onnxruntime-web (v1.18.0), as I have mentioned a few times when linking to the source code. Once it is released, I will update the source code so that it works correctly. Thanks for understanding!
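For anyone who wants to try it early anyway: a dev build could in principle be pinned via pnpm's `overrides` field. This is a sketch only — the version string below is a placeholder, and you would need to check npm for an actual published dev build of onnxruntime-web:

```json
{
  "pnpm": {
    "overrides": {
      "onnxruntime-web": "1.18.0-dev.<check-npm-for-a-real-version>"
    }
  }
}
```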
Thanks for the feedback. Looking forward to the release.
System Info
@xenova/transformers 3.0.0-alpha.0
Chrome: Version 124.0.6367.93 (Official Build) (arm64)
OS: macOS 14.4.1 (23E224)
Environment/Platform
Description
I ran `pnpm run dev` in the example `webgpt-chat`. I can download the model at http://localhost:5173, but it's not ready for chat due to the error reported in the console (`TypeError: Failed to fetch dynamically imported module`). May I ask if any setting is required to make it work?
Btw, I can chat with the model on https://huggingface.co/spaces/Xenova/experimental-phi3-webgpu
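One way to narrow down a `Failed to fetch dynamically imported module` error is to check whether the module file is actually reachable at the configured path. A small sketch using only the standard `fetch` API — the filename is an assumption about onnxruntime-web's dist layout and may differ between versions:

```js
// Probe the URL the browser would try to dynamically import (filename assumed).
const url = '/onnxruntime-web/ort-wasm-simd-threaded.jsep.mjs';
const res = await fetch(url, { method: 'HEAD' });
console.log(res.ok ? `${url} is reachable` : `${url} missing (HTTP ${res.status})`);
```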
Reproduction
Run `pnpm run dev` in the `webgpt-chat` example, open http://localhost:5173, and click the "Load Model" button.