Feature request: JS CDN file for web LLM #258
Comments
You can try this one:
Check this out @AshD: https://www.jsdelivr.com/package/npm/@mlc-ai/web-llm
@gaiborjosue can you give a minimal JS example using that? I'm not sure what to import or call. That file starts with `import require$$3 from "perf_hooks";`, but I don't have any 'perf_hooks' module. To clarify: people get code from a CDN so that they don't need a build step. I should be able to import something from a CDN URL and then call the functions it exports, in plain JavaScript. For something as fundamental as an LLM, it makes perfect sense to bundle it as a library you just import and call.
Edit: Now I can run the example on the site.
I've run into the same issue; I found it surprisingly difficult to integrate WebLLM into a project. I wrote a cargo-cult-ish modification of the source code to get around this. Your solution would be cleaner.
I can't find that function in the file. Where should I make that modification?
Oh wow, thanks so much @AwokeKnowing! I was running into the same issue this morning. Thanks for posting the solution :)
Hi @flatsiedatsie, the file @AwokeKnowing is talking about is: https://cdn.jsdelivr.net/npm/@mlc-ai/web-llm@0.2.35/lib/index.js
Edit: We have hosted the edited CDN file at https://mpsych.github.io/cdn/web-llm/web-llm-cdn.js so you can use it directly, without needing to manually download and edit the code :)
Replace `import require$$3 from 'perf_hooks';` with `const require$$3 = "MLC_DUMMY_REQUIRE_VAR";` in `index.js`. We use a dummy string because we should never reach [this branch in tvmjs](https://github.com/apache/tvm/blob/a5862a5c696a3237f644f31bc312aae303213f3f/web/src/compact.ts#L29), which is only for Node.js. This should address #258 and #127.
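That string replacement can be sketched as a small helper. This is only a sketch: `patchWebLlmSource` is a hypothetical name, and in practice you would read and rewrite the downloaded `lib/index.js` rather than an in-memory string.

```javascript
// Sketch: swap the Node-only perf_hooks import for a dummy string, since
// that code path is never reached in the browser. patchWebLlmSource is a
// hypothetical helper; in practice, apply it to the downloaded lib/index.js.
const nodeOnlyImport = "import require$$3 from 'perf_hooks';";
const dummyVar = 'const require$$3 = "MLC_DUMMY_REQUIRE_VAR";';

function patchWebLlmSource(src) {
  // A replacer function keeps the "$$" in the variable name literal
  // (a plain replacement string would treat "$$" as an escape for "$").
  return src.replace(nodeOnlyImport, () => dummyVar);
}

const example = nodeOnlyImport + "\n// ...rest of the bundle...";
console.log(patchWebLlmSource(example));
```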
Thanks for all your work and contributions! A similar solution is included in npm 0.2.36, and the following code snippet should work in https://jsfiddle.net/ with no setup:

// See https://www.npmjs.com/package/@mlc-ai/web-llm for documentation.
import * as webllm from 'https://esm.run/@mlc-ai/web-llm';

async function main() {
  const initProgressCallback = (report) => {
    console.log(report.text);
  };
  const selectedModel = "TinyLlama-1.1B-Chat-v0.4-q4f16_1-1k";
  const engine = await webllm.CreateEngine(selectedModel, {
    initProgressCallback: initProgressCallback
  });
  const reply0 = await engine.chat.completions.create({
    messages: [{
      "role": "user",
      "content": "Tell me about Pittsburgh."
    }]
  });
  console.log(reply0);
  console.log(await engine.runtimeStatsText());
}

main();
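For reference, the snippet above can also be dropped into a plain HTML page. This is a minimal sketch, not an official example; note the `type="module"` attribute, which the bare `import` statement requires, and the assumption that the reply follows the OpenAI-style `choices[0].message.content` shape:

```html
<!-- Minimal page sketch: type="module" enables import syntax and top-level await. -->
<!DOCTYPE html>
<html>
  <body>
    <script type="module">
      import * as webllm from 'https://esm.run/@mlc-ai/web-llm';

      const engine = await webllm.CreateEngine(
        "TinyLlama-1.1B-Chat-v0.4-q4f16_1-1k",
        { initProgressCallback: (report) => console.log(report.text) }
      );
      const reply = await engine.chat.completions.create({
        messages: [{ role: "user", content: "Tell me about Pittsburgh." }]
      });
      // Assumes an OpenAI-compatible response shape, as in the snippet above.
      console.log(reply.choices[0].message.content);
    </script>
  </body>
</html>
```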
I just re-implemented WebLLM in my project for the third time :-) But this time without any hacks, and with an easier upgrade path. Everything works as advertised. Thank you!
For projects that don't use NPM, is it possible to create a JavaScript version that is deployed on a CDN?
This would make it easier to integrate with ASP.NET projects that don't use NPM.
Thanks,
Ash