Llama

Llama 3.2 running in a Web browser, built with JavaScript and WebAssembly. The model (708 MB) may take several minutes to download, and your browser must support WebGPU, which the WebLLM engine requires. The model has 1.23 billion parameters and supports English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
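
Under the hood the page talks to the WebLLM engine. The snippet below is a minimal sketch of how such an engine is typically created and queried with @mlc-ai/web-llm; the exact model id and options used by Llama.htm are assumptions, not taken from this repository:

  import { CreateMLCEngine } from "@mlc-ai/web-llm";

  // Download and initialize the model (assumed prebuilt id; Llama.htm may use a different one).
  const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f32_1-MLC", {
    initProgressCallback: (report) => console.log(report.text),
  });

  // Ask for a completion through the OpenAI-style chat API exposed by WebLLM.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(reply.choices[0].message.content);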

Web:

https://lrusso.github.io/Llama/Llama.htm

How to run Llama 3.2 locally:
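
The steps below are a minimal sketch, assuming Node.js is installed and that server.js serves the repository files over a local port (the example address is an assumption):

  1. Open the Terminal.
  2. Go to the project folder.
  3. Start the server: node server.js
  4. Open the page in a WebGPU-capable browser at the local address the server listens on, for example http://localhost:8080/Llama.htm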

How to run the server in the background:

  1. Open the Terminal.
  2. Install Forever: npm install -g forever
  3. Go to the project folder.
  4. Start the server: forever start -a -l /dev/null -c node server.js > /dev/null 2>&1
  5. Stop the server: forever stop -a -l /dev/null -c node server.js > /dev/null 2>&1
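
For reference, here is a minimal sketch of a static file server in the spirit of server.js; the actual server.js in this repository may differ in port, MIME handling, and behavior, so everything below is an assumption:

  // Minimal static file server sketch (assumption; not the repository's actual server.js).
  const http = require("http");
  const fs = require("fs");
  const path = require("path");

  const MIME = {
    ".htm": "text/html",
    ".html": "text/html",
    ".js": "text/javascript",
    ".wasm": "application/wasm",
    ".json": "application/json",
  };

  http
    .createServer((req, res) => {
      // Map the request path to a file in the project folder, defaulting to Llama.htm.
      const urlPath = req.url === "/" ? "/Llama.htm" : req.url.split("?")[0];
      const filePath = path.join(__dirname, decodeURIComponent(urlPath));

      fs.readFile(filePath, (err, data) => {
        if (err) {
          res.writeHead(404);
          res.end("Not found");
          return;
        }
        const type = MIME[path.extname(filePath)] || "application/octet-stream";
        res.writeHead(200, { "Content-Type": type });
        res.end(data);
      });
    })
    .listen(8080, () => console.log("Listening on http://localhost:8080"));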

Based on the work of:

https://github.com/mlc-ai/web-llm
