Leptos with Llama

An experiment on the feasibility of building a full-stack application purely in Rust.

The front-end is built with Leptos, and the server-side capabilities are powered by Actix Web.

The backend uses the rustformers llm crate to run a Llama large language model (LLM).
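
To give a sense of what driving the model looks like, here is a minimal sketch based on the llm crate's documented 0.1.x example. It is not this repository's actual code: the exact function signatures change between llm releases, and the prompt string is only a placeholder.

```rust
use std::io::Write;
use llm::Model;

fn main() {
    // Path to the GGML model file, as configured via MODEL_PATH in .env.
    let model_path = std::env::var("MODEL_PATH").expect("MODEL_PATH must be set");

    // Load a Llama model from disk (llm 0.1.x-style API).
    let llama = llm::load::<llm::models::Llama>(
        std::path::Path::new(&model_path),
        Default::default(),                 // llm::ModelParameters
        llm::load_progress_callback_stdout, // report loading progress
    )
    .unwrap_or_else(|err| panic!("failed to load model: {err}"));

    // Run a single inference session and stream generated tokens to stdout.
    let mut session = llama.start_session(Default::default());
    let result = session.infer::<std::convert::Infallible>(
        &llama,
        &mut rand::thread_rng(),
        &llm::InferenceRequest {
            prompt: "Tell me about the Rust programming language.", // placeholder prompt
            ..Default::default()
        },
        &mut Default::default(), // llm::OutputRequest
        |token| {
            print!("{token}");
            std::io::stdout().flush().unwrap();
            Ok(())
        },
    );

    if let Err(err) = result {
        eprintln!("inference failed: {err}");
    }
}
```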

Running your project

1. Download a Llama model in GGML format. This project uses the Wizard-Vicuna-7B-Uncensored-GGML model.
2. Create a .env file in the project root and set the path to the model as MODEL_PATH (see the example below).
3. Run cargo leptos watch.
4. Access the project at http://localhost:8000.
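
A .env file of roughly this shape is all that is needed; the model path below is only a placeholder for wherever the downloaded model file lives:

```
MODEL_PATH=/path/to/Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin
```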
