An experiment on the feasibility of building a full-stack application purely in Rust.
The front-end is built with Leptos, and the server-side capabilities are powered by Actix Web.
The backend uses the rustformers `llm` crate to run a Llama large language model (LLM).
Download a Llama model in GGML format. This project uses the Wizard-Vicuna-7B-Uncensored-GGML model.
Create a `.env` file and set the path of the model as `MODEL_PATH`.
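A minimal `.env` might look like the following; the path is a placeholder, so point it at wherever you saved the downloaded model file:

```env
MODEL_PATH=/path/to/Wizard-Vicuna-7B-Uncensored.ggml.bin
```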
Run `cargo leptos watch`.
Then open http://localhost:8000 in a browser.
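At startup, the server needs to resolve `MODEL_PATH` from the environment (populated from the `.env` file). A minimal, standard-library-only sketch of that lookup; the helper name and the fallback path are illustrative, not part of this project:

```rust
use std::env;

/// Resolve the model path, preferring an explicit environment value and
/// falling back to a hypothetical default location when none is set.
fn resolve_model_path(env_value: Option<String>) -> String {
    env_value.unwrap_or_else(|| "models/wizard-vicuna-7b-uncensored.ggml.bin".to_string())
}

fn main() {
    // In the real app, a dotenv loader would have populated MODEL_PATH
    // from `.env` before this lookup runs.
    let path = resolve_model_path(env::var("MODEL_PATH").ok());
    println!("loading model from {path}");
}
```

Keeping the fallback logic in a plain function, rather than reading the environment inline, makes the path-resolution behavior easy to unit test.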