Blue Moon Release #138
pwgit-create
announced in Announcements
Release Notes 1.7.2
Three additional AI/Ollama configurations have been added, and they can be edited within the same file:
Install AppWish Ollama
Start Appwish Ollama
Do you have less than 8 GB of RAM?
If you plan on running the Linux AMD x64 version with less than 8 GB of RAM, you may experience slow response times from the AI models. To achieve faster speeds, consider running a lighter model (llama3 8b). The Raspberry Pi version already defaults to that model.
How can I change the model to Llama 3 for the Linux AMD x64 version?
Run the install script that installed Ollama, then type
ollama pull llama3:latest
in your terminal. Next, edit the file at the path:
src/main/resources/ollama_model.props
From
MODEL_NAME=codestral:22b
to
MODEL_NAME=llama3:latest
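The steps above can be sketched as a couple of shell commands. This is a minimal sketch that edits a stand-in copy of the props file so it can run anywhere; in the actual repo you would set `props=src/main/resources/ollama_model.props` and run `ollama pull llama3:latest` first (which requires the Ollama CLI to be installed).

```shell
# In the real project, first pull the lighter model:
#   ollama pull llama3:latest
# and point "props" at src/main/resources/ollama_model.props.
props=$(mktemp)
echo 'MODEL_NAME=codestral:22b' > "$props"   # stand-in for ollama_model.props

# Swap the configured model name for the lighter llama3 model
sed -i 's/^MODEL_NAME=.*/MODEL_NAME=llama3:latest/' "$props"
cat "$props"
```

The `sed` pattern matches any `MODEL_NAME=` line, so the same one-liner works regardless of which model was previously configured.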
Helper script for Windows Subsystem for Linux (WSL)
This script is designed to help you run Appwish Ollama using WSL (wsl_helper_script.sh).
If you have a decent Nvidia GPU, you can run Nvidia CUDA under WSL without much setup. If you're looking for very fast app generation, this option is a good choice.
Running this script is not recommended if you have no intention of using WSL.
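Since the script is only meant for WSL, a small guard like the following could wrap its invocation. This is a hedged sketch, not part of the release: the `is_wsl` helper is a common heuristic (WSL kernels report "microsoft" in `/proc/version`), and `wsl_helper_script.sh` is the script named in the notes, assumed to sit in the current directory.

```shell
# Sketch: only launch the helper script when actually running inside WSL.
is_wsl() { grep -qi microsoft /proc/version 2>/dev/null; }

if is_wsl; then
    bash wsl_helper_script.sh
else
    echo "Not running under WSL; skipping wsl_helper_script.sh"
fi
```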
Have fun with the release and generate apps in a responsible manner. 🐲 🔮 🌌
This discussion was created from the release Blue Moon Release.