Parameters such as `-ins` and `-f` from main.cpp cannot be set from the Go code in this project.
In the llama.cpp project, these runtime parameters are passed directly to the compiled main binary, for example:
**./main -m /home/oem/GPT/alpace/golang/go-llama.cpp/examples/zh-models/7B/ggml-model-f16.bin --color -f /home/oem/GPT/alpace/golang/go-llama.cpp/examples/prompts/alpaca.txt -ins -c 2048 --temp 0.3 -n 1024 --repeat_penalty 1.3**
Here `-f` gives the path to the prompts file and `-ins` enables the ChatGPT-style interactive conversation mode.
But there is no function to set these two parameters in the Go bindings.
The following are all the parameters used by main, defined in common.h:
```cpp
struct gpt_params {
    int32_t seed          = -1;   // RNG seed
    int32_t n_threads     = std::min(4, (int32_t) std::thread::hardware_concurrency());
    int32_t n_predict     = 256;  // new tokens to predict
    int32_t repeat_last_n = 128;  // last n tokens to penalize
    int32_t n_parts       = -1;   // amount of model parts (-1 = determine from model dimensions)
    int32_t n_ctx         = 1024; // context size
    int32_t n_batch       = 8;    // batch size for prompt processing
    int32_t n_keep        = 21;   // number of tokens to keep from initial prompt

    // sampling parameters
    int32_t top_k          = 40;
    float   top_p          = 0.95f;
    float   temp           = 0.40f;
    float   repeat_penalty = 1.10f;

    std::string model  = "models/lamma-7B/ggml-model.bin"; // model path
    std::string prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.";
    std::string input_prefix = ""; // string to prefix user inputs with

    std::vector<std::string> antiprompt; // string upon seeing which more user input is prompted

    bool memory_f16        = true;  // use f16 instead of f32 for memory kv
    bool random_prompt     = false; // do not randomize prompt if none provided
    bool use_color         = true;  // use color to distinguish generations and inputs
    bool interactive       = true;  // interactive mode
    bool embedding         = false; // get only sentence embedding
    bool interactive_start = false; // wait for user input immediately
    bool instruct          = true;  // instruction mode (used for Alpaca models)
    bool ignore_eos        = false; // do not stop generating after eos
    bool perplexity        = false; // compute perplexity over the prompt
    bool use_mlock         = false; // use mlock to keep model in memory
    bool mem_test          = false; // compute maximum memory usage
    bool verbose_prompt    = false; // print prompt tokens before generation
};
```
This can be achieved directly from the Go code - the prompt is nothing more than text that can be prepended to the input, so I didn't feel it belonged in the golang library. Similarly, interactive mode was stripped out on purpose, as it doesn't sound like something that should be part of a library.