Tags: go-skynet/go-llama.cpp

pre-gguf
  Bump llama.cpp from `1f0bccb` to `dadbed9` (#179)
  Signed-off-by: dependabot[bot] <support@github.com>
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

llama.cpp-before-remove-bit-shuffling
  Bump llama.cpp from `e6a46b0` to `b608b55` (#48)

llama.cpp-b608b55
  Bump llama.cpp from `e6a46b0` to `b608b55` (#48)

llama.cpp-f4cef87

llama.cpp-7f15c5c
  Bump llama.cpp from `0b2da20` to `7f15c5c` (#30)

llama.cpp-0b2da20
  Bump llama.cpp from `859fee6` to `0b2da20` (#29)

llama.cpp-25d7abb
  evaluate tokens in batches after swapping context (#23)

llama.cpp-8687c1f

llama.cpp-9ff334f
  Bump llama.cpp from `5ecff35` to `9ff334f` (#19)

llama.cpp-5ecff35
  Optimize ggml compilation (#17)