chore: add function calling template for llama 3.1 models #3010
Merged
Conversation
❌ Deploy Preview for localai failed.
mudler force-pushed the llama3.1-functioncall branch from 9beac98 to 5f2127a on July 25, 2024 at 17:22
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
mudler force-pushed the llama3.1-functioncall branch from 5f2127a to 16befcc on July 25, 2024 at 17:24
truecharts-admin referenced this pull request in truecharts/public on Jul 28, 2024: "…9.3 by renovate (#24494)", an automated Renovate PR bumping the docker.io/localai/localai images from v2.19.2 to v2.19.3, whose v2.19.3 release notes list this PR (#3010) among the changes.
Description
This PR adds function-calling capabilities to llama 3.1 models by following https://github.com/meta-llama/llama-agentic-system/blob/ced0661761fc6529b23ac44ba1f19968ac5ad376/llama_agentic_system/system_prompt.py#L64
Function calls are returned in the following syntax; for instance, to call `get_weather`:
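A rough illustration of what such a call looks like, based on the custom-tool format described in the linked llama-agentic-system system prompt (the parameter names here are assumptions for the example):

```
<function=get_weather>{"location": "San Francisco, CA", "unit": "celsius"}</function>
```

For context, here is a minimal sketch of how a client might exercise this through LocalAI's OpenAI-compatible API once a model is configured with this template. The endpoint, model name, and tool schema below are hypothetical and only illustrate the flow:

```python
# Minimal sketch: call a LocalAI-hosted llama 3.1 model with an OpenAI-style
# tool definition. Endpoint, model name, and tool schema are assumptions made
# for illustration, not values taken from this PR.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state"},
            },
            "required": ["location"],
        },
    },
}]

response = client.chat.completions.create(
    model="meta-llama-3.1-8b-instruct",  # hypothetical model name
    messages=[{"role": "user", "content": "What's the weather in Rome?"}],
    tools=tools,
    tool_choice="auto",
)

# If the template and grammar behave as intended, the parsed tool call is
# surfaced on the message rather than returned as raw text.
print(response.choices[0].message.tool_calls)
```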