
.Net: Fix a bunch of function calling issues #4258

Merged
Merged 5 commits into microsoft:main on Dec 15, 2023

Conversation

stephentoub
Member

@stephentoub stephentoub commented Dec 14, 2023

  1. A recent change caused the execution settings passed into the chat completion service to be null, causing automatic function calling to stop working. This fixes that.
  2. We were providing the original execution settings to child invocations. This isn't appropriate, as the settings were known to be relevant to the parent invocation but not necessarily to the child invocation. It's also potentially dangerous, since the settings are known to enable function calling, and that could lead to recursive function invocation ad nauseam. For now, it's better to explicitly not pass down the execution settings.
  3. We weren't validating that a function the model requested was actually one we told it about. We should be doing so; otherwise, it could end up hallucinating and invoking functions that do exist in the kernel but that we didn't ask it to use.
  4. We weren't validating with EnableFunctions + auto-invoke that the supplied functions were actually in the kernel; if they're not, we're setting ourselves up for failure, as we won't be able to invoke such a function when the model requests it.
  5. We weren't sending back error messages to the model. Every tool request needs a response.
  6. We weren't protecting against runaway cases where the invocation of a prompt function triggered auto-invocation of a prompt function ad nauseam. With (2), it's now hard to get into such a situation, but if we do, this adds a backstop that limits the length of that chain and disables auto-invocation if it's hit.
  7. We were logging the function requests after validating them rather than before, but from a logging perspective, it's better to record that information as early as possible.
  8. The "required" function calling support (where you can force the model to request a particular function's invocation) isn't behaving as expected: it's sending back a tool request but with a "stop" finish reason and it then balks at attempts to send back a function response. Until we have a better understanding of why this is happening and the right recourse, I've marked the RequireFunction entrypoint to it as experimental.
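Points (3) and (4) above describe a two-way validation. A minimal sketch in Python (illustrative only; the actual change is in the .NET connector, and all names here are hypothetical):

```python
# Illustrative sketch of points (3) and (4); not the actual Semantic Kernel code.
# Before auto-invoking a tool call, validate in both directions: the model may only
# call functions we advertised, and every advertised function must resolve in the
# kernel so we can actually invoke it.

class ToolCallError(Exception):
    """Raised when a requested tool call fails validation (hypothetical type)."""

def validate_tool_call(requested: str, advertised: set, kernel_functions: dict):
    if requested not in advertised:
        # The model hallucinated, or asked for a kernel function we never offered.
        raise ToolCallError(f"'{requested}' was not among the advertised tools.")
    if requested not in kernel_functions:
        # EnableFunctions named a function the kernel cannot resolve.
        raise ToolCallError(f"'{requested}' is not registered in the kernel.")
    return kernel_functions[requested]

# Usage: a function that exists in the kernel but was never advertised is rejected.
kernel = {"time-now": lambda: "12:00", "secrets-dump": lambda: "..."}
fn = validate_tool_call("time-now", {"time-now"}, kernel)
```

Rejecting the hallucinated case with an explicit error, rather than silently invoking whatever happens to match in the kernel, is what keeps unadvertised kernel functions out of the model's reach.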
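Points (2) and (6) work together as a recursion guard. A hypothetical sketch, where the depth limit and all names are invented for illustration:

```python
# Hypothetical sketch of points (2) and (6); not the actual Semantic Kernel code.
# A child prompt invocation never inherits the parent's execution settings, and a
# depth limit acts as a backstop that disables auto-invocation if a runaway chain
# of prompt-function -> auto-invoked prompt-function somehow forms anyway.

MAX_AUTO_INVOKE_DEPTH = 5  # illustrative limit, not the value used by the PR

def plan_child_invocation(depth: int):
    """Return (execution_settings, auto_invoke) for a child invocation."""
    execution_settings = None                     # point (2): never pass settings down
    auto_invoke = depth < MAX_AUTO_INVOKE_DEPTH   # point (6): hard backstop
    return execution_settings, auto_invoke
```

Dropping the settings already makes the runaway chain hard to construct; the depth check is belt-and-suspenders for the cases that slip through.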

@stephentoub stephentoub requested a review from a team as a code owner December 14, 2023 05:09
@shawncal shawncal added .NET Issue or Pull requests regarding .NET code kernel Issues or pull requests impacting the core kernel labels Dec 14, 2023
@github-actions github-actions bot changed the title from "Fix a few function calling issues" to ".Net: Fix a few function calling issues" Dec 14, 2023
@markwallace-microsoft markwallace-microsoft added the v1.0.1 Required for the Semantic Kernel v1.0.1 release label Dec 14, 2023
There's no additional work being done at the call site, so we don't need the same guard that the subsequent LogXx call performs internally.
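The note above is about redundant log-level guards: a Log call already checks its level internally, so an explicit guard only pays off when assembling the log arguments is itself expensive. A Python analogue (the logger name and functions are hypothetical):

```python
# Python analogue of the guard-vs-no-guard tradeoff; not the .NET code.
import logging

logger = logging.getLogger("sk.sketch")  # hypothetical logger name

def log_tool_request(name: str) -> None:
    # Cheap arguments: rely on the level check inside logger.debug itself.
    logger.debug("Tool requested: %s", name)

def log_history(history) -> None:
    # Expensive formatting: guard so the join is skipped when DEBUG is off.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("Full chat history:\n%s", "\n".join(map(str, history)))
```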
@stephentoub stephentoub added this pull request to the merge queue Dec 15, 2023
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Dec 15, 2023
- Ensure that EnableFunctions functions are available in the kernel if auto-invocation was requested.
- Ensure we always send back a response to a tool call, even if it's an error. Otherwise, the chat history is invalid. This also provides built-in error recovery, e.g. if the model hallucinates and requests an invalid tool, we'll now tell it that.
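The second bullet above (every tool call gets a response, even on failure) can be sketched as follows; the message shape is illustrative, not the Semantic Kernel type:

```python
# Sketch: always answer a tool call, sending the error text back on failure so the
# chat history stays valid and the model can recover (e.g. after hallucinating a
# tool). `tool_result_message` and the dict shape are hypothetical.

def tool_result_message(tool_call_id: str, content: str) -> dict:
    return {"role": "tool", "tool_call_id": tool_call_id, "content": content}

def respond_to_tool_call(tool_call_id: str, fn, args: dict) -> dict:
    try:
        return tool_result_message(tool_call_id, str(fn(**args)))
    except Exception as exc:  # report the failure to the model instead of dropping it
        return tool_result_message(tool_call_id, f"Error: {exc}")
```

Because the error travels back as an ordinary tool result, the model sees what went wrong and can retry or apologize, which is the built-in recovery the bullet describes.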
@stephentoub stephentoub changed the title from ".Net: Fix a few function calling issues" to ".Net: Fix a bunch of function calling issues" Dec 15, 2023
@stephentoub stephentoub added this pull request to the merge queue Dec 15, 2023
Merged via the queue into microsoft:main with commit b619e84 Dec 15, 2023
18 checks passed
@stephentoub stephentoub deleted the functioncallingtweaks branch December 15, 2023 14:14
Kevdome3000 pushed a commit to Kevdome3000/semantic-kernel that referenced this pull request Dec 15, 2023
(cherry picked from commit b619e84)
zengin pushed a commit to microsoftgraph/semantic-kernel that referenced this pull request Jan 5, 2024
4 participants