
Option to bypass function execution return msg, only show next TEXT return msg #5

Closed
rzc331 opened this issue Jun 20, 2023 · 4 comments · Fixed by #13
Labels
enhancement New feature or request

Comments

rzc331 commented Jun 20, 2023

if 'function_call' in reply_msg:
    fc = reply_msg['function_call']
    args = json.loads(fc['arguments'])
    call_ret = self._call_function(fc['name'], args)
    append_msg['role'] = 'system'
    append_msg['content'] = "(Function {} called, returned: {})".format(
        fc['name'],
        call_ret
    )
    ret = {
        "type": "function_call",
        "func": fc['name'].replace('-', '.'),
        "value": call_ret,
    }

These lines of code execute the corresponding function and surface its return value to the caller, which is great.

I wonder if we could go a step further: when ChatGPT chooses a function call, we execute the function and directly send a message back to ChatGPT, appending a message in this format:

{
    "role": "function",
    "name": name_of_the_function_executed,
    "content": function_return_value
}

and session.ask() would only return ChatGPT's text reply. (ChatGPT may call multiple functions in a row, and the user may only want the final answer.)
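For illustration, a minimal sketch of the loop I have in mind (this uses the 0613 function-calling API; ask_until_text and call_function are placeholder names, with call_function standing in for whatever dispatch mechanism the project uses):

import json

import openai

def ask_until_text(messages: list, functions: list, call_function) -> str:
    """Keep resolving function calls until the model replies with plain text."""
    while True:
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
            functions=functions,
            function_call="auto",
        )
        reply = resp["choices"][0]["message"]
        if "function_call" not in reply:
            # plain text answer: this is all the user sees
            return reply["content"]
        fc = reply["function_call"]
        result = call_function(fc["name"], json.loads(fc["arguments"]))
        messages.append(reply)  # record the assistant's function_call turn
        # feed the result back so the model can decide the next step
        messages.append({
            "role": "function",
            "name": fc["name"],
            "content": str(result),
        })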

RockChinQ (Owner) commented Jun 20, 2023

Yeah, this feature is already planned (in my mind), but maybe it should go after #4.

@RockChinQ RockChinQ added the enhancement New feature or request label Jun 20, 2023
RockChinQ (Owner) commented Jun 20, 2023

Check resp_chain.py for the PoC of chained function calling.


rzc331 commented Jun 20, 2023

Check resp_chain.py for the PoC of chained function calling.

I followed your example and added an option 'fc_chain' to Session.ask; it does the trick, though it may be a bit verbose:

import json
import logging

import openai

# Namespace is this project's function registry; adjust the import path as needed
from namespace import Namespace


class Session:

    namespace: Namespace = None

    messages: list[dict]

    model: str = "gpt-3.5-turbo-0613"

    def __init__(self, modules: list, model: str = "gpt-3.5-turbo-0613", **kwargs):
        self.namespace = Namespace(modules)
        self.model = model
        # per-instance history; a class-level list default would be shared across sessions
        self.messages = []
        self.resp_log = []
        self.args = {
            "model": self.model,
            "messages": self.messages,
            **kwargs
        }
        if len(self.namespace.functions_list) > 0:
            self.args['functions'] = self.namespace.functions_list
            self.args['function_call'] = "auto"

    def ask(self, msg: str, fc_chain: bool = False) -> dict:
        self.messages.append(
            {
                "role": "user",
                "content": msg
            }
        )

        resp = openai.ChatCompletion.create(
            **self.args
        )
        self.resp_log.append(resp)

        logging.debug("Response: {}".format(resp))
        reply_msg = resp["choices"][0]['message']

        ret = {}

        if fc_chain:
            # keep executing function calls until the model replies with plain text
            while 'function_call' in reply_msg:
                resp = self.fc_chain(reply_msg['function_call'])
                reply_msg = resp["choices"][0]['message']
            ret = {
                "type": "message",
                "value": reply_msg['content'],
            }

            self.messages.append({
                "role": "assistant",
                "content": reply_msg['content']
            })

            return ret

        else:
            if 'function_call' in reply_msg:

                fc = reply_msg['function_call']
                args = json.loads(fc['arguments'])
                call_ret = self._call_function(fc['name'], args)

                self.messages.append({
                    "role": "function",
                    "name": fc['name'],
                    "content": str(call_ret)
                })

                ret = {
                    "type": "function_call",
                    "func": fc['name'].replace('-', '.'),
                    "value": call_ret,
                }
            else:
                ret = {
                    "type": "message",
                    "value": reply_msg['content'],
                }

                self.messages.append({
                    "role": "assistant",
                    "content": reply_msg['content']
                })

            return ret

    def fc_chain(self, fc_cmd: dict):
        """
        Execute the function call and return the result to ChatGPT.

        Args:
            fc_cmd(dict): The function call command.

        Returns:
            dict: The response from ChatGPT.
        """
        fc_args = json.loads(fc_cmd['arguments'])
        call_ret = self._call_function(fc_cmd['name'], fc_args)

        self.messages.append({
            "role": "function",
            "name": fc_cmd['name'],
            "content": f'function successfully called with return value: {str(call_ret)}'
        })
        resp = openai.ChatCompletion.create(
            **self.args
        )
        self.resp_log.append(resp)

        return resp

    def _call_function(self, function_name: str, args: dict):
        return self.namespace.call_function(function_name, args)

One thing to note here: if we put only str(call_ret) in the content, the model may enter an infinite loop, calling the same function over and over when call_ret does not clearly express that the call succeeded. After I added the prefix "function successfully called with return value: ", this issue was much relieved.
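For reference, usage then looks something like this (my_tools is a hypothetical module whose functions get registered in the Namespace):

import my_tools  # hypothetical module exposing the callable functions

session = Session(modules=[my_tools])
reply = session.ask("What is 3 plus 4? Use the calculator.", fc_chain=True)
print(reply["value"])  # only the final text answer; function calls stay hidden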


rzc331 commented Jun 21, 2023

"content": f'function successfully called with return value: {str(call_ret)}'
noticed even this prompt can suffer multi-calling the same function issue.

"content": f'function successfully called with return value: {str(call_ret)}, please go to next step.'
this prompt works much better
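If prompt wording alone isn't reliable, a mechanical safeguard could also be added to the fc_chain loop, e.g. breaking out when the model repeats the identical call twice in a row (just a sketch of a possible change to the loop in ask, not part of the code above):

last_call = None
while 'function_call' in reply_msg:
    fc = reply_msg['function_call']
    call_sig = (fc['name'], fc['arguments'])
    if call_sig == last_call:
        break  # same function with same arguments twice in a row: stop chaining
    last_call = call_sig
    resp = self.fc_chain(fc)
    reply_msg = resp["choices"][0]['message']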

RockChinQ linked a pull request Jul 8, 2023 that will close this issue