
Example of Issue
The following code is part of a pull request made by pixie to a setup script in one of my projects after correctly triggering an AddRequestsTimeouts codemod.
versions = requests.get(versions_url, timeout=60).json(timeout=60)
This would clearly result in a TypeError, since "timeout" is not a valid kwarg for JSONDecoder.__init__.
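The failure is easy to reproduce without a network call: Response.json() forwards extra kwargs to json.loads, which in turn passes them to JSONDecoder, so the standard library alone demonstrates the bug.

```python
import json

# .json(timeout=60) forwards "timeout" to json.loads, which hands
# unknown kwargs to JSONDecoder.__init__ -- and that rejects them:
try:
    json.loads("{}", timeout=60)  # mirrors response.json(timeout=60)
except TypeError as exc:
    print(f"TypeError: {exc}")

# What the codemod presumably intended -- timeout on the HTTP call only:
# versions = requests.get(versions_url, timeout=60).json()
```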
Potential Cause
I assume this issue is caused by the OpenAI function-call outputs used to create code edits not being linted on the backend before the resulting code is included in a pull request.
Potential Solution
A possible solution could be adding a function- and argument-parsing class to file_parsers that gathers the valid function names, argument names, and types for each file in the repo by checking the args in function definitions, typing info, and calls to kwargs.get/pop in each function.
Each function call in the AI-written code could then be linted for valid arguments and types, and an error could be raised that triggers a subsequent OpenAI call requesting a revision, providing the valid args and types to the model to ensure the code is valid before including it in a pull request.
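The linting half could look something like this sketch (the lint_call helper and the hard-coded lookup table are illustrative, not a real parse of the requests library); the returned messages are exactly what could be fed back to the model in the revision request.

```python
import ast


def lint_call(snippet: str, valid_args: dict[str, set[str]]) -> list[str]:
    """Return one error message per keyword argument passed to a call
    whose target is known not to accept it; empty list means it passes."""
    errors = []
    for node in ast.walk(ast.parse(snippet)):
        if not isinstance(node, ast.Call):
            continue
        if isinstance(node.func, ast.Attribute):
            name = node.func.attr          # e.g. requests.get -> "get"
        else:
            name = getattr(node.func, "id", None)
        allowed = valid_args.get(name)
        if allowed is None:                # unknown function: skip it
            continue
        for kw in node.keywords:
            if kw.arg is not None and kw.arg not in allowed:
                errors.append(
                    f"{name}() does not accept keyword '{kw.arg}'; "
                    f"valid: {sorted(allowed)}"
                )
    return errors


# Hypothetical lookup table, as the harvesting step above might produce it.
table = {"get": {"url", "timeout"}, "json": set()}
bad = "versions = requests.get(versions_url, timeout=60).json(timeout=60)"
print(lint_call(bad, table))
# ["json() does not accept keyword 'timeout'; valid: []"]
```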
As a backup in cases where multiple revisions produce invalid code, the valid args for functions in a given code snippet could be specified as an enum in the OpenAI function/tool definitions used to edit code.
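Concretely, that backup could constrain the model through a JSON-schema enum in the tool definition, so an invalid keyword can never be emitted in the first place (the tool name set_call_keyword and its fields are a made-up illustration, not an existing pixie tool):

```python
# Hypothetical tool definition in the OpenAI function/tool format: the
# "keyword" enum is populated from the valid args harvested for the
# snippet being edited, so the model cannot choose an invalid one.
edit_call_tool = {
    "type": "function",
    "function": {
        "name": "set_call_keyword",
        "description": "Add or change one keyword argument on the target call.",
        "parameters": {
            "type": "object",
            "properties": {
                "keyword": {"type": "string", "enum": ["url", "params", "timeout"]},
                "value": {"type": "string"},
            },
            "required": ["keyword", "value"],
        },
    },
}
```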
Alternatively, an Assistants API instance with the code_interpreter tool enabled could be prompted to test code snippets to prevent this issue, though this would of course increase costs.
Good luck with this project, absolutely awesome idea!