A single-file, test-driven-development feedback loop with GPT 🌀
Given a command that runs a test and a file to edit, gptdd
feeds the failing test results to GPT-4, requests a fix, and offers the user the option to apply it.
💭 It would be amazing to build this in at the test-runner or IDE level, but in the interest of keeping it language- and test-runner-agnostic, it's a standalone script. If you're interested in building an IDE plugin, please reach out!
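The core loop can be sketched as follows. This is a minimal illustration, not gptdd's actual implementation: `buildFixPrompt`, `requestFix`, and the prompt wording are all hypothetical names standing in for the real prompt construction and OpenAI API call.

```typescript
import { execSync } from "node:child_process";
import { readFileSync, writeFileSync } from "node:fs";

// Build the prompt sent to the model from the failing test output and the
// current contents of the file to fix. (Illustrative, not gptdd's real prompt.)
function buildFixPrompt(testOutput: string, fileContents: string): string {
  return [
    "The following test run failed:",
    testOutput,
    "Here is the file to fix:",
    fileContents,
    "Reply with a corrected version of the entire file.",
  ].join("\n\n");
}

// One iteration of the loop: run the test command once; if it fails, ask the
// model for a fix. `requestFix` stands in for the GPT-4 API call, and in the
// real tool the user is shown the proposed fix before it is written back.
function iterate(
  testToRun: string,
  fileToFix: string,
  requestFix: (prompt: string) => string
): boolean {
  try {
    execSync(testToRun, { stdio: "pipe" });
    return true; // tests pass, nothing to do
  } catch (err: any) {
    const output = String(err.stdout ?? "") + String(err.stderr ?? "");
    const prompt = buildFixPrompt(output, readFileSync(fileToFix, "utf8"));
    writeFileSync(fileToFix, requestFix(prompt));
    return false; // a fix was applied; re-run the test to check it
  }
}
```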
npx gptdd \
--fileToFix lib/myFunc.ts \
--testToRun "pnpm vitest run lib/myFunc.test.ts" \
--apiKey "sk-..."
| Option | Description |
| --- | --- |
| `--fileToFix`, `-f` | The file to edit. |
| `--testToRun`, `-t` | The command to run once to get the initial test results. |
| `--apiKey`, `-a` | Your OpenAI API key. |
The following examples are specific to your language/test-runner. If you don't see what you're looking for, please contribute!
npx gptdd \
-f lib/myFunc.ts \
-t "pnpm vitest run lib/myFunc.test.ts" \
-a "sk-..."
npx gptdd \
-f lib/myFunc.ts \
-t "pnpm jest examples/myFunc.test.ts" \
-a "sk-..."
We recommend using pnpm. Clone the repository and run `pnpm install`. Then run `pnpm link --global` to make the `gptdd` command available globally. From there, you can make tweaks and test them out by running `gptdd` in a directory with a test and a file to fix.
We warmly welcome contributions of any kind: simply open a PR explaining what you've changed and why, and we'll go from there.