A WIP Emacs interface to OpenAI's language-model REST API.
This is intended to be used as a library for building interfaces to OpenAI's language models.
NOTE: Usage requires an OpenAI API Key. Codex models (for generating code) currently require private beta access.
use-package example:

```elisp
(use-package openai-api
  :straight (openai-api :type git :host github :repo "dangirsh/openai-api")
  :config
  ;; required
  (setq openai-api-secret-key <token>) ; https://beta.openai.com/account/api-keys
  ;; optional
  (setq openai-api-engine "davinci-codex") ; *-codex models require private beta access
  (setq openai-api-completion-params '((max_tokens . 100)
                                       (temperature . 0.0)
                                       (frequency_penalty . 0)
                                       (presence_penalty . 0)
                                       (n . 1))))
```
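For reference, these parameters end up in the JSON body of the completion request. Assuming the library serializes the alist with json.el (an assumption; check the source), you can preview the payload like this:

```elisp
(require 'json)

;; Preview the JSON request body produced from the params alist.
(json-encode '((max_tokens . 100)
               (temperature . 0.0)
               (frequency_penalty . 0)
               (presence_penalty . 0)
               (n . 1)))
;; => "{\"max_tokens\":100,\"temperature\":0.0, ...}"
```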
Currently, there's only a basic interface that uses the active region as the prompt. Use it like this:

- Set `openai-api-completion-params` as necessary.
- In any buffer, select the text you'd like to send as a prompt.
- Run `openai-api-complete-region` (bind to a key for convenience).
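For example, a binding in your init file (the key choice here is just an illustration):

```elisp
;; Bind openai-api-complete-region to a convenient key.
(global-set-key (kbd "C-c M-o") #'openai-api-complete-region)
```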
If only one completion is returned (n = 1), it is inserted below the region.
If multiple completions are returned (n > 1), the built-in `completing-read`
mechanism is used to pick one. I recommend trying consult for an improved
`completing-read` interface, which includes live previews.
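The selection step reduces to something like the following sketch (the helper name is hypothetical; the library's actual code may differ):

```elisp
(defun my/insert-chosen-completion (completions)
  "Prompt for one of COMPLETIONS via `completing-read' and insert it at point."
  (insert (completing-read "Completion: " completions nil t)))

;; Example:
;; (my/insert-chosen-completion '("4" "four" "2 + 2"))
```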
```elisp
(openai-api-get-engines)
;; => ("ada" "babbage" "content-filter-alpha" "curie" "curie-instruct-beta"
;;     "cushman-codex" "davinci" "davinci-codex" "davinci-instruct-beta")
```

```elisp
(let ((openai-api-completion-params '((max_tokens . 2)
                                      (temperature . 0.0)
                                      (frequency_penalty . 0)
                                      (presence_penalty . 0)
                                      (n . 1))))
  (car (openai-api-get-completions "2 + 2 =")))
;; => "4"
```
- I've only tested this on GNU Emacs 28.0.50.
- This library tries to prevent unnecessarily wasteful requests. Right now, that means rejecting requests with n > 1 and temperature = 0.0, which would return multiple identical completions.
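A minimal version of that guard might look like this (a sketch with a hypothetical helper name; the library's actual check may differ):

```elisp
(defun my/openai-check-params (params)
  "Signal a user error if PARAMS would return N identical completions."
  (let ((n (alist-get 'n params))
        (temperature (alist-get 'temperature params)))
    (when (and n temperature (> n 1) (= temperature 0.0))
      (user-error "Refusing request: n > 1 with temperature 0.0 yields identical results"))))
```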
- debanjum/codex-completion: Generate, Complete Code in Emacs using Op…
- semiosis/pen.el: pen.el is a package for prompt engineering in emacs…
(some of these should go in separate libraries)
- Generate prompt headers. E.g. inject a comment with `mode-name` to indicate the language.
- Better ways to tweak model parameters.
  - e.g. add a prefix arg to `openai-api-complete-region` to specify the number of completion results: `C-u <n> openai-api-complete-region`
  - e.g. re-run the previous request, but with a larger `max_tokens` parameter
- Add an async / streaming interface.
- Integrate with jrosdahl/fancy-dabbrev for greyed-out inline completions (similar to the Copilot interface).
- Allow specification of unit tests for generated functions.
  - Filter results based on test results.
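As a stopgap for the prefix-arg TODO above, a wrapper like the following works today (hypothetical helper name; assumes `openai-api-complete-region` reads `openai-api-completion-params` dynamically):

```elisp
(defun my/openai-complete-region-n (n)
  "Run `openai-api-complete-region' requesting N completions.
Invoke as C-u <n> M-x my/openai-complete-region-n."
  (interactive "p")
  ;; Let-bind the params alist with `n' overridden, leaving the global value untouched.
  (let ((openai-api-completion-params
         (cons (cons 'n n)
               (assq-delete-all 'n (copy-alist openai-api-completion-params)))))
    (openai-api-complete-region)))
```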