fix: add missing jsdoc
lucgagan committed Jun 25, 2023
1 parent 311d97d commit e159af7
Showing 2 changed files with 45 additions and 2 deletions.
9 changes: 7 additions & 2 deletions README.md
@@ -39,6 +39,13 @@ import { createChat } from "completions";
* The total length of input tokens and generated tokens is limited by the model's context length.
* @property model - ID of the model to use. See the model endpoint compatibility table for
* details on which models work with the Chat API.
* @property functionCall - Controls how the model responds to function calls.
* "none" means the model does not call a function and responds to the end-user.
* "auto" means the model can pick between responding to the end-user or calling a function.
* Specifying a particular function via {"name": "my_function"} forces the model to call that function.
* "none" is the default when no functions are present.
* "auto" is the default if functions are present.
* @property functions - A list of functions the model may generate JSON inputs for.
* @property n - How many chat completion choices to generate for each input message.
* @property presencePenalty - Number between -2.0 and 2.0. Positive values penalize new
* tokens based on whether they appear in the text so far, increasing the model's
@@ -54,8 +61,6 @@ import { createChat } from "completions";
* We generally recommend altering this or temperature but not both.
* @property user - A unique identifier representing your end-user, which can help OpenAI
* to monitor and detect abuse.
* @property functionCall - Whether or not the model is allowed to call a function.
* @property functions - Specifications for functions which the model can call.
*/
const chat = createChat({
apiKey: process.env.OPENAI_API_KEY,
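The three `functionCall` forms and the documented defaults can be sketched as follows (the type alias and the `resolveFunctionCall` helper are illustrative only, not part of the library):

```typescript
// The three accepted shapes of functionCall, per the JSDoc above.
type FunctionCall = "none" | "auto" | { name: string };

// Illustrative helper mirroring the documented defaults:
// "none" when no functions are supplied, "auto" when they are.
const resolveFunctionCall = (
  functions: object[] | undefined,
  functionCall?: FunctionCall
): FunctionCall => functionCall ?? (functions?.length ? "auto" : "none");
```

An explicit `functionCall` always wins; the defaults only apply when the option is omitted.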
38 changes: 38 additions & 0 deletions src/createChat.ts
@@ -6,6 +6,41 @@ import {
import { retry } from "./retry";
import { omit } from "./omit";

/**
* @property apiKey - OpenAI API key.
* @property frequencyPenalty - Number between -2.0 and 2.0. Positive values penalize new
* tokens based on their existing frequency in the text so far, decreasing the model's
* likelihood to repeat the same line verbatim.
* @property logitBias - Modify the likelihood of specified tokens appearing in the completion.
* Accepts an object that maps token IDs to an associated bias value from -100 to 100.
* @property maxTokens - The maximum number of tokens to generate in the chat completion.
* The total length of input tokens and generated tokens is limited by the model's context length.
* @property model - ID of the model to use. See the model endpoint compatibility table for
* details on which models work with the Chat API.
* @property functionCall - Controls how the model responds to function calls.
* "none" means the model does not call a function and responds to the end-user.
* "auto" means the model can pick between responding to the end-user or calling a function.
* Specifying a particular function via {"name": "my_function"} forces the model to call that function.
* "none" is the default when no functions are present.
* "auto" is the default if functions are present.
* @property functions - A list of functions the model may generate JSON inputs for.
* @property n - How many chat completion choices to generate for each input message.
* @property presencePenalty - Number between -2.0 and 2.0. Positive values penalize new
* tokens based on whether they appear in the text so far, increasing the model's
* likelihood to talk about new topics.
* @property stop - Up to 4 sequences where the API will stop generating further tokens.
* @property temperature - What sampling temperature to use, between 0 and 2. Higher values
* like 0.8 will make the output more random, while lower values like 0.2 will make it
* more focused and deterministic.
* We generally recommend altering this or top_p but not both.
* @property topP - An alternative to sampling with temperature, called nucleus sampling,
* where the model considers the results of the tokens with top_p probability mass.
* So 0.1 means only the tokens comprising the top 10% probability mass are considered.
* We generally recommend altering this or temperature but not both.
* @property user - A unique identifier representing your end-user, which can help OpenAI
* to monitor and detect abuse.
*/
export const createChat = (
  options: Omit<CompletionsOptions, "messages" | "n" | "onMessage">
) => {
@@ -59,7 +94,10 @@ export const createChat = (
messages.push(message);
};

  // Stub: accepts a function name but is not yet implemented.
  const createFunction = (name: string) => {};

return {
createFunction,
addMessage,
getMessages: () => messages,
sendMessage,
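The `addMessage`/`getMessages` pair in the hunk above follows a closure-backed store; a minimal standalone sketch of that pattern (type and function names assumed, not the library's actual implementation):

```typescript
type Role = "system" | "user" | "assistant" | "function";
type Message = { role: Role; content: string };

// Closure over a private array, exposing the same push/read API shape.
const createMessageStore = () => {
  const messages: Message[] = [];
  const addMessage = (message: Message) => {
    messages.push(message);
  };
  return { addMessage, getMessages: () => messages };
};
```

Keeping the array private to the closure means callers can only mutate history through `addMessage`, which is what lets `sendMessage` append both the outgoing prompt and the model's reply in one place.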