Releases: vercel/modelfusion
v0.137.0
v0.136.0
Added
- `FileCache` for caching responses to disk. Thanks @jakedetels for the feature! Example:

  ```ts
  import { generateText, openai } from "modelfusion";
  import { FileCache } from "modelfusion/node";

  const cache = new FileCache();

  const text1 = await generateText({
    model: openai
      .ChatTextGenerator({ model: "gpt-3.5-turbo", temperature: 1 })
      .withTextPrompt(),
    prompt: "Write a short story about a robot learning to love",
    logging: "basic-text",
    cache,
  });

  console.log({ text1 });

  const text2 = await generateText({
    model: openai
      .ChatTextGenerator({ model: "gpt-3.5-turbo", temperature: 1 })
      .withTextPrompt(),
    prompt: "Write a short story about a robot learning to love",
    logging: "basic-text",
    cache,
  });

  console.log({ text2 }); // same text
  ```
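The general pattern behind a disk-backed response cache can be sketched independently of the library: key the cache on a hash of the call parameters and store the response as JSON on disk. The following is a minimal illustration of that idea, not modelfusion's `FileCache` implementation; the `SimpleFileCache` class and its method names are hypothetical.

```typescript
import { createHash } from "node:crypto";
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical minimal disk cache — not modelfusion's FileCache.
class SimpleFileCache {
  constructor(private cacheDir: string = join(tmpdir(), "response-cache")) {
    mkdirSync(this.cacheDir, { recursive: true });
  }

  // Derive a stable file name from the call parameters.
  private pathFor(key: unknown): string {
    const hash = createHash("sha256").update(JSON.stringify(key)).digest("hex");
    return join(this.cacheDir, `${hash}.json`);
  }

  get(key: unknown): string | null {
    const path = this.pathFor(key);
    return existsSync(path) ? JSON.parse(readFileSync(path, "utf8")) : null;
  }

  set(key: unknown, value: string): void {
    writeFileSync(this.pathFor(key), JSON.stringify(value));
  }
}

// Usage: a second call with identical parameters hits the disk cache.
const cache = new SimpleFileCache();
const params = { model: "gpt-3.5-turbo", prompt: "a robot learning to love" };
if (cache.get(params) === null) {
  cache.set(params, "generated text"); // stand-in for the model call
}
console.log(cache.get(params)); // "generated text"
```

Hashing the full parameter object means that changing the model, temperature, or prompt produces a different cache entry, which matches the behavior you would want from response caching.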
v0.135.1
v0.135.0 - 2024-01-29
Added
- `ObjectGeneratorTool`: a tool to create synthetic or fictional structured data using `generateObject`. Docs
- `jsonToolCallPrompt.instruction()`: create an instruction prompt for tool calls that uses JSON.
Changed
- `jsonToolCallPrompt` automatically enables JSON mode or grammars when supported by the model.
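The general idea of a JSON-based tool-call instruction prompt is to embed the tool's parameter schema in the instruction and ask the model to answer with matching JSON. The sketch below illustrates that idea with a hypothetical helper; it is not the actual output of `jsonToolCallPrompt`.

```typescript
// Hypothetical sketch of a JSON-based tool-call instruction prompt;
// the real jsonToolCallPrompt output format may differ.
function toolCallInstruction(toolName: string, parameterSchema: object): string {
  return [
    `You have access to the tool "${toolName}".`,
    `Its parameters are described by this JSON schema:`,
    JSON.stringify(parameterSchema),
    `Respond with a single JSON object that matches the schema.`,
  ].join("\n");
}

const prompt = toolCallInstruction("calculator", {
  type: "object",
  properties: { a: { type: "number" }, b: { type: "number" } },
});
console.log(prompt.includes("calculator")); // true
```

Enabling JSON mode or grammars on top of such a prompt constrains the model's decoder to valid JSON, so the instruction and the decoding constraint reinforce each other.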
v0.134.0
Added
- Added prompt function support to `generateText`, `streamText`, `generateObject`, and `streamObject`. You can create prompt functions for text, instruction, and chat prompts using `createTextPrompt`, `createInstructionPrompt`, and `createChatPrompt`. Prompt functions allow you to load prompts from external sources and improve prompt logging. Example:

  ```ts
  const storyPrompt = createInstructionPrompt(
    async ({ protagonist }: { protagonist: string }) => ({
      system: "You are an award-winning author.",
      instruction: `Write a short story about ${protagonist} learning to love.`,
    })
  );

  const text = await generateText({
    model: openai
      .ChatTextGenerator({ model: "gpt-3.5-turbo" })
      .withInstructionPrompt(),
    prompt: storyPrompt({
      protagonist: "a robot",
    }),
  });
  ```
Changed
- Refactored the build to use `tsup`.
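For reference, a typical `tsup` configuration looks like the following. The entry points and output formats shown here are assumptions for illustration; the project's actual config may differ.

```typescript
// tsup.config.ts — illustrative; the project's actual config may differ.
import { defineConfig } from "tsup";

export default defineConfig({
  entry: ["src/index.ts"], // assumed entry point
  format: ["cjs", "esm"],  // emit both CommonJS and ES modules
  dts: true,               // generate .d.ts type declarations
  sourcemap: true,
  clean: true,             // clear the output directory before building
});
```

`tsup` bundles with esbuild, which typically makes dual CJS/ESM builds with type declarations much faster than a hand-rolled `tsc` pipeline.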
v0.133.0
v0.132.0
v0.131.1
v0.131.0
Added
- `ObjectStreamResponse` and `ObjectStreamFromResponse` serialization functions for using server-generated object streams in web applications.

  Server example:

  ```ts
  export async function POST(req: Request) {
    const { myArgs } = await req.json();
    const objectStream = await streamObject({
      // ...
    });

    // serialize the object stream to a response:
    return new ObjectStreamResponse(objectStream);
  }
  ```

  Client example:

  ```ts
  const response = await fetch("/api/stream-object-openai", {
    method: "POST",
    body: JSON.stringify({ myArgs }),
  });

  // deserialize (result object is simpler than the full response)
  const stream = ObjectStreamFromResponse({
    schema: itinerarySchema,
    response,
  });

  for await (const { partialObject } of stream) {
    // do something, e.g. setting a React state
  }
  ```
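A common way to serialize a stream of partial objects over HTTP is newline-delimited JSON: each growing snapshot of the object is written as one JSON line, and the client parses lines as they arrive. The sketch below illustrates that general idea with hypothetical helpers; it is not modelfusion's actual wire format.

```typescript
// Illustrative NDJSON round-trip — not ObjectStreamResponse's actual format.
function serializePartials(partials: object[]): string {
  return partials.map((p) => JSON.stringify(p)).join("\n");
}

function deserializePartials(body: string): object[] {
  return body
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line));
}

// Each line is a progressively more complete snapshot of the object.
const body = serializePartials([{ city: "Paris" }, { city: "Paris", days: 3 }]);
for (const partialObject of deserializePartials(body)) {
  console.log(partialObject);
}
```

Framing each snapshot on its own line lets the client render partial results immediately instead of waiting for the full response body.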
Changed
- breaking change: renamed `generateStructure` to `generateObject` and `streamStructure` to `streamObject`. Related names have been changed accordingly.
- breaking change: the `streamObject` result stream contains additional data. You need to use `stream.partialObject` or destructuring to access it:

  ```ts
  const objectStream = await streamObject({
    // ...
  });

  for await (const { partialObject } of objectStream) {
    console.clear();
    console.log(partialObject);
  }
  ```
- breaking change: the result from successful `Schema` validations is stored in the `value` property (before: `data`).
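A migration sketch for the `value` rename. Only the renamed property is confirmed by these release notes; the discriminated-union shape and the `checkPositive` example below are assumptions for illustration.

```typescript
// Assumed validation result shape; only the `value` (formerly `data`)
// property name is confirmed by the release notes.
type ValidationResult<T> =
  | { success: true; value: T }
  | { success: false; error: unknown };

function checkPositive(n: number): ValidationResult<number> {
  return n > 0
    ? { success: true, value: n }
    : { success: false, error: new Error("not positive") };
}

const result = checkPositive(42);
if (result.success) {
  // before v0.131.0 this property was called `data`
  console.log(result.value); // 42
}
```

When migrating, searching for `.data` on validation results and renaming those accesses to `.value` covers this breaking change.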