diff --git a/docs/models.md b/docs/models.md
index c1731469d0..da6ce7327a 100644
--- a/docs/models.md
+++ b/docs/models.md
@@ -88,29 +88,16 @@ you've already done this. Otherwise, see the [Getting Started](get-started)
 guide or the individual plugin's documentation and follow the steps there
 before continuing.
 
-### The generate() function {:#generate}
+### The generate() method {:#generate}
 
 In Genkit, the primary interface through which you interact with generative AI
-models is the `generate()` function.
+models is the `generate()` method.
 
 The simplest `generate()` call specifies the model you want to use and a text
 prompt:
 
 ```ts
-import { generate } from '@genkit-ai/ai';
-import { configureGenkit } from '@genkit-ai/core';
-import { gemini15Flash } from '@genkit-ai/googleai';
-
-configureGenkit(/* ... */);
-
-(async () => {
-  const llmResponse = await generate({
-    model: gemini15Flash,
-    prompt: 'Invent a menu item for a pirate themed restaurant.',
-  });
-
-  console.log(await llmResponse.text);
-})();
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/minimal.ts" region_tag="minimal" adjust_indentation="auto" %}
 ```
 
 When you run this brief example, it will print out some debugging information
@@ -134,14 +121,20 @@ adventure.
 
 Run the script again and you'll get a different output.
 
-The preceding code sample specified the model using a model reference exported
-by the model plugin. You can also specify the model using a string identifier:
+The preceding code sample sent the generation request to the default model,
+which you specified when you configured the Genkit instance.
+
+You can also specify a model for a single `generate()` call:
+
+```ts
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex01" adjust_indentation="auto" %}
+```
+
+This example uses a model reference exported by the model plugin. Another option
+is to specify the model using a string identifier:
 
 ```ts
-const llmResponse = await generate({
-  model: 'googleai/gemini-1.5-flash-latest',
-  prompt: 'Invent a menu item for a pirate themed restaurant.',
-});
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex02" adjust_indentation="auto" %}
 ```
 
 A model string identifier looks like `providerid/modelid`, where the provider ID
@@ -151,16 +144,9 @@ plugin-specific string identifier for a specific version of a model.
 
 Some model plugins, such as the Ollama plugin, provide access to potentially
 dozens of different models and therefore do not export individual model
 references. In these cases, you can only specify a model to `generate()` using
-its string identifier:
+its string identifier.
 
-```ts
-const llmResponse = await generate({
-  model: 'ollama/gemma2',
-  prompt: 'Invent a menu item for a pirate themed restaurant.',
-});
-```
-
-All of the preceding examples also illustrate an important point: when you use
+These examples also illustrate an important point: when you use
 `generate()` to make generative AI model calls, changing the model you want to
 use is simply a matter of passing a different value to the model parameter. By
 using `generate()` instead of the native model SDKs, you give yourself the
@@ -171,23 +157,27 @@ So far you have only seen examples of the simplest `generate()` calls. However,
 `generate()` also provides an interface for more advanced interactions with
 generative models, which you will see in the sections that follow.
+
+### System prompts {:#system}
+
+Some models support providing a _system prompt_, which gives the model
+instructions as to how you want it to respond to messages from the user. You can
+use the system prompt to specify a persona you want the model to adopt, the tone
+of its responses, the format of its responses, and so on.
+
+If the model you're using supports system prompts, you can provide one with the
+`system` parameter:
+
+```ts
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex03" adjust_indentation="auto" %}
+```
+
 ### Model parameters {:#model-parameters}
 
 The `generate()` function takes a `config` parameter, through which you can
 specify optional settings that control how the model generates content:
 
 ```ts
-const llmResponse = await generate({
-  prompt: "Suggest an item for the menu of a pirate themed restaurant",
-  model: gemini15Flash,
-  config: {
-    maxOutputTokens: 400,
-    stopSequences: ["<end>", "<fin>"],
-    temperature: 1.2,
-    topP: 0.4,
-    topK: 50,
-  },
-});
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex04" adjust_indentation="auto" %}
 ```
 
 The exact parameters that are supported depend on the individual model and model
@@ -296,24 +286,11 @@ In Genkit, you can request structured output from a model by specifying a
 schema when you call `generate()`:
 
 ```ts
-import { z } from "zod";
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="importZod" adjust_indentation="auto" %}
 ```
 
 ```ts
-const MenuItemSchema = z.object({
-  name: z.string(),
-  description: z.string(),
-  calories: z.number(),
-  allergens: z.array(z.string()),
-});
-
-const llmResponse = await generate({
-  prompt: "Suggest an item for the menu of a pirate themed restaurant",
-  model: gemini15Flash,
-  output: {
-    schema: MenuItemSchema,
-  },
-});
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex05" adjust_indentation="auto" %}
 ```
 
 Model output schemas are specified using the [Zod](https://zod.dev/){:.external}
@@ -334,35 +311,16 @@ scenes:
 
 - Verifies that the output conforms with the schema.
 
 To get structured output from a successful generate call, use the response
-object's `output()` method:
+object's `output` property:
 
 ```ts
-type MenuItem = z.infer<typeof MenuItemSchema>;
-
-const output: MenuItem | null = llmResponse.output;
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex06" adjust_indentation="auto" %}
 ```
 
 #### Handling errors
 
-Note in the prior example that the output method can return `null`. This can
-happen when the model fails to generate output that conforms to the schema. You
-can also detect this condition by catching the `ValidationError`
-exception thrown by generate:
-
-```ts
-import { ValidationError } from "genkit/schema";
-```
-
-```ts
-try {
-  llmResponse = await generate(/* ... */);
-} catch (e) {
-  if (e instanceof ValidationError) {
-    // Output doesn't conform to schema.
-  }
-}
-```
-
+Note in the prior example that the `output` property can be `null`. This can
+happen when the model fails to generate output that conforms to the schema.
 
 The best strategy for dealing with such errors will depend on your exact use
 case, but here are some general hints:
@@ -377,17 +335,12 @@ case, but here are some general hints:
   Zod should try to coerce non-conforming types into the type specified by the
   schema.
   If your schema includes primitive types other than strings, using Zod
   coercion can reduce the number of `generate()` failures you experience. The
-  following version of MenuItemSchema uses type conversion to automatically
+  following version of `MenuItemSchema` uses type coercion to automatically
   correct situations where the model generates calorie information as a string
   instead of a number:
 
   ```ts
-  const MenuItemSchema = z.object({
-    name: z.string(),
-    description: z.string(),
-    calories: z.coerce.number(),
-    allergens: z.array(z.string()),
-  });
+  {% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex07" adjust_indentation="auto" %}
   ```
 
 - **Retry the generate() call**. If the model you've chosen only rarely fails to
@@ -400,80 +353,55 @@ case, but here are some general hints:
 When generating large amounts of text, you can improve the experience for your
 users by presenting the output as it's generated—streaming the output. A
 familiar example of streaming in action can be seen in most LLM chat apps: users
-can read the model's response to their messages as it's being generated, which
+can read the model's response to their message as it's being generated, which
 improves the perceived responsiveness of the application and enhances the
 illusion of chatting with an intelligent counterpart.
 
-In Genkit, you can stream output using the `generateStream()` function. Its
-syntax is similar to the `generate()` function:
-
-```ts
-import { generateStream } from "@genkit-ai/ai";
-import { GenerateResponseChunk } from "@genkit-ai/ai/lib/generate";
-```
+In Genkit, you can stream output using the `generateStream()` method. Its
+syntax is similar to the `generate()` method:
 
 ```ts
-const llmResponseStream = await generateStream({
-  prompt: 'Suggest a complete menu for a pirate themed restaurant',
-  model: gemini15Flash,
-});
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex08" adjust_indentation="auto" %}
 ```
 
-However, this function returns an asynchronous iterable of response chunks.
-Handle each of these chunks as they become available:
+The response object has a `stream` property, which you can use to iterate over
+the streaming output of the request as it's generated:
 
 ```ts
-for await (const responseChunkData of llmResponseStream.stream()) {
-  const responseChunk = responseChunkData as GenerateResponseChunk;
-  console.log(responseChunk.text);
-}
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex09" adjust_indentation="auto" %}
 ```
 
-You can still get the entire response at once:
+You can also get the complete output of the request, as you can with a
+non-streaming request:
 
 ```ts
-const llmResponse = await llmResponseStream.response();
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex10" adjust_indentation="auto" %}
 ```
 
 Streaming also works with structured output:
 
 ```ts
-const MenuSchema = z.object({
-  starters: z.array(MenuItemSchema),
-  mains: z.array(MenuItemSchema),
-  desserts: z.array(MenuItemSchema),
-});
-type Menu = z.infer<typeof MenuSchema>;
-
-const llmResponseStream = await generateStream({
-  prompt: "Suggest a complete menu for a pirate themed restaurant",
-  model: gemini15Flash,
-  output: { schema: MenuSchema },
-});
-
-for await (const responseChunkData of llmResponseStream.stream()) {
-  const responseChunk = responseChunkData as GenerateResponseChunk;
-  // output() returns an object representing the entire output so far
-  const output: Menu | null = responseChunk.output;
-  console.log(output);
-}
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex11" adjust_indentation="auto" %}
 ```
 
-Streaming structured output works a little differently from streaming text. When
-you call the `output()` method of a response chunk, you get an object
-constructed from the accumulation of the chunks that have been produced so far,
-rather than an object representing a single chunk (which might not be valid on
-its own). **Every chunk of structured output in a sense supersedes the chunk
-that came before it**.
+Streaming structured output works a little differently from streaming text: the
+`output` property of a response chunk is an object constructed from the
+accumulation of the chunks that have been produced so far, rather than an object
+representing a single chunk (which might not be valid on its own). **Every chunk
+of structured output in a sense supersedes the chunk that came before it**.
 
-For example, here are the first five outputs from the prior example:
+For example, here's what the first five outputs from the prior example might
+look like:
 
 ```none
 null
+
 { starters: [ {} ] }
+
 { starters: [ { name: "Captain's Treasure Chest", description: 'A' } ] }
+
 {
   starters: [
     {
@@ -483,6 +411,7 @@ null
     }
   ]
 }
+
 {
   starters: [
     {
@@ -509,17 +438,11 @@ completely dependent on the model and its API. For example, the Gemini 1.5
 series of models can accept images, video, and audio as prompts.
 
 To provide a media prompt to a model that supports it, instead of passing a
-simple text prompt to generate, pass an array consisting of a media part and a
+simple text prompt to `generate`, pass an array consisting of a media part and a
 text part:
 
 ```ts
-const llmResponse = await generate({
-  prompt: [
-    { media: { url: 'https://example.com/photo.jpg' } },
-    { text: 'Compose a poem about this image.' },
-  ],
-  model: gemini15Flash,
-});
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex12" adjust_indentation="auto" %}
 ```
 
 In the above example, you specified an image using a publicly-accessible HTTPS
@@ -527,20 +450,11 @@ URL. You can also pass media data directly by encoding it as a data
 URL. For example:
 
 ```ts
-import { readFile } from 'node:fs/promises';
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="importReadFileAsync" adjust_indentation="auto" %}
 ```
 
 ```ts
-const b64Data = await readFile('output.png', { encoding: 'base64url' });
-const dataUrl = `data:image/png;base64,${b64Data}`;
-
-const llmResponse = await generate({
-  prompt: [
-    { media: { url: dataUrl } },
-    { text: 'Compose a poem about this image.' },
-  ],
-  model: gemini15Flash,
-});
+{% includecode github_path="firebase/genkit/js/doc-snippets/src/models/index.ts" region_tag="ex13" adjust_indentation="auto" %}
 ```
 
 All models that support media input support both data URLs and HTTPS URLs. Some
@@ -559,7 +473,7 @@ example, to generate an image using the Imagen2 model through Vertex AI:
 
    example uses the `data-urls` package from `jsdom`:
 
   ```posix-terminal
-  npm i data-urls
+  npm i --save data-urls
   npm i --save-dev @types/data-urls
   ```
 
@@ -568,84 +482,9 @@ image generation model and the media type of output format:
 
   ```ts
-  import { generate } from '@genkit-ai/ai';
-  import { configureGenkit } from '@genkit-ai/core';
-  import { vertexAI, imagen2 } from '@genkit-ai/vertexai';
-  import parseDataURL from 'data-urls';
-
-  import { writeFile } from 'node:fs/promises';
-
-  configureGenkit({
-    plugins: [vertexAI({ location: 'us-central1' })],
-  });
-
-  (async () => {
-    const mediaResponse = await generate({
-      model: imagen2,
-      prompt: 'photo of a meal fit for a pirate',
-      output: { format: 'media' },
-    });
-
-    const media = mediaResponse.media();
-    if (media === null) throw new Error('No media generated.');
-
-    const data = parseDataURL(media.url);
-    if (data === null) throw new Error('Invalid ‘data:’ URL.');
-
-    await writeFile(`output.${data.mimeType.subtype}`, data.body);
-  })();
+  {% includecode github_path="firebase/genkit/js/doc-snippets/src/models/imagen.ts" region_tag="imagen" adjust_indentation="auto" %}
   ```
 
-### Recording message history
-
-Many of your users will have interacted with large language models for the first
-time through chatbots. Although LLMs are capable of much more than simulating
-conversations, it remains a familiar and useful style of interaction. Even when
-your users will not be interacting directly with the model in this way, the
-conversational style of prompting is a powerful way to influence the output
-generated by an AI model.
-
-To generate message history from a model response, call the `.messages`
-method:
-
-```ts
-let response = await generate({
-  model: gemini15Flash,
-  prompt: "How do you say 'dog' in French?",
-});
-let history = response.messages;
-```
-
-You can serialize this history and persist it in a database or session storage.
-Then, pass the history along with the prompt on future calls to `generate()`:
-
-```ts
-response = await generate({
-  model: gemini15Flash,
-  prompt: 'How about in Spanish?',
-  history,
-});
-history = response.messages;
-```
-
-If the model you're using supports the `system` role, you can use the initial
-history to set the system message:
-
-```ts
-import { MessageData } from "@genkit-ai/ai/model";
-```
-
-```ts
-let messages: MessageData[] = [
-  { role: 'system', content: [{ text: 'Talk like a pirate.' }] },
-];
-let response = await generate({
-  model: gemini15Flash,
-  prompt: "How do you say 'dog' in French?",
-  messages,
-});
-```
-
 ### Next steps {:#next-steps}
 
 #### Learn more about Genkit
diff --git a/js/doc-snippets/package.json b/js/doc-snippets/package.json
index ea4036e58d..377da403e7 100644
--- a/js/doc-snippets/package.json
+++ b/js/doc-snippets/package.json
@@ -6,11 +6,14 @@
   "author": "",
   "license": "ISC",
   "dependencies": {
-    "genkit": "workspace:*",
-    "@genkit-ai/googleai": "workspace:*"
+    "@genkit-ai/googleai": "workspace:*",
+    "@genkit-ai/vertexai": "workspace:*",
+    "data-urls": "^5.0.0",
+    "genkit": "workspace:*"
   },
   "devDependencies": {
     "rimraf": "^6.0.1",
-    "typescript": "^5.3.3"
+    "typescript": "^5.3.3",
+    "@types/data-urls": "^3.0.4"
   }
 }
diff --git a/js/doc-snippets/src/models/imagen.ts b/js/doc-snippets/src/models/imagen.ts
new file mode 100644
index 0000000000..c96c9fbb45
--- /dev/null
+++ b/js/doc-snippets/src/models/imagen.ts
@@ -0,0 +1,42 @@
+/**
+ * Copyright 2024 Google LLC
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// [START imagen]
+import { imagen3Fast, vertexAI } from '@genkit-ai/vertexai';
+import parseDataURL from 'data-urls';
+import { genkit } from 'genkit';
+
+import { writeFile } from 'node:fs/promises';
+
+const ai = genkit({
+  plugins: [vertexAI({ location: 'us-central1' })],
+});
+
+(async () => {
+  const { media } = await ai.generate({
+    model: imagen3Fast,
+    prompt: 'photo of a meal fit for a pirate',
+    output: { format: 'media' },
+  });
+
+  if (media === null) throw new Error('No media generated.');
+
+  const data = parseDataURL(media.url);
+  if (data === null) throw new Error('Invalid "data:" URL.');
+
+  await writeFile(`output.${data.mimeType.subtype}`, data.body);
+})();
+// [END imagen]
diff --git a/js/doc-snippets/src/models/index.ts b/js/doc-snippets/src/models/index.ts
new file mode 100644
index 0000000000..5ebf74c7cc
--- /dev/null
+++ b/js/doc-snippets/src/models/index.ts
@@ -0,0 +1,175 @@
+/**
+ * Copyright 2024 Google LLC
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import { gemini15Flash, gemini15Pro, googleAI } from '@genkit-ai/googleai';
+import { genkit } from 'genkit';
+
+const ai = genkit({
+  plugins: [googleAI()],
+  model: gemini15Flash,
+});
+
+async function fn01() {
+  // [START ex01]
+  const { text } = await ai.generate({
+    model: gemini15Pro,
+    prompt: 'Invent a menu item for a pirate themed restaurant.',
+  });
+  // [END ex01]
+}
+
+async function fn02() {
+  // [START ex02]
+  const { text } = await ai.generate({
+    model: 'googleai/gemini-1.5-pro-latest',
+    prompt: 'Invent a menu item for a pirate themed restaurant.',
+  });
+  // [END ex02]
+}
+
+async function fn03() {
+  // [START ex04]
+  const { text } = await ai.generate({
+    prompt: 'Invent a menu item for a pirate themed restaurant.',
+    config: {
+      maxOutputTokens: 400,
+      stopSequences: ['<end>', '<fin>'],
+      temperature: 1.2,
+      topP: 0.4,
+      topK: 50,
+    },
+  });
+  // [END ex04]
+}
+
+// [START importZod]
+import { z } from 'genkit'; // Import Zod, which is re-exported by Genkit.
+// [END importZod]
+
+async function fn04() {
+  // [START ex05]
+  const MenuItemSchema = z.object({
+    name: z.string(),
+    description: z.string(),
+    calories: z.number(),
+    allergens: z.array(z.string()),
+  });
+
+  const { output } = await ai.generate({
+    prompt: 'Invent a menu item for a pirate themed restaurant.',
+    output: { schema: MenuItemSchema },
+  });
+  // [END ex05]
+
+  // [START ex06]
+  if (output) {
+    const { name, description, calories, allergens } = output;
+  }
+  // [END ex06]
+}
+
+function fn05() {
+  // [START ex07]
+  const MenuItemSchema = z.object({
+    name: z.string(),
+    description: z.string(),
+    calories: z.coerce.number(),
+    allergens: z.array(z.string()),
+  });
+  // [END ex07]
+}
+
+async function fn06() {
+  // [START ex08]
+  const { response, stream } = await ai.generateStream(
+    'Suggest a complete menu for a pirate themed restaurant.'
+  );
+  // [END ex08]
+
+  // [START ex09]
+  for await (const chunk of stream) {
+    console.log(chunk.text);
+  }
+  // [END ex09]
+
+  // [START ex10]
+  const completeText = (await response).text;
+  // [END ex10]
+}
+
+async function fn07() {
+  const MenuItemSchema = z.object({
+    name: z.string(),
+    description: z.string(),
+    calories: z.coerce.number(),
+    allergens: z.array(z.string()),
+  });
+
+  // [START ex11]
+  const MenuSchema = z.object({
+    starters: z.array(MenuItemSchema),
+    mains: z.array(MenuItemSchema),
+    desserts: z.array(MenuItemSchema),
+  });
+
+  const { response, stream } = await ai.generateStream({
+    prompt: 'Suggest a complete menu for a pirate themed restaurant.',
+    output: { schema: MenuSchema },
+  });
+
+  for await (const chunk of stream) {
+    // `output` is an object representing the entire output so far.
+    console.log(chunk.output);
+  }
+
+  // Get the completed output.
+  const { output } = await response;
+  // [END ex11]
+}
+
+async function fn08() {
+  // [START ex12]
+  const { text } = await ai.generate([
+    { media: { url: 'https://example.com/photo.jpg' } },
+    { text: 'Compose a poem about this image.' },
+  ]);
+  // [END ex12]
+}
+
+// [START importReadFileAsync]
+import { readFile } from 'node:fs/promises';
+// [END importReadFileAsync]
+
+async function fn09() {
+  // [START ex13]
+  const b64Data = await readFile('photo.jpg', { encoding: 'base64url' });
+  const dataUrl = `data:image/jpeg;base64,${b64Data}`;
+
+  const { text } = await ai.generate([
+    { media: { url: dataUrl } },
+    { text: 'Compose a poem about this image.' },
+  ]);
+  // [END ex13]
+}
+
+async function fn10() {
+  // [START ex03]
+  const { text } = await ai.generate({
+    system: 'You are a food industry marketing consultant.',
+    prompt: 'Invent a menu item for a pirate themed restaurant.',
+  });
+  // [END ex03]
+}
diff --git a/js/doc-snippets/src/models/minimal.ts b/js/doc-snippets/src/models/minimal.ts
new file mode 100644
index 0000000000..cd7190ad79
--- /dev/null
+++ b/js/doc-snippets/src/models/minimal.ts
@@ -0,0 +1,32 @@
+/**
+ * Copyright 2024 Google LLC
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// [START minimal]
+import { gemini15Flash, googleAI } from '@genkit-ai/googleai';
+import { genkit } from 'genkit';
+
+const ai = genkit({
+  plugins: [googleAI()],
+  model: gemini15Flash,
+});
+
+(async () => {
+  const { text } = await ai.generate(
+    'Invent a menu item for a pirate themed restaurant.'
+  );
+  console.log(text);
+})();
+// [END minimal]
diff --git a/js/pnpm-lock.yaml b/js/pnpm-lock.yaml
index 6dade7767a..9aac0204ab 100644
--- a/js/pnpm-lock.yaml
+++ b/js/pnpm-lock.yaml
@@ -139,10 +139,19 @@ importers:
       '@genkit-ai/googleai':
         specifier: workspace:*
         version: link:../plugins/googleai
+      '@genkit-ai/vertexai':
+        specifier: workspace:*
+        version: link:../plugins/vertexai
+      data-urls:
+        specifier: ^5.0.0
+        version: 5.0.0
       genkit:
         specifier: workspace:*
         version: link:../genkit
     devDependencies:
+      '@types/data-urls':
+        specifier: ^3.0.4
+        version: 3.0.4
       rimraf:
         specifier: ^6.0.1
         version: 6.0.1
@@ -3137,6 +3146,9 @@ packages:
   '@types/cors@2.8.17':
     resolution: {integrity: sha512-8CGDvrBj1zgo2qE+oS3pOCyYNqCPryMWY2bGfwA0dcfopWGgxs+78df0Rs3rc9THP4JkOhLsAa+15VdpAqkcUA==}
 
+  '@types/data-urls@3.0.4':
+    resolution: {integrity: sha512-XRY2WVaOFSTKpNMaplqY1unPgAGk/DosOJ+eFrB6LJcFFbRH3nVbwJuGqLmDwdTWWx+V7U614/kmrj1JmCDl2A==}
+
   '@types/estree@1.0.5':
     resolution: {integrity: sha512-/kYRxGDLWzHOB7q+wtSUQlFrtcdUccpfy+X+9iMBpHK8QLLhx2wIPYuS5DYtR9Wa/YlZAbIovy7qVdB1Aq6Lyw==}
 
@@ -3245,6 +3257,15 @@ packages:
   '@types/uuid@9.0.8':
     resolution: {integrity: sha512-jg+97EGIcY9AGHJJRaaPVgetKDsrTgbRjQ5Msgjh/DQKEFl0DtyRr/VCOyD1T2R1MNeWPK/u7JoGhlDZnKBAfA==}
 
+  '@types/webidl-conversions@7.0.3':
+    resolution: {integrity: sha512-CiJJvcRtIgzadHCYXw7dqEnMNRjhGZlYK05Mj9OyktqV8uVT8fD2BFOB7S1uwBE3Kj2Z+4UyPmFw/Ixgw/LAlA==}
+
+  '@types/whatwg-mimetype@3.0.2':
+    resolution: {integrity: sha512-c2AKvDT8ToxLIOUlN51gTiHXflsfIFisS4pO7pDPoKouJCESkhZnEy623gwP9laCy5lnLDAw1vAzu2vM2YLOrA==}
+
+  '@types/whatwg-url@11.0.5':
+    resolution: {integrity: sha512-coYR071JRaHa+xoEvvYqvnIHaVqaYrLPbsufM9BF63HkwI5Lgmy2QR8Q5K/lYDYo5AK82wOvSOS0UsLTpTG7uQ==}
+
   '@types/yargs-parser@21.0.3':
     resolution: {integrity: sha512-I4q9QU9MQv4oEOz4tAHJtNz1cwuLxn2F3xcc2iV5WdqLPpUnj30aUuxt1mAxYTG+oe8CZMV/+6rU4S4gRDzqtQ==}
 
@@ -3678,6 +3699,10 @@ packages:
   data-uri-to-buffer@4.0.1:
     resolution: {integrity: sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A==}
     engines: {node: '>= 12'}
 
+  data-urls@5.0.0:
+    resolution: {integrity: sha512-ZYP5VBHshaDAiVZxjbRVcFJpc+4xGgT0bK3vzy1HLN8jTO975HEbuYzZJcHoQEY5K1a0z8YayJkyVETa08eNTg==}
+    engines: {node: '>=18'}
+
   data-view-buffer@1.0.1:
     resolution: {integrity: sha512-0lht7OugA5x3iJLOWFhWK/5ehONdprk0ISXqVFn/NFrDu+cuc8iADFrGQz5BnRK7LLU3JmkbXSxaqX+/mXYtUA==}
     engines: {node: '>= 0.4'}
@@ -5920,6 +5945,10 @@ packages:
   tr46@1.0.1:
     resolution: {integrity: sha512-dTpowEjclQ7Kgx5SdBkqRzVhERQXov8/l9Ft9dVM9fmg0W0KQSVaXX9T4i6twCPNtYiZM53lpSSUAwJbFPOHxA==}
 
+  tr46@5.0.0:
+    resolution: {integrity: sha512-tk2G5R2KRwBd+ZN0zaEXpmzdKyOYksXwywulIX95MBODjSzMIuQnQ3m8JxgbhnL1LeVo7lqQKsYa1O3Htl7K5g==}
+    engines: {node: '>=18'}
+
   tree-kill@1.2.2:
     resolution: {integrity: sha512-L0Orpi8qGpRG//Nd+H90vFB+3iHnue1zSSGmNOOCh1GLJ7rUKVwV2HvijphGQS2UmhUZewS9VgvxYIdgr+fG1A==}
    hasBin: true
@@ -6144,6 +6173,10 @@ packages:
   webidl-conversions@4.0.2:
     resolution: {integrity: sha512-YQ+BmxuTgd6UXZW3+ICGfyqRyHXVlD5GtQr5+qjiNW7bF0cqrzX500HVXPBOvgXb5YnzDd+h0zqyv61KUD7+Sg==}
 
+  webidl-conversions@7.0.0:
+    resolution: {integrity: sha512-VwddBukDzu71offAQR975unBIGqfKZpM+8ZX6ySk8nYhVoo5CYaZyzt3YBvYtRtO+aoGlqxPg/B87NGVZ/fu6g==}
+    engines: {node: '>=12'}
+
   websocket-driver@0.7.4:
     resolution: {integrity: sha512-b17KeDIQVjvb0ssuSDF2cYXSg2iztliJ4B9WdsuB6J952qCPKmnVq4DyW5motImXHDC1cBT/1UezrJVsKw5zjg==}
     engines: {node: '>=0.8.0'}
@@ -6155,6 +6188,14 @@ packages:
   whatwg-fetch@3.6.20:
     resolution: {integrity: sha512-EqhiFU6daOA8kpjOWTL0olhVOF3i7OrFzSYiGsEMB8GcXS+RrzauAERX65xMeNWVqxA6HXH2m69Z9LaKKdisfg==}
 
+  whatwg-mimetype@4.0.0:
+    resolution: {integrity: sha512-QaKxh0eNIi2mE9p2vEdzfagOKHCcj1pJ56EEHGQOVxp8r9/iszLUUV7v89x9O1p/T+NlTM5W7jW6+cz4Fq1YVg==}
+    engines: {node: '>=18'}
+
+  whatwg-url@14.0.0:
+    resolution: {integrity: sha512-1lfMEm2IEr7RIV+f4lUNPOqfFL+pO+Xw3fJSqmjX9AbXcXcYOkCe1P6+9VBZB6n94af16NfZf+sSk0JCBZC9aw==}
+    engines: {node: '>=18'}
+
   whatwg-url@5.0.0:
     resolution: {integrity: sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw==}
 
@@ -8028,6 +8069,11 @@ snapshots:
     dependencies:
       '@types/node': 20.16.9
 
+  '@types/data-urls@3.0.4':
+    dependencies:
+      '@types/whatwg-mimetype': 3.0.2
+      '@types/whatwg-url': 11.0.5
+
   '@types/estree@1.0.5': {}
 
   '@types/express-serve-static-core@4.17.43':
@@ -8160,6 +8206,14 @@ snapshots:
   '@types/uuid@9.0.8': {}
 
+  '@types/webidl-conversions@7.0.3': {}
+
+  '@types/whatwg-mimetype@3.0.2': {}
+
+  '@types/whatwg-url@11.0.5':
+    dependencies:
+      '@types/webidl-conversions': 7.0.3
+
   '@types/yargs-parser@21.0.3': {}
 
   '@types/yargs@17.0.33':
     dependencies:
       '@types/yargs-parser': 21.0.3
@@ -8657,6 +8711,11 @@ snapshots:
   data-uri-to-buffer@4.0.1: {}
 
+  data-urls@5.0.0:
+    dependencies:
+      whatwg-mimetype: 4.0.0
+      whatwg-url: 14.0.0
+
   data-view-buffer@1.0.1:
     dependencies:
       call-bind: 1.0.7
@@ -11402,6 +11461,10 @@ snapshots:
   tr46@1.0.1:
     dependencies:
       punycode: 2.3.1
 
+  tr46@5.0.0:
+    dependencies:
+      punycode: 2.3.1
+
   tree-kill@1.2.2: {}
 
   triple-beam@1.4.1: {}
@@ -11615,6 +11678,8 @@ snapshots:
   webidl-conversions@4.0.2: {}
 
+  webidl-conversions@7.0.0: {}
+
   websocket-driver@0.7.4:
     dependencies:
       http-parser-js: 0.5.8
@@ -11625,6 +11690,13 @@ snapshots:
   whatwg-fetch@3.6.20: {}
 
+  whatwg-mimetype@4.0.0: {}
+
+  whatwg-url@14.0.0:
+    dependencies:
+      tr46: 5.0.0
+      webidl-conversions: 7.0.0
+
   whatwg-url@5.0.0:
     dependencies:
       tr46: 0.0.3