From 10f4fd0a8334987576d96154b8319dd2fca8beed Mon Sep 17 00:00:00 2001
From: Genevieve Warren <24882762+gewarren@users.noreply.github.com>
Date: Thu, 20 Jun 2024 15:47:03 -0700
Subject: [PATCH] capitalize rag

---
 docs/ai/conceptual/rag.md                            | 2 +-
 docs/ai/conceptual/understanding-openai-functions.md | 4 ++--
 docs/ai/conceptual/understanding-tokens.md           | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/ai/conceptual/rag.md b/docs/ai/conceptual/rag.md
index 5bbe33eb62f14..6f036e55c83c6 100644
--- a/docs/ai/conceptual/rag.md
+++ b/docs/ai/conceptual/rag.md
@@ -30,7 +30,7 @@ To perform RAG, you must process each data source that you want to use for retri
 1. Store the converted data in a location that allows efficient access. Additionally, it's important to store relevant metadata for citations or references when the LLM provides responses.
 1. Feed your converted data to LLMs in prompts.
 
-:::image type="content" source="../media/rag/architecture.png" alt-text="Screenshot of a diagram of the technical overview of an LLM walking through rag steps.":::
+:::image type="content" source="../media/rag/architecture.png" alt-text="Screenshot of a diagram of the technical overview of an LLM walking through RAG steps.":::
 
 - **Source data**: This is where your data exists. It could be a file/folder on your machine, a file in cloud storage, an Azure Machine Learning data asset, a Git repository, or an SQL database.
 - **Data chunking**: The data in your source needs to be converted to plain text. For example, Word documents or PDFs need to be cracked open and converted to text. The text is then chunked into smaller pieces.
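The chunking and metadata steps described in the rag.md context above can be sketched in a few lines. This is a minimal, illustrative Python sketch (not part of the documented product); the fixed character sizes and the `source`/`offset` metadata fields are assumptions chosen to show the pattern of keeping citation metadata alongside each chunk.

```python
def chunk_text(source: str, text: str, chunk_size: int = 200, overlap: int = 40) -> list[dict]:
    """Split plain text into overlapping character chunks, keeping the
    source name and character offset as citation metadata for each chunk.

    The overlap means a sentence fragment that straddles a chunk boundary
    appears in both neighboring chunks, which helps retrieval quality.
    """
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:  # skip the empty tail slice at the very end
            chunks.append({"source": source, "offset": start, "text": piece})
    return chunks

# Each chunk carries enough metadata to cite "doc.txt, offset N" in a response.
chunks = chunk_text("doc.txt", "a" * 500)
```

In a real pipeline, these chunk dictionaries would then be embedded and stored in a vector index, and the metadata returned with search hits so the LLM's answer can cite its sources.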
diff --git a/docs/ai/conceptual/understanding-openai-functions.md b/docs/ai/conceptual/understanding-openai-functions.md
index 46891f909392e..0f125eac1a7ec 100644
--- a/docs/ai/conceptual/understanding-openai-functions.md
+++ b/docs/ai/conceptual/understanding-openai-functions.md
@@ -2,7 +2,7 @@
 title: "Understanding OpenAI Function Calling"
 description: "Understand how function calling enables you to integrate external tools with your OpenAI application."
 author: haywoodsloan
-ms.topic: concept-article
+ms.topic: concept-article
 ms.date: 05/14/2024
 
 #customer intent: As a .NET developer, I want to understand OpenAI function calling so that I can integrate external tools with AI completions in my .NET project.
@@ -11,7 +11,7 @@ ms.date: 05/14/2024
 
 # Understand OpenAI function calling
 
-Function calling is an OpenAI model feature that lets you describe functions and their arguments in prompts using JSON. Instead of invoking the function itself, the model returns a JSON output describing what functions should be called and the arguments to use.
+*Function calling* is an OpenAI model feature that lets you describe functions and their arguments in prompts using JSON. Instead of invoking the function itself, the model returns a JSON output describing what functions should be called and the arguments to use.
 
 Function calling simplifies how you connect external tools to your AI model. First, you specify each tool's functions to the model. Then the model decides which functions should be called, based on the prompt question. The model uses the function call results to build a more accurate and consistent response.
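The dispatch pattern described in the understanding-openai-functions.md context above — the model returns JSON naming a function and its arguments, and your application invokes it — can be sketched as follows. This is a minimal Python sketch under stated assumptions: `get_current_weather`, the `TOOLS` registry, and the exact JSON shape are hypothetical; the real OpenAI API wraps function-call output in its own response structure.

```python
import json

# Hypothetical tool the model is allowed to "call"; in practice you would
# describe its name and parameters to the model in the request.
def get_current_weather(location: str, unit: str = "celsius") -> dict:
    return {"location": location, "temperature": 22, "unit": unit}

TOOLS = {"get_current_weather": get_current_weather}

def dispatch(model_output: str) -> dict:
    """Parse the model's JSON function-call output and invoke the matching tool.

    The model never runs the function itself; it only tells us which
    function to call and with which arguments.
    """
    call = json.loads(model_output)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

# Simulated model output (assumed shape), dispatched to the local function.
result = dispatch('{"name": "get_current_weather", "arguments": {"location": "Seattle"}}')
```

The function's return value would then be sent back to the model in a follow-up message so it can fold the result into its final response.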
diff --git a/docs/ai/conceptual/understanding-tokens.md b/docs/ai/conceptual/understanding-tokens.md
index ab40a970ef2d2..5bd95ed09b88c 100644
--- a/docs/ai/conceptual/understanding-tokens.md
+++ b/docs/ai/conceptual/understanding-tokens.md
@@ -9,7 +9,7 @@ ms.date: 05/14/2024
 ---
 
-# Understanding tokens
+# Understand tokens
 
 Tokens are words, character sets, or combinations of words and punctuation that large language models (LLMs) decompose text into. Tokenization is the first step in training. The LLM analyzes the semantic relationships between tokens, such as how commonly they're used together or whether they're used in similar contexts. After training, the LLM uses those patterns and relationships to generate a sequence of output tokens based on the input sequence.
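The decomposition described in the understanding-tokens.md context above can be illustrated with a toy tokenizer. This Python sketch is purely illustrative: it splits on word and punctuation boundaries with a regular expression, whereas production LLM tokenizers use learned subword schemes (such as byte pair encoding) that split rare words into smaller pieces.

```python
import re

def naive_tokenize(text: str) -> list[str]:
    """Toy tokenizer: runs of word characters, or single punctuation marks.

    Real LLM tokenizers are subword-based, so one English word may map to
    one token or several; this sketch only shows the general idea.
    """
    return re.findall(r"\w+|[^\w\s]", text)

tokens = naive_tokenize("Tokens aren't just words.")
# → ['Tokens', 'aren', "'", 't', 'just', 'words', '.']
```

Note how even this toy scheme splits `aren't` into three tokens, which hints at why token counts rarely equal word counts.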