docs/module_guides/evaluating/root.md (2 additions, 2 deletions)
@@ -6,7 +6,7 @@ Evaluation and benchmarking are crucial concepts in LLM development. To improve
LlamaIndex offers key modules to measure the quality of generated results. We also offer key modules to measure retrieval quality.
- - **Response Evaluation**: Does the response match the retrieved context? Does it also match the query? Does it match the reference answer or guidelnes?
+ - **Response Evaluation**: Does the response match the retrieved context? Does it also match the query? Does it match the reference answer or guidelines?
- **Retrieval Evaluation**: Are the retrieved sources relevant to the query?
This section describes how the evaluation components within LlamaIndex work.
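For context while reviewing, here is a minimal sketch of the response-evaluation modules these bullets refer to, assuming a recent llama-index release (older versions expose the same evaluators under `llama_index.evaluation`); the `./data` folder and query are illustrative only:

```python
# Minimal sketch of response evaluation (faithfulness + relevancy).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.evaluation import FaithfulnessEvaluator, RelevancyEvaluator
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4")  # the judge LLM

# Build a small index and get a response to evaluate.
documents = SimpleDirectoryReader("./data").load_data()
query_engine = VectorStoreIndex.from_documents(documents).as_query_engine()
query = "What did the author do growing up?"
response = query_engine.query(query)

# Does the response match the retrieved context?
faithfulness = FaithfulnessEvaluator(llm=llm).evaluate_response(response=response)
# Does the response also match the query?
relevancy = RelevancyEvaluator(llm=llm).evaluate_response(query=query, response=response)

print(faithfulness.passing, relevancy.passing)
```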
@@ -48,7 +48,7 @@ The core retrieval evaluation steps revolve around the following:
We also integrate with community evaluation tools.
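Similarly, a minimal sketch of retrieval evaluation with the built-in metrics, again assuming a recent release; the expected node ids are hypothetical placeholders, since in practice you would build a (query, expected_ids) dataset first:

```python
# Minimal sketch of retrieval evaluation with hit-rate and MRR.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.evaluation import RetrieverEvaluator

documents = SimpleDirectoryReader("./data").load_data()
retriever = VectorStoreIndex.from_documents(documents).as_retriever(similarity_top_k=2)

evaluator = RetrieverEvaluator.from_metric_names(
    ["hit_rate", "mrr"], retriever=retriever
)

# `expected_ids` are the node ids you consider relevant for the query.
result = evaluator.evaluate(
    query="What did the author do growing up?",
    expected_ids=["node_id_1", "node_id_2"],  # hypothetical ids for illustration
)
print(result.metric_vals_dict)
```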
docs/understanding/understanding.md (3 additions, 3 deletions)
@@ -10,11 +10,11 @@ If you've already read our [high-level concepts](/getting_started/concepts.md) p
There are a series of key steps involved in building any LLM-powered application, whether it's answering questions about your data, creating a chatbot, or an autonomous agent. Throughout our documentation, you'll notice sections are arranged roughly in the order you'll perform these steps while building your app. You'll learn about:
- - **[Using LLMs](/understanding/using_llms/using_llms.md)**: whether it's OpenAI or any number of hosted LLMs or a locally-run model of your own, LLMs are used at every step of the way, from indexing and storing to querying and parsing your data. LlamaIndex comes with a huge number of reliable, tested prompts and we'll also show you how to customize your own.
+ - **[Using LLMs](/docs/understanding/using_llms/using_llms.md)**: whether it's OpenAI or any number of hosted LLMs or a locally-run model of your own, LLMs are used at every step of the way, from indexing and storing to querying and parsing your data. LlamaIndex comes with a huge number of reliable, tested prompts and we'll also show you how to customize your own.
- - **[Loading](/understanding/loading/loading.md)**: getting your data from wherever it lives, whether that's unstructured text, PDFs, databases, or APIs to other applications. LlamaIndex has hundreds of connectors to every data source over at [LlamaHub](https://llamahub.ai/).
+ - **[Loading](/docs/understanding/loading/loading.md)**: getting your data from wherever it lives, whether that's unstructured text, PDFs, databases, or APIs to other applications. LlamaIndex has hundreds of connectors to every data source over at [LlamaHub](https://llamahub.ai/).
- - **[Indexing](/understanding/indexing/indexing.md)**: once you've got your data there are an infinite number of ways to structure access to that data to ensure your applications is always working with the most relevant data. LlamaIndex has a huge number of these strategies built-in and can help you select the best ones.
+ - **[Indexing](/docs/understanding/indexing/indexing.md)**: once you've got your data there are an infinite number of ways to structure access to that data to ensure your applications is always working with the most relevant data. LlamaIndex has a huge number of these strategies built-in and can help you select the best ones.
- **[Storing](/understanding/storing/storing.md)**: you will probably find it more efficient to store your data in indexed form, or pre-processed summaries provided by an LLM, often in a specialized database known as a `Vector Store` (see below). You can also store your indexes, metadata and more.
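A minimal end-to-end sketch of the loading, indexing, storing, and querying steps listed in this hunk, assuming a recent llama-index release and an OpenAI API key in the environment; the `./data` and `./storage` paths are illustrative:

```python
# Minimal sketch of the loading -> indexing -> storing -> querying flow.
from llama_index.core import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

# Loading: pull documents from wherever they live (here, a local folder).
documents = SimpleDirectoryReader("./data").load_data()

# Indexing: structure the data so the most relevant chunks can be retrieved.
index = VectorStoreIndex.from_documents(documents)

# Storing: persist the index so it doesn't have to be rebuilt every run.
index.storage_context.persist(persist_dir="./storage")
index = load_index_from_storage(StorageContext.from_defaults(persist_dir="./storage"))

# Querying: an LLM answers using the retrieved context.
response = index.as_query_engine().query("Summarize the documents.")
print(response)
```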
docs/use_cases/extraction.md (1 addition, 1 deletion)
@@ -2,7 +2,7 @@
LLMs are capable of ingesting large amounts of unstructured data and returning it in structured formats, and LlamaIndex is set up to make this easy.
- Using LlamaIndex, you can get an LLM to read natural language and identify semantically important details such as names, dates, address and figures, and return them in a consistent structured format regardless of the source format.
+ Using LlamaIndex, you can get an LLM to read natural language and identify semantically important details such as names, dates, addresses, and figures, and return them in a consistent structured format regardless of the source format.
This can be especially useful when you have unstructured source material like chat logs and conversation transcripts.
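A minimal sketch of what such an extraction can look like with a Pydantic schema; `OpenAIPydanticProgram` lives under `llama_index.program.openai` in recent releases (older versions: `llama_index.program`), and the invoice schema, prompt, and sample text are illustrative assumptions:

```python
# Minimal sketch of structured extraction with a Pydantic schema.
from pydantic import BaseModel
from llama_index.llms.openai import OpenAI
from llama_index.program.openai import OpenAIPydanticProgram


class Invoice(BaseModel):
    """Fields we want pulled out of the raw text."""
    customer_name: str
    invoice_date: str
    address: str
    total_amount: float


program = OpenAIPydanticProgram.from_defaults(
    output_cls=Invoice,
    llm=OpenAI(model="gpt-4"),
    prompt_template_str="Extract the invoice details from the following text:\n{text}",
)

invoice = program(text="Billed to Jane Doe, 42 Main St, on 2024-01-05 for $199.00.")
print(invoice.customer_name, invoice.total_amount)
```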
docs/use_cases/multimodal.md (4 additions, 4 deletions)
@@ -28,7 +28,7 @@ Image to Image Retrieval </examples/multi_modal/image_to_image_retrieval.ipynb>
### Retrieval-Augmented Image Captioning
- Oftentimes understanding an image requires looking up information from a knowledge base. A flow here is retrieval-augmented image captioning - first caption the image with a multi-modal model, then refine the caption by retrieving from a text corpus.
+ Oftentimes understanding an image requires looking up information from a knowledge base. A flow here is retrieval-augmented image captioning - first caption the image with a multi-modal model, then refine the caption by retrieving it from a text corpus.
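A minimal sketch of that captioning flow, assuming a recent llama-index release; the model name, image path, and corpus folder are illustrative:

```python
# Minimal sketch: caption an image with a multi-modal model, then refine the
# caption against a text index.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.multi_modal_llms.openai import OpenAIMultiModal

# Step 1: draft a caption with a multi-modal LLM.
image_docs = SimpleDirectoryReader(input_files=["./images/photo.jpg"]).load_data()
mm_llm = OpenAIMultiModal(model="gpt-4-vision-preview", max_new_tokens=300)
draft = mm_llm.complete(prompt="Describe this image.", image_documents=image_docs)

# Step 2: refine the caption by retrieving related facts from a text corpus.
text_index = VectorStoreIndex.from_documents(
    SimpleDirectoryReader("./text_corpus").load_data()
)
refined = text_index.as_query_engine().query(
    f"Refine this image caption using relevant facts from the corpus: {draft.text}"
)
print(refined)
```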
### Pydantic Program for Generating Structured Output for Multi-Modal LLMs
- You can generate `structured` output with new OpenAI GPT4V via LlamaIndex. The user just needs to specify a Pydantic object to define the structure of output.
+ You can generate a `structured` output with the new OpenAI GPT4V via LlamaIndex. The user just needs to specify a Pydantic object to define the structure of the output.
Check out the guide below:
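Ahead of the guide itself, a minimal sketch of the pattern, loosely following the linked notebook; import paths, the `Restaurant` schema, and the image path are assumptions for illustration:

```python
# Minimal sketch of structured (Pydantic) output from a multi-modal LLM.
from pydantic import BaseModel
from llama_index.core import SimpleDirectoryReader
from llama_index.core.output_parsers import PydanticOutputParser
from llama_index.core.program import MultiModalLLMCompletionProgram
from llama_index.multi_modal_llms.openai import OpenAIMultiModal


class Restaurant(BaseModel):
    """Structure we want the model to fill in from a menu image."""
    name: str
    city: str
    cuisine: str


image_documents = SimpleDirectoryReader(input_files=["./images/menu.jpg"]).load_data()

program = MultiModalLLMCompletionProgram.from_defaults(
    output_parser=PydanticOutputParser(output_cls=Restaurant),
    image_documents=image_documents,
    prompt_template_str="Describe the restaurant shown in the image as structured data.",
    multi_modal_llm=OpenAIMultiModal(model="gpt-4-vision-preview", max_new_tokens=500),
)

restaurant = program()  # returns a Restaurant instance
print(restaurant)
```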
@@ -107,7 +107,7 @@ maxdepth: 1
/examples/multi_modal/ChromaMultiModalDemo.ipynb
```
- ### Multi-Modal RAG on PDF's with Tables using Microsoft `Table Transformer`
+ ### Multi-Modal RAG on PDFs with Tables using Microsoft `Table Transformer`
One common challenge with RAG (Retrieval-Augmented Generation) involves handling PDFs that contain tables. Parsing tables in various formats can be quite complex.
@@ -120,7 +120,7 @@ The experiment is divided into the following parts and we compared those 4 optio
1. Retrieving relevant images (PDF pages) and sending them to GPT4-V to respond to queries.
2. Regarding every PDF page as an image, let GPT4-V do the image reasoning for each page. Build Text Vector Store index for the image reasonings. Query the answer against the `Image Reasoning Vector Store`.
3. Using Table Transformer to crop the table information from the retrieved images and then sending these cropped images to GPT4-V for query responses.
- 4. Applying OCR on cropped table images and send the data to GPT4/ GPT-3.5 to answer the query.
+ 4. Applying OCR on cropped table images and sending the data to GPT4/ GPT-3.5 to answer the query.
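A minimal sketch of option 4 only, using `pytesseract` for OCR (an assumption; the referenced experiment may use a different OCR tool) and an illustrative table image and question:

```python
# Minimal sketch: OCR a cropped table image, then ask a text-only LLM over the
# OCR output.
import pytesseract
from PIL import Image
from llama_index.llms.openai import OpenAI

# Illustrative path to a table image cropped by Table Transformer.
table_text = pytesseract.image_to_string(Image.open("./cropped_tables/table_0.png"))

llm = OpenAI(model="gpt-3.5-turbo")
answer = llm.complete(
    f"Answer the question using only this table:\n{table_text}\n\n"
    "Question: What was total revenue in 2022?"  # illustrative question
)
print(answer.text)
```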