
Commit 4269179

Fixing some more links and documentations (run-llama#9633)
1 parent: de85bab

File tree

5 files changed (+11, -11 lines):

- docs/module_guides/evaluating/root.md
- docs/understanding/understanding.md
- docs/use_cases/extraction.md
- docs/use_cases/multimodal.md
- scripts/publish_gpt_index_package.sh

docs/module_guides/evaluating/root.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ Evaluation and benchmarking are crucial concepts in LLM development. To improve

 LlamaIndex offers key modules to measure the quality of generated results. We also offer key modules to measure retrieval quality.

-- **Response Evaluation**: Does the response match the retrieved context? Does it also match the query? Does it match the reference answer or guidelnes?
+- **Response Evaluation**: Does the response match the retrieved context? Does it also match the query? Does it match the reference answer or guidelines?
 - **Retrieval Evaluation**: Are the retrieved sources relevant to the query?

 This section describes how the evaluation components within LlamaIndex work.
@@ -48,7 +48,7 @@ The core retrieval evaluation steps revolve around the following:

 We also integrate with community evaluation tools.

-- [DeepEval](../../../community/integrations/deepeval.md)
+- [DeepEval](/docs/community/integrations/deepeval.md)
 - [Ragas](https://github.com/explodinggradients/ragas/blob/main/docs/howtos/integrations/llamaindex.ipynb)

 ## Usage Pattern
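To make the two evaluation bullets concrete: a minimal sketch of response evaluation, assuming the v0.9-era `llama_index` API (`FaithfulnessEvaluator` with its default OpenAI-backed settings); the `data/` folder and query are illustrative.

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.evaluation import FaithfulnessEvaluator

# Build a query engine over some local documents (path is illustrative).
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Response evaluation: does the response match the retrieved context?
response = query_engine.query("What does the report say about Q3 revenue?")
evaluator = FaithfulnessEvaluator()
result = evaluator.evaluate_response(response=response)
print(result.passing)  # True when the answer is grounded in the retrieved context
```

Retrieval evaluation follows the same pattern via `RetrieverEvaluator` in the same module.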

docs/understanding/understanding.md

Lines changed: 3 additions & 3 deletions
@@ -10,11 +10,11 @@ If you've already read our [high-level concepts](/getting_started/concepts.md) p

 There are a series of key steps involved in building any LLM-powered application, whether it's answering questions about your data, creating a chatbot, or an autonomous agent. Throughout our documentation, you'll notice sections are arranged roughly in the order you'll perform these steps while building your app. You'll learn about:

-- **[Using LLMs](/understanding/using_llms/using_llms.md)**: whether it's OpenAI or any number of hosted LLMs or a locally-run model of your own, LLMs are used at every step of the way, from indexing and storing to querying and parsing your data. LlamaIndex comes with a huge number of reliable, tested prompts and we'll also show you how to customize your own.
+- **[Using LLMs](/docs/understanding/using_llms/using_llms.md)**: whether it's OpenAI or any number of hosted LLMs or a locally-run model of your own, LLMs are used at every step of the way, from indexing and storing to querying and parsing your data. LlamaIndex comes with a huge number of reliable, tested prompts and we'll also show you how to customize your own.

-- **[Loading](/understanding/loading/loading.md)**: getting your data from wherever it lives, whether that's unstructured text, PDFs, databases, or APIs to other applications. LlamaIndex has hundreds of connectors to every data source over at [LlamaHub](https://llamahub.ai/).
+- **[Loading](/docs/understanding/loading/loading.md)**: getting your data from wherever it lives, whether that's unstructured text, PDFs, databases, or APIs to other applications. LlamaIndex has hundreds of connectors to every data source over at [LlamaHub](https://llamahub.ai/).

-- **[Indexing](/understanding/indexing/indexing.md)**: once you've got your data there are an infinite number of ways to structure access to that data to ensure your applications is always working with the most relevant data. LlamaIndex has a huge number of these strategies built-in and can help you select the best ones.
+- **[Indexing](/docs/understanding/indexing/indexing.md)**: once you've got your data there are an infinite number of ways to structure access to that data to ensure your applications is always working with the most relevant data. LlamaIndex has a huge number of these strategies built-in and can help you select the best ones.

 - **[Storing](/understanding/storing/storing.md)**: you will probably find it more efficient to store your data in indexed form, or pre-processed summaries provided by an LLM, often in a specialized database known as a `Vector Store` (see below). You can also store your indexes, metadata and more.
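The steps these bullets describe (loading, indexing, storing) chain into a single pipeline. A rough sketch, assuming the v0.9-era top-level `llama_index` imports; the `data/` and `./storage` paths are illustrative.

```python
from llama_index import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

# Loading: pull documents from wherever they live (here, a local folder).
documents = SimpleDirectoryReader("data").load_data()

# Indexing: structure access to the data; LLM/embedding calls happen under the hood.
index = VectorStoreIndex.from_documents(documents)

# Storing: persist the index so it is not rebuilt on every run.
index.storage_context.persist(persist_dir="./storage")

# Later runs reload the stored index instead of re-indexing.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
```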

docs/use_cases/extraction.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@

 LLMs are capable of ingesting large amounts of unstructured data and returning it in structured formats, and LlamaIndex is set up to make this easy.

-Using LlamaIndex, you can get an LLM to read natural language and identify semantically important details such as names, dates, address and figures, and return them in a consistent structured format regardless of the source format.
+Using LlamaIndex, you can get an LLM to read natural language and identify semantically important details such as names, dates, addresses, and figures, and return them in a consistent structured format regardless of the source format.

 This can be especially useful when you have unstructured source material like chat logs and conversation transcripts.
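As a hedged sketch of that extraction flow, assuming the v0.9-era `OpenAIPydanticProgram`; the `Invoice` schema, prompt, and sample text are invented for illustration.

```python
from pydantic import BaseModel
from llama_index.program import OpenAIPydanticProgram

class Invoice(BaseModel):
    """Structured details to pull out of free-form invoice text."""
    name: str
    date: str
    address: str
    total: float

# The LLM reads natural language and fills the Pydantic schema.
program = OpenAIPydanticProgram.from_defaults(
    output_cls=Invoice,
    prompt_template_str="Extract the invoice details from this text:\n{text}",
)
invoice = program(text="Billed to Jane Doe, 12 Oak St, on 2023-12-20 for $41.50.")
print(invoice.total)  # 41.5
```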

docs/use_cases/multimodal.md

Lines changed: 4 additions & 4 deletions
@@ -28,7 +28,7 @@ Image to Image Retrieval </examples/multi_modal/image_to_image_retrieval.ipynb>

 ### Retrieval-Augmented Image Captioning

-Oftentimes understanding an image requires looking up information from a knowledge base. A flow here is retrieval-augmented image captioning - first caption the image with a multi-modal model, then refine the caption by retrieving from a text corpus.
+Oftentimes understanding an image requires looking up information from a knowledge base. A flow here is retrieval-augmented image captioning - first caption the image with a multi-modal model, then refine the caption by retrieving it from a text corpus.

 Check out our guides below:
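The two-step flow in this hunk, as a hedged sketch assuming the v0.9-era `OpenAIMultiModal` and standard index APIs; folder paths and prompts are illustrative.

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.multi_modal_llms import OpenAIMultiModal

# Step 1: caption the image with a multi-modal model.
mm_llm = OpenAIMultiModal(model="gpt-4-vision-preview")
image_docs = SimpleDirectoryReader("./images").load_data()
caption = mm_llm.complete(prompt="Describe this image.", image_documents=image_docs)

# Step 2: refine the caption by retrieving from a text corpus.
corpus = SimpleDirectoryReader("./corpus").load_data()
query_engine = VectorStoreIndex.from_documents(corpus).as_query_engine()
refined = query_engine.query(f"Refine this caption with facts from the corpus: {caption}")
print(refined)
```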

@@ -61,7 +61,7 @@ GPT4-V: </examples/multi_modal/openai_multi_modal.ipynb>

 ### Pydantic Program for Generating Structured Output for Multi-Modal LLMs

-You can generate `structured` output with new OpenAI GPT4V via LlamaIndex. The user just needs to specify a Pydantic object to define the structure of output.
+You can generate a `structured` output with the new OpenAI GPT4V via LlamaIndex. The user just needs to specify a Pydantic object to define the structure of the output.

 Check out the guide below:
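The Pydantic-program flow in this hunk, sketched under the assumption of the v0.9-era `MultiModalLLMCompletionProgram`; the `Restaurant` schema, image folder, and prompt are illustrative.

```python
from pydantic import BaseModel
from llama_index import SimpleDirectoryReader
from llama_index.multi_modal_llms import OpenAIMultiModal
from llama_index.output_parsers import PydanticOutputParser
from llama_index.program import MultiModalLLMCompletionProgram

class Restaurant(BaseModel):
    """Structured fields to pull out of a restaurant photo."""
    name: str
    city: str
    cuisine: str

# GPT4-V reads the image(s) and fills the Pydantic schema.
image_documents = SimpleDirectoryReader("./images").load_data()
program = MultiModalLLMCompletionProgram.from_defaults(
    output_parser=PydanticOutputParser(output_cls=Restaurant),
    image_documents=image_documents,
    prompt_template_str="Describe the restaurant in the image using the given schema.",
    multi_modal_llm=OpenAIMultiModal(model="gpt-4-vision-preview"),
)
restaurant = program()  # a Restaurant instance
```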

@@ -107,7 +107,7 @@ maxdepth: 1
 /examples/multi_modal/ChromaMultiModalDemo.ipynb
 ```

-### Multi-Modal RAG on PDF's with Tables using Microsoft `Table Transformer`
+### Multi-Modal RAG on PDFs with Tables using Microsoft `Table Transformer`

 One common challenge with RAG (Retrieval-Augmented Generation) involves handling PDFs that contain tables. Parsing tables in various formats can be quite complex.

@@ -120,7 +120,7 @@ The experiment is divided into the following parts and we compared those 4 optio
 1. Retrieving relevant images (PDF pages) and sending them to GPT4-V to respond to queries.
 2. Regarding every PDF page as an image, let GPT4-V do the image reasoning for each page. Build Text Vector Store index for the image reasonings. Query the answer against the `Image Reasoning Vector Store`.
 3. Using Table Transformer to crop the table information from the retrieved images and then sending these cropped images to GPT4-V for query responses.
-4. Applying OCR on cropped table images and send the data to GPT4/ GPT-3.5 to answer the query.
+4. Applying OCR on cropped table images and sending the data to GPT4/ GPT-3.5 to answer the query.

 ```{toctree}
 ---

scripts/publish_gpt_index_package.sh

Lines changed: 1 addition & 1 deletion
@@ -10,4 +10,4 @@ twine upload dist/*
 # twine upload -r testpypi dist/*

 # cleanup
-rm -rf build dist *.egg-info
+rm -rf build dist *.egg-info/
