Moving showcases to templates (#6769)
GitOrigin-RevId: d8592754f1c536d5a7d7a9907e8aa0bb6b5efcce
olruas authored and Manul from Pathway committed Jun 20, 2024
1 parent fac310f commit ae607ef
Showing 5 changed files with 8 additions and 8 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -51,7 +51,7 @@ Using incremental vector search, only the most relevant context is automatically

![Automated real-time knowledge mining and alerting](examples/pipelines/drive_alert/drive_alert_demo.gif)

-For the code, see the [`drive_alert`](#examples) app example. You can find more details in a [blog post on alerting with LLM-App](https://pathway.com/developers/showcases/llm-alert-pathway).
+For the code, see the [`drive_alert`](#examples) app example. You can find more details in a [blog post on alerting with LLM-App](https://pathway.com/developers/templates/llm-alert-pathway).


## How it works
@@ -99,7 +99,7 @@ Pick one that is closest to your needs.
| [`local`](examples/pipelines/local/) | This example runs the application using Huggingface Transformers, which eliminates the need for the data to leave the machine. It provides a convenient way to use state-of-the-art NLP models locally. |
| [`unstructured-to-sql`](examples/pipelines/unstructured_to_sql_on_the_fly/) | This example extracts the data from unstructured files and stores it into a PostgreSQL table. It also transforms the user query into an SQL query which is then executed on the PostgreSQL table. |
| [`alert`](examples/pipelines/alert/) | Ask questions, get alerted whenever response changes. Pathway is always listening for changes, whenever new relevant information is added to the stream (local files in this example), LLM decides if there is a substantial difference in response and notifies the user with a Slack message. |
-| [`drive-alert`](examples/pipelines/drive_alert/) | The [`alert`](examples/pipelines/alert/) example on steroids. Whenever relevant information on Google Docs is modified or added, get real-time alerts via Slack. See the [`tutorial`](https://pathway.com/developers/showcases/llm-alert-pathway). |
+| [`drive-alert`](examples/pipelines/drive_alert/) | The [`alert`](examples/pipelines/alert/) example on steroids. Whenever relevant information on Google Docs is modified or added, get real-time alerts via Slack. See the [`tutorial`](https://pathway.com/developers/templates/llm-alert-pathway). |
| [`contextful-geometric`](examples/pipelines/contextful_geometric/) | The [`contextful`](examples/pipelines/contextful/) example, which optimises use of tokens in queries. It asks the same questions with increasing number of documents given as a context in the question, until ChatGPT finds an answer. |


@@ -132,7 +132,7 @@ Each [example](examples/pipelines/) contains a README.md with instructions on ho

### Bonus: Build your own Pathway-powered LLM App

-Want to learn more about building your own app? See step-by-step guide [Building a llm-app tutorial](https://pathway.com/developers/showcases/llm-app-pathway)
+Want to learn more about building your own app? See step-by-step guide [Building a llm-app tutorial](https://pathway.com/developers/templates/llm-app-pathway)

Or,

4 changes: 2 additions & 2 deletions examples/pipelines/adaptive-rag/README.md
@@ -10,7 +10,7 @@

## End to end Adaptive RAG with Pathway

-This is the accompanying code for deploying the `adaptive RAG` technique with Pathway. To understand the technique and learn how it can save tokens without sacrificing accuracy, read [our showcase](https://pathway.com/developers/showcases/adaptive-rag).
+This is the accompanying code for deploying the `adaptive RAG` technique with Pathway. To understand the technique and learn how it can save tokens without sacrificing accuracy, read [our showcase](https://pathway.com/developers/templates/adaptive-rag).

To learn more about building & deploying RAG applications with Pathway, including containerization, refer to [demo question answering](../demo-question-answering/README.md).

@@ -49,7 +49,7 @@ If you are interested in building this app in a fully private & local setup, che
You can modify any of the used components by checking the options from: `from pathway.xpacks.llm import embedders, llms, parsers, splitters`.
It is also possible to easily create new components by extending the [`pw.UDF`](https://pathway.com/developers/user-guide/data-transformation/user-defined-functions) class and implementing the `__wrapped__` function.

-To see the setup used in our work, check [the showcase](https://pathway.com/developers/showcases/private-rag-ollama-mistral).
+To see the setup used in our work, check [the showcase](https://pathway.com/developers/templates/private-rag-ollama-mistral).

## Running the app
To run the app you need to set your OpenAI API key, by setting the environmental variable `OPENAI_API_KEY` or creating an `.env` file in this directory with line `OPENAI_API_KEY=sk-...`. If you modify the code to use another LLM provider, you may need to set a relevant API key.
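The adaptive-rag README above asks for the `OPENAI_API_KEY` environment variable or an `.env` file. A minimal sketch of both options (the `sk-...` value is the placeholder from the README, not a real key):

```shell
# Option 1: export the key for the current shell session.
export OPENAI_API_KEY=sk-...

# Option 2: keep it in an .env file next to the app, as the README suggests.
echo "OPENAI_API_KEY=sk-..." > .env
```

Either form works; the `.env` file is convenient when running the app repeatedly or inside a container.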
2 changes: 1 addition & 1 deletion examples/pipelines/contextful_geometric/README.md
@@ -9,7 +9,7 @@

# RAG pipeline with up-to-date knowledge: get answers based on increasing number of documents

-This example implements a pipeline that answers questions based on documents in a given folder. To get the answer it sends increasingly more documents to the LLM chat until it can find an answer. You can read more about the reasoning behind this approach [here](https://pathway.com/developers/showcases/adaptive-rag).
+This example implements a pipeline that answers questions based on documents in a given folder. To get the answer it sends increasingly more documents to the LLM chat until it can find an answer. You can read more about the reasoning behind this approach [here](https://pathway.com/developers/templates/adaptive-rag).

Each query text is first turned into a vector using OpenAI embedding service,
then relevant documentation pages are found using a Nearest Neighbor index computed
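The contextful-geometric strategy described in this hunk — retry with a growing context until the model finds an answer — can be sketched in plain Python. The `geometric_rag` helper, the doubling factor, and the `fake_llm` stub are illustrative assumptions, not the pipeline's actual API:

```python
def geometric_rag(question, ranked_docs, answer_fn, start=1, factor=2):
    """Try answering with a growing context: begin with `start` documents
    and multiply the count by `factor` until `answer_fn` returns an answer
    or all documents have been tried."""
    k = start
    while True:
        context = ranked_docs[:k]
        answer = answer_fn(question, context)
        if answer is not None:          # the model found an answer
            return answer, len(context)
        if k >= len(ranked_docs):       # everything was already tried
            return None, len(ranked_docs)
        k *= factor

# Illustrative stub: the "model" only answers once the relevant document
# is included in its context.
def fake_llm(question, docs):
    return "42" if "relevant" in docs else None

docs = ["filler1", "filler2", "filler3", "relevant", "filler4"]
answer, used = geometric_rag("q", docs, fake_llm)
```

Because the context doubles each round, a question answerable from the top few documents never pays for the full context, which is the token saving the linked article describes.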
@@ -20,7 +20,7 @@ Pipeline 2 then starts a REST API endpoint serving queries about programming in
Each query text is converted into a SQL query using the OpenAI API.

Architecture diagram and description are at
-https://pathway.com/developers/showcases/unstructured-to-structured
+https://pathway.com/developers/templates/unstructured-to-structured


⚠️ This project requires a running PostgreSQL instance.
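The "query text is converted into a SQL query using the OpenAI API" step in this pipeline can be sketched as a single prompt round-trip. The `question_to_sql` helper, the prompt wording, and the `fake_complete` stub are assumptions for illustration; the real pipeline calls the OpenAI API here:

```python
def question_to_sql(question, table, columns, complete):
    """Ask an LLM to translate a natural-language question into SQL
    over a known table schema; `complete` is the chat-completion call."""
    prompt = (
        f"Table `{table}` has columns: {', '.join(columns)}.\n"
        f"Write a single SQL query answering: {question}\n"
        "Return only the SQL."
    )
    # Normalize the reply so it can be executed directly on PostgreSQL.
    return complete(prompt).strip().rstrip(";")

# Illustrative stub standing in for the OpenAI chat-completion call.
def fake_complete(prompt):
    return "SELECT AVG(net_income) FROM finances;"

sql = question_to_sql(
    "What is the average net income?",
    "finances",
    ["company", "net_income"],
    fake_complete,
)
```

Constraining the prompt to a known schema and a single statement is what makes executing the generated SQL on the PostgreSQL table safe enough for this on-the-fly setup.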
2 changes: 1 addition & 1 deletion examples/pipelines/unstructured_to_sql_on_the_fly/app.py
@@ -21,7 +21,7 @@
Each query text is converted into a SQL query using the OpenAI API.
Architecture diagram and description are at
-https://pathway.com/developers/showcases/unstructured-to-structured
+https://pathway.com/developers/templates/unstructured-to-structured
⚠️ This project requires a running PostgreSQL instance.
