diff --git a/docs/docs/guides/debugging.md b/docs/docs/guides/debugging.md
index ba4c3ffc4cabf..7f9572104bb3e 100644
--- a/docs/docs/guides/debugging.md
+++ b/docs/docs/guides/debugging.md
@@ -656,6 +656,6 @@ agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is
 
 ## Other callbacks
 
-`Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There are a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/how_to/filecallbackhandler). You can also implement your own callbacks to execute custom functionality.
+`Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There are a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/filecallbackhandler). You can also implement your own callbacks to execute custom functionality.
 
 See here for more info on [Callbacks](/docs/modules/callbacks/), how to use them, and customize them.
diff --git a/docs/docs/guides/deployments/index.mdx b/docs/docs/guides/deployments/index.mdx
index 92bf63641408e..c075c3b92ee92 100644
--- a/docs/docs/guides/deployments/index.mdx
+++ b/docs/docs/guides/deployments/index.mdx
@@ -20,11 +20,11 @@ This guide aims to provide a comprehensive overview of the requirements for depl
 
 Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications.
 
 Some notable frameworks include:
-- [Ray Serve](/docs/ecosystem/integrations/ray_serve)
+- [Ray Serve](/docs/integrations/providers/ray_serve)
 - [BentoML](https://github.com/bentoml/BentoML)
-- [OpenLLM](/docs/ecosystem/integrations/openllm)
-- [Modal](/docs/ecosystem/integrations/modal)
-- [Jina](/docs/ecosystem/integrations/jina#deployment)
+- [OpenLLM](/docs/integrations/providers/openllm)
+- [Modal](/docs/integrations/providers/modal)
+- [Jina](/docs/integrations/providers/jina)
 
 These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.
diff --git a/docs/docs/guides/safety/hugging_face_prompt_injection.ipynb b/docs/docs/guides/safety/hugging_face_prompt_injection.ipynb
index fc648f81a024b..ac1eff92c67e1 100644
--- a/docs/docs/guides/safety/hugging_face_prompt_injection.ipynb
+++ b/docs/docs/guides/safety/hugging_face_prompt_injection.ipynb
@@ -28,9 +28,7 @@
   "cell_type": "code",
   "execution_count": null,
   "id": "9bdbfdc7c949a9c1",
-  "metadata": {
-   "collapsed": false
-  },
+  "metadata": {},
   "outputs": [],
   "source": [
    "!pip install \"optimum[onnxruntime]\""
@@ -44,8 +42,7 @@
    "ExecuteTime": {
     "end_time": "2023-12-18T11:41:24.738278Z",
     "start_time": "2023-12-18T11:41:20.842567Z"
-   },
-   "collapsed": false
+   }
   },
   "outputs": [],
   "source": [
@@ -80,7 +77,9 @@
   "outputs": [
    {
     "data": {
-     "text/plain": "'hugging_face_injection_identifier'"
+     "text/plain": [
+      "'hugging_face_injection_identifier'"
+     ]
     },
     "execution_count": 10,
     "metadata": {},
@@ -119,7 +118,9 @@
   "outputs": [
    {
     "data": {
-     "text/plain": "'Name 5 cities with the biggest number of inhabitants'"
+     "text/plain": [
+      "'Name 5 cities with the biggest number of inhabitants'"
+     ]
    },
     "execution_count": 11,
     "metadata": {},
@@ -374,7 +375,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.1"
+   "version": "3.10.1"
   }
  },
  "nbformat": 4,
diff --git a/docs/docs/guides/safety/index.mdx b/docs/docs/guides/safety/index.mdx
index 8b97fdda7865b..b5d047d771ed4 100644
--- a/docs/docs/guides/safety/index.mdx
+++ b/docs/docs/guides/safety/index.mdx
@@ -4,6 +4,6 @@ One of the key concerns with using LLMs is that they may generate harmful or une
 
 - [Amazon Comprehend moderation chain](/docs/guides/safety/amazon_comprehend_chain): Use [Amazon Comprehend](https://aws.amazon.com/comprehend/) to detect and handle Personally Identifiable Information (PII) and toxicity.
 - [Constitutional chain](/docs/guides/safety/constitutional_chain): Prompt the model with a set of principles which should guide the model behavior.
-- [Hugging Face prompt injection identification](/docs/guides/safety/huggingface_prompt_injection_identification): Detect and handle prompt injection attacks.
+- [Hugging Face prompt injection identification](/docs/guides/safety/hugging_face_prompt_injection): Detect and handle prompt injection attacks.
 - [Logical Fallacy chain](/docs/guides/safety/logical_fallacy_chain): Checks the model output against logical fallacies to correct any deviation.
 - [Moderation chain](/docs/guides/safety/moderation): Check if any output text is harmful and flag it.
diff --git a/docs/docs/integrations/callbacks/streamlit.md b/docs/docs/integrations/callbacks/streamlit.md
index 28a83daf3ae2e..1f425d588ebd8 100644
--- a/docs/docs/integrations/callbacks/streamlit.md
+++ b/docs/docs/integrations/callbacks/streamlit.md
@@ -7,7 +7,7 @@
 
 [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/streamlit-agent?quickstart=1)
 
 In this guide we will demonstrate how to use `StreamlitCallbackHandler` to display the thoughts and actions of an agent in an
-interactive Streamlit app. Try it out with the running app below using the [MRKL agent](/docs/modules/agents/how_to/mrkl/):
+interactive Streamlit app. Try it out with the running app below using the MRKL agent:
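
The debugging.md hunk above describes the callback pattern these docs rely on: handlers that observe a component's intermediate steps from outside its primary logic, with `FileCallbackHandler` logging them to a file. As a rough, stdlib-only sketch of that pattern — the class and method names here are hypothetical stand-ins, not LangChain's actual API:

```python
import os
import tempfile


class BaseHandler:
    """Hypothetical base class: components invoke these hooks around their core logic."""

    def on_start(self, name: str) -> None:
        pass

    def on_end(self, name: str, output: str) -> None:
        pass


class FileHandler(BaseHandler):
    """Appends each intermediate step to a log file, in the spirit of FileCallbackHandler."""

    def __init__(self, path: str):
        self.path = path

    def on_start(self, name: str) -> None:
        with open(self.path, "a") as f:
            f.write(f"start {name}\n")

    def on_end(self, name: str, output: str) -> None:
        with open(self.path, "a") as f:
            f.write(f"end {name}: {output}\n")


class Component:
    """The primary logic lives in run(); handlers only observe it."""

    def __init__(self, handlers):
        self.handlers = handlers

    def run(self, text: str) -> str:
        for h in self.handlers:
            h.on_start("component")
        result = text.upper()  # stand-in for the real work (e.g. an LLM call)
        for h in self.handlers:
            h.on_end("component", result)
        return result


log_path = os.path.join(tempfile.mkdtemp(), "steps.log")
out = Component([FileHandler(log_path)]).run("hello")
print(out)  # HELLO
with open(log_path) as f:
    print(f.read().strip())
```

A custom handler is just another `BaseHandler` subclass passed into the component's handler list, which is the same shape the "implement your own callbacks" sentence in the hunk gestures at.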