From 07d496bc9a59f824ad17d2e99ac7b9de4ee03682 Mon Sep 17 00:00:00 2001
From: Kathryn May
Date: Wed, 1 Oct 2025 08:39:25 -0400
Subject: [PATCH] Correct intro paragraph on run eval prompt playground page

---
 src/langsmith/run-evaluation-from-prompt-playground.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/langsmith/run-evaluation-from-prompt-playground.mdx b/src/langsmith/run-evaluation-from-prompt-playground.mdx
index 44c616b2e..56d832d8e 100644
--- a/src/langsmith/run-evaluation-from-prompt-playground.mdx
+++ b/src/langsmith/run-evaluation-from-prompt-playground.mdx
@@ -3,7 +3,7 @@ title: Run an evaluation from the prompt playground
 sidebarTitle: With the UI
 ---
 
-LangSmith allows you to run evaluations directly in the . The prompt playground allows you to test your prompt and/or model configuration over a series of inputs to see how well it scores across different contexts or scenarios, without having to write any code.
+LangSmith allows you to run evaluations directly in the UI. The [**Prompt Playground**](/langsmith/prompt-engineering#prompt-playground) allows you to test your prompt or model configuration over a series of inputs to see how well it scores across different contexts or scenarios, without having to write any code.
 
 Before you run an evaluation, you need to have an [existing dataset](/langsmith/evaluation-concepts#datasets). Learn how to [create a dataset from the UI](/langsmith/manage-datasets-in-application#set-up-your-dataset).