
Add ability to include snippets in docs with inline-named sections for fragments and highlighting #2088


Merged
merged 4 commits into main on Jun 27, 2025

Conversation


@dmontagu dmontagu commented Jun 27, 2025

Note: the docs don't yet make use of the fragments/highlighting-related functionality, but it now works.

You'd use syntax of the form:

```snippet {path="/examples/pydantic_ai_examples/evals/example_01_generate_dataset.py" fragment="main" highlight="examples"}```

to import the snippet with fragment highlighting, and the source file would use markers like this to start/end sections:

```python
import asyncio
from pathlib import Path
from types import NoneType

from pydantic_evals import Dataset
from pydantic_evals.generation import generate_dataset

from pydantic_ai_examples.evals.models import TimeRangeInputs, TimeRangeResponse


### [main]
async def main():
    dataset = await generate_dataset(
        ### [examples]
        dataset_type=Dataset[TimeRangeInputs, TimeRangeResponse, NoneType],
        ### [/examples]
        model='openai:o1',  # Use a smarter model since this is a more complex task that is only run once
        n_examples=10,
        ### [examples]
        extra_instructions="""
        Generate a dataset of test cases for the time range inference agent.

        Include a variety of inputs that might be given to the agent, including some where the only
        reasonable response is a `TimeRangeBuilderError`, and some where a `TimeRangeBuilderSuccess` is
        expected. Make use of the `IsInstance` evaluator to ensure that the inputs and outputs are of the appropriate
        type.

        When appropriate, use the `LLMJudge` evaluator to provide a more precise description of the time range the
        agent should have inferred. In particular, it's good if the example user inputs are somewhat ambiguous, to
        reflect realistic (difficult-to-handle) user questions, but the LLMJudge evaluator can help ensure that the
        agent's output is still judged based on precisely what the desired behavior is even for somewhat ambiguous
        user questions. You do not need to include LLMJudge evaluations for all cases (in particular, for cases where
        the expected output is unambiguous from the user's question), but you should include at least one or two
        examples that do benefit from an LLMJudge evaluation (and include it).

        To be clear, the LLMJudge rubrics should be concise and reflect only information that is NOT ALREADY PRESENT
        in the user prompt for the example.

        Leave the model and include_input arguments to LLMJudge as their default values (null).

        Also add a dataset-wide LLMJudge evaluator to ensure that the 'explanation' or 'error_message' fields are
        appropriate to be displayed to the user (e.g., written in second person, etc.).
        """,
        ### [/examples]
    )

    dataset.to_file(
        Path(__file__).parent / 'datasets' / 'time_range_v1.yaml',
        fmt='yaml',
    )  ### [/main]


if __name__ == '__main__':
    asyncio.run(main())
```

Note that you can start/end multiple sections on the same line by putting comma-separated names in the `### [...]` marker; you can also use `///` markers to delimit sections in TypeScript code.
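For illustration, marker handling along these lines could be implemented with a small helper. This is a hypothetical sketch, not the actual implementation in this PR; the `extract_fragment` name and the regex are assumptions:

```python
import re

# Matches marker comments like '### [name]' / '### [/name]' (Python) or
# '/// [name]' / '/// [/name]' (TypeScript); the brackets may contain
# several comma-separated section names.
MARKER = re.compile(r'(?:###|///) \[(/?)([\w,]+)\]')


def extract_fragment(source: str, fragment: str) -> str:
    """Return the lines between [fragment] and [/fragment] markers,
    with all marker comments stripped from the output."""
    out: list[str] = []
    inside = False
    for line in source.splitlines():
        m = MARKER.search(line)
        if m:
            slash, names = m.groups()
            stripped = MARKER.sub('', line).rstrip()
            if fragment in names.split(','):
                if slash:  # closing marker; it may trail code, e.g. ')  ### [/main]'
                    if inside and stripped.strip():
                        out.append(stripped)
                    inside = False
                else:  # opening marker: start (or resume) collecting
                    inside = True
                continue
            line = stripped  # marker for a different section: drop the marker itself
            if not line.strip():
                continue
        if inside:
            out.append(line)
    return '\n'.join(out)
```

A fragment can be opened and closed several times (as `examples` is above); each occurrence resumes collection, so the extracted text is the concatenation of all of its spans.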

hyperlint-ai bot commented Jun 27, 2025

PR Change Summary

Enhanced documentation to support in-line sections for code snippets with fragment highlighting.

  • Updated code snippet syntax to use the new fragment and highlighting features.
  • Modified multiple example documentation files to reflect the new snippet format.

Modified Files

  • docs/examples/bank-support.md
  • docs/examples/chat-app.md
  • docs/examples/flight-booking.md
  • docs/examples/pydantic-model.md
  • docs/examples/question-graph.md
  • docs/examples/rag.md
  • docs/examples/sql-gen.md
  • docs/examples/stream-markdown.md
  • docs/examples/stream-whales.md
  • docs/examples/weather-agent.md


@dmontagu dmontagu changed the title Add ability to include docs stuff with in-line sections for fragments and highlighting Add ability to include snippets in docs with inline-named sections for fragments and highlighting Jun 27, 2025

github-actions bot commented Jun 27, 2025

Docs Preview

commit: a896395
Preview URL: https://888a8d09-pydantic-ai-previews.pydantic.workers.dev

@dmontagu dmontagu merged commit f3b9981 into main Jun 27, 2025
19 checks passed
@dmontagu dmontagu deleted the dmontagu/improve-docs-inclusions branch June 27, 2025 17:25