
Bump the all-python-packages group with 6 updates#513

Merged
nerdai merged 1 commit into main from dependabot/uv/all-python-packages-c1afbaf69d
Sep 29, 2025

Conversation


dependabot[bot] commented on behalf of GitHub on Sep 29, 2025

Bumps the all-python-packages group with 6 updates:

| Package | From | To |
| --- | --- | --- |
| pydantic-settings | 2.10.1 | 2.11.0 |
| sentence-transformers | 5.1.0 | 5.1.1 |
| grpcio | 1.75.0 | 1.75.1 |
| llama-index-core | 0.14.2 | 0.14.3 |
| unsloth | 2025.9.5 | 2025.9.9 |
| ruff | 0.13.1 | 0.13.2 |

Updates pydantic-settings from 2.10.1 to 2.11.0

Release notes

Sourced from pydantic-settings's releases.

v2.11.0

What's Changed

New Contributors

Full Changelog: pydantic/pydantic-settings@2.10.1...v2.11.0

Commits

Updates sentence-transformers from 5.1.0 to 5.1.1

Release notes

Sourced from sentence-transformers's releases.

v5.1.1 - Explicit incorrect arguments, fixes for multi-GPU, evaluator, and hard negative

This patch makes Sentence Transformers more explicit about incorrect arguments and introduces fixes for multi-GPU processing, evaluators, and hard negatives mining.

Install this version with

# Training + Inference
pip install sentence-transformers[train]==5.1.1

# Inference only, use one of:
pip install sentence-transformers==5.1.1
pip install sentence-transformers[onnx-gpu]==5.1.1
pip install sentence-transformers[onnx]==5.1.1
pip install sentence-transformers[openvino]==5.1.1

Error if unused kwargs are passed & get_model_kwargs (#3500)

Some SentenceTransformer or SparseEncoder models support custom model-specific keyword arguments, such as jinaai/jina-embeddings-v4. As of this release, calling model.encode with keyword arguments that aren't used by the model will result in an error.

>>> from sentence_transformers import SentenceTransformer
>>> model = SentenceTransformer("all-MiniLM-L6-v2")
>>> model.encode("Who is Amelia Earhart?", normalize=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "[sic]/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "[sic]/SentenceTransformer.py", line 983, in encode
    raise ValueError(
ValueError: SentenceTransformer.encode() has been called with additional keyword arguments that this model does not use: ['normalize']. As per SentenceTransformer.get_model_kwargs(), this model does not accept any additional keyword arguments.

Quite useful when, for example, you accidentally forget that the parameter for normalized embeddings is normalize_embeddings. Prior to this version, such a parameter would simply be ignored silently.

To check which custom extra keyword arguments may be used for your model, you can call the new get_model_kwargs method:

>>> from sentence_transformers import SentenceTransformer, SparseEncoder
>>> SentenceTransformer("all-MiniLM-L6-v2").get_model_kwargs()
[]
>>> SentenceTransformer("jinaai/jina-embeddings-v4", trust_remote_code=True).get_model_kwargs()
['task', 'truncate_dim']
>>> SparseEncoder("opensearch-project/opensearch-neural-sparse-encoding-doc-v3-distill").get_model_kwargs()
['task']

Note: you can always pass the task parameter; it's the only model-specific parameter that will be quietly ignored. This means you can always use model.encode(..., task="query") and model.encode(..., task="document").
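To make the validation behavior concrete, here is a minimal, hypothetical sketch of the pattern these release notes describe. The `encode` helper and `allowed_kwargs` name below are illustrative only; the real check lives inside SentenceTransformer.encode() and get_model_kwargs():

```python
# Hypothetical sketch of the kwarg-validation pattern described above.
# Not the library's implementation; names here are illustrative.

def encode(text: str, allowed_kwargs=(), **kwargs) -> str:
    unused = [k for k in kwargs if k not in allowed_kwargs]
    if unused:
        raise ValueError(
            f"encode() was called with keyword arguments that this "
            f"model does not use: {unused}"
        )
    return f"embedding({text})"

# A misspelled parameter (the correct spelling is normalize_embeddings)
# is now surfaced as an error instead of being silently ignored:
try:
    encode("Who is Amelia Earhart?", normalize=True)
except ValueError as err:
    print(err)
```

A model that declares extra kwargs (as reported by get_model_kwargs) would correspond to a non-empty `allowed_kwargs`, in which case those keywords pass through without error.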

Minor Features

... (truncated)

Commits
  • 22ff509 Release v5.1.1
  • 5ad8a44 Merge branch 'master' into v5.1-release
  • 1def8d3 Fix the number of missing negatives in mine_hard_negatives (#3504)
  • 2e077fb fix: add makedirs to informationretrievalevaluator (#3516)
  • 20c4820 Fix:Import SentenceTransformer class explicitly in losses module (#3521)
  • 7240b33 [feat] add get_model_kwargs method; throw error if unused kwarg is passed (...
  • 560cc33 always pass input_ids, attention_mask, token_type_ids, inputs_embeds ...
  • bd91098 Update rasyosef/splade-mini MSMARCO and BEIR-13 benchmark scores in pretraine...
  • ad8d27d Add Support for Knowledgeable Passage Retriever (KPR) (#3495)
  • 5b18f36 [feat] Use encode_document and encode_query in mine_hard_negatives (#3502)
  • Additional commits viewable in compare view

Updates grpcio from 1.75.0 to 1.75.1

Release notes

Sourced from grpcio's releases.

Release v1.75.1

This is the gRPC Core 1.75.1 release (gemini).

For gRPC documentation, see grpc.io. For previous releases, see Releases.

This release contains refinements, improvements, and bug fixes.

What's Changed

Python

  • Release grpcio wheels with Python 3.14 support (#40403)
  • Asyncio: fixes a gRPC shutdown race condition occurring during Python interpreter finalization. (#40447)
    • This also addresses previously reported issues with empty error message on Python interpreter exit (Error in sys.excepthook:/Original exception was: empty): #36655, #38679, #33342
  • Python 3.14: preserve current behavior when using grpc.aio async methods outside of a running event loop. (#40750)
    • Note: using async methods outside of a running event loop is discouraged by Python, and will be deprecated in future gRPC releases. Please use the asyncio.run() function (or asyncio.Runner for custom loop factories). For interactive mode, use dedicated asyncio REPL: python -m asyncio.
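The recommended pattern from the note above can be sketched as follows. The grpc.aio call site is only shown in comments (the service and stub names are hypothetical), so this is a stand-in, not a working RPC:

```python
import asyncio

async def main() -> str:
    # With grpcio installed, the aio API would be used inside this
    # running event loop, e.g. (hypothetical service names):
    #   async with grpc.aio.insecure_channel("localhost:50051") as channel:
    #       stub = MyServiceStub(channel)
    #       reply = await stub.MyMethod(request)
    await asyncio.sleep(0)  # stand-in for the awaited RPC
    return "done"

# asyncio.run() provides the running event loop that grpc.aio expects,
# instead of calling async methods at the top level.
result = asyncio.run(main())
print(result)  # done
```

For interactive experimentation, `python -m asyncio` starts a REPL with a running event loop, as the note suggests.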

Full Changelog: grpc/grpc@v1.75.0...v1.75.1

Commits
  • 9b63ce0 [Backport][v1.75.x][Fix] PHP macOS build: composer sha sum update, harden ins...
  • 3ab7404 [Release] Bump version to 1.75.1 (on v1.75.x branch) (#40773)
  • 876e1d1 [Backport][v1.75.x][Python] Handle python3.14 get_event_loop behavior changes...
  • 74ec067 [Backport][v1.75.x][Python][Support 3.14] Enable 3.14 supported wheels (#40726)
  • ff24d38 [Backport][v1.75.x][Python] aio: skip grpc/aio shutdown if py interpreter is ...
  • See full diff in compare view

Updates llama-index-core from 0.14.2 to 0.14.3

Release notes

Sourced from llama-index-core's releases.

v0.14.3

Release Notes

[2025-09-24]

llama-index-core [0.14.3]

  • Fix Gemini thought signature serialization (#19891)
  • Adding a ThinkingBlock among content blocks (#19919)

llama-index-llms-anthropic [0.9.0]

  • Adding a ThinkingBlock among content blocks (#19919)

llama-index-llms-baseten [0.1.4]

  • added kimik2 0905 and reordered list for validation (#19892)
  • Baseten Dynamic Model APIs Validation (#19893)

llama-index-llms-google-genai [0.6.0]

  • Add missing FileAPI support for documents (#19897)
  • Adding a ThinkingBlock among content blocks (#19919)

llama-index-llms-mistralai [0.8.0]

  • Adding a ThinkingBlock among content blocks (#19919)

llama-index-llms-openai [0.6.0]

  • Adding a ThinkingBlock among content blocks (#19919)

llama-index-protocols-ag-ui [0.2.2]

  • improve how state snapshotting works in AG-UI (#19934)

llama-index-readers-mongodb [0.5.0]

  • Use PyMongo Asynchronous API instead of Motor (#19875)

llama-index-readers-paddle-ocr [0.1.0]

  • [New Package] Add PaddleOCR Reader for extracting text from images in PDFs (#19827)

llama-index-readers-web [0.5.4]

  • feat(readers/web-firecrawl): migrate to Firecrawl v2 SDK (#19773)

llama-index-storage-chat-store-mongo [0.3.0]

... (truncated)

Changelog

Sourced from llama-index-core's changelog.

llama-index-core [0.14.3]

  • Fix Gemini thought signature serialization (#19891)
  • Adding a ThinkingBlock among content blocks (#19919)

llama-index-llms-anthropic [0.9.0]

  • Adding a ThinkingBlock among content blocks (#19919)

llama-index-llms-baseten [0.1.4]

  • added kimik2 0905 and reordered list for validation (#19892)
  • Baseten Dynamic Model APIs Validation (#19893)

llama-index-llms-google-genai [0.6.0]

  • Add missing FileAPI support for documents (#19897)
  • Adding a ThinkingBlock among content blocks (#19919)

llama-index-llms-mistralai [0.8.0]

  • Adding a ThinkingBlock among content blocks (#19919)

llama-index-llms-openai [0.6.0]

  • Adding a ThinkingBlock among content blocks (#19919)

llama-index-protocols-ag-ui [0.2.2]

  • improve how state snapshotting works in AG-UI (#19934)

llama-index-readers-mongodb [0.5.0]

  • Use PyMongo Asynchronous API instead of Motor (#19875)

llama-index-readers-paddle-ocr [0.1.0]

  • [New Package] Add PaddleOCR Reader for extracting text from images in PDFs (#19827)

llama-index-readers-web [0.5.4]

  • feat(readers/web-firecrawl): migrate to Firecrawl v2 SDK (#19773)

llama-index-storage-chat-store-mongo [0.3.0]

  • Use PyMongo Asynchronous API instead of Motor (#19875)

llama-index-storage-kvstore-mongodb [0.5.0]

  • Use PyMongo Asynchronous API instead of Motor (#19875)

... (truncated)

Commits

Updates unsloth from 2025.9.5 to 2025.9.9

Release notes

Sourced from unsloth's releases.

gpt-oss Reinforcement Learning + Auto Kernel Notebook

We’re introducing gpt-oss RL support, with the fastest RL inference and lowest VRAM use of any implementation. Blog: https://docs.unsloth.ai/new/gpt-oss-reinforcement-learning

  • Unsloth now offers the fastest inference (~3x faster), lowest VRAM (50% less) and most context (8x longer) for gpt-oss RL vs. any implementation - with no accuracy loss.
  • Since RL on gpt-oss isn't yet vLLM compatible, we rewrote Transformers inference code to enable faster inference
  • gpt-oss-20b GSPO free Colab notebook
  • This notebook automatically creates faster matrix multiplication kernels and uses a new Unsloth reward function. We also show how to counteract reward-hacking which is one of RL's biggest challenges.
  • We previously released Vision RL with GSPO support
  • ⚠️ Reminder to NOT use Flash Attention 3 for gpt-oss as it'll make your training loss wrong.
  • DeepSeek-V3.1-Terminus is here and you can run it locally via our GGUF. Read how our 3-bit GGUF beats Claude-4-Opus (thinking) on Aider Polyglot here
  • Magistral 1.2 is here and you can run it locally here or fine-tune it for free by using our Kaggle notebook
  • Fine-tuning the new Qwen3 models including Qwen3-VL, Qwen3-Omni and Qwen3-Next should work in Unsloth if you install the latest transformers. The models are big however so ensure you have enough VRAM.
  • BERT is now fixed! Feel free to use our BERT fine-tuning notebook
  • ⭐ We’re hosting a Developer event with Mistral AI & NVIDIA at Y Combinator’s Office in San Francisco on Oct 21. Come say hello!
  • We’re also joining Pytorch and AMD for a 2 day Virtual AI Agents Challenge with prizes. Join Hackathon

Don't forget to also join our Reddit: r/unsloth 🥰

What's Changed

New Contributors

Full Changelog: unslothai/unsloth@September-2025-v2...September-2025-v3

Vision Reinforcement Learning + Memory Efficient RL

We're excited to support Vision models for RL and even more memory efficient + faster RL!

Unsloth now supports vision/multimodal RL with Gemma 3, Qwen2.5-VL and other vision models. Due to Unsloth's unique weight sharing and custom kernels, Unsloth makes VLM RL 1.5–2× faster, uses 90% less VRAM, and enables 10× longer context lengths than FA2 setups, with no accuracy loss. Notebooks: Qwen2.5-VL GSPO, Gemma 3 (4B) Vision GSPO.

Full details in our blogpost: https://docs.unsloth.ai/new/vision-reinforcement-learning-vlm-rl

  • This update also introduces Qwen's GSPO algorithm.
  • Our new vision RL support is also even faster and more memory efficient! Our new kernels and algos allow faster RL for text and vision LLMs with 50% less VRAM & 10× more context.
  • Introducing a new RL feature called 'Standby'. Previously, RL required splitting the GPU between training and inference. With Unsloth Standby, you no longer have to, and Standby uniquely limits speed degradation compared to other implementations, sometimes making training even faster! Read our Blog

... (truncated)

Commits

Updates ruff from 0.13.1 to 0.13.2

Release notes

Sourced from ruff's releases.

0.13.2

Release Notes

Released on 2025-09-25.

Preview features

  • [flake8-async] Implement blocking-path-method (ASYNC240) (#20264)
  • [flake8-bugbear] Implement map-without-explicit-strict (B912) (#20429)
  • [flake8-builtins] Detect class-scope builtin shadowing in decorators, default args, and attribute initializers (A003) (#20178)
  • [ruff] Implement logging-eager-conversion (RUF065) (#19942)
  • Include .pyw files by default when linting and formatting (#20458)

Bug fixes

  • Deduplicate input paths (#20105)
  • [flake8-comprehensions] Preserve trailing commas for single-element lists (C409) (#19571)
  • [flake8-pyi] Avoid syntax error from conflict with PIE790 (PYI021) (#20010)
  • [flake8-simplify] Correct fix for positive maxsplit without separator (SIM905) (#20056)
  • [pyupgrade] Fix UP008 not to apply when __class__ is a local variable (#20497)
  • [ruff] Fix B004 to skip invalid hasattr/getattr calls (#20486)
  • [ruff] Replace -nan with nan when using the value to construct a Decimal (FURB164) (#20391)
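For context on the SIM905 entry above: as I understand the rule, it suggests replacing `str.split` calls on string literals with the resulting list literal, and the fix was corrected for a positive `maxsplit` with no separator. A quick illustration of the underlying behavior (plain Python, not ruff itself):

```python
# maxsplit caps the number of splits; with no separator, split()
# splits on runs of whitespace.
parts = "a b c d".split(maxsplit=2)
print(parts)  # ['a', 'b', 'c d']

# The equivalent list literal a SIM905 fix would produce:
assert parts == ["a", "b", "c d"]
```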

Documentation

  • Add 'Finding ways to help' to CONTRIBUTING.md (#20567)
  • Update import path to ruff-wasm-web (#20539)
  • [flake8-bandit] Clarify the supported hashing functions (S324) (#20534)

Other changes

  • [playground] Allow hover quick fixes to appear for overlapping diagnostics (#20527)
  • [playground] Fix non‑BMP code point handling in quick fixes and markers (#20526)

Contributors

Install ruff 0.13.2

... (truncated)

Changelog

Sourced from ruff's changelog.

0.13.2

Released on 2025-09-25.

Preview features

  • [flake8-async] Implement blocking-path-method (ASYNC240) (#20264)
  • [flake8-bugbear] Implement map-without-explicit-strict (B912) (#20429)
  • [flake8-builtins] Detect class-scope builtin shadowing in decorators, default args, and attribute initializers (A003) (#20178)
  • [ruff] Implement logging-eager-conversion (RUF065) (#19942)
  • Include .pyw files by default when linting and formatting (#20458)

Bug fixes

  • Deduplicate input paths (#20105)
  • [flake8-comprehensions] Preserve trailing commas for single-element lists (C409) (#19571)
  • [flake8-pyi] Avoid syntax error from conflict with PIE790 (PYI021) (#20010)
  • [flake8-simplify] Correct fix for positive maxsplit without separator (SIM905) (#20056)
  • [pyupgrade] Fix UP008 not to apply when __class__ is a local variable (#20497)
  • [ruff] Fix B004 to skip invalid hasattr/getattr calls (#20486)
  • [ruff] Replace -nan with nan when using the value to construct a Decimal (FURB164) (#20391)

Documentation

  • Add 'Finding ways to help' to CONTRIBUTING.md (#20567)
  • Update import path to ruff-wasm-web (#20539)
  • [flake8-bandit] Clarify the supported hashing functions (S324) (#20534)

Other changes

  • [playground] Allow hover quick fixes to appear for overlapping diagnostics (#20527)
  • [playground] Fix non‑BMP code point handling in quick fixes and markers (#20526)

Contributors

Commits
  • b0bdf03 Bump 0.13.2 (#20576)
  • 7331d39 Update rooster to 0.1.0 (#20575)
  • 529e5fa [ty] Ecosystem analyzer: timing report (#20571)
  • efbb80f [ty] Remove hack in protocol satisfiability check (#20568)
  • 9f3cffc Add 'Finding ways to help' to CONTRIBUTING.md (#20567)
  • 21be94a [ty] Explicitly test assignability/subtyping between unions of nominal types ...
  • b7d5dc9 [ty] Add tests for interactions of @classmethod, @staticmethod, and proto...
  • e1bb74b [ty] Match variadic argument to variadic parameter (#20511)
  • edeb458 [ty] fallback to resolve_real_module in file_to_module (#20461)
  • bea92c8 [ty] More precise type inference for dictionary literals (#20523)
  • Additional commits viewable in compare view

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore <dependency name> major version will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • @dependabot ignore <dependency name> minor version will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • @dependabot ignore <dependency name> will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • @dependabot unignore <dependency name> will remove all of the ignore conditions of the specified dependency
  • @dependabot unignore <dependency name> <ignore condition> will remove the ignore condition of the specified dependency and ignore conditions

Bumps the all-python-packages group with 6 updates:

| Package | From | To |
| --- | --- | --- |
| [pydantic-settings](https://github.com/pydantic/pydantic-settings) | `2.10.1` | `2.11.0` |
| [sentence-transformers](https://github.com/UKPLab/sentence-transformers) | `5.1.0` | `5.1.1` |
| [grpcio](https://github.com/grpc/grpc) | `1.75.0` | `1.75.1` |
| [llama-index-core](https://github.com/run-llama/llama_index) | `0.14.2` | `0.14.3` |
| [unsloth](https://github.com/unslothai/unsloth) | `2025.9.5` | `2025.9.9` |
| [ruff](https://github.com/astral-sh/ruff) | `0.13.1` | `0.13.2` |


Updates `pydantic-settings` from 2.10.1 to 2.11.0
- [Release notes](https://github.com/pydantic/pydantic-settings/releases)
- [Commits](pydantic/pydantic-settings@2.10.1...v2.11.0)

Updates `sentence-transformers` from 5.1.0 to 5.1.1
- [Release notes](https://github.com/UKPLab/sentence-transformers/releases)
- [Commits](huggingface/sentence-transformers@v5.1.0...v5.1.1)

Updates `grpcio` from 1.75.0 to 1.75.1
- [Release notes](https://github.com/grpc/grpc/releases)
- [Changelog](https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md)
- [Commits](grpc/grpc@v1.75.0...v1.75.1)

Updates `llama-index-core` from 0.14.2 to 0.14.3
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](run-llama/llama_index@v0.14.2...v0.14.3)

Updates `unsloth` from 2025.9.5 to 2025.9.9
- [Release notes](https://github.com/unslothai/unsloth/releases)
- [Commits](https://github.com/unslothai/unsloth/commits)

Updates `ruff` from 0.13.1 to 0.13.2
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](astral-sh/ruff@0.13.1...0.13.2)

---
updated-dependencies:
- dependency-name: pydantic-settings
  dependency-version: 2.11.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-python-packages
- dependency-name: sentence-transformers
  dependency-version: 5.1.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-python-packages
- dependency-name: grpcio
  dependency-version: 1.75.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-python-packages
- dependency-name: llama-index-core
  dependency-version: 0.14.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-python-packages
- dependency-name: unsloth
  dependency-version: 2025.9.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-python-packages
- dependency-name: ruff
  dependency-version: 0.13.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-python-packages
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] added the dependencies (Pull requests that update a dependency file) and python:uv (Pull requests that update python:uv code) labels Sep 29, 2025

codecov Bot commented Sep 29, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 98.66%. Comparing base (edf51c9) to head (4646267).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #513   +/-   ##
=======================================
  Coverage   98.66%   98.66%           
=======================================
  Files         156      156           
  Lines        4119     4119           
=======================================
  Hits         4064     4064           
  Misses         55       55           


@nerdai nerdai merged commit e1af146 into main Sep 29, 2025
7 checks passed
@nerdai nerdai deleted the dependabot/uv/all-python-packages-c1afbaf69d branch September 29, 2025 22:07
