fix(deps): update machine-learning #8280
Merged
This PR contains the following updates:
| Package | Change |
| --- | --- |
| huggingface-hub | `0.21.4` -> `0.22.0` |
| | `0.22.2` (+1) |
| locust | `2.24.0` -> `2.24.1` |
| | `881dbb6` -> `3624db3` |
| pytest-asyncio | `0.23.5.post1` -> `0.23.6` |
| pytest-mock | `3.12.0` -> `3.14.0` |
| | `a2eb07f` -> `90f8795` |
| | `991e20a` -> `e2ed446` |
| ruff | `0.3.3` -> `0.3.4` |
| uvicorn | `0.28.0` -> `0.29.0` |
Warning
Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
Release Notes
huggingface/huggingface_hub (huggingface-hub)
v0.22.0: Chat completion, inference types and hub mixins!
Compare Source
Discuss the release in our Community Tab. Feedback is welcome! 🤗
✨ InferenceClient
Support for inference tools continues to improve in `huggingface_hub`. On the menu in this release? A new `chat_completion` API and fully typed inputs/outputs!

Chat-completion API!
A long-awaited API has just landed in `huggingface_hub`! `InferenceClient.chat_completion` follows most of OpenAI's API, making it much easier to integrate with existing tools.

Technically speaking, it uses the same backend as the `text-generation` task but requires a preprocessing step to format the list of messages into a single text prompt. The chat template is rendered server-side when models are powered by TGI, which is the case for most LLMs: Llama, Zephyr, Mistral, Gemma, etc. Otherwise, the templating happens client-side, which requires the `minijinja` package to be installed. We are actively working on bridging this gap, aiming to render all templates server-side in the future.
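A minimal sketch of the new API (the model choice is illustrative; any TGI-powered chat model on the Hub should behave similarly):

```python
from huggingface_hub import InferenceClient

# Illustrative model; assumes the Inference API is reachable.
client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta")

response = client.chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=100,
)
# The output follows OpenAI's shape: choices -> message -> content.
print(response.choices[0].message.content)
```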
- `InferenceClient.chat_completion` + use new types for text-generation by @Wauplin in #2094

Inference types
We are currently working towards more consistency in task definitions across the Hugging Face ecosystem. This is no easy job, but a major milestone has recently been achieved! All inputs and outputs of the main ML tasks are now fully specified as JSON-schema objects. This is the first brick needed to have consistent expectations when running inference across our stack: transformers (Python), transformers.js (Typescript), Inference API (Python), Inference Endpoints (Python), Text Generation Inference (Rust), Text Embeddings Inference (Rust), InferenceClient (Python), Inference.js (Typescript), etc.

Integrating those definitions will require more work, but `huggingface_hub` is one of the first tools to integrate them. As a start, all `InferenceClient` return values are now typed dataclasses. Furthermore, typed dataclasses have been generated for all tasks' inputs and outputs. This means you can now integrate them in your own library to ensure consistency with the Hugging Face ecosystem. Specifications are open-source (see here), meaning anyone can access and contribute to them. Python's generated classes are documented here. Here is a short example showcasing the new output types:
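A sketch under assumptions (the image path is a placeholder; the field names match the object-detection dataclasses referenced just below):

```python
from huggingface_hub import InferenceClient

client = InferenceClient()
# object_detection now returns typed dataclasses instead of plain dicts.
detections = client.object_detection("cats.jpg")  # illustrative image path

first = detections[0]
print(first.label, first.score)
# Nested values are typed too, e.g. an ObjectDetectionBoundingBox:
print(first.box.xmin, first.box.ymin, first.box.xmax, first.box.ymax)
```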
Note that those dataclasses are backward-compatible with the dict-based interface that was previously in use. In the example above, both `ObjectDetectionBoundingBox(...).xmin` and `ObjectDetectionBoundingBox(...)["xmin"]` are correct, even though the former should be the preferred solution from now on.

🧩 ModelHubMixin
`ModelHubMixin` is an object that can be used as a parent class for the objects in your library in order to provide built-in serialization methods to upload and download pretrained models from the Hub. This mixin is adapted into a `PyTorchModelHubMixin` that can serialize and deserialize any Pytorch model. The 0.22 release brings its share of improvements to these classes: among them, the auto-generated modelcard now includes default tags (i.e. `model_hub_mixin`) and custom tags from the library. You can extend/modify this modelcard by overwriting the `generate_model_card` method. For more details on how to integrate these classes, check out the integration guide.
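A minimal sketch of the mixin pattern (the class, sizes, and repo names are illustrative):

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Inheriting from the mixin adds save_pretrained / from_pretrained / push_to_hub.
class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.layer = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.layer(x)

model = MyModel(hidden_size=256)
model.save_pretrained("my-model")               # serialize weights + config locally
reloaded = MyModel.from_pretrained("my-model")  # config (hidden_size=256) is restored
# model.push_to_hub("username/my-model")        # upload to the Hub (requires auth)
```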
- `ModelHubMixin`: pass config when `__init__` accepts `**kwargs` by @Wauplin in #2058
- `PytorchModelHubMixin` by @Wauplin in #2079
- `ModelHubMixin` by @Wauplin in #2080

🛠️ Misc improvements
`HfFileSystem` download speed was limited by some internal logic in `fsspec`. We've now updated the `get_file` and `read` implementations to improve their download speed to a level similar to `hf_hub_download`.

We are aiming at moving all errors raised by `huggingface_hub` into a single module, `huggingface_hub.errors`, to ease the developer experience. This work has been started as a community contribution from @Y4suyuki.

The `HfApi` class now accepts a `headers` parameter that is then passed to every HTTP call made to the Hub.
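A quick sketch of the new parameter (the header name and value are illustrative):

```python
from huggingface_hub import HfApi

# Every HTTP call made by this client will carry the extra header.
api = HfApi(headers={"X-My-App": "my-app/1.0"})  # illustrative header
info = api.model_info("gpt2")
print(info.sha)
```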
📚 More documentation in Korean!

- Translated `package_reference/overview.md` to Korean by @jungnerd in #2113

💔 Breaking changes
The new types returned by `InferenceClient` methods should be backward compatible, especially to access values either as attributes (`.my_field`) or as items (i.e. `["my_field"]`). However, dataclasses and dicts do not always behave exactly the same, so you might notice some breaking changes. Those breaking changes should be very limited.

`ModelHubMixin` internals changed quite a bit, breaking some use cases. We don't think those use cases were in use, and changing them should really benefit 99% of integrations. If you witness any inconsistency or error in your integration, please let us know and we will do our best to mitigate the problem. One of the biggest changes is that the config values are no longer attached to the mixin instance as `instance.config` but as `instance._model_hub_mixin`. The `.config` attribute was mistakenly introduced in `0.20.x`, so we hope it has not been used much yet.

`huggingface_hub.file_download.http_user_agent` has been removed in favor of the officially documented `huggingface_hub.utils.build_hf_headers`. It had been deprecated since `0.18.x`.
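A short sketch of the documented replacement (the token is a placeholder):

```python
from huggingface_hub.utils import build_hf_headers

# Builds the standard authorization + user-agent headers for Hub requests.
headers = build_hf_headers(token="hf_xxx")  # placeholder token
print(headers["user-agent"])
```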
Small fixes and maintenance

⚙️ CI optimization
The CI pipeline has been greatly improved, especially thanks to the efforts from @bmuskalla. Most tests now pass in under 3 minutes, against 8 to 10 minutes previously. Some long-running tests have been greatly simplified, and all tests are now run in parallel with `pytest-xdist`, thanks to a complete decorrelation between them.

We are now also using the great `uv` installer instead of `pip` in our CI, which saves around 30-40s per pipeline.

- Use `pytest-xdist` on all tests by @bmuskalla in #2059

⚙️ fixes
⚙️ internal
Significant community contributions
The following contributors have made significant changes to the library over the last release:
- Use `pytest-xdist` on all tests by @bmuskalla in #2059

locustio/locust (locust)
v2.24.1
Compare Source
Full Changelog
Fixed bugs:
- `'NoneType' object has no attribute 'get'` when `stream=True` in `FastHttpSession.request` #2640

Closed issues:

Merged pull requests:

- `content` property and lazily load response #2643 (neiser)
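A small sketch of the affected path, covering both #2640 and #2643 (the host and endpoint are illustrative):

```python
from locust import task
from locust.contrib.fasthttp import FastHttpUser

class StreamingUser(FastHttpUser):
    host = "https://example.com"  # illustrative target

    @task
    def stream_download(self):
        # Before 2.24.1, stream=True could raise
        # "'NoneType' object has no attribute 'get'" (#2640).
        response = self.client.get("/large-file", stream=True)
        _ = response.content  # now a lazily loaded property (#2643)
```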
pytest-dev/pytest-asyncio (pytest-asyncio)
v0.23.6
Compare Source
pytest-dev/pytest-mock (pytest-mock)
v3.14.0
Compare Source
- [#415](https://github.com/pytest-dev/pytest-mock/pull/415): `MockType` and `AsyncMockType` can be imported from `pytest_mock` for type annotation purposes.
- [#420](https://github.com/pytest-dev/pytest-mock/issues/420): Fixed a regression which would cause `mocker.patch.object` to not be properly cleared between tests.
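A short sketch of the new annotations (the patched target is illustrative):

```python
import json

from pytest_mock import MockerFixture, MockType

def test_loads_is_mocked(mocker: MockerFixture) -> None:
    # mocker.patch returns a MagicMock; MockType lets you annotate it explicitly.
    mocked: MockType = mocker.patch("json.loads", return_value={"status": "ok"})
    assert json.loads("{}") == {"status": "ok"}
    mocked.assert_called_once_with("{}")
```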
v3.13.0
Compare Source
- [#417](https://github.com/pytest-dev/pytest-mock/pull/417): `spy` now has `spy_return_list`, which is a list containing all the values returned by the spied function.
- `pytest-mock` now requires `pytest>=6.2.5`.
- [#410](https://github.com/pytest-dev/pytest-mock/pull/410): pytest-mock's `setup.py` file is removed. If you relied on this file, e.g. to install pytest-mock using `setup.py install`, please see [Why you shouldn't invoke setup.py directly](https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html#summary) for alternatives.
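A quick sketch of `spy_return_list` (the spied class is illustrative):

```python
from pytest_mock import MockerFixture

class Calculator:
    def double(self, x: int) -> int:
        return 2 * x

def test_spy_records_returns(mocker: MockerFixture) -> None:
    calc = Calculator()
    spy = mocker.spy(calc, "double")
    calc.double(1)
    calc.double(2)
    # New in 3.13: every return value is recorded, in call order.
    assert spy.spy_return_list == [2, 4]
```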
astral-sh/ruff (ruff)
v0.3.4
Compare Source
Preview features
- [`flake8-simplify`] Detect implicit `else` cases in `needless-bool` (`SIM103`) (#10414)
- [`pylint`] Implement `nan-comparison` (`PLW0177`) (#10401)
- [`pylint`] Implement `nonlocal-and-global` (`PLE0115`) (#10407)
- [`pylint`] Implement `singledispatchmethod-function` (`PLE1520`) (#10428)
- [`refurb`] Implement `list-reverse-copy` (`FURB187`) (#10212)
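A small illustration of the pattern `SIM103` now also detects (the function is illustrative):

```python
# Flagged in preview: returning boolean literals from an if with an implicit else.
def is_adult(age: int) -> bool:
    if age >= 18:
        return True
    return False  # implicit `else` branch

# The suggested simplification returns the condition directly:
def is_adult_simplified(age: int) -> bool:
    return age >= 18
```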
Rule changes

- [`flake8-pytest-style`] Add automatic fix for `pytest-parametrize-values-wrong-type` (`PT007`) (#10461)
- [`pycodestyle`] Allow SPDX license headers to exceed the line length (`E501`) (#10481)

Formatter
Bug fixes
- (`C409`) (#10491)
- `name` from being reformatted (#10442)
- `W605` (#10480)
- `.pyi` files (#10512)
- `E231` bug: Inconsistent catch compared to pycodestyle, such as when dict nested in list (#10469)
- `Options` references to blank line docs (#10498)
- `from __future__ import annotations` is active (#10362)
- `"'` (#10513)
- [`flake8-bugbear`] Allow tuples of exceptions (`B030`) (#10437)
- [`flake8-quotes`] Avoid syntax errors due to invalid quotes (`Q000`, `Q002`) (#10199)

encode/uvicorn (uvicorn)
v0.29.0
Compare Source
Added
v0.28.1
Compare Source
Fixed
- `ClientDisconnected` on HTTP (#2276) 19/03/24

Configuration
📅 Schedule: Branch creation - "on tuesday" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
This PR has been generated by Mend Renovate. View repository job log here.