
Bump the python group with 5 updates #2340

Merged
divyashreepathihalli merged 1 commit into master from dependabot/pip/python-a12e85b09b on Feb 12, 2024

Conversation

dependabot[bot] (Contributor) commented on behalf of github on Feb 9, 2024

Bumps the python group with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [tf-nightly-cpu](https://github.com/tensorflow/tensorflow) | `2.16.0.dev20240104` | `2.16.0.dev20240209` |
| torch | `2.1.2+cu121` | `2.2.0+cu121` |
| torchvision | `0.16.2+cu121` | `0.17.0+cu121` |
| [tf-nightly[and-cuda]](https://github.com/tensorflow/tensorflow) | `2.16.0.dev20240104` | `2.16.0.dev20240209` |
| [jax[cuda12_pip]](https://github.com/google/jax) | `0.4.23` | `0.4.24` |
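Not part of the PR itself, but a minimal sanity-check sketch for confirming that an environment picked up the new pins after this bump; the expected version strings in the comments are read off the table above, and the exact tf-nightly version format is an assumption.

```python
# Minimal sketch: confirm the bumped packages resolved in the current environment.
# Expected values are taken from the dependency table above; the tf-nightly
# string format (dash instead of dot before "dev") is an assumption.
import jax
import torch
import torchvision
import tensorflow as tf

print("torch:", torch.__version__)              # expect 2.2.0+cu121
print("torchvision:", torchvision.__version__)  # expect 0.17.0+cu121
print("tensorflow:", tf.__version__)            # expect 2.16.0-dev20240209 (nightly)
print("jax:", jax.__version__)                  # expect 0.4.24
print("CUDA visible to torch:", torch.cuda.is_available())
```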

Updates `tf-nightly-cpu` from 2.16.0.dev20240104 to 2.16.0.dev20240209
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](https://github.com/tensorflow/tensorflow/commits)

Updates `torch` from 2.1.2+cu121 to 2.2.0+cu121

Updates `torchvision` from 0.16.2+cu121 to 0.17.0+cu121

Updates `tf-nightly[and-cuda]` from 2.16.0.dev20240104 to 2.16.0.dev20240209
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](https://github.com/tensorflow/tensorflow/commits)

Updates `jax[cuda12_pip]` from 0.4.23 to 0.4.24

Release notes

Sourced from jax[cuda12_pip]'s releases.

JAX release v0.4.24

jaxlib release v0.4.24

Changelog

Sourced from jax[cuda12_pip]'s changelog.

jax 0.4.24 (Feb 6, 2024)

  • Changes

    • JAX lowering to StableHLO no longer depends on physical devices. If your primitive wraps custom_partitioning or JAX callbacks in the lowering rule (i.e., the function passed to the rule parameter of mlir.register_lowering), add your primitive to the jax._src.dispatch.prim_requires_devices_during_lowering set. This is needed because custom_partitioning and JAX callbacks need physical devices to create Shardings during lowering. This is a temporary state until we can create Shardings without physical devices.
    • {func}jax.numpy.argsort and {func}jax.numpy.sort now support the stable and descending arguments (a short sketch follows this changelog excerpt).
    • Several changes to the handling of shape polymorphism (used in {mod}jax.experimental.jax2tf and {mod}jax.experimental.export):
      • cleaner pretty-printing of symbolic expressions ({jax-issue}[#19227](https://github.com/google/jax/issues/19227))
      • added the ability to specify symbolic constraints on the dimension variables. This makes shape polymorphism more expressive and gives a way to work around limitations in the reasoning about inequalities. See https://github.com/google/jax/blob/main/jax/experimental/jax2tf/README.md#user-specified-symbolic-constraints.
      • with the addition of symbolic constraints ({jax-issue}[#19235](https://github.com/google/jax/issues/19235)) we now consider dimension variables from different scopes to be different, even if they have the same name. Symbolic expressions from different scopes cannot interact, e.g., in arithmetic operations. Scopes are introduced by {func}jax.experimental.jax2tf.convert, {func}jax.experimental.export.symbolic_shape, and {func}jax.experimental.export.symbolic_args_specs. The scope of a symbolic expression e can be read with e.scope and passed in to the above functions to direct them to construct symbolic expressions in a given scope. See https://github.com/google/jax/blob/main/jax/experimental/jax2tf/README.md#user-specified-symbolic-constraints.
      • simplified and faster equality comparisons, where we consider two symbolic dimensions to be equal if the normalized form of their difference reduces to 0 ({jax-issue}[#19231](https://github.com/google/jax/issues/19231); note that this may result in user-visible behavior changes)
      • improved the error messages for inconclusive inequality comparisons ({jax-issue}[#19235](https://github.com/google/jax/issues/19235)).
      • the core.non_negative_dim API (introduced recently) was deprecated and core.max_dim and core.min_dim were introduced ({jax-issue}[#18953](https://github.com/google/jax/issues/18953)) to express max and min for symbolic dimensions. You can use core.max_dim(d, 0) instead of core.non_negative_dim(d).
      • the shape_poly.is_poly_dim is deprecated in favor of export.is_symbolic_dim ({jax-issue}[#19282](https://github.com/google/jax/issues/19282)).
      • the export.args_specs is deprecated in favor of export.symbolic_args_specs ({jax-issue}[#19283](https://github.com/google/jax/issues/19283)).
      • the shape_poly.PolyShape and jax2tf.PolyShape are deprecated; use strings for polymorphic shape specifications ({jax-issue}[#19284](https://github.com/google/jax/issues/19284)).
      • JAX default native serialization version is now 9. This is relevant for {mod}jax.experimental.jax2tf and {mod}jax.experimental.export. See description of version numbers.

... (truncated)
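As a quick illustration of the sorting bullet above (not taken from the PR), here is a minimal sketch of the newly supported stable and descending arguments; the sample array is invented for the example.

```python
# Minimal sketch of the stable/descending keywords added to jnp.sort and
# jnp.argsort in jax 0.4.24. The input array is arbitrary illustration data.
import jax.numpy as jnp

x = jnp.array([3, 1, 2, 1])

print(jnp.sort(x, descending=True))   # [3 2 1 1]
print(jnp.argsort(x, stable=True))    # ties keep their original order: [1 3 2 0]
print(jnp.argsort(x, stable=True, descending=True))
```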

Commits
  • 9e94e6e Fixed a typo in min/max Triton lowering rules
  • b53f757 Merge pull request #19667 from jakevdp:array-empty-repr
  • c1c0c1c Merge pull request #19634 from jakevdp:key-reuse-scan
  • d9cbd7b Improve repr for empty jax.Array
  • 69a9f7f Merge pull request #19629 from jakevdp:key-reuse-pjit
  • 206398a Merge pull request #19599 from ROCm:rocm-add-triton_command_buffer
  • be99451 Merge pull request #19665 from jakevdp:disable-jit-doc
  • 82611eb document that under disable_jit, individual primitives are still compiled
  • f01c27f [ROCm]: Add ROCm command buffer support for triton kernel
  • e224c3d Update XLA dependency to use revision
  • Additional commits viewable in [compare view](https://github.com/jax-ml/jax/compare/jax-v0.4.23...jax-v0.4.24)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore <dependency name> major version will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • @dependabot ignore <dependency name> minor version will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • @dependabot ignore <dependency name> will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • @dependabot unignore <dependency name> will remove all of the ignore conditions of the specified dependency
  • @dependabot unignore <dependency name> <ignore condition> will remove the ignore condition of the specified dependency and ignore conditions

---
updated-dependencies:
- dependency-name: tf-nightly-cpu
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: python
- dependency-name: torch
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: python
- dependency-name: torchvision
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: python
- dependency-name: tf-nightly[and-cuda]
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: python
- dependency-name: jax[cuda12_pip]
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: python
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels on Feb 9, 2024
divyashreepathihalli added the kokoro:force-run (Runs Tests on GPU) label on Feb 12, 2024
divyashreepathihalli merged commit 9a61e1a into master on Feb 12, 2024. 12 checks passed.
dependabot[bot] deleted the dependabot/pip/python-a12e85b09b branch on February 12, 2024 at 19:05.