
Conversation

@Abhinavexists
Contributor

Fix: #21861

Wrapped the layer call in a small function that accepts only positional arguments; that wrapper closes over the original keyword arguments.
Passed this wrapper to torch.utils.checkpoint, so the checkpoint replay uses positional args while the wrapper restores the missing kwargs before calling the layer.
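
A minimal standalone sketch of that approach, using torch.utils.checkpoint directly; layer_call and rematted_call are illustrative stand-ins, not the actual Keras code:

import torch
from torch.utils.checkpoint import checkpoint

def layer_call(x, training=False):
    # Stand-in for the real layer call; `training` is a non-tensor kwarg.
    return x * (0.5 if training else 1.0)

def rematted_call(x, **kwargs):
    def positional_wrapper(*pos_args):
        # checkpoint replays with positional args only; the closure
        # restores the captured kwargs before calling the layer.
        return layer_call(*pos_args, **kwargs)
    return checkpoint(positional_wrapper, x, use_reentrant=False)

x = torch.randn(2, 3, requires_grad=True)
y = rematted_call(x, training=True)
y.sum().backward()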

@gemini-code-assist
Contributor

Summary of Changes

Hello @Abhinavexists, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves an issue within the Keras Torch backend where the remat (recomputation/checkpointing) utility did not correctly process functions that relied on keyword arguments. The fix implements a clever wrapping mechanism that allows torch.utils.checkpoint to operate efficiently with positional arguments, while ensuring the original function's keyword argument behavior is maintained. This enhancement improves the robustness and applicability of memory optimization techniques within the Torch backend.

Highlights

  • Remat Keyword Argument Handling: The remat function in the Torch backend has been updated to correctly handle functions that accept keyword arguments. This is achieved by introducing a wrapper that ensures torch.utils.checkpoint.checkpoint receives only positional arguments, while the original keyword arguments are preserved and applied when the wrapped function is called.
  • New Test Case for Remat: A new test, test_remat_with_kwargs, has been added to keras/src/backend/common/remat_test.py. It validates the remat functionality across scenarios involving keyword arguments, including full, default, and partial keyword-argument usage; a standalone sketch of those scenarios follows below.
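
A hypothetical, self-contained mirror of those scenarios (not the actual test from remat_test.py), exercising the same wrapper strategy against torch.utils.checkpoint with full, default, and partial kwargs:

import torch
from torch.utils.checkpoint import checkpoint

def fn(x, scale=2.0, shift=0.0):
    return x * scale + shift

def remat_like(f, *args, **kwargs):
    # Same strategy as the fix: close over kwargs, replay positionally.
    if not kwargs:
        return checkpoint(f, *args, use_reentrant=False)

    def positional_wrapper(*pos_args):
        return f(*pos_args, **kwargs)

    return checkpoint(positional_wrapper, *args, use_reentrant=False)

x = torch.randn(4, requires_grad=True)
full = remat_like(fn, x, scale=3.0, shift=1.0)   # all kwargs supplied
default = remat_like(fn, x)                      # defaults only
partial = remat_like(fn, x, shift=1.0)           # partial kwargs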

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request correctly fixes an issue in the Torch backend's remat implementation where keyword arguments were being ignored. The fix introduces a wrapper function to properly handle keyword arguments with torch.utils.checkpoint.checkpoint, which is the correct approach. The accompanying new test case is comprehensive and validates the fix across different scenarios. I have one minor suggestion to improve code readability.

@codecov-commenter

codecov-commenter commented Nov 21, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.57%. Comparing base (529e162) to head (bcef432).

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #21865   +/-   ##
=======================================
  Coverage   82.57%   82.57%           
=======================================
  Files         577      577           
  Lines       59568    59573    +5     
  Branches     9345     9346    +1     
=======================================
+ Hits        49187    49194    +7     
+ Misses       7975     7974    -1     
+ Partials     2406     2405    -1     
Flag               Coverage            Δ
keras              82.39% <100.00%>    (+<0.01%) ⬆️
keras-jax          62.86% <16.66%>     (-0.01%)  ⬇️
keras-numpy        57.51% <16.66%>     (-0.01%)  ⬇️
keras-openvino     34.33% <0.00%>      (-0.01%)  ⬇️
keras-tensorflow   64.39% <16.66%>     (-0.01%)  ⬇️
keras-torch        63.57% <100.00%>    (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

Abhinavexists and others added 3 commits November 21, 2025 23:46
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Comment on lines -676 to +683
-return torch.utils.checkpoint.checkpoint(f, *args, use_reentrant=False)
+if not kwargs:
+    return checkpoint(f, *args, use_reentrant=False)
+
+def positional_wrapper(*pos_args):
+    return f(*pos_args, **kwargs)
+
+return checkpoint(positional_wrapper, *args, use_reentrant=False)
Collaborator


Looking at the documentation, it looks like you can just do:

return torch.utils.checkpoint.checkpoint(f, *args, use_reentrant=False, **kwargs)

Is that not the case?

My concern with your approach is that I think it statically binds the kwargs so they cannot be tensors.
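
To make the concern concrete, here is a small illustration (the names are made up) of what static binding means in this context: a tensor passed as a kwarg is captured by the closure, so checkpoint only sees the positional tensors as its inputs:

import torch
from torch.utils.checkpoint import checkpoint

def f(x, weight=None):
    return x @ weight

x = torch.randn(2, 3, requires_grad=True)
w = torch.randn(3, 4, requires_grad=True)

# With the wrapper approach, `w` lives in the closure rather than in
# checkpoint's argument list, so checkpoint never receives it as an input
# tensor. The question raised above is whether it is still recomputed and
# gradient-tracked correctly from that position.
def positional_wrapper(*pos_args):
    return f(*pos_args, weight=w)

y = checkpoint(positional_wrapper, x, use_reentrant=False)
y.sum().backward()
print(x.grad is not None, w.grad is not None)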

Contributor Author

@Abhinavexists Nov 26, 2025


@hertschuh
PyTorch's checkpoint() doesn't pass the kwargs through to the checkpointed function. I tried return torch.utils.checkpoint.checkpoint(f, *args, use_reentrant=False, **kwargs) as my initial approach, but it loses the kwargs and only the positional args reach the function.

Regarding the static-binding concern: you're right that this could be problematic. Let me add a test case with tensor kwargs to verify that gradient tracking works correctly.

Would that work here?
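
For what that test might look like, a hedged sketch with tensor kwargs and an explicit gradient check, written against torch.utils.checkpoint directly (the names are hypothetical, not the actual test_remat_with_kwargs code):

import torch
from torch.utils.checkpoint import checkpoint

def dense(x, kernel=None, bias=None):
    return x @ kernel + bias

x = torch.randn(5, 3, requires_grad=True)
kernel = torch.randn(3, 2, requires_grad=True)
bias = torch.zeros(2, requires_grad=True)

def positional_wrapper(*pos_args):
    # Tensor kwargs are captured by the closure, as in the fix.
    return dense(*pos_args, kernel=kernel, bias=bias)

out = checkpoint(positional_wrapper, x, use_reentrant=False)
out.sum().backward()

# The point to verify: gradients still reach the closed-over tensor kwargs.
assert kernel.grad is not None and bias.grad is not None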

@hertschuh added the stat:contributions welcome label (A pull request to fix this issue would be welcome.) Nov 26, 2025

Labels

size:S
stat:contributions welcome (A pull request to fix this issue would be welcome.)


Development

Successfully merging this pull request may close these issues.

[Torch backend] Layer loses keyword arguments when wrapped by RematScope

4 participants