
Conversation

@yangliyl-g
Contributor

The original impl of masking is essentially (1.0 - mask) * -1e9 + inputs; this can be sensitive to numerical noise in the mask (if the mask is slightly off from either 1 or 0, we end up adding a very large perturbation to the inputs).

Using comparison and where ops is much more numerically robust. Also, for cases where floating point masks come in, we add a binarization step to make them compatible with the where op.
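To make the sensitivity concrete, here is a rough NumPy sketch of the two approaches (illustrative only; function names are made up and this is not Keras's actual implementation, which uses backend ops and _large_negative_number):

```python
import numpy as np

def masked_softmax_old(inputs, mask):
    # Old arithmetic masking: (1.0 - mask) * -1e9 + inputs.
    # Any noise on the mask is amplified by a factor of 1e9.
    x = inputs + (1.0 - mask) * -1e9
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def masked_softmax_new(inputs, mask):
    # New approach: binarize float masks at 0.5, then select with where.
    if mask.dtype != np.bool_:
        mask = mask > 0.5
    x = np.where(mask, inputs, np.asarray(-1e9, dtype=inputs.dtype))
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

inputs = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)
# Mask meant to keep the first two positions, but with tiny noise on the 1.0.
noisy_mask = np.array([[1.0 - 1e-4, 1.0, 0.0]], dtype=np.float32)

old = masked_softmax_old(inputs, noisy_mask)
new = masked_softmax_new(inputs, noisy_mask)
# With the old scheme, the 1e-4 noise becomes a -1e5 perturbation and
# effectively masks out the first position; where-based masking keeps it.
```

The where-based version is insensitive to any mask noise smaller than 0.5, whereas the arithmetic version scales noise by the large negative constant.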

@gemini-code-assist
Contributor

Summary of Changes

Hello @yangliyl-g, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the Softmax activation layer in Keras 3 by improving the numerical robustness of its mask handling. The previous method of applying masks was susceptible to floating-point inaccuracies, which could lead to unexpected behavior. This update introduces a more stable approach using conditional where operations and ensures proper binarization of non-boolean masks, resulting in more reliable and predictable softmax computations.

Highlights

  • Numerical Robustness: The Softmax layer's mask handling has been refactored to improve numerical stability, addressing potential issues with floating-point precision in the original implementation.
  • Mask Binarization: A binarization step has been introduced for non-boolean masks, converting floating-point masks to boolean based on a threshold of 0.5 before applying the masking logic.
  • Conditional Masking: The masking mechanism now utilizes backend.numpy.where instead of direct arithmetic addition, providing a more robust and explicit way to handle masked inputs.
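The binarization step in the second highlight can be sketched as follows (a NumPy illustration with a made-up helper name, binarize_mask; the actual layer operates on backend tensors):

```python
import numpy as np

def binarize_mask(mask):
    # Boolean masks pass through unchanged; floating-point masks are
    # thresholded at 0.5, so values near 1.0 keep a position and values
    # near 0.0 drop it, regardless of small numerical noise.
    mask = np.asarray(mask)
    if mask.dtype == np.bool_:
        return mask
    return mask > 0.5

float_mask = np.array([0.9999, 0.0001, 1.0, 0.0], dtype=np.float32)
binary = binarize_mask(float_mask)
# Positions with mask > 0.5 survive; the rest are set to a large
# negative number before the softmax, so they get ~zero probability.
masked_inputs = np.where(
    binary, np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32), np.float32(-1e9)
)
```

Thresholding at 0.5 makes the masking decision explicit and noise-tolerant, which is what lets the where-based selection replace the arithmetic adder.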

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request improves the numerical robustness of softmax mask handling by replacing an arithmetic approach with backend.numpy.where. This is a solid improvement that also correctly handles floating-point masks by binarizing them. A key benefit of this change is the removal of in-place modification of the inputs tensor, which avoids potential side effects and aligns with best practices for writing neural network layers. I have one minor suggestion to correct a typo in a comment for clarity. Overall, this is a great change.

) * _large_negative_number(inputs.dtype)
inputs += adder
# We keep the positions where the mask is True or > 0.5, and set the
# other (masked) positions to -1e.9.

Severity: medium

There's a small typo in this comment. -1e.9 appears to be a typo for -1e9. To improve clarity and accuracy, especially since _large_negative_number can return different values based on dtype, I suggest making the comment more general.

Suggested change
# other (masked) positions to -1e.9.
# other (masked) positions to a large negative number.

Collaborator

@hertschuh hertschuh left a comment


Thanks for the improvement!

@google-ml-butler google-ml-butler bot added kokoro:force-run ready to pull Ready to be merged into the codebase labels Nov 14, 2025
@codecov-commenter

codecov-commenter commented Nov 14, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.47%. Comparing base (2730d52) to head (6c87c58).
⚠️ Report is 2 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21850      +/-   ##
==========================================
- Coverage   82.48%   82.47%   -0.01%     
==========================================
  Files         577      577              
  Lines       59506    59508       +2     
  Branches     9330     9332       +2     
==========================================
- Hits        49084    49082       -2     
- Misses       8010     8014       +4     
  Partials     2412     2412              
Flag Coverage Δ
keras 82.30% <100.00%> (-0.01%) ⬇️
keras-jax 62.90% <100.00%> (+<0.01%) ⬆️
keras-numpy 57.55% <100.00%> (+<0.01%) ⬆️
keras-openvino 34.34% <100.00%> (-0.01%) ⬇️
keras-tensorflow 64.12% <100.00%> (+<0.01%) ⬆️
keras-torch 63.61% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

@hertschuh
Collaborator

Oh, you'll need to reformat the code.

ruff --config pyproject.toml format . should do the job.

Otherwise the real way is:
https://github.com/keras-team/keras/blob/master/CONTRIBUTING.md#generating-public-api-and-formatting-the-code

@google-ml-butler google-ml-butler bot removed the ready to pull Ready to be merged into the codebase label Nov 14, 2025
@google-ml-butler google-ml-butler bot added kokoro:force-run ready to pull Ready to be merged into the codebase labels Nov 14, 2025
@hertschuh hertschuh merged commit edbf8f5 into keras-team:master Nov 14, 2025
12 checks passed

Labels

ready to pull (Ready to be merged into the codebase), size:S


5 participants