Conversation


@utsab345 commented Oct 5, 2025

Description

Fixes #21714 - Incorrect results when using MultiHeadAttention attention_axes with negative indexing

Problem

When attention_axes was specified with negative indices (e.g., -2), the normalization was happening relative to the projected tensor rank (which includes the num_heads dimension) rather than the input tensor rank. This caused incorrect axis selection and wrong einsum equations; the sketch after the example below makes the off-by-one concrete.

For example, with input shape (10, 5, 128, 16) (rank 4):

  • attention_axes=2 correctly produced equation abfde,abcde->abdcf
  • attention_axes=-2 incorrectly produced equation abcfe,abcde->abcdf
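
To make the off-by-one concrete, here is the normalization arithmetic for this example (a minimal sketch; the variable names are illustrative):

input_rank = 4                   # rank of the input (10, 5, 128, 16)
projected_rank = input_rank + 1  # projection adds a num_heads dimension

# Buggy: -2 was resolved against the projected rank.
print(-2 % projected_rank)   # 3, selecting the wrong axis
# Fixed: -2 is resolved against the input rank.
print(input_rank + (-2))     # 2, matching attention_axes=2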

Solution

Modified the _build_attention method in MultiHeadAttention to normalize negative indices relative to the input rank (rank - 1) before the num_heads dimension is added during projection, as sketched in code after the list. This ensures:

  • attention_axes=-2 normalizes to input_rank + (-2) = 4 + (-2) = 2
  • Both attention_axes=2 and attention_axes=-2 now produce identical results
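
In code, the normalization amounts to something like this (a minimal sketch of the idea, not the exact diff; rank is the projected rank that _build_attention receives):

input_rank = rank - 1  # rank counts the num_heads dim added by projection
self._attention_axes = tuple(
    input_rank + ax if ax < 0 else ax for ax in self._attention_axes
)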

Changes

  • Updated keras/src/layers/attention/multi_head_attention.py:
    • Added negative index normalization in _build_attention method
  • Added integration_tests/test_multi_head_attention_negative_axis.py:
    • Test verifies that negative and positive indexing produce identical outputs

Testing

import numpy as np
import keras

x = np.random.normal(size=(2, 3, 8, 4))
mha_pos = keras.layers.MultiHeadAttention(num_heads=2, key_dim=4, attention_axes=2)
mha_neg = keras.layers.MultiHeadAttention(num_heads=2, key_dim=4, attention_axes=-2)
# Both now produce identical shapes and values (given the same weights)
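
Continuing the snippet above, a sketch of how the value check can be completed (both layers must be built before weights can be copied; the tolerances are an assumption):

out_pos = mha_pos(x, x)  # builds mha_pos and runs self-attention over axis 2
mha_neg(x, x)            # build mha_neg so its weights exist
mha_neg.set_weights(mha_pos.get_weights())
out_neg = mha_neg(x, x)

np.testing.assert_allclose(
    keras.ops.convert_to_numpy(out_pos),
    keras.ops.convert_to_numpy(out_neg),
    rtol=1e-5,
    atol=1e-5,
)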

gemini-code-assist bot (Contributor) commented:

Summary of Changes

Hello @utsab345, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug in the MultiHeadAttention layer where negative indices for attention_axes were miscalculated, resulting in incorrect attention mechanisms. The solution involves precise normalization of these indices against the input tensor's rank. Additionally, it includes minor enhancements to image saving utilities and introduces comprehensive integration tests to validate these fixes and improvements.

Highlights

  • MultiHeadAttention Negative Indexing Fix: Corrected an issue where negative attention_axes in MultiHeadAttention were normalized incorrectly against the projected tensor rank, leading to erroneous einsum equations.
  • Input Rank Normalization: Implemented a fix in _build_attention to normalize negative attention_axes relative to the input tensor's rank before the num_heads dimension is added.
  • Equation Builder Normalization: Added a similar normalization step in _build_attention_equation to ensure negative indices are consistently handled when generating einsum equations.
  • Image Utility Improvements: Standardized "jpg" to "jpeg" in the save_img utility and refined the associated warning message for RGBA to RGB conversion.
  • New Integration Tests: Introduced new tests for MultiHeadAttention's negative attention_axes and for the save_img utility's JPG handling to ensure correct behavior.


gemini-code-assist bot left a comment:


Code Review

This pull request correctly fixes an issue with negative index handling in MultiHeadAttention's attention_axes. The core logic change in _build_attention properly normalizes negative axes with respect to the input tensor's rank, and a new integration test verifies this behavior.

My review includes a suggestion to simplify the normalization logic using a list comprehension for better readability. More importantly, I've identified a redundant and incorrect piece of code in _build_attention_equation that seems to be a leftover from a different fix attempt. It incorrectly uses the projected rank for normalization, which is the original bug, and should be removed to avoid confusion and future issues.

The PR also includes unrelated but correct changes for image saving utilities. It would be clearer to have these in a separate pull request.

Comment on lines +773 to +777
    # Normalize negative indices to positive indices
    if isinstance(attn_axes, (list, tuple)):
        attn_axes = tuple(ax % rank if ax < 0 else ax for ax in attn_axes)
    else:
        attn_axes = (attn_axes % rank if attn_axes < 0 else attn_axes,)

Severity: high

This block for normalizing negative indices appears to be both redundant and incorrect.

  1. Redundant: The _build_attention method already normalizes self._attention_axes to positive indices before passing them to this function. This block will therefore have no effect on the already-positive axes.
  2. Incorrect: If this block were to handle negative indices, its logic ax % rank is incorrect. It normalizes based on the projected tensor's rank, which is the exact bug this PR aims to fix. The correct normalization should be relative to the input rank, as correctly implemented in _build_attention.

To avoid confusion and potential future bugs, this block should be removed.

Collaborator:

Agree, please revert.

Comment on lines +382 to +391
            input_rank = rank - 1
            normalized_axes = []
            for ax in self._attention_axes:
                if ax < 0:
                    # Normalize relative to input rank
                    normalized_ax = input_rank + ax
                else:
                    normalized_ax = ax
                normalized_axes.append(normalized_ax)
            self._attention_axes = tuple(normalized_axes)

Severity: medium

For better readability and conciseness, this loop for normalizing axes can be simplified into a single list comprehension.

            input_rank = rank - 1
            self._attention_axes = tuple(
                input_rank + ax if ax < 0 else ax for ax in self._attention_axes
            )


codecov-commenter commented Oct 5, 2025

Codecov Report

❌ Patch coverage is 50.00000% with 7 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.59%. Comparing base (94ca6ef) to head (deebbc6).

Files with missing lines                             Patch %   Lines
keras/src/layers/attention/multi_head_attention.py  63.63%    2 Missing and 2 partials ⚠️
keras/src/utils/image_utils.py                       0.00%    3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21721      +/-   ##
==========================================
- Coverage   82.59%   82.59%   -0.01%     
==========================================
  Files         572      572              
  Lines       58401    58413      +12     
  Branches     9146     9150       +4     
==========================================
+ Hits        48238    48244       +6     
- Misses       7828     7832       +4     
- Partials     2335     2337       +2     
Flag               Coverage Δ
keras              82.39% <50.00%> (-0.01%) ⬇️
keras-jax          63.24% <50.00%> (-0.01%) ⬇️
keras-numpy        57.58% <50.00%> (-0.01%) ⬇️
keras-openvino     34.36% <0.00%> (-0.01%) ⬇️
keras-tensorflow   63.97% <50.00%> (-0.01%) ⬇️
keras-torch        63.55% <50.00%> (-0.01%) ⬇️

@utsab345 force-pushed the fix-multihead-attention-negative-axes branch from 778c3fb to e704b28 (October 5, 2025 01:54)
@utsab345 force-pushed the fix-multihead-attention-negative-axes branch from e704b28 to deebbc6 (October 5, 2025 02:04)

"""
data_format = backend.standardize_data_format(data_format)
# Normalize jpg → jpeg
if file_format is not None and file_format.lower() == "jpg":
Collaborator:

Unrelated changes, please revert.

@@ -0,0 +1,27 @@
import os
Collaborator:

Unrelated changes, please revert.



def test_attention_axes_negative_indexing_matches_positive():
    x = np.random.normal(size=(2, 3, 8, 4))
Collaborator:

Move to multi_head_attention_test.py and use the unit test style, i.e. self.assertEqual, self.assertAllClose, ...
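
For illustration, a sketch of the requested unit-test style (a hypothetical rendering; it assumes Keras's testing.TestCase, which provides assertAllClose):

import numpy as np

from keras.src import layers, testing


class MultiHeadAttentionTest(testing.TestCase):
    def test_attention_axes_negative_indexing_matches_positive(self):
        x = np.random.normal(size=(2, 3, 8, 4))
        mha_pos = layers.MultiHeadAttention(
            num_heads=2, key_dim=4, attention_axes=2
        )
        mha_neg = layers.MultiHeadAttention(
            num_heads=2, key_dim=4, attention_axes=-2
        )
        out_pos = mha_pos(x, x)
        mha_neg(x, x)  # build the layer before copying weights
        mha_neg.set_weights(mha_pos.get_weights())
        out_neg = mha_neg(x, x)
        self.assertAllClose(out_pos, out_neg)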

        else:
            self._attention_axes = tuple(self._attention_axes)
            # Normalize negative indices relative to INPUT rank (rank - 1)
            input_rank = rank - 1
Collaborator:

Why rank - 1?

I think this would be enough instead of lines 381-391:

self._attention_axes = tuple(axis + rank if axis < 0 else axis for axis in self._attention_axes)
