
fix: reject optimizations that modify class members without changing target method#1950

Open
mashraf-222 wants to merge 6 commits into main from cf-1084-reject-unchanged-target-modifications

Conversation

@mashraf-222
Contributor

Problem

The optimizer accepts LLM output that modifies constructors, adds fields, or adds helper methods without actually changing the target method. This causes 15+ wrong-method PRs where the PR title says one method but the diff changes constructors or adds unused members.

Examples from Zuul/Eureka:

  • Header.getValue — title says getValue, diff changes constructor
  • FilterType.toString — diff removes private keyword, doesn't touch toString
  • MutableFilterRegistry.size — changes from delegating to cached count
  • SessionContext.getRouteVIP — correctness bug, returns wrong value
  • EurekaHttpResponse.getStatusCode — title says getStatusCode, diff changes constructor

Root Cause

replace_function() in replacement.py blindly applies all class member changes from the LLM output (constructors via _replace_constructors, fields and helpers via _insert_class_members) without checking whether the target method itself was modified.

Fix

Added a guard in replace_function() that:

  1. Detects if the LLM produced any class-level modifications (fields, helpers, constructors)
  2. Extracts the original target method source from the file
  3. Compares it (normalized for whitespace) with the optimization's target method
  4. If the target is unchanged → rejects the optimization as a no-op

Two helper functions added:

  • _normalized_equal() — whitespace-insensitive comparison
  • _extract_original_method_source() — extracts the original method from source (handles overloads via line numbers)
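Putting the pieces together, the guard can be sketched roughly like this. The `_normalized_equal` body matches the whitespace-insensitive comparison described above; `reject_noop_optimization` and `has_class_member_changes` are illustrative names for this sketch, not the actual identifiers in replacement.py:

```python
def _normalized_equal(a: str, b: str) -> bool:
    """Whitespace-insensitive comparison of two method sources."""

    def normalize(s: str) -> str:
        return "\n".join(line.strip() for line in s.strip().splitlines() if line.strip())

    return normalize(a) == normalize(b)


def reject_noop_optimization(original_method: str, candidate_method: str,
                             has_class_member_changes: bool) -> bool:
    """Return True when the optimization should be rejected as a no-op:
    the LLM touched class members (fields, helpers, constructors) but the
    target method itself is unchanged.

    Illustrative sketch only; the real guard lives inside replace_function().
    """
    return has_class_member_changes and _normalized_equal(original_method, candidate_method)
```

For example, a candidate that only re-indents the target method while modifying a constructor is rejected, while one that actually changes the method body is allowed through.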

Test Coverage

5 new tests in TestUnchangedTargetWithClassModifications:

  • test_unchanged_target_with_unused_field_rejected — field added, target unchanged → rejected
  • test_unchanged_target_with_modified_constructor_rejected — constructor modified, target unchanged → rejected (reproduces Zuul release v0.10.3 #52 pattern)
  • test_unchanged_target_with_helper_rejected — helper added, target unchanged → rejected
  • test_changed_target_with_field_accepted — target changed + field added → accepted (valid memoization)
  • test_changed_target_only_accepted — target changed, no class modifications → accepted

All 37 tests pass (32 existing + 5 new).

Closes CF-1084

🤖 Generated with Claude Code

…target method

Guard against the "wrong-file" pattern where the LLM adds cache fields,
modifies constructors, or adds helper methods without actually changing
the method it was asked to optimize. 15+ known PRs across Zuul and
Eureka exhibited this pattern.

The check compares the target method source (normalized for whitespace)
before and after the optimization. If the target is unchanged but the
LLM added fields, helpers, or modified constructors, the optimization
is rejected as a no-op.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@claude
Copy link
Copy Markdown
Contributor

claude Bot commented Apr 1, 2026

Claude encountered an error.


I'll analyze this and get back to you.

```python
"""Compare two method sources ignoring whitespace differences."""

def normalize(s: str) -> str:
    return "\n".join(line.strip() for line in s.strip().splitlines() if line.strip())
```

⚡️Codeflash found 34% (0.34x) speedup for _normalized_equal in codeflash/languages/java/replacement.py

⏱️ Runtime: 1.13 milliseconds → 847 microseconds (best of 250 runs)

📝 Explanation and details

The optimization replaced the generator expression `"\n".join(line.strip() for line in s.strip().splitlines() if line.strip())` with an explicit loop that strips each line once and appends non-empty results to a list, eliminating the redundant `line.strip()` call that the original performed twice per line (once in the filter clause and once in the output expression). This cuts per-line work from two strip operations to a single strip followed by a boolean check, reducing runtime by about 33% across inputs ranging from short strings to 1000+ line blocks. The function is called by replace_function to validate that LLM-generated optimizations actually modify the target method, so the speedup directly reduces latency in the hot path for detecting no-op optimizations.

Correctness verification report:

| Test | Status |
| --- | --- |
| ⚙️ Existing Unit Tests | 🔘 None Found |
| 🌀 Generated Regression Tests | 68 Passed |
| ⏪ Replay Tests | 🔘 None Found |
| 🔎 Concolic Coverage Tests | 🔘 None Found |
| 📊 Tests Coverage | 100.0% |
🌀 Generated Regression Tests:
from codeflash.languages.java.replacement import _normalized_equal


def test_simple_identical_strings_are_equal():
    # identical short strings should be equal
    a = "def foo():\n    return 1"
    b = "def foo():\n    return 1"
    assert _normalized_equal(a, b) is True  # 3.03μs -> 2.31μs (31.1% faster)


def test_whitespace_differences_ignored_between_lines():
    # leading/trailing spaces and empty lines are ignored by normalization
    a = "\n\n  def foo():   \n      return 1  \n\n"
    b = "def foo():\n\treturn 1"  # tab for indentation vs spaces; leading/trailing whitespace will be stripped
    # Both normalize to lines ["def foo():", "return 1"] so should be equal
    assert _normalized_equal(a, b) is True  # 3.25μs -> 2.92μs (11.3% faster)


def test_different_content_not_equal():
    # content changes (case or words) must be detected
    a = "def foo():\n    return 1"
    b = "def foo():\n    return 2"
    assert _normalized_equal(a, b) is False  # 2.75μs -> 2.00μs (37.5% faster)


def test_internal_multiple_spaces_are_preserved():
    # internal spaces are not trimmed by line.strip(), only leading/trailing
    a = "value =  a   +   b"  # multiple internal spaces
    b = "value = a + b"  # fewer internal spaces
    # These remain different after normalization because internal spaces are preserved
    assert _normalized_equal(a, b) is False  # 2.26μs -> 1.63μs (38.7% faster)


def test_empty_and_whitespace_only_strings_equal():
    # empty strings or strings with only whitespace/newlines normalize to empty
    assert _normalized_equal("", "") is True  # 1.91μs -> 1.19μs (60.5% faster)
    assert _normalized_equal("   \n  \n", "") is True  # 1.20μs -> 1.35μs (11.1% slower)
    assert _normalized_equal("\n\n", "   ") is True  # 1.06μs -> 1.16μs (8.69% slower)


def test_order_of_nonempty_lines_matters():
    # order of non-empty lines is preserved by normalization and thus matters
    a = "first\nsecond\nthird"
    b = "second\nfirst\nthird"
    assert _normalized_equal(a, b) is False  # 3.23μs -> 2.42μs (33.5% faster)


def test_handling_of_windows_crlf_and_old_mac_newlines():
    # splitlines() handles \r\n and \r consistently; normalization should still match
    a = "line1\r\n   line2\r\n\r\nline3  "
    b = "  line1\nline2\n\nline3"
    assert _normalized_equal(a, b) is True  # 3.52μs -> 2.96μs (19.0% faster)


def test_tabs_inside_lines_are_preserved_but_trim_edges():
    # leading/trailing tabs are stripped, internal tabs remain significant
    a = "\t\titem\twith\tinternal\ttabs\t\n"
    b = "item\twith\tinternal\ttabs"
    # After normalization both lines become identical content-wise
    assert _normalized_equal(a, b) is True  # 2.29μs -> 1.96μs (16.8% faster)


def test_valid_input_patterns():
    # Exercise valid input types with different patterns
    # Pattern 1: First string is valid, second is empty—tests normalization success path
    result = _normalized_equal("public class Foo {}", "")  # 2.21μs -> 1.40μs (57.8% faster)
    assert result is False

    # Pattern 2: Both strings contain code—tests symmetric comparison
    code_a = "class A {\n    int x = 5;\n}"
    code_b = "class A {\n    int x = 5;\n}"
    assert _normalized_equal(code_a, code_b) is True  # 2.64μs -> 1.96μs (34.7% faster)

    # Pattern 3: Strings with comments and special formatting
    code_with_comments = "// comment\nint method() {\n    return 42;\n}"
    code_no_comments = "int method() {\n    return 42;\n}"
    assert _normalized_equal(code_with_comments, code_no_comments) is False  # 2.09μs -> 1.56μs (34.0% faster)


def test_non_breaking_space_vs_regular_space_not_ignored():
    # Non-breaking space (\u00A0) is not removed by str.strip() in many Python builds;
    # it may remain internal and thus cause inequality compared to ordinary space.
    a = "a\u00a0b"  # 'a' + NBSP + 'b'
    b = "a b"  # 'a' + regular space + 'b'
    # These should not be treated as equal because internal characters differ
    assert _normalized_equal(a, b) is False  # 2.47μs -> 1.79μs (37.9% faster)


def test_large_number_of_lines_with_varying_whitespace_are_equal():
    # Construct 1000 logical lines with varied leading/trailing whitespace and empty lines inserted.
    lines_count = 1000
    # Build first representation with extra indentation and trailing spaces
    a_lines = []
    for i in range(lines_count):
        a_lines.append(f"   line_{i}   ")  # leading/trailing spaces to be stripped
        if i % 50 == 0:
            a_lines.append("   ")  # insert whitespace-only lines that should be removed
    a = "\n".join(a_lines)

    # Build second representation with different whitespace and occasional blank lines
    b_lines = []
    for i in range(lines_count):
        # use tabs and varying surrounding blank lines
        prefix = "\t" if i % 3 == 0 else "  "
        suffix = "" if i % 7 == 0 else "   "
        b_lines.append(f"{prefix}line_{i}{suffix}")
        if i % 60 == 0:
            b_lines.append("")  # actual blank line
    b = "\r\n".join(b_lines)  # use CRLF occasionally

    # After normalization, both should produce the same sequence of non-empty, stripped lines
    assert _normalized_equal(a, b) is True  # 362μs -> 268μs (34.8% faster)


def test_repeated_comparisons_stability_over_many_iterations():
    # Test correctness with diverse real-world Java method patterns
    test_cases = [
        ("public void test() {}", "public void test() {}"),
        ("void method() {\n    int x = 1;\n}", "void method() {\n    int x = 1;\n}"),
        ('  String getName() {\n    return "test";\n  }', 'String getName() {\n    return "test";\n}'),
        (
            "List<String> getList() {\n    return new ArrayList<>();\n}",
            "List<String> getList() {\n    return new ArrayList<>();\n}",
        ),
        ("@Override\npublic boolean equals(Object o) {}", "@Override\npublic boolean equals(Object o) {}"),
    ]

    for a, b in test_cases:
        assert _normalized_equal(a, b) is True  # 11.1μs -> 8.05μs (37.3% faster)

    # Test that different methods are correctly identified as non-equal
    different_cases = [
        ("void foo() { return 1; }", "void bar() { return 1; }"),
        ("int getValue() { return 0; }", "int getValue() { return 1; }"),
        ("static void test() {}", "void test() {}"),
    ]

    for a, b in different_cases:
        assert _normalized_equal(a, b) is False  # 3.65μs -> 2.34μs (56.0% faster)
from codeflash.languages.java.replacement import _normalized_equal


def test_identical_strings_are_equal():
    """Two identical strings should return True."""
    assert _normalized_equal("hello", "hello") is True  # 2.52μs -> 1.62μs (55.5% faster)


def test_identical_multiline_strings_are_equal():
    """Two identical multiline strings should return True."""
    code1 = "def foo():\n    pass"
    code2 = "def foo():\n    pass"
    assert _normalized_equal(code1, code2) is True  # 3.23μs -> 2.46μs (31.3% faster)


def test_different_strings_are_not_equal():
    """Two different strings should return False."""
    assert _normalized_equal("hello", "world") is False  # 2.27μs -> 1.59μs (42.7% faster)


def test_strings_differing_only_in_leading_whitespace():
    """Strings with different leading whitespace but same content should be equal."""
    code1 = "def foo():\n    pass"
    code2 = "    def foo():\n        pass"
    assert _normalized_equal(code1, code2) is True  # 3.17μs -> 2.37μs (33.3% faster)


def test_strings_differing_only_in_trailing_whitespace():
    """Strings with different trailing whitespace but same content should be equal."""
    code1 = "def foo():\n    pass"
    code2 = "def foo():\n    pass   \n"
    assert _normalized_equal(code1, code2) is True  # 3.00μs -> 2.12μs (41.5% faster)


def test_strings_differing_only_in_internal_indentation():
    """Strings with different internal indentation should be equal after normalization."""
    code1 = "def foo():\n    pass"
    code2 = "def foo():\npass"
    assert _normalized_equal(code1, code2) is True  # 2.86μs -> 2.02μs (41.6% faster)


def test_strings_with_extra_blank_lines():
    """Strings with different numbers of blank lines should be equal."""
    code1 = "def foo():\n    pass\n    return"
    code2 = "def foo():\n\n    pass\n\n    return"
    assert _normalized_equal(code1, code2) is True  # 3.76μs -> 2.95μs (27.1% faster)


def test_strings_with_tabs_and_spaces():
    """Strings with mixed tabs and spaces should be normalized correctly."""
    code1 = "def foo():\n    pass"
    code2 = "def foo():\n\tpass"
    assert _normalized_equal(code1, code2) is True  # 2.81μs -> 2.05μs (37.1% faster)


def test_single_line_with_leading_trailing_spaces():
    """Single line strings with leading/trailing spaces should be equal."""
    assert _normalized_equal("  hello  ", "hello") is True  # 2.37μs -> 1.79μs (32.3% faster)


def test_empty_strings():
    """Two empty strings should be equal."""
    assert _normalized_equal("", "") is True  # 2.00μs -> 1.20μs (66.6% faster)


def test_strings_with_only_whitespace():
    """Strings containing only whitespace should normalize to empty."""
    assert _normalized_equal("   \n   \n   ", "") is True  # 1.96μs -> 1.98μs (1.01% slower)


def test_one_empty_one_whitespace_only():
    """Empty string should equal string with only whitespace."""
    assert _normalized_equal("", "     ") is True  # 1.87μs -> 1.41μs (32.6% faster)


def test_strings_with_only_newlines():
    """Strings containing only newlines should normalize to empty."""
    assert _normalized_equal("\n\n\n", "") is True  # 1.84μs -> 1.71μs (7.65% faster)


def test_strings_with_mixed_newline_types():
    """Strings should normalize regardless of newline representation."""
    code1 = "line1\nline2\nline3"
    code2 = "line1\n\nline2\n\nline3"
    assert _normalized_equal(code1, code2) is True  # 3.78μs -> 2.89μs (30.5% faster)


def test_complex_multiline_with_varied_indentation():
    """Complex multiline code with varied indentation should normalize correctly."""
    code1 = """
    def method():
        if True:
            return 42
        else:
            return 0
    """
    code2 = """def method():
if True:
return 42
else:
return 0"""
    assert _normalized_equal(code1, code2) is True  # 4.92μs -> 4.27μs (15.2% faster)


def test_strings_with_special_characters():
    """Strings with special characters should be compared correctly."""
    code1 = "print('hello!@#$%^&*()')"
    code2 = "print('hello!@#$%^&*()')"
    assert _normalized_equal(code1, code2) is True  # 2.25μs -> 1.64μs (37.2% faster)


def test_strings_with_special_characters_different():
    """Strings with different special characters should not be equal."""
    code1 = "print('hello!@#')"
    code2 = "print('hello!$%')"
    assert _normalized_equal(code1, code2) is False  # 2.21μs -> 1.48μs (49.3% faster)


def test_strings_with_unicode_characters():
    """Strings with unicode characters should be compared correctly."""
    code1 = "# ñ é ü\nprint('café')"
    code2 = "# ñ é ü\nprint('café')"
    assert _normalized_equal(code1, code2) is True  # 4.06μs -> 3.35μs (21.3% faster)


def test_strings_with_different_unicode():
    """Strings with different unicode characters should not be equal."""
    code1 = "print('café')"
    code2 = "print('cafe')"
    assert _normalized_equal(code1, code2) is False  # 2.56μs -> 1.68μs (52.4% faster)


def test_single_character_strings():
    """Single character strings should be compared correctly."""
    assert _normalized_equal("a", "a") is True  # 2.09μs -> 1.40μs (49.3% faster)


def test_single_character_strings_different():
    """Different single character strings should not be equal."""
    assert _normalized_equal("a", "b") is False  # 2.10μs -> 1.43μs (46.9% faster)


def test_very_long_identical_lines():
    """Very long identical lines should be equal."""
    long_line = "x" * 1000
    assert _normalized_equal(long_line, long_line) is True  # 3.58μs -> 2.97μs (20.5% faster)


def test_very_long_different_lines():
    """Very long different lines should not be equal."""
    long_line1 = "x" * 1000
    long_line2 = "y" * 1000
    assert _normalized_equal(long_line1, long_line2) is False  # 3.71μs -> 2.93μs (26.6% faster)


def test_strings_with_consecutive_spaces_in_content():
    """Strings with consecutive spaces within content should be preserved."""
    code1 = "x  =  5"
    code2 = "x  =  5"
    assert _normalized_equal(code1, code2) is True  # 2.09μs -> 1.43μs (46.2% faster)


def test_strings_with_different_consecutive_spaces():
    """Strings with different consecutive spaces should not be equal."""
    code1 = "x  =  5"
    code2 = "x = 5"
    assert _normalized_equal(code1, code2) is False  # 2.07μs -> 1.48μs (39.8% faster)


def test_multiline_with_blank_lines_at_start():
    """Multiline string with blank lines at start should normalize correctly."""
    code1 = "\n\n\nhello"
    code2 = "hello"
    assert _normalized_equal(code1, code2) is True  # 2.21μs -> 2.09μs (5.73% faster)


def test_multiline_with_blank_lines_at_end():
    """Multiline string with blank lines at end should normalize correctly."""
    code1 = "hello\n\n\n"
    code2 = "hello"
    assert _normalized_equal(code1, code2) is True  # 2.17μs -> 1.83μs (18.6% faster)


def test_multiline_with_blank_lines_everywhere():
    """Multiline string with blank lines everywhere should normalize correctly."""
    code1 = "\n\n  hello  \n\n  world  \n\n"
    code2 = "hello\nworld"
    assert _normalized_equal(code1, code2) is True  # 3.71μs -> 3.02μs (22.9% faster)


def test_string_with_only_spaces():
    """String with only spaces should normalize to empty."""
    assert _normalized_equal("     ", "") is True  # 1.95μs -> 1.43μs (36.4% faster)


def test_java_method_example_1():
    """Example Java method with different formatting should be equal."""
    method1 = """
    public void doSomething() {
        System.out.println("Hello");
    }
    """
    method2 = """public void doSomething() {
System.out.println("Hello");
}"""
    assert _normalized_equal(method1, method2) is True  # 4.06μs -> 3.56μs (13.8% faster)


def test_java_method_example_2():
    """Another Java method example with varied whitespace."""
    method1 = """
        public int calculate(int x, int y) {
            return x + y;
        }
    """
    method2 = """public int calculate(int x, int y) {
return x + y;
}"""
    assert _normalized_equal(method1, method2) is True  # 3.88μs -> 3.19μs (21.7% faster)


def test_strings_case_sensitive():
    """Comparison should be case-sensitive."""
    assert _normalized_equal("Hello", "hello") is False  # 2.20μs -> 1.48μs (48.7% faster)


def test_strings_with_leading_spaces_only_first():
    """String with leading spaces should not equal different string."""
    code1 = "    hello"
    code2 = "world"
    assert _normalized_equal(code1, code2) is False  # 2.17μs -> 1.61μs (34.8% faster)


def test_large_multiline_string_identical():
    """Large multiline string should equal itself after normalization."""
    lines = ["line " + str(i) for i in range(100)]
    code1 = "\n".join(lines)
    code2 = "\n".join("    " + line + "    " for line in lines)
    assert _normalized_equal(code1, code2) is True  # 30.6μs -> 22.6μs (35.5% faster)


def test_very_large_multiline_string_identical():
    """Very large multiline string (1000+ lines) should equal itself."""
    lines = ["def func_" + str(i) + "(): pass" for i in range(500)]
    code1 = "\n".join(lines)
    code2 = "\n    ".join([""] + lines)  # Add leading indentation
    assert _normalized_equal(code1, code2) is True  # 136μs -> 101μs (34.6% faster)


def test_large_string_with_many_blank_lines():
    """Large string with many interleaved blank lines should normalize correctly."""
    lines = ["line " + str(i) for i in range(100)]
    code1 = "\n".join(lines)
    code2 = "\n\n".join(lines)  # Double newlines between each line
    assert _normalized_equal(code1, code2) is True  # 28.9μs -> 23.5μs (23.0% faster)


def test_deeply_indented_code():
    """Deeply indented code should normalize correctly."""
    code1 = "level1\n  level2\n    level3\n      level4\n        level5"
    code2 = "level1\nlevel2\nlevel3\nlevel4\nlevel5"
    assert _normalized_equal(code1, code2) is True  # 4.23μs -> 3.31μs (27.9% faster)


def test_large_identical_code_blocks():
    """Large identical code blocks should be equal."""
    block = """def method():
    if condition:
        do_something()
    else:
        do_other_thing()
    return result"""
    code1 = "\n\n".join([block] * 50)
    code2 = "\n".join([block] * 50)
    assert _normalized_equal(code1, code2) is True  # 94.9μs -> 69.5μs (36.6% faster)


def test_performance_with_1000_character_lines():
    """Performance test with very long lines."""
    long_line = "x = " + ", ".join(str(i) for i in range(250))
    code1 = long_line
    code2 = "  " + long_line + "  "
    assert _normalized_equal(code1, code2) is True  # 4.16μs -> 3.67μs (13.1% faster)


def test_large_string_with_mixed_content():
    """Large string with mixed whitespace and content."""
    parts = []
    for i in range(200):
        parts.append("statement_" + str(i))
        if i % 10 == 0:
            parts.append("")  # Add blank lines
    code1 = "\n".join(parts)
    code2 = "\n    ".join(parts)  # Different indentation
    assert _normalized_equal(code1, code2) is True  # 57.1μs -> 44.5μs (28.5% faster)


def test_equal_after_normalization_large_diff():
    """Test that large strings with significant whitespace differences are still equal."""
    lines = ["code_" + str(i) for i in range(300)]
    code1 = "\n".join(lines)
    # Heavily indent and add blank lines
    code2_parts = []
    for i, line in enumerate(lines):
        if i % 5 == 0:
            code2_parts.append("")
            code2_parts.append("")
        code2_parts.append("        " + line + "        ")
    code2 = "\n".join(code2_parts)
    assert _normalized_equal(code1, code2) is True  # 92.0μs -> 69.5μs (32.4% faster)


def test_different_large_strings():
    """Two different large strings should not be equal."""
    code1 = "\n".join("line_a_" + str(i) for i in range(100))
    code2 = "\n".join("line_b_" + str(i) for i in range(100))
    assert _normalized_equal(code1, code2) is False  # 24.5μs -> 19.7μs (24.7% faster)


def test_large_similar_strings_one_line_different():
    """Two large strings differing in one line should not be equal."""
    lines1 = ["line_" + str(i) for i in range(100)]
    lines2 = lines1.copy()
    lines2[50] = "different_line"
    code1 = "\n".join(lines1)
    code2 = "\n".join(lines2)
    assert _normalized_equal(code1, code2) is False  # 24.6μs -> 19.1μs (28.9% faster)


def test_alternating_indentation_pattern():
    """String with alternating indentation pattern should normalize correctly."""
    lines = []
    for i in range(100):
        if i % 2 == 0:
            lines.append("  line_" + str(i))
        else:
            lines.append("    line_" + str(i))
    code1 = "\n".join(lines)
    code2 = "\n".join("line_" + str(i) for i in range(100))
    assert _normalized_equal(code1, code2) is True  # 28.9μs -> 21.3μs (36.1% faster)


def test_worst_case_whitespace_normalization():
    """Worst case: every line has different indentation."""
    lines = []
    for i in range(200):
        lines.append(" " * i + "code_" + str(i))
    code1 = "\n".join(lines)
    code2 = "\n".join("code_" + str(i) for i in range(200))
    assert _normalized_equal(code1, code2) is True  # 103μs -> 72.3μs (43.1% faster)

To test or edit this optimization locally, run: `git merge codeflash/optimize-pr1950-2026-04-01T16.29.13`

Suggested change

```diff
-    return "\n".join(line.strip() for line in s.strip().splitlines() if line.strip())
+    parts: list[str] = []
+    for line in s.splitlines():
+        stripped = line.strip()
+        if stripped:
+            parts.append(stripped)
+    return "\n".join(parts)
```


@KRRT7
Collaborator

KRRT7 commented Apr 1, 2026

@claude I believe the inner normalize function exists elsewhere in the codebase, double check

@claude
Contributor

claude Bot commented Apr 1, 2026

Claude finished @KRRT7's task in 1m 7s.


Finding: inner normalize duplicates normalize_java_code

  • Gather context
  • Compare implementations
  • Report findings

You're right. The inner normalize function inside _normalized_equal (replacement.py:371-372) is a subset of normalize_java_code that already exists in formatter.py:264-329.

Comparison:

_normalized_equal inner normalize (replacement.py:371-372):

```python
def normalize(s: str) -> str:
    return "\n".join(line.strip() for line in s.strip().splitlines() if line.strip())
```

→ strips whitespace only

normalize_java_code (formatter.py:264-329):

```python
def normalize_java_code(source: str) -> str:
    # strips whitespace AND removes // line comments AND /* */ block comments
    ...
```

→ strips whitespace + removes all Java comments

normalize_java_code does everything the inner normalize does, plus comment stripping. The PR should replace the inner function with normalize_java_code directly — this also fixes the issue I flagged in my earlier review where _normalized_equal treats comment-only changes as "real" method modifications.

normalize_java_code is already exported from codeflash/languages/java/__init__.py and imported in support.py, so it just needs to be imported in replacement.py.

Suggested fix:

```python
# Remove the inner normalize helper entirely; replace _normalized_equal with:
def _normalized_equal(a: str, b: str) -> bool:
    return normalize_java_code(a) == normalize_java_code(b)
```
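For intuition, here is a minimal sketch of what `normalize_java_code` is described as doing above (whitespace normalization plus `//` and `/* */` comment removal). The real implementation in formatter.py is ~65 lines and presumably handles cases this naive regex version does not, such as comment markers inside string literals:

```python
import re


def normalize_java_code_sketch(source: str) -> str:
    # Remove /* ... */ block comments (non-greedy, spanning lines).
    # NOTE: naive -- would also strip comment-like text inside string literals.
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)
    # Remove // line comments up to end of line.
    source = re.sub(r"//[^\n]*", "", source)
    # Strip each line and drop blank lines, as the inner normalize helper does.
    return "\n".join(line.strip() for line in source.splitlines() if line.strip())
```

With comment stripping folded in, a comment-only edit to the target method normalizes to the same string as the original and is correctly treated as "unchanged".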



Add -> None return annotations and Path / JavaSupport parameter annotations
to every test method + fixture so the prek mypy hook passes when the file
is in the CI diff.

Rich renders the banner panel with box-drawing characters (╭, ╮, │, etc.)
that cp1252 cannot decode. On Windows, subprocess.run(..., text=True) uses
cp1252 by default, so decoding the child stdout raises UnicodeDecodeError
and subprocess sets result.stdout to None — breaking the assertion with a
misleading "argument of type 'NoneType' is not iterable".

Pass encoding="utf-8" explicitly so the test passes on every platform.
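As a sketch, the fix amounts to forcing UTF-8 decoding of the child process's output. The command below is illustrative, not the actual test; `-X utf8` additionally forces UTF-8 mode in the child so the example behaves the same on every platform:

```python
import subprocess
import sys

# Without encoding="utf-8", subprocess.run(..., text=True) decodes the child's
# stdout with the platform default codec -- cp1252 on Windows -- which cannot
# decode Rich's box-drawing characters, so decoding raises UnicodeDecodeError.
result = subprocess.run(
    [sys.executable, "-X", "utf8", "-c", "print('\\u256d\\u2500\\u256e')"],  # child prints ╭─╮
    capture_output=True,
    text=True,
    encoding="utf-8",  # decode child output as UTF-8 on every platform
)
print(result.stdout)
```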
@mashraf-222
Contributor Author

Review

Bug premise verified — real. replace_function in codeflash/languages/java/replacement.py on main applies parsed.new_fields / parsed.modified_constructors / helpers without ever comparing the target-method text before and after parsing. That explains the Zuul/Eureka "wrong-method" PR class where the target method body is unchanged but constructors or class-level fields are modified.

Fix is architecturally correct. The guard at replace_function runs before _insert_class_members mutates source, follows the existing "return source → reject" pattern, and correctly distinguishes changes that modify the target body (accepted) from changes that don't (rejected).

Relationship to PR #1947 (CF-1081): overlapping but distinct.

Neither is strictly a subset. Recommend merging both, with #1947 merged first since its check is narrower; then this PR's broader guard covers the constructor case #1947 skips. A follow-up consolidation (single helper + one normalize_java_code via formatter.py:264 instead of the inlined duplicate) is worth doing after both land.

CI blockers addressed in the last two commits:

  1. prek / prek was failing because the mypy hook ran on tests/test_languages/test_java/test_replacement.py and flagged every untyped test, a pre-existing pattern that surfaced only because this PR touches the file. Added -> None and parameter type annotations to all 37 test methods and the java_support fixture.
  2. unit-tests (windows-latest, 3.13): test_help_banner.py was failing on main (Rich box-drawing characters are undecodable under cp1252). Added encoding="utf-8" to both subprocess.run calls, the same fix as PR #2120 (fix: decode help-banner test subprocess output as UTF-8).

Snyk "code test limit reached" is an account quota, unrelated. Ready for re-review.
