gimple: allow more folding of memcpy [PR102125]
The current restriction on folding memcpy to a single element of size
MOVE_MAX is excessively cautious on most machines and limits some
significant further optimizations.  So relax the restriction, provided that
the copy size does not exceed MOVE_MAX * MOVE_RATIO and that a SET
insn exists for moving the value into machine registers.

Note that there were already checks in place for having misaligned
move operations when one or more of the operands were unaligned.

On Arm this now permits optimizing

uint64_t bar64(const uint8_t *rData1)
{
    uint64_t buffer;
    memcpy(&buffer, rData1, sizeof(buffer));
    return buffer;
}

from
        ldr     r2, [r0]        @ unaligned
        sub     sp, sp, #8
        ldr     r3, [r0, #4]    @ unaligned
        strd    r2, [sp]
        ldrd    r0, [sp]
        add     sp, sp, #8

to
        mov     r3, r0
        ldr     r0, [r0]        @ unaligned
        ldr     r1, [r3, #4]    @ unaligned
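
Conceptually, the fold replaces the memcpy call and the stack temporary with
a single eight-byte load performed through an unaligned, may-alias access.
A rough C-level sketch of the folded form (the u64_unaligned and bar64_folded
names below are illustrative only, not part of the patch):

#include <stdint.h>

/* Illustrative only: an 8-byte, alias-anything, alignment-1 wrapper,
   approximating the access the folded code performs.  */
typedef struct { uint64_t v; } __attribute__ ((packed, may_alias)) u64_unaligned;

uint64_t bar64_folded(const uint8_t *rData1)
{
    /* A single unaligned 8-byte load instead of memcpy into a temporary.  */
    return ((const u64_unaligned *) rData1)->v;
}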

PR target/102125 - (ARM Cortex-M3 and newer) missed optimization. memcpy not needed operations

gcc/ChangeLog:

	PR target/102125
	* gimple-fold.c (gimple_fold_builtin_memory_op): Allow folding
	memcpy if the size is not more than MOVE_MAX * MOVE_RATIO.
Richard Earnshaw committed Sep 13, 2021
1 parent f0cfd07 commit 5f6a6c9
Showing 1 changed file with 11 additions and 5 deletions.
--- a/gcc/gimple-fold.c
+++ b/gcc/gimple-fold.c
@@ -67,6 +67,8 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-vector-builder.h"
 #include "tree-ssa-strlen.h"
 #include "varasm.h"
+#include "memmodel.h"
+#include "optabs.h"

 enum strlen_range_kind {
   /* Compute the exact constant string length.  */
@@ -957,14 +959,17 @@ gimple_fold_builtin_memory_op (gimple_stmt_iterator *gsi,
         = build_int_cst (build_pointer_type_for_mode (char_type_node,
                                                        ptr_mode, true), 0);

-      /* If we can perform the copy efficiently with first doing all loads
-         and then all stores inline it that way.  Currently efficiently
-         means that we can load all the memory into a single integer
-         register which is what MOVE_MAX gives us.  */
+      /* If we can perform the copy efficiently with first doing all loads and
+         then all stores inline it that way.  Currently efficiently means that
+         we can load all the memory with a single set operation and that the
+         total size is less than MOVE_MAX * MOVE_RATIO.  */
       src_align = get_pointer_alignment (src);
       dest_align = get_pointer_alignment (dest);
       if (tree_fits_uhwi_p (len)
-          && compare_tree_int (len, MOVE_MAX) <= 0
+          && (compare_tree_int
+              (len, (MOVE_MAX
+                     * MOVE_RATIO (optimize_function_for_size_p (cfun))))
+              <= 0)
           /* FIXME: Don't transform copies from strings with known length.
              Until GCC 9 this prevented a case in gcc.dg/strlenopt-8.c
              from being handled, and the case was XFAILed for that reason.
@@ -1000,6 +1005,7 @@ gimple_fold_builtin_memory_op (gimple_stmt_iterator *gsi,
           if (type
               && is_a <scalar_int_mode> (TYPE_MODE (type), &mode)
               && GET_MODE_SIZE (mode) * BITS_PER_UNIT == ilen * 8
+              && have_insn_for (SET, mode)
               /* If the destination pointer is not aligned we must be able
                  to emit an unaligned store.  */
               && (dest_align >= GET_MODE_ALIGNMENT (mode)
