Merge upstream, 2024-01-09 #2926

Merged 2,385 commits into master on Mar 25, 2024.

Conversation

tschwinge
Member

Merging in several stages, towards #2802 and further.

This must of course not be rebased by GitHub merge queue, but has to become a proper Git merge. (I'll handle that, once ready.)

edelsohn and others added 30 commits December 28, 2023 14:55
The template linkage2.C and linkage3.C testcases expect a
decoration that does not match AIX assembler syntax.  Expect failure.

gcc/testsuite/ChangeLog:
	* g++.dg/template/linkage2.C: XFAIL on AIX.
	* g++.dg/template/linkage3.C: Same.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
Separate out -fdump-* options to the new section.  Sort by option name.

While there, document -save-temps intermediates.

gcc/fortran/ChangeLog:

	PR fortran/81615
	* invoke.texi: Add Developer Options section.  Move '-fdump-*'
	to it.  Add small examples about changed '-save-temps' behavior.

Signed-off-by: Rimvydas Jasinskas <rimvydas.jas@gmail.com>
…length is in range [0, 31]

Notice we have the following situation:

        vsetivli        zero,4,e32,m1,ta,ma
        vlseg4e32.v     v4,(a5)
        vlseg4e32.v     v12,(a3)
        vsetvli a5,zero,e32,m1,tu,ma             ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
        vfadd.vf        v3,v13,fa0
        vfadd.vf        v1,v12,fa1
        vfmul.vv        v17,v3,v5
        vfmul.vv        v16,v1,v5

The root cause is that we blindly transform COND_LEN_xxx into VLMAX AVL when len == NUNITS.
However, we don't need to transform all of them: when len is in the range [0,31], we don't need
to consume a scalar register.

After this patch:

	vsetivli	zero,4,e32,m1,tu,ma
	addi	a4,a5,400
	vlseg4e32.v	v12,(a3)
	vfadd.vf	v3,v13,fa0
	vfadd.vf	v1,v12,fa1
	vlseg4e32.v	v4,(a4)
	vfadd.vf	v2,v14,fa1
	vfmul.vv	v17,v3,v5
	vfmul.vv	v16,v1,v5

Tested on both RV32 and RV64, no regression.

OK for trunk?

gcc/ChangeLog:

	* config/riscv/riscv-v.cc (is_vlmax_len_p): New function.
	(expand_load_store): Disallow transformation into VLMAX when len is in range of [0,31]
	(expand_cond_len_op): Ditto.
	(expand_gather_scatter): Ditto.
	(expand_lanes_load_store): Ditto.
	(expand_fold_extract_last): Ditto.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/autovec/post-ra-avl.c: Adapt test.
	* gcc.target/riscv/rvv/base/vf_avl-2.c: New test.
The redundant dump check is fragile, easily changed, and not necessary.

Tested on both RV32/RV64 with no regression.

Removed it and committed.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Remove redundant checks.
This fixes the gcc.dg/tree-ssa/gen-vect-26.c testcase by adding
`#pragma GCC novector` in front of the loop that checks the result.
We only want to test the first loop to see if it can be vectorized.

Committed as obvious after testing on x86_64-linux-gnu with -m32.
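For reference, a minimal sketch of the pattern being applied here (this is not the actual gen-vect-26.c testcase; names and sizes are made up): the loop under test is left alone, while the result-checking loop carries the GCC-specific novector pragma so it does not affect the vectorizer scan counts.

#include <stdlib.h>

#define N 128
int a[N], b[N];

int
main (void)
{
  /* Loop under test: this is the one the testcase wants vectorized.  */
  for (int i = 0; i < N; i++)
    a[i] = b[i] + 1;

  /* Checking loop: keep it scalar so the scan-tree-dump counts only
     reflect the loop above.  */
#pragma GCC novector
  for (int i = 0; i < N; i++)
    if (a[i] != b[i] + 1)
      abort ();

  return 0;
}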

gcc/testsuite/ChangeLog:

	PR testsuite/113167
	* gcc.dg/tree-ssa/gen-vect-26.c: Mark the test/check loop
	as novector.

Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
…[PR113133]

The post-reload splitter currently allows xmm16+ registers with TARGET_EVEX512.
The splitter changes SFmode of the output operand to V4SFmode, but the vector
mode is currently unsupported in xmm16+ without TARGET_AVX512VL. lowpart_subreg
returns NULL_RTX in this case and the compilation fails with invalid RTX.

The patch removes support for x/ymm16+ registers with TARGET_EVEX512.  The
support should be restored once ix86_hard_regno_mode_ok is fixed to allow
16-byte modes in x/ymm16+ with TARGET_EVEX512.

	PR target/113133

gcc/ChangeLog:

	* config/i386/i386.md
	(TARGET_USE_VECTOR_FP_CONVERTS SF->DF float_extend splitter):
	Do not handle xmm16+ with TARGET_EVEX512.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr113133-1.c: New test.
	* gcc.target/i386/pr113133-2.c: New test.
…e2 with combine

The problem with peephole2 is it uses a naive sliding-window algorithm
and misses many cases.  For example:

    float a[10000];
    float t() { return a[0] + a[8000]; }

is compiled to:

    la.local    $r13,a
    la.local    $r12,a+32768
    fld.s       $f1,$r13,0
    fld.s       $f0,$r12,-768
    fadd.s      $f0,$f1,$f0

by trunk.  But as we've explained in r14-4851, the following would be
better with -mexplicit-relocs=auto:

    pcalau12i   $r13,%pc_hi20(a)
    pcalau12i   $r12,%pc_hi20(a+32000)
    fld.s       $f1,$r13,%pc_lo12(a)
    fld.s       $f0,$r12,%pc_lo12(a+32000)
    fadd.s      $f0,$f1,$f0

However, the sliding-window algorithm just won't detect the pcalau12i/fld
pair to be optimized.  Using a define_insn_and_rewrite in the combine pass
works around the issue.

gcc/ChangeLog:

	* config/loongarch/predicates.md
	(symbolic_pcrel_offset_operand): New define_predicate.
	(mem_simple_ldst_operand): Likewise.
	* config/loongarch/loongarch-protos.h
	(loongarch_rewrite_mem_for_simple_ldst): Declare.
	* config/loongarch/loongarch.cc
	(loongarch_rewrite_mem_for_simple_ldst): Implement.
	* config/loongarch/loongarch.md (simple_load<mode>): New
	define_insn_and_rewrite.
	(simple_load_<su>ext<SUBDI:mode><GPR:mode>): Likewise.
	(simple_store<mode>): Likewise.
	(define_peephole2): Remove la.local/[f]ld peepholes.

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/explicit-relocs-auto-single-load-store-2.c:
	New test.
	* gcc.target/loongarch/explicit-relocs-auto-single-load-store-3.c:
	New test.
gcc/ChangeLog:

	* config/loongarch/loongarch.md (bstrins_<mode>_for_ior_mask):
	For the condition, remove unneeded trailing "\" and move "&&" to
	follow GNU coding style.  NFC.
In gimple the operation

short _8;
double _9;
_9 = (double) _8;

denotes two operations on AArch64.  First we have to widen from short to
long and then convert this integer to a double.

Currently however we only count the widen/truncate operations:

(double) _5 6 times vec_promote_demote costs 12 in body
(double) _5 12 times vec_promote_demote costs 24 in body

but not the actual conversion operation, which needs an additional 12
instructions in the attached testcase.  Without this, the attached testcase
ends up incorrectly thinking that it's beneficial to vectorize the loop at a
very high VF = 8 (4x unrolled).

Because we can't change the mid-end to account for this, the costing code in
the backend now keeps track of whether the previous operation was a
promotion/demotion and adjusts the expected number of instructions as follows:

1. If it's the first FLOAT_EXPR and the precision of the lhs and rhs are
   different, double the cost, since we need to both promote and convert.
2. If the previous operation was a demotion/promotion, reduce the cost of
   the current operation by the extra amount added for the previous one.

With the patch we get:

(double) _5 6 times vec_promote_demote costs 24 in body
(double) _5 12 times vec_promote_demote costs 36 in body

which correctly accounts for 30 operations.

This fixes the 16% regression in imagick in SPECCPU 2017 reported on Neoverse N2
and using the new generic Armv9-a cost model.
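For illustration only (this is not the attached testcase; the function name and signature are made up), a loop of the shape being costed, where each (double) conversion of a short element requires both a widening unpack and an int-to-FP convert once vectorized on AArch64:

void
widen_convert (const short *restrict s, double *restrict d, int n)
{
  for (int i = 0; i < n; i++)
    d[i] = (double) s[i];   /* widen short -> wider int, then convert to double */
}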

gcc/ChangeLog:

	PR target/110625
	* config/aarch64/aarch64.cc (aarch64_vector_costs::add_stmt_cost):
	Adjust throughput and latency calculations for vector conversions.
	(class aarch64_vector_costs): Add m_num_last_promote_demote.

gcc/testsuite/ChangeLog:

	PR target/110625
	* gcc.target/aarch64/pr110625_4.c: New test.
	* gcc.target/aarch64/sve/unpack_fcvt_signed_1.c: Add
	--param aarch64-sve-compare-costs=0.
	* gcc.target/aarch64/sve/unpack_fcvt_unsigned_1.c: Likewise
This patch disables use of FMA in the matrix multiplication loop for generic (for
x86-64-v3) and zen4.  I tested this on zen4 and Xeon Gold 6212U.

For Intel this is neutral both on the matrix multiplication microbenchmark
(attached) and spec2k17 where the difference was within noise for Core.

On core the micro-benchmark runs as follows:

With FMA:

       578,500,241      cycles:u          #    3.645 GHz            ( +-  0.12% )
       753,318,477      instructions:u    #    1.30  insn per cycle ( +-  0.00% )
       125,417,701      branches:u        #  790.227 M/sec          ( +-  0.00% )
          0.159146 +- 0.000363 seconds time elapsed  ( +-  0.23% )

No FMA:

       577,573,960      cycles:u          #    3.514 GHz            ( +-  0.15% )
       878,318,479      instructions:u    #    1.52  insn per cycle ( +-  0.00% )
       125,417,702      branches:u        #  763.035 M/sec          ( +-  0.00% )
          0.164734 +- 0.000321 seconds time elapsed  ( +-  0.19% )

So the cycle count is unchanged and discrete multiply+add takes same time as
FMA.

While on zen:

With FMA:
         484875179      cycles:u          #    3.599 GHz             ( +-  0.05% )  (82.11%)
         752031517      instructions:u    #    1.55  insn per cycle
         125106525      branches:u        #  928.712 M/sec           ( +-  0.03% )  (85.09%)
            128356      branch-misses:u   #    0.10% of all branches ( +-  0.06% )  (83.58%)

No FMA:

         375875209      cycles:u          #    3.592 GHz             ( +-  0.08% )  (80.74%)
         875725341      instructions:u    #    2.33  insn per cycle
         124903825      branches:u        #    1.194 G/sec           ( +-  0.04% )  (84.59%)
          0.105203 +- 0.000188 seconds time elapsed  ( +-  0.18% )

The difference is that Core understands that fmadd does not need all three
parameters to start computation, while Zen cores don't.

Since this seems to be a noticeable win on Zen and not a loss on Core, it seems
like a good default for generic.

/* Headers and SIZE added here so the benchmark is self-contained; the
   original posting omitted them, and the SIZE value is an assumption.  */
#include <stdio.h>
#include <time.h>

#ifndef SIZE
#define SIZE 1000
#endif

float a[SIZE][SIZE];
float b[SIZE][SIZE];
float c[SIZE][SIZE];

void init(void)
{
   int i, j, k;
   for(i=0; i<SIZE; ++i)
   {
      for(j=0; j<SIZE; ++j)
      {
         a[i][j] = (float)i + j;
         b[i][j] = (float)i - j;
         c[i][j] = 0.0f;
      }
   }
}

void mult(void)
{
   int i, j, k;

   for(i=0; i<SIZE; ++i)
   {
      for(j=0; j<SIZE; ++j)
      {
         for(k=0; k<SIZE; ++k)
         {
            c[i][j] += a[i][k] * b[k][j];
         }
      }
   }
}

int main(void)
{
   clock_t s, e;

   init();
   s=clock();
   mult();
   e=clock();
   printf("        mult took %10d clocks\n", (int)(e-s));

   return 0;
}

gcc/ChangeLog:

	* config/i386/x86-tune.def (X86_TUNE_AVOID_128FMA_CHAINS,
	X86_TUNE_AVOID_256FMA_CHAINS): Enable for znver4 and Core.
There will be another update in January.

	* MAINTAINERS: Update my email address.
This fixes the test gcc.dg/gnu23-tag-4.c introduced by commit 23fee88
which fails for -march=... because DECL_FIELD_BIT_OFFSET is set
inconsistently for types with and without a variable-sized field.  This
is fixed by testing for DECL_ALIGN instead.  The code is further
simplified by removing some unnecessary conditions, i.e. anon_field is
set unconditionally and all fields are assumed to be DECL_FIELDs.
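As a hypothetical illustration of the C23 tag-compatibility rule that tagged_types_tu_compatible_p implements (this is not the failing gnu23-tag-4.c testcase; compile with -std=c23):

/* In C23, redefining the same tag with an identical member list is
   allowed, and the two definitions denote compatible types.  */
struct point { int x; int y; };
struct point { int x; int y; };

int
use (struct point p)
{
  return p.x + p.y;
}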

gcc/c:
	* c-typeck.cc (tagged_types_tu_compatible_p): Revise.

gcc/testsuite:
	* gcc.dg/c23-tag-9.c: New test.
Add benches on insert with hint and before begin cache.

libstdc++-v3/ChangeLog:

	* testsuite/performance/23_containers/insert/54075.cc: Add lookup on unknown entries
	w/o copy to see potential impact of memory fragmentation enhancements.
	* testsuite/performance/23_containers/insert/unordered_multiset_hint.cc: Enhance hash
	functor to make it perfect, exactly 1 entry per bucket. Also use hash functor tagged as
	slow or not to bench w/o hash code cache.
	* testsuite/performance/23_containers/insert/unordered_set_hint.cc: New test case. Like
	previous one but using std::unordered_set.
	* testsuite/performance/23_containers/insert/unordered_set_range_insert.cc: New test case.
	Check performance of range-insertion compared to individual insertions.
	* testsuite/performance/23_containers/insert_erase/unordered_small_size.cc: Add same bench
	but after a copy to demonstrate impact of enhancements regarding memory fragmentation.
A number of methods were still not using the small size optimization, which
is to prefer an O(N) search to a hash computation as long as N is small.
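A conceptual sketch of the idea, in plain C rather than the actual libstdc++ hashtable code; the threshold value and the callback parameters are assumptions made purely for illustration.

#include <stddef.h>
#include <string.h>

#define SMALL_SIZE_THRESHOLD 20   /* assumed value, not libstdc++'s */

/* Look up KEY among the N stored KEYS.  For small N, a plain O(N) scan
   avoids computing the hash code at all; otherwise fall back to the
   usual hashed bucket lookup (modelled here by the callbacks).  */
const char *
lookup (const char *const *keys, size_t n, const char *key,
        size_t (*hash_fn) (const char *),
        const char *(*bucket_find) (size_t hash, const char *key))
{
  if (n <= SMALL_SIZE_THRESHOLD)
    {
      for (size_t i = 0; i < n; i++)
        if (strcmp (keys[i], key) == 0)
          return keys[i];
      return NULL;
    }
  return bucket_find (hash_fn (key), key);
}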

libstdc++-v3/ChangeLog:

	* include/bits/hashtable.h: Move comment about all equivalent values
	being next to each other in the class documentation header.
	(_M_reinsert_node, _M_merge_unique): Implement small size optimization.
	(_M_find_tr, _M_count_tr, _M_equal_range_tr): Likewise.
Testing for mmix (a 64-bit target using Knuth's simulator).  The test
is largely pruned for simulators, but still needs 5m57s on my laptop
from 3.5 years ago to run to successful completion.  Perhaps slow
hosted targets could also have problems, so increase the timeout
limit, not just for simulators but for everyone, and by more than a
factor of 2.

	* testsuite/20_util/hash/quality.cc: Increase timeout by a factor 3.
…644-2.c

This patch resolves the failure of pr43644-2.c in the testsuite, a code
quality test I added back in July, which started failing as the code GCC
generates for 128-bit values (and their parameter passing) has been in
flux.

The function:

unsigned __int128 foo(unsigned __int128 x, unsigned long long y) {
  return x+y;
}

currently generates:

foo:    movq    %rdx, %rcx
        movq    %rdi, %rax
        movq    %rsi, %rdx
        addq    %rcx, %rax
        adcq    $0, %rdx
        ret

and with this patch, we now generate:

foo:	movq    %rdi, %rax
        addq    %rdx, %rax
        movq    %rsi, %rdx
        adcq    $0, %rdx

which is optimal.

2023-12-31  Uros Bizjak  <ubizjak@gmail.com>
	    Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
	PR target/43644
	* config/i386/i386.md (*add<dwi>3_doubleword_concat_zext): Tweak
	order of instructions after split, to minimize number of moves.

gcc/testsuite/ChangeLog
	PR target/43644
	* gcc.target/i386/pr43644-2.c: Expect 2 movq instructions.
…L cost model

This patch fixes the following case of choosing an unexpectedly big LMUL, which causes register spilling.

Before this patch, choosing LMUL = 4:

	addi	sp,sp,-160
	addiw	t1,a2,-1
	li	a5,7
	bleu	t1,a5,.L16
	vsetivli	zero,8,e64,m4,ta,ma
	vmv.v.x	v4,a0
	vs4r.v	v4,0(sp)                        ---> spill to the stack.
	vmv.v.x	v4,a1
	addi	a5,sp,64
	vs4r.v	v4,0(a5)                        ---> spill to the stack.

The root cause is the following code:

                  if (poly_int_tree_p (var)
                      || (is_gimple_val (var)
                         && !POINTER_TYPE_P (TREE_TYPE (var))))

We count the variable as consuming an RVV register group when it is not POINTER_TYPE.

That is right for a load/store STMT, for example:

_1 = (MEM)*addr -->  addr won't be allocated an RVV vector group.

However, we find it is not right for a non-load/store STMT:

_3 = _1 == x_8(D);

_1 is pointer type too, but we do allocate an RVV register group for it.

So after this patch, we choose the perfect LMUL for the testcase in this patch:

	ble	a2,zero,.L17
	addiw	a7,a2,-1
	li	a5,3
	bleu	a7,a5,.L15
	srliw	a5,a7,2
	slli	a6,a5,1
	add	a6,a6,a5
	lui	a5,%hi(replacements)
	addi	t1,a5,%lo(replacements)
	slli	a6,a6,5
	lui	t4,%hi(.LANCHOR0)
	lui	t3,%hi(.LANCHOR0+8)
	lui	a3,%hi(.LANCHOR0+16)
	lui	a4,%hi(.LC1)
	vsetivli	zero,4,e16,mf2,ta,ma
	addi	t4,t4,%lo(.LANCHOR0)
	addi	t3,t3,%lo(.LANCHOR0+8)
	addi	a3,a3,%lo(.LANCHOR0+16)
	addi	a4,a4,%lo(.LC1)
	add	a6,t1,a6
	addi	a5,a5,%lo(replacements)
	vle16.v	v18,0(t4)
	vle16.v	v17,0(t3)
	vle16.v	v16,0(a3)
	vmsgeu.vi	v25,v18,4
	vadd.vi	v24,v18,-4
	vmsgeu.vi	v23,v17,4
	vadd.vi	v22,v17,-4
	vlm.v	v21,0(a4)
	vmsgeu.vi	v20,v16,4
	vadd.vi	v19,v16,-4
	vsetvli	zero,zero,e64,m2,ta,mu
	vmv.v.x	v12,a0
	vmv.v.x	v14,a1
.L4:
	vlseg3e64.v	v6,(a5)
	vmseq.vv	v2,v6,v12
	vmseq.vv	v0,v8,v12
	vmsne.vv	v1,v8,v12
	vmand.mm	v1,v1,v2
	vmerge.vvm	v2,v8,v14,v0
	vmv1r.v	v0,v1
	addi	a4,a5,24
	vmerge.vvm	v6,v6,v14,v0
	vmerge.vim	v2,v2,0,v0
	vrgatherei16.vv	v4,v6,v18
	vmv1r.v	v0,v25
	vrgatherei16.vv	v4,v2,v24,v0.t
	vs1r.v	v4,0(a5)
	addi	a3,a5,48
	vmv1r.v	v0,v21
	vmv2r.v	v4,v2
	vcompress.vm	v4,v6,v0
	vs1r.v	v4,0(a4)
	vmv1r.v	v0,v23
	addi	a4,a5,72
	vrgatherei16.vv	v4,v6,v17
	vrgatherei16.vv	v4,v2,v22,v0.t
	vs1r.v	v4,0(a3)
	vmv1r.v	v0,v20
	vrgatherei16.vv	v4,v6,v16
	addi	a5,a5,96
	vrgatherei16.vv	v4,v2,v19,v0.t
	vs1r.v	v4,0(a4)
	bne	a6,a5,.L4

No spillings, no "sp" register used.

Tested on both RV32 and RV64, no regression.

OK for trunk?

	PR target/113112

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc (compute_nregs_for_mode): Fix
	pointer type liveness count.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-4.c: New test.
Co-Authored by: Songhe Zhu <zhusonghe@eswincomputing.com>
Co-Authored by: Ciyan Pan <panciyan@eswincomputing.com>
gcc/ChangeLog:

	* config/riscv/iterators.md: Add rotate insn name.
	* config/riscv/riscv.md: Add new insns name for crypto vector.
	* config/riscv/vector-iterators.md: Add new iterators for crypto vector.
	* config/riscv/vector.md: Add the corresponding attr for crypto vector.
	* config/riscv/vector-crypto.md: New file.  The machine descriptions for crypto vector.
Check whether the assembler supports TLS LE relax.  If it does, the TLS LE relax
assembly instruction sequence is generated by default.

The original way to obtain the TLS LE symbol address:
    lu12i.w $rd, %le_hi20(sym)
    ori $rd, $rd, %le_lo12(sym)
    add.{w/d} $rd, $rd, $tp

If the assembler supports TLS LE relax, the following sequence is generated:

    lu12i.w $rd, %le_hi20_r(sym)
    add.{w/d} $rd,$rd,$tp,%le_add_r(sym)
    addi.{w/d} $rd,$rd,%le_lo12_r(sym)
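A hypothetical C example of the kind of access that produces these sequences: a local-exec TLS variable whose address is taken (for instance built with -O2 -ftls-model=local-exec); the variable and function names are invented for illustration.

__thread int counter;

int *
get_counter_addr (void)
{
  return &counter;   /* address computed with the %le_* relocations above */
}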

gcc/ChangeLog:

	* config.in: Regenerate.
	* config/loongarch/loongarch-opts.h (HAVE_AS_TLS_LE_RELAXATION): Define.
	* config/loongarch/loongarch.cc (loongarch_legitimize_tls_address):
	Added TLS Le Relax support.
	(loongarch_print_operand_reloc): Add the output string of TLS Le Relax.
	* config/loongarch/loongarch.md (@add_tls_le_relax<mode>): New template.
	* configure: Regenerate.
	* configure.ac: Check if binutils supports TLS le relax.

gcc/testsuite/ChangeLog:

	* lib/target-supports.exp: Add a function to check whether binutils supports
	TLS Le Relax.
	* gcc.target/loongarch/tls-le-relax.c: New test.
Committed.

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc: Move STMT_VINFO_TYPE (...) to local.
gcc/ChangeLog:
	* config/riscv/vector-crypto.md: Modify copyright year.
This patch adds a new tuning option
'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA', to consider fully
pipelined FMAs in reassociation. Also, set this option by default
for Ampere CPUs.
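A hypothetical reduction loop of the kind this tuning affects: with fully pipelined FMAs, the reassociation pass may split the single fused multiply-add accumulator chain into several independent chains to hide the FMA latency (compile with e.g. -O3 -ffast-math; the function here is only an illustration, not a testcase from the patch).

float
dot (const float *a, const float *b, int n)
{
  float acc = 0.0f;
  for (int i = 0; i < n; i++)
    acc += a[i] * b[i];   /* contracted to FMA; the chain on 'acc' can be reassociated */
  return acc;
}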

gcc/ChangeLog:

	* config/aarch64/aarch64-tuning-flags.def
	(AARCH64_EXTRA_TUNING_OPTION): New tuning option
	AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA.
	* config/aarch64/aarch64.cc
	(aarch64_override_options_internal): Set
	param_fully_pipelined_fma according to tuning option.
	* config/aarch64/tuning_models/ampere1.h: Add
	AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA to tune_flags.
	* config/aarch64/tuning_models/ampere1a.h: Likewise.
	* config/aarch64/tuning_models/ampere1b.h: Likewise.
…attern

In https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=d1eacedc6d9ba9f5522f2c8d49ccfdf7939ad72d
I optimized the COND_LEN_xxx pattern with dummy len and dummy mask using a too-simple
solution, which causes a redundant vsetvli in the following case:

	vsetvli	a5,a2,e8,m1,ta,ma
	vle32.v	v8,0(a0)
	vsetivli	zero,16,e32,m4,tu,mu   ----> We should apply VLMAX instead of a CONST_INT AVL
	slli	a4,a5,2
	vand.vv	v0,v8,v16
	vand.vv	v4,v8,v12
	vmseq.vi	v0,v0,0
	sub	a2,a2,a5
	vneg.v	v4,v8,v0.t
	vsetvli	zero,a5,e32,m4,ta,ma

The root cause of the above is the following code:

is_vlmax_len_p (...)
   return poly_int_rtx_p (len, &value)
        && known_eq (value, GET_MODE_NUNITS (mode))
        && !satisfies_constraint_K (len);            ---> incorrect check.

Actually, we should not elide the VLMAX situation that has an AVL in the range [0,31].
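A self-contained model of the corrected check (this is not the GCC source; it just restates the ChangeLog below, which drops the satisfies_constraint_K test, using plain integers instead of rtx/poly_int values):

#include <stdbool.h>

/* A length equal to the mode's number of units is a VLMAX length,
   regardless of whether it also fits the 5-bit vsetivli immediate
   range [0, 31]; the old code wrongly excluded that case.  */
static bool
is_vlmax_len (long len, long mode_nunits)
{
  return len == mode_nunits;
}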

After removing the satisfies_constraint_K check, we have the following issue:

        vsetivli        zero,4,e32,m1,ta,ma
        vlseg4e32.v     v4,(a5)
        vlseg4e32.v     v12,(a3)
        vsetvli a5,zero,e32,m1,tu,ma             ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
        vfadd.vf        v3,v13,fa0
        vfadd.vf        v1,v12,fa1
        vfmul.vv        v17,v3,v5
        vfmul.vv        v16,v1,v5

Since all the following operations (vfadd.vf ... etc.) are COND_LEN_xxx with dummy len and dummy mask,
we add simplification of the dummy len and dummy mask into the VLMAX TA and MA policy.

So after this patch, both cases now produce optimal codegen:

case 1:
	vsetvli	a5,a2,e32,m1,ta,mu
	vle32.v	v2,0(a0)
	slli	a4,a5,2
	vand.vv	v1,v2,v3
	vand.vv	v0,v2,v4
	sub	a2,a2,a5
	vmseq.vi	v0,v0,0
	vneg.v	v1,v2,v0.t
	vse32.v	v1,0(a1)

case 2:
	vsetivli zero,4,e32,m1,tu,ma
	addi a4,a5,400
	vlseg4e32.v v12,(a3)
	vfadd.vf v3,v13,fa0
	vfadd.vf v1,v12,fa1
	vlseg4e32.v v4,(a4)
	vfadd.vf v2,v14,fa1
	vfmul.vv v17,v3,v5
	vfmul.vv v16,v1,v5

This patch is just an additional fix to the previously approved patch.
Tested on both RV32 and RV64 newlib with no regression.  Committed.

gcc/ChangeLog:

	* config/riscv/riscv-v.cc (is_vlmax_len_p): Remove satisfies_constraint_K.
	(expand_cond_len_op): Add simplification of dummy len and dummy mask.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/vf_avl-3.c: New test.
With the new glibc, one more loop can be vectorized via SIMD exp in libmvec.

Found by the Linaro TCWG CI.

gcc/testsuite/ChangeLog:

	* gfortran.dg/vect/vect-8.f90: Accept more vectorized loops.
libsanitizer:
	* configure.tgt (riscv64-*-linux*): Enable LSan and TSan.
tschwinge and others added 29 commits March 16, 2024 23:32
…or."

This is not applicable for GCC/Rust, which does use GitHub.

This reverts commit 2b9778c.
PR109849 shows that a loop that heavily pushes and pops from a stack
implemented by a C++ std::vector results in slow code, mainly because the
vector structure is not split by SRA and so we end up in many loads
and stores into it.  This is because it is passed by reference
to (re)allocation methods and so needs to live in memory, even though
it does not escape from them and so we could SRA it if we
re-constructed it before the call and then separated it to distinct
replacements afterwards.

This patch does exactly that, first relaxing the selection of
candidates to also include those which are addressable but do not
escape and then adding code to deal with the calls.  The
micro-benchmark that is also the (scan-dump) testcase in this patch
runs twice as fast with it than with current trunk.  Honza measured
its effect on the libjxl benchmark and it almost closes the
performance gap between Clang and GCC while not requiring excessive
inlining and thus code growth.
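A rough C model of the situation (the actual PR testcase uses a C++ std::vector; the struct and helper names here are made up): the aggregate has its address passed to small helpers, so before this patch it had to live in memory even though it never escapes.

struct stack { int *base; int top; int cap; };

static void push (struct stack *s, int v) { s->base[s->top++] = v; }
static int  pop  (struct stack *s)        { return s->base[--s->top]; }

int
churn (const int *in, int n, int *buf)
{
  struct stack s = { buf, 0, n };   /* address taken below, but s never escapes */
  int sum = 0;
  for (int i = 0; i < n; i++)
    {
      push (&s, in[i]);
      if (s.top > 1)
        sum += pop (&s);
    }
  return sum + s.top;
}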

The patch disallows creation of replacements for such aggregates which
are also accessed with a precision smaller than their size because I
have observed that this led to excessive zero-extending of data
leading to slow-downs of perlbench (on some CPUs).  Apart from this
case I have not noticed any regressions, at least not so far.

Gimple call argument flags can tell if an argument is unused (and then
we do not need to generate any statements for it) or if it is not
written to (and then we do not need to generate statements loading
replacements from the original aggregate after the call statement).
Unfortunately, we cannot symmetrically use flags saying that an aggregate
is not read to avoid re-constructing the aggregate before the call,
because the flags don't tell which parts of the aggregate were not
written to; so we load all replacements, and all of them need to have
the correct value before the call.

This version of the patch also takes care to avoid attempts to modify
abnormal edges, something which was missing in the previous version.

gcc/ChangeLog:

2023-11-23  Martin Jambor  <mjambor@suse.cz>

	PR middle-end/109849
	* tree-sra.cc (passed_by_ref_in_call): New.
	(sra_initialize): Allocate passed_by_ref_in_call.
	(sra_deinitialize): Free passed_by_ref_in_call.
	(create_access): Add decl pool candidates only if they are not
	already candidates.
	(build_access_from_expr_1): Bail out on ADDR_EXPRs.
	(build_access_from_call_arg): New function.
	(asm_visit_addr): Rename to scan_visit_addr, change the
	disqualification dump message.
	(scan_function): Check taken addresses for all non-call statements,
	including phi nodes.  Process all call arguments, including the static
	chain, build_access_from_call_arg.
	(maybe_add_sra_candidate): Relax need_to_live_in_memory check to allow
	non-escaped local variables.
	(sort_and_splice_var_accesses): Disallow smaller-than-precision
	replacements for aggregates passed by reference to functions.
	(sra_modify_expr): Use a separate stmt iterator for adding statements
	before the processed statement and after it.
	(enum out_edge_check): New type.
	(abnormal_edge_after_stmt_p): New function.
	(sra_modify_call_arg): New function.
	(sra_modify_assign): Adjust calls to sra_modify_expr.
	(sra_modify_function_body): Likewise, use sra_modify_call_arg to
	process call arguments, including the static chain.

gcc/testsuite/ChangeLog:

2023-11-23  Martin Jambor  <mjambor@suse.cz>

	PR middle-end/109849
	* g++.dg/tree-ssa/pr109849.C: New test.
	* g++.dg/tree-ssa/sra-eh-1.C: Likewise.
	* gcc.dg/tree-ssa/pr109849.c: Likewise.
	* gcc.dg/tree-ssa/sra-longjmp-1.c: Likewise.
	* gfortran.dg/pr43984.f90: Added -fno-tree-sra to dg-options.

(cherry picked from commit aae723d)
The following separates the escape solution for return stmts not
only during points-to solving but also for later querying.  This
requires adjusting the points-to-global tests to include escapes
through returns.  Technically the patch replaces the existing
post-processing which computes the transitive closure of the
returned value solution by a proper artificial variable with
transitive closure constraints.  Instead of adding the solution
to escaped we track it separately.
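A hypothetical C example of the distinction being tracked (function name invented for illustration): the heap object escapes only through the function's return value, not through any global, so it now lands in the separate ESCAPED_RETURN solution rather than in ESCAPED.

#include <stdlib.h>

int *
make_counter (void)
{
  int *p = malloc (sizeof *p);   /* HEAP object */
  if (p)
    *p = 0;
  return p;                      /* escapes through the return statement only */
}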

	PR tree-optimization/112653
	* gimple-ssa.h (gimple_df): Add escaped_return solution.
	* tree-ssa.cc (init_tree_ssa): Reset it.
	(delete_tree_ssa): Likewise.
	* tree-ssa-structalias.cc (escaped_return_id): New.
	(find_func_aliases): Handle non-IPA return stmts by
	adding to ESCAPED_RETURN.
	(set_uids_in_ptset): Adjust HEAP escaping to also cover
	escapes through return.
	(init_base_vars): Initialize ESCAPED_RETURN.
	(compute_points_to_sets): Replace ESCAPED post-processing
	with recording the ESCAPED_RETURN solution.
	* tree-ssa-alias.cc (ref_may_alias_global_p_1): Check
	the ESCAPED_RETURN solution.
	(dump_alias_info): Dump it.
	* cfgexpand.cc (update_alias_info_with_stack_vars): Update it.
	* ipa-icf.cc (sem_item_optimizer::fixup_points_to_sets):
	Likewise.
	* tree-inline.cc (expand_call_inline): Reset it.
	* tree-parloops.cc (parallelize_loops): Likewise.
	* tree-sra.cc (maybe_add_sra_candidate): Check it.

	* gcc.dg/tree-ssa/pta-return-1.c: New testcase.

(cherry picked from commit f7884f7)
gcc/rust/ChangeLog:

	* ast/rust-macro.h: Use proper node id instead of the one in the base
	Expr class - which is uninitialized.
…6ef014effb17af27ab7baf0d87bc8bc48a0c' into HEAD [#2916]

Merge "macro: Use MacroInvocation's node_id in ExternalItem constructor", to
avoid bootstrap failure:

    [...]
    In file included from [...]/source-gcc/gcc/rust/ast/rust-expr.h:6,
                     from [...]/source-gcc/gcc/rust/ast/rust-ast-full.h:24,
                     from [...]/source-gcc/gcc/rust/ast/rust-ast.cc:24:
    In copy constructor ‘Rust::AST::MacroInvocation::MacroInvocation(const Rust::AST::MacroInvocation&)’,
        inlined from ‘Rust::AST::MacroInvocation* Rust::AST::MacroInvocation::clone_macro_invocation_impl() const’ at [...]/source-gcc/gcc/rust/ast/rust-\
    macro.h:798:38:
    [...]/source-gcc/gcc/rust/ast/rust-macro.h:734:39: error: ‘*(Rust::AST::MacroInvocation*)<unknown>.Rust::AST::MacroInvocation::Rust::AST::ExprWithout\
    Block.Rust::AST::ExprWithoutBlock::Rust::AST::Expr.Rust::AST::Expr::node_id’ is used uninitialized [-Werror=uninitialized]
      734 |       builtin_kind (other.builtin_kind)
          |                                       ^
    cc1plus: all warnings being treated as errors
    make[3]: *** [[...]/source-gcc/gcc/rust/Make-lang.in:423: rust/rust-ast.o] Error 1
    [...]
    In file included from [...]/source-gcc/gcc/rust/ast/rust-expr.h:6,
                     from [...]/source-gcc/gcc/rust/ast/rust-item.h:27,
                     from [...]/source-gcc/gcc/rust/parse/rust-parse.h:20,
                     from [...]/source-gcc/gcc/rust/expand/rust-macro-expand.h:24,
                     from [...]/source-gcc/gcc/rust/expand/rust-macro-expand.cc:19:
    In copy constructor ‘Rust::AST::MacroInvocation::MacroInvocation(const Rust::AST::MacroInvocation&)’,
        inlined from ‘Rust::AST::MacroInvocation* Rust::AST::MacroInvocation::clone_macro_invocation_impl() const’ at [...]/source-gcc/gcc/rust/ast/rust-\
    macro.h:798:38:
    [...]/source-gcc/gcc/rust/ast/rust-macro.h:734:39: error: ‘*(Rust::AST::MacroInvocation*)<unknown>.Rust::AST::MacroInvocation::Rust::AST::ExprWithout\
    Block.Rust::AST::ExprWithoutBlock::Rust::AST::Expr.Rust::AST::Expr::node_id’ is used uninitialized [-Werror=uninitialized]
      734 |       builtin_kind (other.builtin_kind)
          |                                       ^
    cc1plus: all warnings being treated as errors
    make[3]: *** [[...]/source-gcc/gcc/rust/Make-lang.in:433: rust/rust-macro-expand.o] Error 1
    [...]
Accordingly also adjust #2086 "break rust 💥" code, to avoid:

    [...]/source-gcc/gcc/rust/resolve/rust-ast-resolve-expr.cc: In member function ‘virtual void Rust::Resolver::ResolveExpr::visit(Rust::AST::IdentifierExpr&)’:
    [...]/source-gcc/gcc/rust/resolve/rust-ast-resolve-expr.cc:164:42: error: invalid conversion from ‘void (*)(diagnostic_context*, diagnostic_info*, diagnostic_t)’ to ‘diagnostic_finalizer_fn’ {aka ‘void (*)(diagnostic_context*, const diagnostic_info*, diagnostic_t)’} [-fpermissive]
      164 |       diagnostic_finalizer (global_dc) = funny_ice_finalizer;
          |                                          ^~~~~~~~~~~~~~~~~~~
          |                                          |
          |                                          void (*)(diagnostic_context*, diagnostic_info*, diagnostic_t)
Reformat the upstream GCC commit 61644ae
"gccrs: tokenize Unicode identifiers" change to 'gcc/rust/lex/rust-lex.cc'
to clang-format's liking.

	gcc/rust/
	* lex/rust-lex.cc (is_identifier_start): Placate clang-format.
@tschwinge tschwinge merged commit d74728e into master Mar 25, 2024
7 of 9 checks passed