
[hwasan] Add intrinsics for fixed shadow on Aarch64 #89319

Merged 12 commits into llvm:main on Apr 22, 2024

Conversation

thurstond (Contributor)

This separates out the first half of "Optimize outlined memaccess for fixed shadow on Aarch64" (#88544). On its own, this patch does not meaningfully affect the behavior of HWASan; the second half of that patch contains the changes that will make HWASan actually use these intrinsics.

This patch introduces HWASan memaccess intrinsics that assume a fixed shadow (with the offset provided by --hwasan-mapping-offset=...), with and without short granule support.

We currently only support lowering these LLVM IR intrinsics for AArch64.

The test case is adapted from hwasan-check-memaccess.ll.
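
For illustration, here is a minimal sketch of how instrumentation code could emit these intrinsics through IRBuilder, matching the signatures in the diff below. The helper name and surrounding code are hypothetical, and note that the intrinsic signature evolved during review (the post-merge comments below reference an i64imm:$fixed_shadow operand):

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Intrinsics.h"
    #include "llvm/IR/Module.h"

    using namespace llvm;

    // Hypothetical helper: emit a fixed-shadow HWASan check for Ptr. Unlike
    // int_hwasan_check_memaccess, no shadow-base register argument is passed;
    // the backend materializes the --hwasan-mapping-offset value itself.
    static void emitFixedShadowCheck(IRBuilder<> &IRB, Value *Ptr,
                                     uint32_t AccessInfo, bool ShortGranules) {
      Module *M = IRB.GetInsertBlock()->getModule();
      Intrinsic::ID ID =
          ShortGranules
              ? Intrinsic::hwasan_check_memaccess_shortgranules_fixedshadow
              : Intrinsic::hwasan_check_memaccess_fixedshadow;
      // AccessInfo must be a compile-time constant (ImmArg<ArgIndex<1>>).
      IRB.CreateCall(Intrinsic::getDeclaration(M, ID),
                     {Ptr, IRB.getInt32(AccessInfo)});
    }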

llvmbot (Collaborator) commented Apr 18, 2024

@llvm/pr-subscribers-backend-aarch64
@llvm/pr-subscribers-compiler-rt-sanitizer
@llvm/pr-subscribers-llvm-transforms
@llvm/pr-subscribers-llvm-ir

Author: Thurston Dang (thurstond)



Full diff: https://github.com/llvm/llvm-project/pull/89319.diff

6 Files Affected:

  • (modified) llvm/include/llvm/IR/Intrinsics.td (+19)
  • (modified) llvm/include/llvm/Transforms/Instrumentation/HWAddressSanitizer.h (+2)
  • (modified) llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp (+39-9)
  • (modified) llvm/lib/Target/AArch64/AArch64InstrInfo.td (+14)
  • (modified) llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp (+8)
  • (added) llvm/test/CodeGen/AArch64/hwasan-check-memaccess-fixedshadow.ll (+144)
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index bdd8465883fcff..9d784fa1aba546 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -2362,13 +2362,32 @@ def int_load_relative: DefaultAttrsIntrinsic<[llvm_ptr_ty], [llvm_ptr_ty, llvm_a
 def int_asan_check_memaccess :
   Intrinsic<[],[llvm_ptr_ty, llvm_i32_ty], [ImmArg<ArgIndex<1>>]>;
 
+// HWASan intrinsics to test whether a pointer is addressable.
+// Parameters: Shadow base, pointer to be checked for validity, AccessInfo
+// (AccessInfo is defined in HWAddressSanitizer.h)
 def int_hwasan_check_memaccess :
   Intrinsic<[], [llvm_ptr_ty, llvm_ptr_ty, llvm_i32_ty],
             [ImmArg<ArgIndex<2>>]>;
+
+// Same as memaccess but supports short granule checks.
+// Parameters: Shadow base, pointer to be checked for validity, AccessInfo
 def int_hwasan_check_memaccess_shortgranules :
   Intrinsic<[], [llvm_ptr_ty, llvm_ptr_ty, llvm_i32_ty],
             [ImmArg<ArgIndex<2>>]>;
 
+// Same as memaccess but assumes a fixed shadow offset,
+// which no longer needs to be passed as a parameter.
+// Parameters: Pointer to be checked for validity, AccessInfo
+def int_hwasan_check_memaccess_fixedshadow :
+  Intrinsic<[], [llvm_ptr_ty, llvm_i32_ty],
+            [ImmArg<ArgIndex<1>>]>;
+
+// Same as memaccess but supports short granule checks and assumes a fixed
+// shadow offset, which no longer needs to be passed as a parameter.
+def int_hwasan_check_memaccess_shortgranules_fixedshadow :
+  Intrinsic<[], [llvm_ptr_ty, llvm_i32_ty],
+            [ImmArg<ArgIndex<1>>]>;
+
 // Xray intrinsics
 //===----------------------------------------------------------------------===//
 // Custom event logging for x-ray.
diff --git a/llvm/include/llvm/Transforms/Instrumentation/HWAddressSanitizer.h b/llvm/include/llvm/Transforms/Instrumentation/HWAddressSanitizer.h
index 11ea66780d8c5d..ae4ca0037d1140 100644
--- a/llvm/include/llvm/Transforms/Instrumentation/HWAddressSanitizer.h
+++ b/llvm/include/llvm/Transforms/Instrumentation/HWAddressSanitizer.h
@@ -67,6 +67,8 @@ enum { RuntimeMask = 0xffff };
 
 } // namespace HWASanAccessInfo
 
+std::optional<unsigned long long> getFixedShadowBase(void);
+
 } // namespace llvm
 
 #endif
diff --git a/llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp b/llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp
index f6ccd0ecfdc893..c122a275ed9e77 100644
--- a/llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp
+++ b/llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp
@@ -64,6 +64,7 @@
 #include <cstdint>
 #include <map>
 #include <memory>
+#include <optional>
 
 using namespace llvm;
 
@@ -117,6 +118,7 @@ class AArch64AsmPrinter : public AsmPrinter {
   void LowerPATCHABLE_EVENT_CALL(const MachineInstr &MI, bool Typed);
 
   typedef std::tuple<unsigned, bool, uint32_t> HwasanMemaccessTuple;
+  std::optional<unsigned long long> HwasanFixedShadowBase = std::nullopt;
   std::map<HwasanMemaccessTuple, MCSymbol *> HwasanMemaccessSymbols;
   void LowerKCFI_CHECK(const MachineInstr &MI);
   void LowerHWASAN_CHECK_MEMACCESS(const MachineInstr &MI);
@@ -551,8 +553,16 @@ void AArch64AsmPrinter::LowerKCFI_CHECK(const MachineInstr &MI) {
 void AArch64AsmPrinter::LowerHWASAN_CHECK_MEMACCESS(const MachineInstr &MI) {
   Register Reg = MI.getOperand(0).getReg();
   bool IsShort =
-      MI.getOpcode() == AArch64::HWASAN_CHECK_MEMACCESS_SHORTGRANULES;
+      ((MI.getOpcode() == AArch64::HWASAN_CHECK_MEMACCESS_SHORTGRANULES) ||
+       (MI.getOpcode() ==
+        AArch64::HWASAN_CHECK_MEMACCESS_SHORTGRANULES_FIXEDSHADOW));
   uint32_t AccessInfo = MI.getOperand(1).getImm();
+
+  if ((MI.getOpcode() == AArch64::HWASAN_CHECK_MEMACCESS_FIXEDSHADOW) ||
+      (MI.getOpcode() ==
+       AArch64::HWASAN_CHECK_MEMACCESS_SHORTGRANULES_FIXEDSHADOW))
+    HwasanFixedShadowBase = getFixedShadowBase();
+
   MCSymbol *&Sym =
       HwasanMemaccessSymbols[HwasanMemaccessTuple(Reg, IsShort, AccessInfo)];
   if (!Sym) {
@@ -625,14 +635,32 @@ void AArch64AsmPrinter::emitHwasanMemaccessSymbols(Module &M) {
                                      .addImm(4)
                                      .addImm(55),
                                  *STI);
-    OutStreamer->emitInstruction(
-        MCInstBuilder(AArch64::LDRBBroX)
-            .addReg(AArch64::W16)
-            .addReg(IsShort ? AArch64::X20 : AArch64::X9)
-            .addReg(AArch64::X16)
-            .addImm(0)
-            .addImm(0),
-        *STI);
+
+    if (HwasanFixedShadowBase.has_value()) {
+      OutStreamer->emitInstruction(
+          MCInstBuilder(AArch64::MOVZXi)
+              .addReg(AArch64::X17)
+              .addImm(HwasanFixedShadowBase.value() >> 32)
+              .addImm(32),
+          *STI);
+      OutStreamer->emitInstruction(MCInstBuilder(AArch64::LDRBBroX)
+                                       .addReg(AArch64::W16)
+                                       .addReg(AArch64::X17)
+                                       .addReg(AArch64::X16)
+                                       .addImm(0)
+                                       .addImm(0),
+                                   *STI);
+    } else {
+      OutStreamer->emitInstruction(
+          MCInstBuilder(AArch64::LDRBBroX)
+              .addReg(AArch64::W16)
+              .addReg(IsShort ? AArch64::X20 : AArch64::X9)
+              .addReg(AArch64::X16)
+              .addImm(0)
+              .addImm(0),
+          *STI);
+    }
+
     OutStreamer->emitInstruction(
         MCInstBuilder(AArch64::SUBSXrs)
             .addReg(AArch64::XZR)
@@ -1765,6 +1793,8 @@ void AArch64AsmPrinter::emitInstruction(const MachineInstr *MI) {
 
   case AArch64::HWASAN_CHECK_MEMACCESS:
   case AArch64::HWASAN_CHECK_MEMACCESS_SHORTGRANULES:
+  case AArch64::HWASAN_CHECK_MEMACCESS_FIXEDSHADOW:
+  case AArch64::HWASAN_CHECK_MEMACCESS_SHORTGRANULES_FIXEDSHADOW:
     LowerHWASAN_CHECK_MEMACCESS(*MI);
     return;
 
diff --git a/llvm/lib/Target/AArch64/AArch64InstrInfo.td b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
index 3bf90778363c6c..f6dc168e10a992 100644
--- a/llvm/lib/Target/AArch64/AArch64InstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
@@ -1818,6 +1818,20 @@ def HWASAN_CHECK_MEMACCESS_SHORTGRANULES : Pseudo<
   Sched<[]>;
 }
 
+let Defs = [ X16, X17, LR, NZCV ] in {
+def HWASAN_CHECK_MEMACCESS_FIXEDSHADOW : Pseudo<
+  (outs), (ins GPR64noip:$ptr, i32imm:$accessinfo),
+  [(int_hwasan_check_memaccess_fixedshadow GPR64noip:$ptr, (i32 timm:$accessinfo))]>,
+  Sched<[]>;
+}
+
+let Defs = [ X16, X17, LR, NZCV ] in {
+def HWASAN_CHECK_MEMACCESS_SHORTGRANULES_FIXEDSHADOW : Pseudo<
+  (outs), (ins GPR64noip:$ptr, i32imm:$accessinfo),
+  [(int_hwasan_check_memaccess_shortgranules_fixedshadow GPR64noip:$ptr, (i32 timm:$accessinfo))]>,
+  Sched<[]>;
+}
+
 // The virtual cycle counter register is CNTVCT_EL0.
 def : Pat<(readcyclecounter), (MRS 0xdf02)>;
 
diff --git a/llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp b/llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp
index 3890aa8ca6ee60..d284d96438efac 100644
--- a/llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp
+++ b/llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp
@@ -448,6 +448,14 @@ class HWAddressSanitizer {
 
 } // end anonymous namespace
 
+namespace llvm {
+std::optional<unsigned long long> getFixedShadowBase(void) {
+  if (ClMappingOffset.getNumOccurrences() > 0)
+    return ClMappingOffset;
+  return std::nullopt;
+}
+} // namespace llvm
+
 PreservedAnalyses HWAddressSanitizerPass::run(Module &M,
                                               ModuleAnalysisManager &MAM) {
   const StackSafetyGlobalInfo *SSI = nullptr;
diff --git a/llvm/test/CodeGen/AArch64/hwasan-check-memaccess-fixedshadow.ll b/llvm/test/CodeGen/AArch64/hwasan-check-memaccess-fixedshadow.ll
new file mode 100644
index 00000000000000..89996405bec4a7
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/hwasan-check-memaccess-fixedshadow.ll
@@ -0,0 +1,144 @@
+; RUN: llc --hwasan-mapping-offset=4398046511104 < %s | FileCheck %s
+
+target triple = "aarch64--linux-android"
+
+define ptr @f1(ptr %x0, ptr %x1) {
+  ; CHECK: f1:
+  ; CHECK: str x30, [sp, #-16]!
+  ; CHECK-NEXT: .cfi_def_cfa_offset 16
+  ; CHECK-NEXT: .cfi_offset w30, -16
+  ; CHECK-NEXT: bl __hwasan_check_x1_1
+  ; CHECK-NEXT: mov x0, x1
+  ; CHECK-NEXT: ldr x30, [sp], #16
+  ; CHECK-NEXT: ret
+  call void @llvm.hwasan.check.memaccess.fixedshadow(ptr %x1, i32 1)
+  ret ptr %x1
+}
+
+define ptr @f2(ptr %x0, ptr %x1) {
+  ; CHECK: f2:
+  ; CHECK: str x30, [sp, #-16]!
+  ; CHECK-NEXT: .cfi_def_cfa_offset 16
+  ; CHECK-NEXT: .cfi_offset w30, -16
+  ; CHECK-NEXT: bl __hwasan_check_x0_2_short_v2
+  ; CHECK-NEXT: ldr x30, [sp], #16
+  ; CHECK-NEXT: ret
+  call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr %x0, i32 2)
+  ret ptr %x0
+}
+
+define void @f3(ptr %x0, ptr %x1) {
+  ; 0x3ff0000 (kernel, match-all = 0xff)
+  call void @llvm.hwasan.check.memaccess.fixedshadow(ptr %x1, i32 67043328)
+  ret void
+}
+
+define void @f4(ptr %x0, ptr %x1) {
+  ; 0x1000010 (access-size-index = 0, is-write = 1, match-all = 0x0)
+  call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr %x1, i32 16777232)
+  ret void
+}
+
+declare void @llvm.hwasan.check.memaccess.fixedshadow(ptr, i32)
+declare void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr, i32)
+
+; CHECK:      .section .text.hot,"axG",@progbits,__hwasan_check_x0_2_short_v2,comdat
+; CHECK-NEXT: .type __hwasan_check_x0_2_short_v2,@function
+; CHECK-NEXT: .weak __hwasan_check_x0_2_short_v2
+; CHECK-NEXT: .hidden __hwasan_check_x0_2_short_v2
+; CHECK-NEXT: __hwasan_check_x0_2_short_v2:
+; CHECK-NEXT: sbfx x16, x0, #4, #52
+; CHECK-NEXT: mov x17, #4398046511104
+; CHECK-NEXT: ldrb w16, [x17, x16]
+; CHECK-NEXT: cmp x16, x0, lsr #56
+; CHECK-NEXT: b.ne .Ltmp0
+; CHECK-NEXT: .Ltmp1:
+; CHECK-NEXT: ret
+; CHECK-NEXT: .Ltmp0:
+; CHECK-NEXT: cmp w16, #15
+; CHECK-NEXT: b.hi .Ltmp2
+; CHECK-NEXT: and x17, x0, #0xf
+; CHECK-NEXT: add x17, x17, #3
+; CHECK-NEXT: cmp w16, w17
+; CHECK-NEXT: b.ls .Ltmp2
+; CHECK-NEXT: orr x16, x0, #0xf
+; CHECK-NEXT: ldrb w16, [x16]
+; CHECK-NEXT: cmp x16, x0, lsr #56
+; CHECK-NEXT: b.eq .Ltmp1
+; CHECK-NEXT: .Ltmp2:
+; CHECK-NEXT: stp x0, x1, [sp, #-256]!
+; CHECK-NEXT: stp x29, x30, [sp, #232]
+; CHECK-NEXT: mov x1, #2
+; CHECK-NEXT: adrp  x16, :got:__hwasan_tag_mismatch_v2
+; CHECK-NEXT: ldr x16, [x16, :got_lo12:__hwasan_tag_mismatch_v2]
+; CHECK-NEXT: br  x16
+
+
+; CHECK:      .section .text.hot,"axG",@progbits,__hwasan_check_x1_1,comdat
+; CHECK-NEXT: .type __hwasan_check_x1_1,@function
+; CHECK-NEXT: .weak __hwasan_check_x1_1
+; CHECK-NEXT: .hidden __hwasan_check_x1_1
+; CHECK-NEXT: __hwasan_check_x1_1:
+; CHECK-NEXT: sbfx x16, x1, #4, #52
+; CHECK-NEXT: mov x17, #4398046511104
+; CHECK-NEXT: ldrb w16, [x17, x16]
+; CHECK-NEXT: cmp x16, x1, lsr #56
+; CHECK-NEXT: b.ne .Ltmp3
+; CHECK-NEXT: .Ltmp4:
+; CHECK-NEXT: ret
+; CHECK-NEXT: .Ltmp3:
+; CHECK-NEXT: stp x0, x1, [sp, #-256]!
+; CHECK-NEXT: stp x29, x30, [sp, #232]
+; CHECK-NEXT: mov x0, x1
+; CHECK-NEXT: mov x1, #1
+; CHECK-NEXT: adrp  x16, :got:__hwasan_tag_mismatch
+; CHECK-NEXT: ldr x16, [x16, :got_lo12:__hwasan_tag_mismatch]
+; CHECK-NEXT: br  x16
+
+; CHECK:      __hwasan_check_x1_67043328:
+; CHECK-NEXT: sbfx x16, x1, #4, #52
+; CHECK-NEXT: mov x17, #4398046511104
+; CHECK-NEXT: ldrb w16, [x17, x16]
+; CHECK-NEXT: cmp x16, x1, lsr #56
+; CHECK-NEXT: b.ne .Ltmp5
+; CHECK-NEXT: .Ltmp6:
+; CHECK-NEXT: ret
+; CHECK-NEXT: .Ltmp5:
+; CHECK-NEXT: lsr x17, x1, #56
+; CHECK-NEXT: cmp x17, #255
+; CHECK-NEXT: b.eq .Ltmp6
+; CHECK-NEXT: stp x0, x1, [sp, #-256]!
+; CHECK-NEXT: stp x29, x30, [sp, #232]
+; CHECK-NEXT: mov x0, x1
+; CHECK-NEXT: mov x1, #0
+; CHECK-NEXT: b __hwasan_tag_mismatch
+
+; CHECK:      __hwasan_check_x1_16777232_short_v2:
+; CHECK-NEXT: sbfx	x16, x1, #4, #52
+; CHECK-NEXT: mov x17, #4398046511104
+; CHECK-NEXT: ldrb w16, [x17, x16]
+; CHECK-NEXT: cmp	x16, x1, lsr #56
+; CHECK-NEXT: b.ne	.Ltmp7
+; CHECK-NEXT: .Ltmp8:
+; CHECK-NEXT: ret
+; CHECK-NEXT: .Ltmp7:
+; CHECK-NEXT: lsr	x17, x1, #56
+; CHECK-NEXT: cmp	x17, #0
+; CHECK-NEXT: b.eq	.Ltmp8
+; CHECK-NEXT: cmp	w16, #15
+; CHECK-NEXT: b.hi	.Ltmp9
+; CHECK-NEXT: and	x17, x1, #0xf
+; CHECK-NEXT: cmp	w16, w17
+; CHECK-NEXT: b.ls	.Ltmp9
+; CHECK-NEXT: orr	x16, x1, #0xf
+; CHECK-NEXT: ldrb	w16, [x16]
+; CHECK-NEXT: cmp	x16, x1, lsr #56
+; CHECK-NEXT: b.eq	.Ltmp8
+; CHECK-NEXT: .Ltmp9:
+; CHECK-NEXT: stp	x0, x1, [sp, #-256]!
+; CHECK-NEXT: stp	x29, x30, [sp, #232]
+; CHECK-NEXT: mov	x0, x1
+; CHECK-NEXT: mov	x1, #16
+; CHECK-NEXT: adrp	x16, :got:__hwasan_tag_mismatch_v2
+; CHECK-NEXT: ldr	x16, [x16, :got_lo12:__hwasan_tag_mismatch_v2]
+; CHECK-NEXT: br	x16
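
As an aside, the AccessInfo immediates used in f3 and f4 above decode according to the HWASanAccessInfo bit layout in HWAddressSanitizer.h. A standalone sketch, assuming the bit positions below still match that header:

    #include <cstdint>
    #include <cstdio>

    // Assumed bit positions, copied from llvm::HWASanAccessInfo
    // (shared with the runtime).
    enum : uint32_t {
      AccessSizeShift = 0,    // 4 bits: log2 of the access size
      IsWriteShift = 4,
      RecoverShift = 5,
      MatchAllShift = 16,     // 8 bits: the match-all tag value
      HasMatchAllShift = 24,
      CompileKernelShift = 25,
    };

    int main() {
      // 67043328 == 0x3ff0000: compile-kernel, has-match-all, match-all 0xff (f3).
      // 16777232 == 0x1000010: has-match-all, match-all 0x0, is-write,
      //                        access-size-index 0 (f4).
      for (uint32_t AI : {67043328u, 16777232u})
        std::printf("0x%x: size-index=%u write=%u recover=%u match-all=0x%x "
                    "has-match-all=%u kernel=%u\n", AI, AI & 0xfu,
                    (AI >> IsWriteShift) & 1u, (AI >> RecoverShift) & 1u,
                    (AI >> MatchAllShift) & 0xffu, (AI >> HasMatchAllShift) & 1u,
                    (AI >> CompileKernelShift) & 1u);
    }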

thurstond added a commit to thurstond/llvm-project that referenced this pull request Apr 18, 2024
The test case is based on hwasan-check-memaccess.ll (albeit updated
using update_llc_test_checks) with --hwasan-mapping-offset=...

--hwasan-mapping-offset=... actually doesn't affect the LLVM IR
at the moment; future work will introduce memaccess fixed shadow
intrinsics. (llvm#89319)
thurstond added a commit that referenced this pull request Apr 18, 2024
…89328)

vitalybuka (Collaborator) left a comment:
LGTM
up to @fmayer if we need error message on !.has_value()

fmayer (Contributor) commented Apr 19, 2024

LGTM up to @fmayer if we need error message on !.has_value()

Yeah, please do that. I don't think it's a good idea to just randomly compile with 0 as shadow address, which I think is unlikely to be correct and will randomly crash at runtime later.
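
A sketch of the guard being requested, as a fragment inside LowerHWASAN_CHECK_MEMACCESS from this patch (hypothetical; report_fatal_error comes from llvm/Support/ErrorHandling.h, and the merged patch may implement this differently):

    // Hypothetical guard: fail loudly instead of silently lowering the
    // fixed-shadow pseudos with a shadow base of 0.
    if (MI.getOpcode() == AArch64::HWASAN_CHECK_MEMACCESS_FIXEDSHADOW ||
        MI.getOpcode() ==
            AArch64::HWASAN_CHECK_MEMACCESS_SHORTGRANULES_FIXEDSHADOW) {
      HwasanFixedShadowBase = getFixedShadowBase();
      if (!HwasanFixedShadowBase.has_value())
        report_fatal_error("fixed-shadow memaccess intrinsics require "
                           "--hwasan-mapping-offset");
    }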

github-actions bot commented Apr 22, 2024

✅ With the latest revision this PR passed the C/C++ code formatter.


thurstond merged commit 365bddf into llvm:main Apr 22, 2024 (3 of 4 checks passed)
In post-merge review of the constant-materialization code:

    if (IsFixedShadow) {
      // Aarch64 makes it difficult to embed large constants in the code.
      // Fortuitously, kShadowBaseAlignment == 32, so we use the 32-bit
      // left-shift option in the MOV instruction. Combined with the 16-bit
vitalybuka (Collaborator):
16-bit immediate looks misleading?
It's i64imm:$fixed_shadow

      // immediate, this is enough to represent any offset up to 2**48.
      OutStreamer->emitInstruction(MCInstBuilder(AArch64::MOVZXi)
                                       .addReg(AArch64::X17)
                                       .addImm(FixedShadowOffset >> 32)

vitalybuka (Collaborator):
We probably want a diagnostic that
((FixedShadowOffset >> 32) << 32) == FixedShadowOffset
and to require the shadow base to be 2^32-aligned.

Or change the intrinsic to a 16-bit immediate for real.

vitalybuka (Collaborator):
Or add an if (!aligned) path and insert a longer, less efficient but correct sequence.
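
To make that concrete: with --hwasan-mapping-offset=4398046511104 (2^42), the emitted immediate is 2^42 >> 32 == 1024 with a 32-bit left shift, which round-trips exactly; an offset with nonzero low 32 bits would be silently truncated. A hedged sketch of the suggested diagnostic (hypothetical, not from the merged code; FixedShadowOffset is the variable from the quoted snippet above):

    // Hypothetical check before the MOVZXi above: a single
    // MOVZ Xd, #imm16, LSL #32 only materializes offsets below 2**48
    // whose low 32 bits are zero (kShadowBaseAlignment == 32 is what
    // normally guarantees the latter).
    if (((FixedShadowOffset >> 32) << 32) != FixedShadowOffset)
      report_fatal_error("--hwasan-mapping-offset must have its low 32 "
                         "bits clear to fit a single MOVZ");

The alternative in the last comment would drop the error and instead materialize an arbitrary 64-bit base with a longer MOVZ/MOVK sequence.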
