
Improve ATOMIC_BARRIER documentation

1 parent d650182 commit 5d0c8dd814059f06859d53142bb41f55b96b4244 @ledvinap committed Dec 30, 2016
Showing with 51 additions and 10 deletions.
  1. +44 −3 docs/development/Atomic Barrier.md
  2. +7 −7 src/main/build/atomic.h
@@ -1,8 +1,49 @@
+# Atomic Barrier implementation
+
+```
+static int markme_bar = 0;
+static int markme = 0;
+
+markme++;
+// (1) markme is read into register, but not changed
+markme_bar++;
+// markme_bar is read from memory and incremented
+ATOMIC_BLOCK_NB(NVIC_PRIO_TIMER) {
+ ATOMIC_BARRIER(markme_bar);
+// start of ATOMIC_BLOCK_NB scope:
+// markme_bar is stored into memory (it is an input/output - "+m" - operand of the asm volatile)
+// BASEPRI is saved into temporary variable
+// BASEPRI is set to NVIC_PRIO_TIMER through the BASEPRI_MAX register (only if it is currently zero or higher than NVIC_PRIO_TIMER; a lower number means higher priority on ARM)
+ markme++;
+// nothing happens, markme value is not needed yet
+ markme_bar++;
+// (2) markme_bar re-read from memory (ATOMIC_BARRIER marked it as modified - "+m" output operand of asm volatile)
+// and incremented
+
+// end of ATOMIC_BLOCK_NB scope:
+// markme_bar is stored into memory (cleanup function from ATOMIC_BARRIER / input "m" operand), but kept for later use in register
+// (actually markme_bar+1 is stored and pre-increment value kept in register)
+// BASEPRI value is restored
+};
+
+markme++;
+// register value read in (1) is incremented by 3
+markme_bar++;
+// register value read in (2) is incremented (actually +=2, because register contains pre-increment value)
+
+// markme and markme_bar are stored into memory
+```
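+
+The machinery behind this is small. Below is a minimal sketch of the mechanism (illustrative only, not the actual Cleanflight macro; `barrierEnd`, `barrierGuard` and `BARRIER_SKETCH` are made-up names): the entry barrier is an `asm volatile` with a `"+m"` operand, and `attribute((cleanup))` repeats the barrier on every path leaving the scope.
+
+```
+// exit barrier: the cleanup handler runs on every path leaving the
+// scope; the "m" input operand forces the protected variable to be
+// written back to memory before the scope ends
+static inline void barrierEnd(int **ptr)
+{
+    __asm__ volatile ("" : : "m" (**ptr));
+}
+
+// entry barrier: "+m" marks the variable as read and modified, so any
+// cached register copy must be stored now and reloaded afterwards
+#define BARRIER_SKETCH(var) \
+    int *barrierGuard __attribute__((__cleanup__(barrierEnd))) = &(var); \
+    __asm__ volatile ("" : "+m" (var))
+```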
+
# Atomic Barrier Warning
+
+The ATOMIC_BLOCK/ATOMIC_BARRIER construction depends on gcc extensions. It relies on the gcc cleanup function (`attribute ((cleanup))`) and assumes that the cleanup handler is called when leaving the block, even when the associated variable is eliminated.
+
+There is a (slightly paranoid) safeguard warning to make sure that the generated assembly is hand-checked on new gcc versions. It is assumed that only major gcc versions need to be checked.
+
If GCC is upgraded and a warning appears when compiling, the generated asm source must be verified.
-e.g.
+e.g.
```
%% serial_softserial.c
warning "Please verify that ATOMIC_BARRIER works as intended"
@@ -40,12 +81,12 @@ pass `-save-temps=obj` (or `-save-temps=cwd`, but lots of files will end up in s
# (markme value should be cached in register on next increment)
```
-The # barrier(markme) must surround access code and must be inside MSR basepri instructions ..
+The # barrier(markme) must surround the access code and must be inside the MSR basepri instructions.
A similar approach is used for ATOMIC_BLOCK in the avr libraries, so gcc should not break this behavior.
IMO attribute(cleanup) and asm volatile are defined in a way that should guarantee this.
attribute(cleanup) is probably a safer way to implement atomic sections - another possibility is to place barriers in the code explicitly, but that can (and eventually will) lead to a missed barrier/basepri restore on some path, creating a very hard-to-find bug (see the sketch below).
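
For example (an illustrative sketch, not project code; `checkSomething` and `doWork` are placeholders), hand-placed barrier/restore code can silently miss an exit path, which the cleanup attribute makes impossible:

```
// hand-placed version: the early return skips the BASEPRI restore
void badExample(void)
{
    uint8_t saved = __get_BASEPRI();
    __set_BASEPRI_MAX(NVIC_PRIO_TIMER);
    if (checkSomething()) {
        return;            // BUG: BASEPRI is never restored on this path
    }
    doWork();
    __set_BASEPRI(saved);  // reached only on the fall-through path
}
```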
-The MEMORY_BARRIER() code can be omitted and use ATOMIC_BLOCK with full memory barriers, but IMO it is better to explicitly state what memory is protected by barrier and gcc can use this knowledge to greatly improve generated code in future.
+The `MEMORY_BARRIER()` code can be omitted when `ATOMIC_BLOCK` (with full memory barriers) is used, but it is better to state explicitly what memory is protected by the barrier; gcc 5 can use this knowledge to greatly improve the generated code.
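+
+The difference between the two, expressed with standard gcc inline-asm idioms (a sketch; `var` is a placeholder, not a project symbol):
+
+```
+static int var;
+
+void barrierExamples(void)
+{
+    // full memory barrier: gcc must assume all of memory has changed,
+    // so every cached value is flushed before and reloaded after
+    __asm__ volatile ("" ::: "memory");
+
+    // targeted barrier: only 'var' is marked as read/modified; other
+    // values may legally stay in registers across the barrier
+    __asm__ volatile ("" : "+m" (var));
+}
+```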
@@ -69,25 +69,25 @@ static inline uint8_t __basepriSetRetVal(uint8_t prio)
#define ATOMIC_BLOCK(prio) for ( uint8_t __basepri_save __attribute__((__cleanup__(__basepriRestoreMem))) = __get_BASEPRI(), \
__ToDo = __basepriSetMemRetVal(prio); __ToDo ; __ToDo = 0 )
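// Note: the for(;;) construction lets the macro accept a normal { } block -
// __ToDo is initialized to the (nonzero) return value of __basepriSetMemRetVal,
// so the body runs exactly once, and the cleanup attribute on __basepri_save
// restores BASEPRI on every exit path, including break and return.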
-// Run block with elevated BASEPRI (using BASEPRI_MAX), but do not create any (explicit) memory barrier.
+// Run block with elevated BASEPRI (using BASEPRI_MAX), but do not create memory barrier.
// Be careful when using this, you must use some method to prevent the optimizer from breaking things
// - lto is used for Cleanflight compilation, so a function call is not a memory barrier
-// - use ATOMIC_BARRIER or proper volatile to protect used variables
+// - use ATOMIC_BARRIER or volatile to protect used variables
// - gcc 4.8.4 does write all values in registers to memory before 'asm volatile', so this optimization does not help much
-// but that can change in future versions
+// - gcc 5 and later works as intended, generating quite optimal code
#define ATOMIC_BLOCK_NB(prio) for ( uint8_t __basepri_save __attribute__((__cleanup__(__basepriRestore))) = __get_BASEPRI(), \
__ToDo = __basepriSetRetVal(prio); __ToDo ; __ToDo = 0 ) \
// ATOMIC_BARRIER
// Create memory barrier
-// - at the beginning (all data must be reread from memory)
-// - at exit of block (all exit paths) (all data must be written, but may be cached in register for subsequent use)
-// ideally this would only protect memory passed as parameter (any type should work), but gcc is currently creating almost full barrier
+// - at the beginning of containing block (value of parameter must be reread from memory)
+// - at exit of block (all exit paths) (parameter value is written into memory, but may be cached in register for subsequent use)
+// On gcc 5 and higher, this protects only memory passed as parameter (any type should work)
// this macro can be used only ONCE PER LINE, but multiple uses per block are fine
#if (__GNUC__ > 5)
#warning "Please verify that ATOMIC_BARRIER works as intended"
-// increment version number is BARRIER works
+// increment version number if BARRIER works
// TODO - use flag to disable ATOMIC_BARRIER and use full barrier instead
// you should check that local variable scope with cleanup spans entire block
#endif
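// Usage sketch (illustrative only, not part of this header; sharedValue is a placeholder):
//
//     ATOMIC_BLOCK_NB(NVIC_PRIO_TIMER) {
//         ATOMIC_BARRIER(sharedValue);   // only sharedValue is forced to memory
//         sharedValue++;                 // protected from interrupts masked by NVIC_PRIO_TIMER
//     }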
