Always provide atomic CAS for MSP430 and AVR
taiki-e committed Aug 12, 2022
1 parent a4aec1e commit 668c029
Showing 7 changed files with 567 additions and 174 deletions.
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -10,6 +10,10 @@ Note: In this file, do not use the hard wrap in the middle of a sentence for com

## [Unreleased]

- Always provide atomic CAS for MSP430 and AVR.

This previously required the unsafe cfg `portable_atomic_unsafe_assume_single_core`, but since all MSP430 and AVR chips are single-core, we can safely provide atomic CAS based on disabling interrupts.

- Support `fence` and `compiler_fence` on MSP430.

- Update safety requirements for unsafe cfg `portable_atomic_unsafe_assume_single_core` to mention use of privileged instructions to disable interrupts.
6 changes: 4 additions & 2 deletions README.md
@@ -14,7 +14,7 @@ Portable atomic types including support for 128-bit atomics, atomic float, etc.
- Provide `AtomicF32` and `AtomicF64`. (optional)
<!-- - Provide generic `Atomic<T>` type. (optional) -->
- Provide atomic load/store for targets where atomic is not available at all in the standard library. (RISC-V without A-extension, MSP430, AVR)
- Provide atomic CAS for targets where atomic CAS is not available in the standard library. (thumbv6m, RISC-V without A-extension, MSP430, AVR) (optional, [single-core only](#optional-cfg))
- Provide atomic CAS for targets where atomic CAS is not available in the standard library. (thumbv6m, RISC-V without A-extension, MSP430, AVR) (optional and [single-core only](#optional-cfg) for ARM and RISC-V, always enabled for MSP430 and AVR)
- Provide equivalents of the standard library's atomic-related APIs on targets where they cause LLVM errors. (fence/compiler_fence on MSP430)
- Provide stable equivalents of the standard library atomic types' unstable APIs, such as [`AtomicPtr::fetch_*`](https://github.com/rust-lang/rust/issues/99108), [`AtomicBool::fetch_not`](https://github.com/rust-lang/rust/issues/98485).
- Make features that require newer compilers, such as [fetch_max](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_max), [fetch_min](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_min), [fetch_update](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicPtr.html#method.fetch_update), and [stronger CAS failure ordering](https://github.com/rust-lang/rust/pull/98383) available on Rust 1.34+.
@@ -79,7 +79,9 @@ See [this list](https://github.com/taiki-e/portable-atomic/issues/10#issuecommen

Enabling this cfg for targets that have atomic CAS will result in a compile error.

ARMv6-M (thumbv6m), RISC-V without A-extension, MSP430, and AVR are currently supported. See [#26] for support of no-std pre-v6 ARM and multi-core systems.
ARMv6-M (thumbv6m) and RISC-V without A-extension are currently supported. See [#26] for support of no-std pre-v6 ARM and multi-core systems.

Since all MSP430 and AVR targets are single-core, we always provide atomic CAS for them without this cfg.

Feel free to submit an issue if your target is not supported yet.
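For targets that still require the opt-in (such as thumbv6m), the cfg is typically passed via rustflags. A hypothetical `.cargo/config.toml` sketch (the target triple and layout are illustrative):

```toml
# .cargo/config.toml
# Unsafe opt-in: only sound if the final binary runs on a single core.
[target.thumbv6m-none-eabi]
rustflags = ["--cfg", "portable_atomic_unsafe_assume_single_core"]
```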

38 changes: 7 additions & 31 deletions src/imp/interrupt/mod.rs
@@ -15,6 +15,7 @@
//
// AVR, which is single core[^avr1] and LLVM also generates code that disables
// interrupts [^avr2] in atomic ops by default, is considered the latter.
// MSP430 as well.
//
// [^avr1]: https://github.com/llvm/llvm-project/blob/llvmorg-15.0.0-rc1/llvm/lib/Target/AVR/AVRExpandPseudoInsts.cpp#L1008
// [^avr2]: https://github.com/llvm/llvm-project/blob/llvmorg-15.0.0-rc1/llvm/test/CodeGen/AVR/atomics/load16.ll#L5
@@ -138,7 +139,6 @@ impl AtomicBool {
}
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn swap(&self, val: bool, _order: Ordering) -> bool {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -147,7 +147,6 @@ impl AtomicBool {
with(|| unsafe { self.v.get().replace(val as u8) != 0 })
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn compare_exchange(
&self,
@@ -171,7 +170,6 @@ impl AtomicBool {
})
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn compare_exchange_weak(
&self,
@@ -183,7 +181,6 @@ impl AtomicBool {
self.compare_exchange(current, new, success, failure)
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn fetch_and(&self, val: bool, _order: Ordering) -> bool {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -196,7 +193,6 @@ impl AtomicBool {
})
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
if val {
@@ -210,7 +206,6 @@ impl AtomicBool {
}
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn fetch_or(&self, val: bool, _order: Ordering) -> bool {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -223,7 +218,6 @@ impl AtomicBool {
})
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn fetch_xor(&self, val: bool, _order: Ordering) -> bool {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -317,7 +311,6 @@ impl<T> AtomicPtr<T> {
}
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn swap(&self, ptr: *mut T, _order: Ordering) -> *mut T {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -326,7 +319,6 @@ impl<T> AtomicPtr<T> {
with(|| unsafe { self.p.get().replace(ptr) })
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn compare_exchange(
&self,
@@ -350,7 +342,6 @@ impl<T> AtomicPtr<T> {
})
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn compare_exchange_weak(
&self,
@@ -472,7 +463,6 @@ macro_rules! atomic_int {
};
(cas, $atomic_type:ident, $int_type:ident, $align:expr) => {
impl $atomic_type {
#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn swap(&self, val: $int_type, _order: Ordering) -> $int_type {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -481,7 +471,6 @@ macro_rules! atomic_int {
with(|| unsafe { self.v.get().replace(val) })
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn compare_exchange(
&self,
@@ -505,7 +494,6 @@ macro_rules! atomic_int {
})
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn compare_exchange_weak(
&self,
@@ -517,7 +505,6 @@ macro_rules! atomic_int {
self.compare_exchange(current, new, success, failure)
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn fetch_add(&self, val: $int_type, _order: Ordering) -> $int_type {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -530,7 +517,6 @@ macro_rules! atomic_int {
})
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn fetch_sub(&self, val: $int_type, _order: Ordering) -> $int_type {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -543,7 +529,6 @@ macro_rules! atomic_int {
})
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn fetch_and(&self, val: $int_type, _order: Ordering) -> $int_type {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -556,7 +541,6 @@ macro_rules! atomic_int {
})
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn fetch_nand(&self, val: $int_type, _order: Ordering) -> $int_type {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -569,7 +553,6 @@ macro_rules! atomic_int {
})
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn fetch_or(&self, val: $int_type, _order: Ordering) -> $int_type {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -582,7 +565,6 @@ macro_rules! atomic_int {
})
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn fetch_xor(&self, val: $int_type, _order: Ordering) -> $int_type {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -595,7 +577,6 @@ macro_rules! atomic_int {
})
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn fetch_max(&self, val: $int_type, _order: Ordering) -> $int_type {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -608,7 +589,6 @@ macro_rules! atomic_int {
})
}

#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
#[inline]
pub(crate) fn fetch_min(&self, val: $int_type, _order: Ordering) -> $int_type {
// SAFETY: any data races are prevented by disabling interrupts (see
@@ -647,34 +627,30 @@ atomic_int!(load_store_atomic, AtomicI16, i16, 2);
atomic_int!(load_store_atomic, AtomicU16, u16, 2);

#[cfg(not(target_pointer_width = "16"))]
#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
atomic_int!(load_store_atomic, AtomicI32, i32, 4);
#[cfg(not(target_pointer_width = "16"))]
#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
atomic_int!(load_store_atomic, AtomicU32, u32, 4);
#[cfg(target_pointer_width = "16")]
#[cfg(any(test, all(feature = "fallback", portable_atomic_unsafe_assume_single_core)))]
#[cfg(any(test, feature = "fallback"))]
atomic_int!(load_store_critical_session, AtomicI32, i32, 4);
#[cfg(target_pointer_width = "16")]
#[cfg(any(test, all(feature = "fallback", portable_atomic_unsafe_assume_single_core)))]
#[cfg(any(test, feature = "fallback"))]
atomic_int!(load_store_critical_session, AtomicU32, u32, 4);

#[cfg(not(any(target_pointer_width = "16", target_pointer_width = "32")))]
#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
atomic_int!(load_store_atomic, AtomicI64, i64, 8);
#[cfg(not(any(target_pointer_width = "16", target_pointer_width = "32")))]
#[cfg(any(test, portable_atomic_unsafe_assume_single_core))]
atomic_int!(load_store_atomic, AtomicU64, u64, 8);
#[cfg(any(target_pointer_width = "16", target_pointer_width = "32"))]
#[cfg(any(test, all(feature = "fallback", portable_atomic_unsafe_assume_single_core)))]
#[cfg(any(test, feature = "fallback"))]
atomic_int!(load_store_critical_session, AtomicI64, i64, 8);
#[cfg(any(target_pointer_width = "16", target_pointer_width = "32"))]
#[cfg(any(test, all(feature = "fallback", portable_atomic_unsafe_assume_single_core)))]
#[cfg(any(test, feature = "fallback"))]
atomic_int!(load_store_critical_session, AtomicU64, u64, 8);

#[cfg(any(test, all(feature = "fallback", portable_atomic_unsafe_assume_single_core)))]
#[cfg(any(test, feature = "fallback"))]
atomic_int!(load_store_critical_session, AtomicI128, i128, 16);
#[cfg(any(test, all(feature = "fallback", portable_atomic_unsafe_assume_single_core)))]
#[cfg(any(test, feature = "fallback"))]
atomic_int!(load_store_critical_session, AtomicU128, u128, 16);

#[cfg(test)]
44 changes: 25 additions & 19 deletions src/imp/mod.rs
@@ -98,10 +98,12 @@ mod fallback;
// On AVR, we always use critical section based fallback implementation.
// AVR can be safely assumed to be single-core, so this is sound.
// https://github.com/llvm/llvm-project/blob/llvmorg-15.0.0-rc1/llvm/lib/Target/AVR/AVRExpandPseudoInsts.cpp#L1008
// MSP430 as well.
#[cfg(any(
all(test, target_os = "none"),
portable_atomic_unsafe_assume_single_core,
target_arch = "avr"
target_arch = "avr",
target_arch = "msp430",
))]
#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(any(test, portable_atomic_no_atomic_cas)))]
#[cfg_attr(
@@ -147,12 +149,6 @@ mod interrupt;
pub(crate) use self::core_atomic::{
AtomicBool, AtomicI16, AtomicI8, AtomicIsize, AtomicPtr, AtomicU16, AtomicU8, AtomicUsize,
};
// MSP430
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
#[cfg(target_arch = "msp430")]
pub(crate) use self::msp430::{
AtomicBool, AtomicI16, AtomicI8, AtomicIsize, AtomicPtr, AtomicU16, AtomicU8, AtomicUsize,
};
// RISC-V without A-extension
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
#[cfg(any(target_arch = "riscv32", target_arch = "riscv64"))]
@@ -162,15 +158,13 @@ pub(crate) use self::riscv::{
AtomicBool, AtomicI16, AtomicI8, AtomicIsize, AtomicPtr, AtomicU16, AtomicU8, AtomicUsize,
};
// no core Atomic{Isize,Usize,Bool,Ptr}/Atomic{I,U}{8,16} & assume single core => critical section based fallback
#[cfg(any(portable_atomic_unsafe_assume_single_core, target_arch = "avr"))]
#[cfg_attr(
portable_atomic_no_cfg_target_has_atomic,
cfg(any(portable_atomic_no_atomic_cas, target_arch = "avr"))
)]
#[cfg_attr(
not(portable_atomic_no_cfg_target_has_atomic),
cfg(any(not(target_has_atomic = "ptr"), target_arch = "avr"))
)]
#[cfg(any(
portable_atomic_unsafe_assume_single_core,
target_arch = "avr",
target_arch = "msp430"
))]
#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(portable_atomic_no_atomic_cas))]
#[cfg_attr(not(portable_atomic_no_cfg_target_has_atomic), cfg(not(target_has_atomic = "ptr")))]
pub(crate) use self::interrupt::{
AtomicBool, AtomicI16, AtomicI8, AtomicIsize, AtomicPtr, AtomicU16, AtomicU8, AtomicUsize,
};
@@ -210,7 +204,11 @@ pub(crate) use self::core_atomic::{AtomicI32, AtomicU32};
pub(crate) use self::riscv::{AtomicI32, AtomicU32};
// no core Atomic{I,U}32 & no CAS & assume single core => critical section based fallback
#[cfg(any(not(target_pointer_width = "16"), feature = "fallback"))]
#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg(any(
portable_atomic_unsafe_assume_single_core,
target_arch = "avr",
target_arch = "msp430"
))]
#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(portable_atomic_no_atomic_cas))]
#[cfg_attr(not(portable_atomic_no_cfg_target_has_atomic), cfg(not(target_has_atomic = "ptr")))]
pub(crate) use self::interrupt::{AtomicI32, AtomicU32};
@@ -251,7 +249,11 @@ pub(crate) use self::fallback::{AtomicI64, AtomicU64};
not(any(target_pointer_width = "16", target_pointer_width = "32")),
feature = "fallback"
))]
#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg(any(
portable_atomic_unsafe_assume_single_core,
target_arch = "avr",
target_arch = "msp430"
))]
#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(portable_atomic_no_atomic_cas))]
#[cfg_attr(not(portable_atomic_no_cfg_target_has_atomic), cfg(not(target_has_atomic = "ptr")))]
pub(crate) use self::interrupt::{AtomicI64, AtomicU64};
@@ -316,7 +318,11 @@ pub(crate) use self::s390x::{AtomicI128, AtomicU128};
pub(crate) use self::fallback::{AtomicI128, AtomicU128};
// no core Atomic{I,U}128 & no CAS & assume_single_core => critical section based fallback
#[cfg(feature = "fallback")]
#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg(any(
portable_atomic_unsafe_assume_single_core,
target_arch = "avr",
target_arch = "msp430"
))]
#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(portable_atomic_no_atomic_cas))]
#[cfg_attr(not(portable_atomic_no_cfg_target_has_atomic), cfg(not(target_has_atomic = "ptr")))]
pub(crate) use self::interrupt::{AtomicI128, AtomicU128};
