Add methods for atomic access to Bytes #115

Merged 4 commits on Sep 29, 2020
16 changes: 12 additions & 4 deletions CHANGELOG.md
@@ -1,15 +1,23 @@
# Changelog
## [Unreleased]

### Fixed
- [[#106]](https://github.com/rust-vmm/vm-memory/issues/106): Asserts trigger
on zero-length access.

### Added

- [[#109]](https://github.com/rust-vmm/vm-memory/pull/109): Added `build_raw` to
`MmapRegion` which can be used to operate on externally created mappings.
- [[#101]](https://github.com/rust-vmm/vm-memory/pull/101): Added `check_range` for
GuestMemory which could be used to validate a range of guest memory.
- [[#115]](https://github.com/rust-vmm/vm-memory/pull/115): Add methods for atomic
access to `Bytes`.

### Fixed

- [[#106]](https://github.com/rust-vmm/vm-memory/issues/106): Asserts trigger
on zero-length access.

### Removed

- `integer-atomics` is no longer a distinct feature of the crate.

## [v0.2.0]

1 change: 0 additions & 1 deletion Cargo.toml
@@ -12,7 +12,6 @@ autobenches = false

[features]
default = []
integer-atomics = []
Member:

This seems like a breaking change; should we bump the minor version number for the next release?

Collaborator (author):
Sounds good. This also reminds me I have to update the changelog.

backend-mmap = []
backend-atomic = ["arc-swap"]

2 changes: 1 addition & 1 deletion coverage_config_x86_64.json
@@ -1,5 +1,5 @@
{
"coverage_score": 84.8,
"coverage_score": 85.4,
"exclude_path": "mmap_windows.rs",
"crate_features": "backend-mmap,backend-atomic"
}
90 changes: 90 additions & 0 deletions src/atomic_integer.rs
@@ -0,0 +1,90 @@
// Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0 OR BSD-3-Clause

use std::sync::atomic::Ordering;
Member:

Could you please add a license header?

Collaborator (author):

Done.


/// Objects that implement this trait must consist exclusively of atomic types
/// from [`std::sync::atomic`](https://doc.rust-lang.org/std/sync/atomic/), except for
/// [`AtomicPtr<T>`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicPtr.html) and
/// [`AtomicBool`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicBool.html).
pub unsafe trait AtomicInteger: Sync + Send {
/// The raw value type associated with the atomic integer (e.g. `u16` for `AtomicU16`).
type V;

/// Create a new instance of `Self`.
fn new(v: Self::V) -> Self;

/// Loads a value from the atomic integer.
fn load(&self, order: Ordering) -> Self::V;

/// Stores a value into the atomic integer.
fn store(&self, val: Self::V, order: Ordering);
}

macro_rules! impl_atomic_integer_ops {
($T:path, $V:ty) => {
unsafe impl AtomicInteger for $T {
type V = $V;

fn new(v: Self::V) -> Self {
Self::new(v)
}

fn load(&self, order: Ordering) -> Self::V {
self.load(order)
}

fn store(&self, val: Self::V, order: Ordering) {
self.store(val, order)
}
}
};
}

// TODO: Detect availability using #[cfg(target_has_atomic)] when it is stabilized.
// Right now we essentially assume we're running on either x86 or Arm (32 or 64 bit). AFAIK,
// Rust starts using additional synchronization primitives to implement atomics when they're
// not natively available, and that doesn't interact safely with how we cast pointers to
// atomic value references. We should be wary of this when looking at a broader range of
// platforms.

impl_atomic_integer_ops!(std::sync::atomic::AtomicI8, i8);
impl_atomic_integer_ops!(std::sync::atomic::AtomicI16, i16);
impl_atomic_integer_ops!(std::sync::atomic::AtomicI32, i32);
#[cfg(any(target_arch = "x86_64", target_arch = "aarch64"))]
impl_atomic_integer_ops!(std::sync::atomic::AtomicI64, i64);

impl_atomic_integer_ops!(std::sync::atomic::AtomicU8, u8);
impl_atomic_integer_ops!(std::sync::atomic::AtomicU16, u16);
impl_atomic_integer_ops!(std::sync::atomic::AtomicU32, u32);
#[cfg(any(target_arch = "x86_64", target_arch = "aarch64"))]
impl_atomic_integer_ops!(std::sync::atomic::AtomicU64, u64);

impl_atomic_integer_ops!(std::sync::atomic::AtomicIsize, isize);
impl_atomic_integer_ops!(std::sync::atomic::AtomicUsize, usize);

#[cfg(test)]
mod tests {
use super::*;

use std::fmt::Debug;
use std::sync::atomic::AtomicU32;

fn check_atomic_integer_ops<A: AtomicInteger>()
where
A::V: Copy + Debug + From<u8> + PartialEq,
{
let v = A::V::from(0);
let a = A::new(v);
assert_eq!(a.load(Ordering::Relaxed), v);

let v2 = A::V::from(100);
a.store(v2, Ordering::Relaxed);
assert_eq!(a.load(Ordering::Relaxed), v2);
}

#[test]
fn test_atomic_integer_ops() {
check_atomic_integer_ops::<AtomicU32>()
}
}
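
A minimal sketch of how the new `AtomicInteger` trait can be used generically, assuming the crate is built with this change applied; the helper name `swap_relaxed` is hypothetical and not part of the crate:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use vm_memory::AtomicInteger;

// Hypothetical generic helper over any type implementing `AtomicInteger`.
// Note: a separate load followed by a store is not one atomic
// read-modify-write; each access is only individually atomic.
fn swap_relaxed<A: AtomicInteger>(a: &A, val: A::V) -> A::V {
    let old = a.load(Ordering::Relaxed);
    a.store(val, Ordering::Relaxed);
    old
}

fn main() {
    let a = AtomicU32::new(1);
    assert_eq!(swap_relaxed(&a, 2), 1);
    assert_eq!(a.load(Ordering::Relaxed), 2);
}
```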
85 changes: 80 additions & 5 deletions src/bytes.rs
@@ -11,11 +11,14 @@
//! Define the `ByteValued` trait to mark that it is safe to instantiate the struct with random
//! data.

use crate::VolatileSlice;
use std::io::{Read, Write};
use std::mem::size_of;
use std::result::Result;
use std::slice::{from_raw_parts, from_raw_parts_mut};
use std::sync::atomic::Ordering;

use crate::atomic_integer::AtomicInteger;
use crate::VolatileSlice;

/// Types for which it is safe to initialize from raw data.
///
@@ -153,6 +156,41 @@ byte_valued_type!(i32);
byte_valued_type!(i64);
byte_valued_type!(isize);

/// A trait used to identify types which can be accessed atomically by proxy.
pub trait AtomicAccess:
ByteValued
// Could not find a more succinct way of stating that `Self` can be converted
// into `Self::A::V`, and the other way around.
+ From<<<Self as AtomicAccess>::A as AtomicInteger>::V>
+ Into<<<Self as AtomicAccess>::A as AtomicInteger>::V>
{
/// The `AtomicInteger` that atomic operations on `Self` are based on.
type A: AtomicInteger;
}

macro_rules! impl_atomic_access {
($T:ty, $A:path) => {
impl AtomicAccess for $T {
type A = $A;
}
};
}

impl_atomic_access!(i8, std::sync::atomic::AtomicI8);
impl_atomic_access!(i16, std::sync::atomic::AtomicI16);
impl_atomic_access!(i32, std::sync::atomic::AtomicI32);
#[cfg(any(target_arch = "x86_64", target_arch = "aarch64"))]
impl_atomic_access!(i64, std::sync::atomic::AtomicI64);

impl_atomic_access!(u8, std::sync::atomic::AtomicU8);
impl_atomic_access!(u16, std::sync::atomic::AtomicU16);
impl_atomic_access!(u32, std::sync::atomic::AtomicU32);
#[cfg(any(target_arch = "x86_64", target_arch = "aarch64"))]
impl_atomic_access!(u64, std::sync::atomic::AtomicU64);

impl_atomic_access!(isize, std::sync::atomic::AtomicIsize);
impl_atomic_access!(usize, std::sync::atomic::AtomicUsize);

/// A container to host a range of bytes and access its content.
///
/// Candidates which may implement this trait include:
@@ -269,16 +307,40 @@ pub trait Bytes<A> {
fn write_all_to<F>(&self, addr: A, dst: &mut F, count: usize) -> Result<(), Self::E>
where
F: Write;

/// Atomically store a value at the specified address.
fn store<T: AtomicAccess>(&self, val: T, addr: A, order: Ordering) -> Result<(), Self::E>;

/// Atomically load a value from the specified address.
fn load<T: AtomicAccess>(&self, addr: A, order: Ordering) -> Result<T, Self::E>;
}

#[cfg(test)]
mod tests {
use crate::{ByteValued, Bytes};
pub(crate) mod tests {
use super::*;

use std::fmt::Debug;
use std::io::{Read, Write};
use std::mem::{align_of, size_of};
use std::mem::align_of;
use std::slice;

// Helper method to test atomic accesses for a given `b: Bytes` that's supposed to be
// zero-initialized.
pub fn check_atomic_accesses<A, B>(b: B, addr: A, bad_addr: A)
where
A: Copy,
B: Bytes<A>,
B::E: Debug,
{
let val = 100u32;

assert_eq!(b.load::<u32>(addr, Ordering::Relaxed).unwrap(), 0);
b.store(val, addr, Ordering::Relaxed).unwrap();
assert_eq!(b.load::<u32>(addr, Ordering::Relaxed).unwrap(), val);

assert!(b.load::<u32>(bad_addr, Ordering::Relaxed).is_err());
assert!(b.store(val, bad_addr, Ordering::Relaxed).is_err());
}

fn check_byte_valued_type<T>()
where
T: ByteValued + PartialEq + Debug + Default,
@@ -409,6 +471,19 @@ mod tests {
{
unimplemented!()
}

fn store<T: AtomicAccess>(
&self,
_val: T,
_addr: usize,
_order: Ordering,
) -> Result<(), Self::E> {
unimplemented!()
}

fn load<T: AtomicAccess>(&self, _addr: usize, _order: Ordering) -> Result<T, Self::E> {
unimplemented!()
}
}

#[test]
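To show the new trait surface in use, a small sketch of generic code over `Bytes`, under the assumption that the signatures land as shown above; `bump_counter` is a hypothetical helper:

```rust
use std::sync::atomic::Ordering;
use vm_memory::Bytes;

// Hypothetical helper: increment a guest-visible u32 counter through any
// `Bytes` implementation. The load and the store are each atomic, but the
// read-modify-write sequence as a whole is not.
fn bump_counter<A: Copy, B: Bytes<A>>(bytes: &B, addr: A) -> Result<(), B::E> {
    let v: u32 = bytes.load(addr, Ordering::Acquire)?;
    bytes.store(v.wrapping_add(1), addr, Ordering::Release)
}
```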
27 changes: 26 additions & 1 deletion src/guest_memory.rs
@@ -39,10 +39,11 @@ use std::fs::File;
use std::io::{self, Read, Write};
use std::ops::{BitAnd, BitOr, Deref};
use std::rc::Rc;
use std::sync::atomic::Ordering;
use std::sync::Arc;

use crate::address::{Address, AddressValue};
use crate::bytes::Bytes;
use crate::bytes::{AtomicAccess, Bytes};
use crate::volatile_memory;

static MAX_ACCESS_CHUNK: usize = 4096;
@@ -868,6 +869,20 @@ impl<T: GuestMemory> Bytes<GuestAddress> for T {
}
Ok(())
}

fn store<O: AtomicAccess>(&self, val: O, addr: GuestAddress, order: Ordering) -> Result<()> {
// `find_region` should really do what `to_region_addr` is doing right now, except
// it should keep returning a `Result`.
self.to_region_addr(addr)
.ok_or(Error::InvalidGuestAddress(addr))
.and_then(|(region, region_addr)| region.store(val, region_addr, order))
}

fn load<O: AtomicAccess>(&self, addr: GuestAddress, order: Ordering) -> Result<O> {
self.to_region_addr(addr)
.ok_or(Error::InvalidGuestAddress(addr))
.and_then(|(region, region_addr)| region.load(region_addr, order))
}
}

#[cfg(test)]
@@ -1081,4 +1096,14 @@ mod tests {
.write_all_to(addr, &mut Cursor::new(&mut image), 0)
.is_ok());
}

#[cfg(feature = "backend-mmap")]
#[test]
fn test_atomic_accesses() {
let addr = GuestAddress(0x1000);
let mem = GuestMemoryMmap::from_ranges(&[(addr, 0x1000)]).unwrap();
let bad_addr = addr.unchecked_add(0x1000);

crate::bytes::tests::check_atomic_accesses(mem, addr, bad_addr);
}
}
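
An end-to-end sketch through the mmap backend, mirroring the test above (requires the `backend-mmap` feature; the addresses and the value are arbitrary):

```rust
use std::sync::atomic::Ordering;
use vm_memory::{Bytes, GuestAddress, GuestMemoryMmap};

fn main() {
    let start = GuestAddress(0x1000);
    // A single 4 KiB region; freshly mapped anonymous memory reads as zero.
    let mem = GuestMemoryMmap::from_ranges(&[(start, 0x1000)]).unwrap();
    assert_eq!(mem.load::<u32>(start, Ordering::Relaxed).unwrap(), 0);

    let val: u32 = 0xdead_beef;
    mem.store(val, start, Ordering::Relaxed).unwrap();
    assert_eq!(mem.load::<u32>(start, Ordering::Relaxed).unwrap(), val);

    // An address past the end of the region fails instead of panicking.
    let bad = start.unchecked_add(0x1000);
    assert!(mem.load::<u32>(bad, Ordering::Relaxed).is_err());
}
```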
19 changes: 11 additions & 8 deletions src/lib.rs
@@ -23,8 +23,16 @@
pub mod address;
pub use address::{Address, AddressValue};

#[cfg(feature = "backend-atomic")]
pub mod atomic;
#[cfg(feature = "backend-atomic")]
pub use atomic::{GuestMemoryAtomic, GuestMemoryLoadGuard};

mod atomic_integer;
pub use atomic_integer::AtomicInteger;

pub mod bytes;
pub use bytes::{ByteValued, Bytes};
pub use bytes::{AtomicAccess, ByteValued, Bytes};

pub mod endian;
pub use endian::{Be16, Be32, Be64, BeSize, Le16, Le32, Le64, LeSize};
@@ -46,13 +54,8 @@ pub mod mmap;
#[cfg(feature = "backend-mmap")]
pub use mmap::{Error, GuestMemoryMmap, GuestRegionMmap, MmapRegion};

#[cfg(feature = "backend-atomic")]
pub mod atomic;
#[cfg(feature = "backend-atomic")]
pub use atomic::{GuestMemoryAtomic, GuestMemoryLoadGuard};

pub mod volatile_memory;
pub use volatile_memory::{
AtomicValued, Error as VolatileMemoryError, Result as VolatileMemoryResult, VolatileArrayRef,
VolatileMemory, VolatileRef, VolatileSlice,
Error as VolatileMemoryError, Result as VolatileMemoryResult, VolatileArrayRef, VolatileMemory,
VolatileRef, VolatileSlice,
};