90 changes: 90 additions & 0 deletions cranelift/docs/testing.md
@@ -367,3 +367,93 @@ Example:
}
; run
```

#### Environment directives

Some tests need additional resources to be provided by the filetest infrastructure.

When any of the following directives is present, the first argument of the function is *required* to be an `i64 vmctx`.
The filetest infrastructure will then pass a pointer to the environment struct via this argument.

The environment struct is essentially a list of pointers with info about the resources requested by the directives. These
pointers are always 8 bytes and laid out sequentially in memory, even on 32-bit machines, where only the first
4 bytes of each pointer slot are filled.
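Under this layout, the slot offsets for the n-th requested resource can be computed directly. The sketch below is a hypothetical helper (not part of the filetest infrastructure) that makes the arithmetic explicit:

```rust
/// Byte offsets of the start/end pointer slots for the n-th heap in the
/// environment struct. Each heap occupies two 8-byte slots, so heap n's
/// start pointer lives at offset n * 16 and its bound at n * 16 + 8.
fn heap_slot_offsets(n: u64) -> (u64, u64) {
    let ptr = n * 16;
    let bound = ptr + 8;
    (ptr, bound)
}

fn main() {
    // heap0 -> slots at vmctx+0 and vmctx+8
    assert_eq!(heap_slot_offsets(0), (0, 8));
    // heap1 -> slots at vmctx+16 and vmctx+24
    assert_eq!(heap_slot_offsets(1), (16, 24));
}
```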

Currently, we only support requesting heaps; however, this is a generic mechanism that could later be extended
to provide any sort of environment support that we may need (e.g. tables, global values, external functions).

##### `heap` directive

The `heap` directive allows a test to request a heap to be allocated and passed to the test via the environment struct.

A sample heap annotation looks like this:
```
; heap: static, size=0x1000, ptr=vmctx+0, bound=vmctx+8
```

This indicates the following:
* `static`: We have requested a non-resizable, non-movable static heap.
* `size=0x1000`: The heap is 4096 bytes.
* `ptr=vmctx+0`: The pointer to the start address of this heap is placed at offset 0 in the `vmctx` struct.
* `bound=vmctx+8`: The pointer to the end address of this heap is placed at offset 8 in the `vmctx` struct.

The `ptr` and `bound` arguments make explicit where the pointers to the start and end of the heap memory are placed in
the environment struct: `vmctx+0` means that the pointer to the start is at offset 0 of the environment struct, and,
similarly, the pointer to the end is at offset 8.


You can combine multiple heap annotations, in which case their pointers are laid out sequentially in memory in
the order that the annotations appear in the source file.

```
; heap: static, size=0x1000, ptr=vmctx+0, bound=vmctx+8
; heap: dynamic, size=0x1000, ptr=vmctx+16, bound=vmctx+24
```

An invalid or unexpected offset will raise an error when the test is run.

The diagram below shows how the `vmctx` struct ends up when multiple heaps are requested:

```
┌─────────────────────┐ vmctx+0
│heap0: start address │
├─────────────────────┤ vmctx+8
│heap0: end address │
├─────────────────────┤ vmctx+16
│heap1: start address │
├─────────────────────┤ vmctx+24
│heap1: end address │
├─────────────────────┤ vmctx+32
│etc... │
└─────────────────────┘
```

With this setup, you can use global values to load the heap's start and bound addresses, and then load from and store to the heap.

Example:

```
function %heap_load_store(i64 vmctx, i64, i32) -> i32 {
gv0 = vmctx
gv1 = load.i64 notrap aligned gv0+0
gv2 = load.i64 notrap aligned gv0+8
heap0 = dynamic gv1, bound gv2, offset_guard 0, index_type i64

block0(v0: i64, v1: i64, v2: i32):
v3 = heap_addr.i64 heap0, v1, 4
store.i32 v2, v3
v4 = load.i32 v3
return v4
}
; heap: static, size=0x1000, ptr=vmctx+0, bound=vmctx+8
; run: %heap_load_store(0, 1) == 1
```


### `test interpret`

Test the CLIF interpreter.

This test supports the same commands as `test run`, but runs the code in the Cranelift
interpreter instead of on the host machine.
170 changes: 170 additions & 0 deletions cranelift/filetests/filetests/runtests/heap.clif
@@ -0,0 +1,170 @@
test run
target x86_64 machinst
target s390x
target aarch64


function %static_heap_i64_load_store(i64 vmctx, i64, i32) -> i32 {
gv0 = vmctx
gv1 = load.i64 notrap aligned gv0+0
heap0 = static gv1, min 0x1000, bound 0x1_0000_0000, offset_guard 0, index_type i64

block0(v0: i64, v1: i64, v2: i32):
v3 = heap_addr.i64 heap0, v1, 4
store.i32 v2, v3
v4 = load.i32 v3
return v4
}
; heap: static, size=0x1000, ptr=vmctx+0, bound=vmctx+8
; run: %static_heap_i64_load_store(0, 1) == 1
; run: %static_heap_i64_load_store(0, -1) == -1
; run: %static_heap_i64_load_store(16, 1) == 1
; run: %static_heap_i64_load_store(16, -1) == -1


function %static_heap_i32_load_store(i64 vmctx, i32, i32) -> i32 {
gv0 = vmctx
gv1 = load.i64 notrap aligned gv0+0
heap0 = static gv1, min 0x1000, bound 0x1_0000_0000, offset_guard 0, index_type i32

block0(v0: i64, v1: i32, v2: i32):
v3 = heap_addr.i64 heap0, v1, 4
store.i32 v2, v3
v4 = load.i32 v3
return v4
}
; heap: static, size=0x1000, ptr=vmctx+0, bound=vmctx+8
; run: %static_heap_i32_load_store(0, 1) == 1
; run: %static_heap_i32_load_store(0, -1) == -1
; run: %static_heap_i32_load_store(16, 1) == 1
; run: %static_heap_i32_load_store(16, -1) == -1


function %static_heap_i32_load_store_no_min(i64 vmctx, i32, i32) -> i32 {
gv0 = vmctx
gv1 = load.i64 notrap aligned gv0+0
heap0 = static gv1, bound 0x1_0000_0000, offset_guard 0, index_type i32

block0(v0: i64, v1: i32, v2: i32):
v3 = heap_addr.i64 heap0, v1, 4
store.i32 v2, v3
v4 = load.i32 v3
return v4
}
; heap: static, size=0x1000, ptr=vmctx+0, bound=vmctx+8
; run: %static_heap_i32_load_store_no_min(0, 1) == 1
; run: %static_heap_i32_load_store_no_min(0, -1) == -1
; run: %static_heap_i32_load_store_no_min(16, 1) == 1
; run: %static_heap_i32_load_store_no_min(16, -1) == -1


function %dynamic_heap_i64_load_store(i64 vmctx, i64, i32) -> i32 {
gv0 = vmctx
gv1 = load.i64 notrap aligned gv0+0
gv2 = load.i64 notrap aligned gv0+8
heap0 = dynamic gv1, bound gv2, offset_guard 0, index_type i64

block0(v0: i64, v1: i64, v2: i32):
v3 = heap_addr.i64 heap0, v1, 4
store.i32 v2, v3
v4 = load.i32 v3
return v4
}
; heap: dynamic, size=0x1000, ptr=vmctx+0, bound=vmctx+8
; run: %dynamic_heap_i64_load_store(0, 1) == 1
; run: %dynamic_heap_i64_load_store(0, -1) == -1
; run: %dynamic_heap_i64_load_store(16, 1) == 1
; run: %dynamic_heap_i64_load_store(16, -1) == -1


function %dynamic_heap_i32_load_store(i64 vmctx, i32, i32) -> i32 {
gv0 = vmctx
gv1 = load.i64 notrap aligned gv0+0
gv2 = load.i64 notrap aligned gv0+8
heap0 = dynamic gv1, bound gv2, offset_guard 0, index_type i32

block0(v0: i64, v1: i32, v2: i32):
v3 = heap_addr.i64 heap0, v1, 4
store.i32 v2, v3
v4 = load.i32 v3
return v4
}
; heap: dynamic, size=0x1000, ptr=vmctx+0, bound=vmctx+8
; run: %dynamic_heap_i32_load_store(0, 1) == 1
; run: %dynamic_heap_i32_load_store(0, -1) == -1
; run: %dynamic_heap_i32_load_store(16, 1) == 1
; run: %dynamic_heap_i32_load_store(16, -1) == -1


function %multi_heap_load_store(i64 vmctx, i32, i32) -> i32 {
gv0 = vmctx
gv1 = load.i64 notrap aligned gv0+0
gv2 = load.i64 notrap aligned gv0+16
gv3 = load.i64 notrap aligned gv0+24
heap0 = static gv1, min 0x1000, bound 0x1_0000_0000, offset_guard 0, index_type i64
heap1 = dynamic gv2, bound gv3, offset_guard 0, index_type i32

block0(v0: i64, v1: i32, v2: i32):
v3 = iconst.i64 0
v4 = iconst.i32 0

; Store lhs in heap0
v5 = heap_addr.i64 heap0, v3, 4
store.i32 v1, v5

; Store rhs in heap1
v6 = heap_addr.i64 heap1, v4, 4
store.i32 v2, v6


v7 = load.i32 v5
v8 = load.i32 v6

v9 = iadd.i32 v7, v8
return v9
}
; heap: static, size=0x1000, ptr=vmctx+0, bound=vmctx+8
; heap: dynamic, size=0x1000, ptr=vmctx+16, bound=vmctx+24
; run: %multi_heap_load_store(1, 2) == 3
; run: %multi_heap_load_store(4, 5) == 9



function %static_heap_i64_load_store_unaligned(i64 vmctx, i64, i32) -> i32 {
gv0 = vmctx
gv1 = load.i64 notrap aligned gv0+0
heap0 = static gv1, min 0x1000, bound 0x1_0000_0000, offset_guard 0, index_type i64

block0(v0: i64, v1: i64, v2: i32):
v3 = heap_addr.i64 heap0, v1, 4
store.i32 v2, v3
v4 = load.i32 v3
return v4
}
; heap: static, size=0x1000, ptr=vmctx+0, bound=vmctx+8
; run: %static_heap_i64_load_store_unaligned(0, 1) == 1
; run: %static_heap_i64_load_store_unaligned(0, -1) == -1
; run: %static_heap_i64_load_store_unaligned(1, 1) == 1
; run: %static_heap_i64_load_store_unaligned(1, -1) == -1
; run: %static_heap_i64_load_store_unaligned(2, 1) == 1
; run: %static_heap_i64_load_store_unaligned(2, -1) == -1
; run: %static_heap_i64_load_store_unaligned(3, 1) == 1
; run: %static_heap_i64_load_store_unaligned(3, -1) == -1


; This stores data in the place of the pointer in the vmctx struct, not in the heap itself.
function %static_heap_i64_iadd_imm(i64 vmctx, i32) -> i32 {
gv0 = vmctx
gv1 = iadd_imm.i64 gv0, 0
heap0 = static gv1, min 0x1000, bound 0x1_0000_0000, offset_guard 0x8000_0000, index_type i64

block0(v0: i64, v1: i32):
v2 = iconst.i64 0
v3 = heap_addr.i64 heap0, v2, 4
store.i32 v1, v3
v4 = load.i32 v3
return v4
}
; heap: static, size=0x1000, ptr=vmctx+0, bound=vmctx+8
; run: %static_heap_i64_iadd_imm(1) == 1
; run: %static_heap_i64_iadd_imm(-1) == -1
1 change: 1 addition & 0 deletions cranelift/filetests/src/lib.rs
@@ -34,6 +34,7 @@ pub mod function_runner;
mod match_directive;
mod runner;
mod runone;
mod runtest_environment;
mod subtest;

mod test_binemit;
111 changes: 111 additions & 0 deletions cranelift/filetests/src/runtest_environment.rs
@@ -0,0 +1,111 @@
use anyhow::anyhow;
use cranelift_codegen::data_value::DataValue;
use cranelift_codegen::ir::Type;
use cranelift_reader::parse_heap_command;
use cranelift_reader::{Comment, HeapCommand};

/// Stores info about the expected environment for a test function.
#[derive(Debug, Clone)]
pub struct RuntestEnvironment {
pub heaps: Vec<HeapCommand>,
}

impl RuntestEnvironment {
/// Parse the environment from a set of comments
pub fn parse(comments: &[Comment]) -> anyhow::Result<Self> {
let mut env = RuntestEnvironment { heaps: Vec::new() };

for comment in comments.iter() {
if let Some(heap_command) = parse_heap_command(comment.text)? {
let heap_index = env.heaps.len() as u64;
let expected_ptr = heap_index * 16;
if Some(expected_ptr) != heap_command.ptr_offset.map(|p| p.into()) {
return Err(anyhow!(
"Invalid ptr offset, expected vmctx+{}",
expected_ptr
));
}

let expected_bound = (heap_index * 16) + 8;
if Some(expected_bound) != heap_command.bound_offset.map(|p| p.into()) {
return Err(anyhow!(
"Invalid bound offset, expected vmctx+{}",
expected_bound
));
}

env.heaps.push(heap_command);
};
}

Ok(env)
}

pub fn is_active(&self) -> bool {
!self.heaps.is_empty()
}

/// Allocates a struct to be injected into the test.
pub fn runtime_struct(&self) -> RuntestContext {
RuntestContext::new(self)
}
}

type HeapMemory = Vec<u8>;

/// A struct that provides info about the environment to the test
#[derive(Debug, Clone)]
pub struct RuntestContext {
/// Store the heap memory alongside the context info so that we don't accidentally deallocate
/// it too early.
heaps: Vec<HeapMemory>,

/// This is the actual struct that gets passed into the `vmctx` argument of the tests.
/// It has a specific memory layout that all tests agree with.
///
/// Currently we only have to store heap info, so we store the heap start and end addresses in
/// a 64 bit slot for each heap.
///
/// ┌────────────┐
/// │heap0: start│
/// ├────────────┤
/// │heap0: end │
/// ├────────────┤
/// │heap1: start│
/// ├────────────┤
/// │heap1: end │
/// ├────────────┤
/// │etc... │
/// └────────────┘
context_struct: Vec<u64>,
}

impl RuntestContext {
pub fn new(env: &RuntestEnvironment) -> Self {
let heaps: Vec<HeapMemory> = env
.heaps
.iter()
.map(|cmd| {
let size: u64 = cmd.size.into();
vec![0u8; size as usize]
})
.collect();

let context_struct = heaps
.iter()
.flat_map(|heap| [heap.as_ptr(), heap.as_ptr().wrapping_add(heap.len())])
.map(|p| p as usize as u64)
.collect();

Self {
heaps,
context_struct,
}
}

/// Creates a [DataValue] with a target isa pointer type to the context struct.
pub fn pointer(&self, ty: Type) -> DataValue {
let ptr = self.context_struct.as_ptr() as usize as i128;
DataValue::from_integer(ptr, ty).expect("Failed to cast pointer to native target size")
}
}
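The layout that `RuntestContext::new` builds can be sketched in isolation: allocate a backing buffer per heap, then record each heap's start and one-past-the-end address as two consecutive `u64` slots. The snippet below is a self-contained sketch using only plain `Vec`s, not the actual `cranelift-filetests` types:

```rust
// Standalone sketch of the vmctx struct layout built by RuntestContext::new:
// each heap contributes [start_addr, end_addr] as two consecutive u64 slots.
fn build_context_struct(heaps: &[Vec<u8>]) -> Vec<u64> {
    heaps
        .iter()
        .flat_map(|heap| {
            let start = heap.as_ptr() as usize as u64;
            // wrapping_add keeps the one-past-the-end pointer computation
            // well-defined without dereferencing it.
            let end = heap.as_ptr().wrapping_add(heap.len()) as usize as u64;
            [start, end]
        })
        .collect()
}

fn main() {
    let heaps = vec![vec![0u8; 0x1000], vec![0u8; 0x2000]];
    let ctx = build_context_struct(&heaps);
    // Two heaps -> four 8-byte slots.
    assert_eq!(ctx.len(), 4);
    // Each bound slot lies exactly `size` bytes past its start slot.
    assert_eq!(ctx[1] - ctx[0], 0x1000);
    assert_eq!(ctx[3] - ctx[2], 0x2000);
}
```

This mirrors the `flat_map` over `[heap.as_ptr(), heap.as_ptr().wrapping_add(heap.len())]` in the real implementation; keeping the heap buffers alive alongside the slot vector is what prevents the recorded addresses from dangling.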