
Miri: convert to/from apfloat instead of host floats #61673

Merged — 7 commits, Jun 11, 2019

Conversation

RalfJung
Member

@RalfJung commented Jun 8, 2019

@rust-highfive
Collaborator

r? @varkor

(rust_highfive has picked a reviewer for you, use r? to override)

@rust-highfive added the S-waiting-on-review label (Status: Awaiting review from the assignee but also interested parties.) on Jun 8, 2019
@RalfJung changed the title from "Miri: don't use host floats" to "Miri: don't convert to/from host floats" on Jun 8, 2019
@RalfJung changed the title from "Miri: don't convert to/from host floats" to "Miri: convert to/from apfloat instead of host floats" on Jun 8, 2019
src/librustc/mir/interpret/value.rs — outdated review thread (resolved)
src/librustc/mir/interpret/value.rs — outdated review thread (resolved)
@RalfJung
Member Author

RalfJung commented Jun 9, 2019

Interesting that this passed... seems like we are missing a case from our test suite, namely casting a multivariant integer enum to an integer.
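
The untested path can be reduced to something along these lines (a hypothetical minimal reproduction, not the actual test later added in #61702):

```rust
// Hypothetical reduction of the untested code path: casting a
// multi-variant enum with explicit integer discriminants to an integer.
#[derive(Copy, Clone)]
enum Foo {
    A = 1,
    B = 42,
}

fn main() {
    let f = Foo::B;
    // Cast through an intermediate let-binding...
    let x = f as isize;
    // ...and directly.
    assert_eq!(x, 42);
    assert_eq!(Foo::A as isize, 1);
    println!("ok");
}
```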

@RalfJung
Member Author

RalfJung commented Jun 9, 2019

I opened #61702 for the missing test; this PR here is good to go I think.

```rust
Div => (l / r).value.into(),
Rem => (l % r).value.into(),
_ => bug!("invalid float op: `{:?}`", bin_op),
};
```
Member

Much nicer!

Member Author

Yeah, I love this. :) If only we had a similar trait for integers.

Member

All integer operations can be implemented with a runtime bitwidth `n` and a `u128` to hold the value (maybe `i128` for signed), though.
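
That suggestion can be sketched as follows (hypothetical helper names, not rustc's actual CTFE code): hold every value in a `u128`, and truncate or sign-extend to the runtime bit width `n` after each operation.

```rust
// Sketch: untyped integer arithmetic at a runtime bit width `n`, stored in
// a u128 (sign-extended into i128 when a signed view is needed).
// All names here are hypothetical illustrations.

/// Truncate `val` to its low `n` bits (1 <= n <= 128).
fn truncate(val: u128, n: u32) -> u128 {
    if n == 128 { val } else { val & ((1u128 << n) - 1) }
}

/// Interpret the low `n` bits of `val` as a signed integer.
fn sign_extend(val: u128, n: u32) -> i128 {
    let shift = 128 - n;
    ((val << shift) as i128) >> shift
}

/// Wrapping addition at bit width `n`.
fn add_wrapping(a: u128, b: u128, n: u32) -> u128 {
    truncate(a.wrapping_add(b), n)
}

fn main() {
    assert_eq!(add_wrapping(255, 1, 8), 0); // u8 wrap-around
    assert_eq!(sign_extend(0xFF, 8), -1);   // 0xFF viewed as i8 is -1
    println!("ok");
}
```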

Member

Like, LLVM also has an APInt, not just APFloat, and APFloat uses APInt for the significand, but I didn't port APInt as its own thing, just added a bunch of functions, because of how relatively simple it is:

```rust
/// Implementation details of IeeeFloat significands, such as big integer arithmetic.
/// As a rule of thumb, no functions in this module should dynamically allocate.
mod sig {
    use std::cmp::Ordering;
    use std::mem;

    use super::{ExpInt, Limb, LIMB_BITS, limbs_for_bits, Loss};

    pub(super) fn is_all_zeros(limbs: &[Limb]) -> bool {
        limbs.iter().all(|&l| l == 0)
    }

    /// One, not zero, based LSB. That is, returns 0 for a zeroed significand.
    pub(super) fn olsb(limbs: &[Limb]) -> usize {
        limbs.iter().enumerate().find(|(_, &limb)| limb != 0).map_or(0,
            |(i, limb)| i * LIMB_BITS + limb.trailing_zeros() as usize + 1)
    }

    /// One, not zero, based MSB. That is, returns 0 for a zeroed significand.
    pub(super) fn omsb(limbs: &[Limb]) -> usize {
        limbs.iter().enumerate().rfind(|(_, &limb)| limb != 0).map_or(0,
            |(i, limb)| (i + 1) * LIMB_BITS - limb.leading_zeros() as usize)
    }

    /// Comparison (unsigned) of two significands.
    pub(super) fn cmp(a: &[Limb], b: &[Limb]) -> Ordering {
        assert_eq!(a.len(), b.len());
        for (a, b) in a.iter().zip(b).rev() {
            match a.cmp(b) {
                Ordering::Equal => {}
                o => return o,
            }
        }
        Ordering::Equal
    }

    /// Extracts the given bit.
    pub(super) fn get_bit(limbs: &[Limb], bit: usize) -> bool {
        limbs[bit / LIMB_BITS] & (1 << (bit % LIMB_BITS)) != 0
    }

    /// Sets the given bit.
    pub(super) fn set_bit(limbs: &mut [Limb], bit: usize) {
        limbs[bit / LIMB_BITS] |= 1 << (bit % LIMB_BITS);
    }

    /// Clears the given bit.
    pub(super) fn clear_bit(limbs: &mut [Limb], bit: usize) {
        limbs[bit / LIMB_BITS] &= !(1 << (bit % LIMB_BITS));
    }

    /// Shifts `dst` left `bits` bits, subtracting `bits` from its exponent.
    pub(super) fn shift_left(dst: &mut [Limb], exp: &mut ExpInt, bits: usize) {
        if bits > 0 {
            // Our exponent should not underflow.
            *exp = exp.checked_sub(bits as ExpInt).unwrap();

            // Jump is the inter-limb jump; shift is the intra-limb shift.
            let jump = bits / LIMB_BITS;
            let shift = bits % LIMB_BITS;

            for i in (0..dst.len()).rev() {
                let mut limb;

                if i < jump {
                    limb = 0;
                } else {
                    // dst[i] comes from the two limbs src[i - jump] and, if we have
                    // an intra-limb shift, src[i - jump - 1].
                    limb = dst[i - jump];
                    if shift > 0 {
                        limb <<= shift;
                        if i > jump {
                            limb |= dst[i - jump - 1] >> (LIMB_BITS - shift);
                        }
                    }
                }

                dst[i] = limb;
            }
        }
    }

    /// Shifts `dst` right `bits` bits, noting the lost fraction.
    pub(super) fn shift_right(dst: &mut [Limb], exp: &mut ExpInt, bits: usize) -> Loss {
        let loss = Loss::through_truncation(dst, bits);

        if bits > 0 {
            // Our exponent should not overflow.
            *exp = exp.checked_add(bits as ExpInt).unwrap();

            // Jump is the inter-limb jump; shift is the intra-limb shift.
            let jump = bits / LIMB_BITS;
            let shift = bits % LIMB_BITS;

            // Perform the shift. This leaves the most significant `bits` bits
            // of the result at zero.
            for i in 0..dst.len() {
                let mut limb;

                if i + jump >= dst.len() {
                    limb = 0;
                } else {
                    limb = dst[i + jump];
                    if shift > 0 {
                        limb >>= shift;
                        if i + jump + 1 < dst.len() {
                            limb |= dst[i + jump + 1] << (LIMB_BITS - shift);
                        }
                    }
                }

                dst[i] = limb;
            }
        }

        loss
    }

    /// Copies the bit vector of width `src_bits` from `src`, starting at bit `src_lsb`,
    /// to `dst`, such that the bit `src_lsb` becomes the least significant bit of `dst`.
    /// All high bits above `src_bits` in `dst` are zero-filled.
    pub(super) fn extract(dst: &mut [Limb], src: &[Limb], src_bits: usize, src_lsb: usize) {
        if src_bits == 0 {
            return;
        }

        let dst_limbs = limbs_for_bits(src_bits);
        assert!(dst_limbs <= dst.len());

        let src = &src[src_lsb / LIMB_BITS..];
        dst[..dst_limbs].copy_from_slice(&src[..dst_limbs]);

        let shift = src_lsb % LIMB_BITS;
        let _: Loss = shift_right(&mut dst[..dst_limbs], &mut 0, shift);

        // We now have (dst_limbs * LIMB_BITS - shift) bits from `src`
        // in `dst`. If this is less than src_bits, append the rest, else
        // clear the high bits.
        let n = dst_limbs * LIMB_BITS - shift;
        if n < src_bits {
            let mask = (1 << (src_bits - n)) - 1;
            dst[dst_limbs - 1] |= (src[dst_limbs] & mask) << (n % LIMB_BITS);
        } else if n > src_bits && src_bits % LIMB_BITS > 0 {
            dst[dst_limbs - 1] &= (1 << (src_bits % LIMB_BITS)) - 1;
        }

        // Clear high limbs.
        for x in &mut dst[dst_limbs..] {
            *x = 0;
        }
    }

    /// We want the most significant PRECISION bits of `src`. There may not
    /// be that many; extract what we can.
    pub(super) fn from_limbs(dst: &mut [Limb], src: &[Limb], precision: usize) -> (Loss, ExpInt) {
        let omsb = omsb(src);

        if precision <= omsb {
            extract(dst, src, precision, omsb - precision);
            (
                Loss::through_truncation(src, omsb - precision),
                omsb as ExpInt - 1,
            )
        } else {
            extract(dst, src, omsb, 0);
            (Loss::ExactlyZero, precision as ExpInt - 1)
        }
    }

    /// For every consecutive chunk of `bits` bits from `limbs`,
    /// going from the most significant to the least significant bits,
    /// call `f` to transform those bits and store the result back.
    pub(super) fn each_chunk<F: FnMut(Limb) -> Limb>(limbs: &mut [Limb], bits: usize, mut f: F) {
        assert_eq!(LIMB_BITS % bits, 0);
        for limb in limbs.iter_mut().rev() {
            let mut r = 0;
            for i in (0..LIMB_BITS / bits).rev() {
                r |= f((*limb >> (i * bits)) & ((1 << bits) - 1)) << (i * bits);
            }
            *limb = r;
        }
    }

    /// Increment in-place, return the carry flag.
    pub(super) fn increment(dst: &mut [Limb]) -> Limb {
        for x in dst {
            *x = x.wrapping_add(1);
            if *x != 0 {
                return 0;
            }
        }
        1
    }

    /// Decrement in-place, return the borrow flag.
    pub(super) fn decrement(dst: &mut [Limb]) -> Limb {
        for x in dst {
            *x = x.wrapping_sub(1);
            if *x != !0 {
                return 0;
            }
        }
        1
    }

    /// `a += b + c` where `c` is zero or one. Returns the carry flag.
    pub(super) fn add(a: &mut [Limb], b: &[Limb], mut c: Limb) -> Limb {
        assert!(c <= 1);

        for (a, &b) in a.iter_mut().zip(b) {
            let (r, overflow) = a.overflowing_add(b);
            let (r, overflow2) = r.overflowing_add(c);
            *a = r;
            c = (overflow | overflow2) as Limb;
        }

        c
    }

    /// `a -= b + c` where `c` is zero or one. Returns the borrow flag.
    pub(super) fn sub(a: &mut [Limb], b: &[Limb], mut c: Limb) -> Limb {
        assert!(c <= 1);

        for (a, &b) in a.iter_mut().zip(b) {
            let (r, overflow) = a.overflowing_sub(b);
            let (r, overflow2) = r.overflowing_sub(c);
            *a = r;
            c = (overflow | overflow2) as Limb;
        }

        c
    }

    /// `a += b` or `a -= b`. Does not preserve `b`.
    pub(super) fn add_or_sub(
        a_sig: &mut [Limb],
        a_exp: &mut ExpInt,
        a_sign: &mut bool,
        b_sig: &mut [Limb],
        b_exp: ExpInt,
        b_sign: bool,
    ) -> Loss {
        // Are we bigger exponent-wise than the RHS?
        let bits = *a_exp - b_exp;

        // Determine if the operation on the absolute values is effectively
        // an addition or subtraction.
        // Subtraction is more subtle than one might naively expect.
        if *a_sign ^ b_sign {
            let (reverse, loss);

            if bits == 0 {
                reverse = cmp(a_sig, b_sig) == Ordering::Less;
                loss = Loss::ExactlyZero;
            } else if bits > 0 {
                loss = shift_right(b_sig, &mut 0, (bits - 1) as usize);
                shift_left(a_sig, a_exp, 1);
                reverse = false;
            } else {
                loss = shift_right(a_sig, a_exp, (-bits - 1) as usize);
                shift_left(b_sig, &mut 0, 1);
                reverse = true;
            }

            let borrow = (loss != Loss::ExactlyZero) as Limb;
            if reverse {
                // The code above is intended to ensure that no borrow is necessary.
                assert_eq!(sub(b_sig, a_sig, borrow), 0);
                a_sig.copy_from_slice(b_sig);
                *a_sign = !*a_sign;
            } else {
                // The code above is intended to ensure that no borrow is necessary.
                assert_eq!(sub(a_sig, b_sig, borrow), 0);
            }

            // Invert the lost fraction - it was on the RHS and subtracted.
            match loss {
                Loss::LessThanHalf => Loss::MoreThanHalf,
                Loss::MoreThanHalf => Loss::LessThanHalf,
                _ => loss,
            }
        } else {
            let loss = if bits > 0 {
                shift_right(b_sig, &mut 0, bits as usize)
            } else {
                shift_right(a_sig, a_exp, -bits as usize)
            };
            // We have a guard bit; generating a carry cannot happen.
            assert_eq!(add(a_sig, b_sig, 0), 0);
            loss
        }
    }

    /// `[low, high] = a * b`.
    ///
    /// This cannot overflow, because
    ///
    /// `(n - 1) * (n - 1) + 2 * (n - 1) == (n - 1) * (n + 1)`
    ///
    /// which is less than n<sup>2</sup>.
    pub(super) fn widening_mul(a: Limb, b: Limb) -> [Limb; 2] {
        let mut wide = [0, 0];

        if a == 0 || b == 0 {
            return wide;
        }

        const HALF_BITS: usize = LIMB_BITS / 2;

        let select = |limb, i| (limb >> (i * HALF_BITS)) & ((1 << HALF_BITS) - 1);
        for i in 0..2 {
            for j in 0..2 {
                let mut x = [select(a, i) * select(b, j), 0];
                shift_left(&mut x, &mut 0, (i + j) * HALF_BITS);
                assert_eq!(add(&mut wide, &x, 0), 0);
            }
        }

        wide
    }

    /// `dst = a * b` (for normal `a` and `b`). Returns the lost fraction.
    pub(super) fn mul<'a>(
        dst: &mut [Limb],
        exp: &mut ExpInt,
        mut a: &'a [Limb],
        mut b: &'a [Limb],
        precision: usize,
    ) -> Loss {
        // Put the narrower number in `a` for fewer loops below.
        if a.len() > b.len() {
            mem::swap(&mut a, &mut b);
        }

        for x in &mut dst[..b.len()] {
            *x = 0;
        }

        for i in 0..a.len() {
            let mut carry = 0;
            for j in 0..b.len() {
                let [low, mut high] = widening_mul(a[i], b[j]);

                // Now add carry.
                let (low, overflow) = low.overflowing_add(carry);
                high += overflow as Limb;

                // And now `dst[i + j]`, and store the new low part there.
                let (low, overflow) = low.overflowing_add(dst[i + j]);
                high += overflow as Limb;

                dst[i + j] = low;
                carry = high;
            }
            dst[i + b.len()] = carry;
        }

        // Assume the operands involved in the multiplication are single-precision
        // FP, and the two multiplicands are:
        //     a = a23 . a22 ... a0 * 2^e1
        //     b = b23 . b22 ... b0 * 2^e2
        // the result of multiplication is:
        //     dst = c48 c47 c46 . c45 ... c0 * 2^(e1+e2)
        // Note that there are three significant bits at the left-hand side of the
        // radix point: two for the multiplication, and an overflow bit for the
        // addition (that will always be zero at this point). Move the radix point
        // toward the left by two bits, and adjust the exponent accordingly.
        *exp += 2;

        // Convert the result having "2 * precision" significant bits back to one
        // having "precision" significant bits. First, move the radix point from
        // position "2*precision - 1" to "precision - 1". The exponent needs to be
        // adjusted by "2*precision - 1" - "precision - 1" = "precision".
        *exp -= precision as ExpInt + 1;

        // In case the MSB resides to the left of the radix point, shift the
        // mantissa right by some amount to make sure the MSB resides right before
        // the radix point (i.e., "MSB . rest-significant-bits").
        //
        // Note that the result is not normalized when "omsb < precision". So, the
        // caller needs to call IeeeFloat::normalize() if a normalized value is
        // expected.
        let omsb = omsb(dst);
        if omsb <= precision {
            Loss::ExactlyZero
        } else {
            shift_right(dst, exp, omsb - precision)
        }
    }

    /// `quotient = dividend / divisor`. Returns the lost fraction.
    /// Does not preserve `dividend` or `divisor`.
    pub(super) fn div(
        quotient: &mut [Limb],
        exp: &mut ExpInt,
        dividend: &mut [Limb],
        divisor: &mut [Limb],
        precision: usize,
    ) -> Loss {
        // Normalize the divisor.
        let bits = precision - omsb(divisor);
        shift_left(divisor, &mut 0, bits);
        *exp += bits as ExpInt;

        // Normalize the dividend.
        let bits = precision - omsb(dividend);
        shift_left(dividend, exp, bits);

        // Division by 1.
        let olsb_divisor = olsb(divisor);
        if olsb_divisor == precision {
            quotient.copy_from_slice(dividend);
            return Loss::ExactlyZero;
        }

        // Ensure the dividend >= divisor initially for the loop below.
        // Incidentally, this means that the division loop below is
        // guaranteed to set the integer bit to one.
        if cmp(dividend, divisor) == Ordering::Less {
            shift_left(dividend, exp, 1);
            assert_ne!(cmp(dividend, divisor), Ordering::Less)
        }

        // Helper for figuring out the lost fraction.
        let lost_fraction = |dividend: &[Limb], divisor: &[Limb]| {
            match cmp(dividend, divisor) {
                Ordering::Greater => Loss::MoreThanHalf,
                Ordering::Equal => Loss::ExactlyHalf,
                Ordering::Less => {
                    if is_all_zeros(dividend) {
                        Loss::ExactlyZero
                    } else {
                        Loss::LessThanHalf
                    }
                }
            }
        };

        // Try to perform a (much faster) short division for small divisors.
        let divisor_bits = precision - (olsb_divisor - 1);
        macro_rules! try_short_div {
            ($W:ty, $H:ty, $half:expr) => {
                if divisor_bits * 2 <= $half {
                    // Extract the small divisor.
                    let _: Loss = shift_right(divisor, &mut 0, olsb_divisor - 1);
                    let divisor = divisor[0] as $H as $W;

                    // Shift the dividend to produce a quotient with the unit bit set.
                    let top_limb = *dividend.last().unwrap();
                    let mut rem = (top_limb >> (LIMB_BITS - (divisor_bits - 1))) as $H;
                    shift_left(dividend, &mut 0, divisor_bits - 1);

                    // Apply short division in place on $H (of $half bits) chunks.
                    each_chunk(dividend, $half, |chunk| {
                        let chunk = chunk as $H;
                        let combined = ((rem as $W) << $half) | (chunk as $W);
                        rem = (combined % divisor) as $H;
                        (combined / divisor) as $H as Limb
                    });
                    quotient.copy_from_slice(dividend);

                    return lost_fraction(&[(rem as Limb) << 1], &[divisor as Limb]);
                }
            }
        }
        try_short_div!(u32, u16, 16);
        try_short_div!(u64, u32, 32);
        try_short_div!(u128, u64, 64);

        // Zero the quotient before setting bits in it.
        for x in &mut quotient[..limbs_for_bits(precision)] {
            *x = 0;
        }

        // Long division.
        for bit in (0..precision).rev() {
            if cmp(dividend, divisor) != Ordering::Less {
                sub(dividend, divisor, 0);
                set_bit(quotient, bit);
            }
            shift_left(dividend, &mut 0, 1);
        }

        lost_fraction(dividend, divisor)
    }
}
```
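
As a quick sanity check of the half-limb scheme `widening_mul` uses, here is a standalone version over `u64` limbs (32-bit halves), compared against native `u128` multiplication. This is an illustrative sketch, not the crate's code:

```rust
// Standalone illustration of the schoolbook half-limb widening multiply:
// split each u64 into 32-bit halves, accumulate the four partial products,
// and return (low, high) such that low + (high << 64) == a * b.
fn widening_mul_u64(a: u64, b: u64) -> (u64, u64) {
    const H: u32 = 32;
    let mask = (1u64 << H) - 1;
    let (a0, a1) = (a & mask, a >> H);
    let (b0, b1) = (b & mask, b >> H);

    let mut low = a0 * b0;
    let mut high = a1 * b1;

    // Add the two middle partial products, each shifted up by 32 bits.
    for mid in [a0 * b1, a1 * b0] {
        let (l, carry) = low.overflowing_add(mid << H);
        low = l;
        high += (mid >> H) + carry as u64;
    }
    (low, high)
}

fn main() {
    for &(a, b) in &[(3u64, 5u64), (u64::MAX, u64::MAX), (1 << 32, (1 << 32) + 1)] {
        let (low, high) = widening_mul_u64(a, b);
        let wide = (a as u128) * (b as u128);
        assert_eq!(low, wide as u64);
        assert_eq!(high, (wide >> 64) as u64);
    }
    println!("ok");
}
```

The "cannot overflow" comment on `widening_mul` above is the same argument that makes the `high +=` additions here safe: each half-product fits in a limb, and the accumulated carries stay below the limb maximum.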

Member Author

@RalfJung commented Jun 10, 2019

Hm, I feel like at least for the signed/unsigned distinction this will become ugly when done "untyped".

That "simple" thing you pointed to is still way more complicated than what we currently do for integer ops in CTFE.

Member

@RalfJung Yes, because it handles arbitrary-size integers, while you have only one "limb".
What you do is more or less what I mean.

@eddyb
Member

eddyb commented Jun 10, 2019

@oli-obk r=me unless you have miri-specific comments

@oli-obk
Contributor

oli-obk commented Jun 11, 2019

@bors r=eddyb,oli-obk

@bors
Contributor

bors commented Jun 11, 2019

📌 Commit 8dfc8db has been approved by eddyb,oli-obk

@bors added the S-waiting-on-bors label (Status: Waiting on bors to run and complete tests. Bors will change the label on completion.) and removed the S-waiting-on-review label (Status: Awaiting review from the assignee but also interested parties.) on Jun 11, 2019
@RalfJung
Member Author

Let's make Miri work again.

@bors p=1

@bors
Contributor

bors commented Jun 11, 2019

⌛ Testing commit 8dfc8db with merge 912d22e...

bors added a commit that referenced this pull request Jun 11, 2019
Miri: convert to/from apfloat instead of host floats

Cc @oli-obk @eddyb
@bors
Contributor

bors commented Jun 11, 2019

☀️ Test successful - checks-travis, status-appveyor
Approved by: eddyb,oli-obk
Pushing 912d22e to master...

@bors added the merged-by-bors label (This PR was explicitly merged by bors.) on Jun 11, 2019
@bors merged commit 8dfc8db into rust-lang:master on Jun 11, 2019
@rust-highfive
Collaborator

📣 Toolstate changed by #61673!

Tested on commit 912d22e.
Direct link to PR: #61673

🎉 rls on linux: test-fail → test-pass (cc @Xanewok, @rust-lang/infra).

rust-highfive added a commit to rust-lang-nursery/rust-toolstate that referenced this pull request Jun 11, 2019
Tested on commit rust-lang/rust@912d22e.
Direct link to PR: <rust-lang/rust#61673>

🎉 rls on linux: test-fail → test-pass (cc @Xanewok, @rust-lang/infra).
Centril added a commit to Centril/rust that referenced this pull request Jun 17, 2019
test more variants of enum-int-casting

As I learned in rust-lang#61673 (comment), there is a code path we are not testing yet. It looks like enum-int-casting behaves completely differently with and without an intermediate let-binding.

EDIT: The reason for this is to get rid of the cycle in definitions such as:
```rust
enum Foo {
    A = 0,
    B = Foo::A as isize + 2,
}
```
This has historically been supported, so a hack adding special treatment to `Enum::Variant as _` was added to keep supporting it.
@RalfJung RalfJung deleted the miri-no-hard-float branch June 21, 2019 07:07
Labels
merged-by-bors This PR was explicitly merged by bors. S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion.
7 participants