Fixed-point number support? #1974
LLVM has https://llvm.org/docs/LangRef.html#fixed-point-arithmetic-intrinsics — but that's it. Interesting. I'll have to learn why these intrinsics are provided. It may be that some architectures provide fixed-point multiplication operations. The null hypothesis here is that you would have to implement fixed-point in userland, and yes, operations would be function calls rather than overloaded operators.
Several architectures provide fixed-point multiplication operations. Atmel's AVR UC3 provides fixed-point DSP arithmetic, but has no support for floating-point representations. This means that e.g. FFTs are more performant than on comparable microcontrollers with an FPU. This is not unusual for a microcontroller specialized for DSP applications.
Most processors that I use at work don't even have a hardware multiply. Generally we work in ADC counts, but sometimes we will work in fixed-point encoded in 8- or 16-bit integers. It seems to me that, even if LLVM IR has only minimal support for fixed-point numbers, it wouldn't be terribly difficult to map fixed-point operations to sequences of bit-shifts and integer operations.
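The mapping described above can be sketched in C (illustrative only; the `q8_8` type and function names are mine, not from the thread): in a Q8.8 format, multiply and divide each reduce to one widening integer operation plus a single shift.

```c
#include <assert.h>
#include <stdint.h>

/* Q8.8 fixed-point: a 16-bit integer with an implied scale of 2^8.
   This is the kind of lowering a compiler could emit for fixed-point IR. */
typedef int16_t q8_8;

static q8_8 q_from_double(double x) { return (q8_8)(x * 256.0); }

static q8_8 q_mul(q8_8 a, q8_8 b) {
    /* widen, multiply, then shift right to undo the doubled scale */
    return (q8_8)(((int32_t)a * (int32_t)b) >> 8);
}

static q8_8 q_div(q8_8 a, q8_8 b) {
    /* widen and pre-shift the dividend so the quotient keeps the scale */
    return (q8_8)(((int32_t)a << 8) / b);
}
```

For example, 1.5 encodes as 384 and 2.5 as 640; `q_mul(384, 640)` yields 960, i.e. 3.75 at scale 256.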
# Fixedpoint Numbers

A smooth upgrade for integers which approximates real numbers and maximizes zen.

## Why do we want this?

They are useful for

## What is this and how do I use it?

Fixedpoint numbers are an alternative approximation of real numbers, compared to the floats you're probably familiar with. They're integer hardware under the hood, but with a specific number of bits allocated to represent the integer part, and a specific number to represent the fractional part. In concept, they look like this
In code, they look like this:

```zig
// -1.5, in signed fixedpoint, with 4 bits allocated for the integer,
// and 4 for the fractional
const negative_one_point_five: i4_4 = -1.5;

// 0.25, in unsigned fixedpoint, with zero bits allocated for the integer,
// and 2 for the fractional
const one_quarter: u0_2 = 0.25;

// Vector of 8 unsigned fixedpoints, with 20 bits allocated for the integer part,
// and 12 for the fractional part, with each lane initialized to the closest
// possible value to 1337.1337
const lots_of_numbers: @Vector(8, u20_12) = @splat(@truncateLow(u20_12, 1337.1337));
```
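For intuition, the storage this implies can be sketched in C (the helper names are mine, not part of the proposal): an `iM_N` value is just an integer holding `value * 2^N`.

```c
#include <assert.h>
#include <stdint.h>

/* i4_4: 4 integer bits + 4 fractional bits => raw = value * 2^4  */
static int8_t to_i4_4(double x)   { return (int8_t)(x * 16.0); }
static double from_i4_4(int8_t r) { return r / 16.0; }

/* u0_2: 0 integer bits + 2 fractional bits => raw = value * 2^2  */
static uint8_t to_u0_2(double x)  { return (uint8_t)(x * 4.0); }
```

So `-1.5` in `i4_4` is the raw byte `-24`, and `0.25` in `u0_2` is the raw value `1`.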
Zig supports arbitrary bit-width fixedpoint. The maximum allowed total bit-width of a fixedpoint type (i.e., the sum of the integer and fractional bit-widths) is 65535. If an assigned comptime value does not fit perfectly into the format, it is a compile error.

```zig
const impossible: i0_2 = 0.125; // Compile error: literal cannot fit into fixedpoint format
```

If the format has a fractional bit-width of 0, it is called an integer. In Zig, fixedpoint numbers and integers are actually one and the same!

```zig
const one_point_two_five: i8_8 = 1.25;
const golly_gee: i8 = @fixCast(one_point_two_five); // 1.0
```

Some operations require 'integers' as input. If you try to use a non-integer fixedpoint in them, you'll get a compile error.

```zig
const a: i2_4 = 3.25;
const shift_amount: i5_3 = 2.5;
const oops = a << shift_amount; // Compile error: shift_amount must be an integer
```

Fixedpoint numbers have the following operations defined:
These operations are also implemented on

## Our Greasy Forearms

Fixedpoint numbers add an additional approximation of real numbers. The Zig compiler currently makes some assumptions that all real number representations must be floats. The following breaking changes will be prudent:

Furthermore, a number of operations that assume floats should now be made generic.

Fixedpoint numbers also extend the current integer codebase. LLVM's builtins for fixedpoint math should be used whenever possible; otherwise we implement it within Zig atop the existing LLVM integer code. Thankfully, basic operations are quite trivial to implement:

```zig
// I know this definition is, with mild imagination, recursive.
// Please mentally substitute llvm integer intrinsics.
const i8_8 = struct {
    bits: i16,

    const integer_bit_width: usize = 8;
    const fractional_bit_width: usize = 8;

    // Add is the same as for integers
    const add = fn (self: Self, other: Self) Self {
        return Self{ .bits = self.bits + other.bits };
    };

    // Sub is the same as for integers
    const sub = fn (self: Self, other: Self) Self {
        return Self{ .bits = self.bits - other.bits };
    };

    // Multiplication requires the factors be widened and
    // the product be right-shifted by the fractional bitcount,
    // to offset scaling
    const mul = fn (self: Self, other: Self) Self {
        return Self{
            .bits = fixCast(i16, (fixCast(i32, self.bits) * fixCast(i32, other.bits)) >> Self.fractional_bit_width),
        };
    };

    // Division requires the dividend be widened and
    // left-shifted by the fractional bitcount to offset scaling
    const div = fn (self: Self, other: Self) Self {
        return Self{
            .bits = fixCast(i16, (fixCast(i32, self.bits) << Self.fractional_bit_width) / fixCast(i32, other.bits)),
        };
    };
};
```

However, stdlib trigonometric functions and square root must, for the purpose of the user's sanity, return the mathematically correct answer for the number of bits. It is suggested that whatever methods are used (Taylor series, slippery polynomial approximations, Newton's method), the expansion and/or iterations be comptime tailored to the fixedpoint format. These implementations are not for those uninclined toward numerical analysis.

## Drawbacks
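The "comptime-tailored iterations" idea above can be sketched concretely. Here is an illustrative C version (my own, not proposal code) of a Q16.16 square root computed entirely in fixed point via Newton's method, `y <- (y + x/y) / 2`; a comptime implementation could pick the iteration count from the format's bit-width.

```c
#include <assert.h>
#include <stdint.h>

typedef int32_t q16_16;       /* 16 integer bits, 16 fractional bits */
#define Q_ONE (1 << 16)

static q16_16 q_div(q16_16 a, q16_16 b) {
    /* widen and pre-shift the dividend to preserve the scale */
    return (q16_16)(((int64_t)a << 16) / b);
}

static q16_16 q_sqrt(q16_16 x) {
    q16_16 y = x > Q_ONE ? x : Q_ONE; /* crude initial guess */
    for (int i = 0; i < 24; i++)      /* plenty for 32-bit convergence */
        y = (y + q_div(x, y)) / 2;
    return y;
}
```

For instance, `q_sqrt(2 * Q_ONE)` settles on 92681, one ULP below the exact sqrt(2) * 2^16 ≈ 92681.9, which is as good as truncating arithmetic can do.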
## Rationale and alternatives

### Alternative Syntax

The proposed syntax:

```zig
const foo: i12_20 = 32.4; // not unexpected, really.
```

Alternatively,

```zig
const foo: i12.20 = 32.4; // the platonic ideal!
```

If special-casing the

```zig
const foo: i12f20 = 32.4; // is this an integer12/float20 lovechild?
```

Variations of the Q syntax are also possible, such as

Finally, we could simply keep integer syntax at status quo and require any fractionals to use

```zig
const foo: std.meta.FixedPoint(12, 20) = 32.4; // where's the romance?
```

### Alternative Implementations

The trigonometric functions and sqrt are likely to fall into an awkward position where users expect them to be 100% accurate to the bit-width, but in practice 100% accuracy often takes second place to performance in most numerics applications. It may be practical to provide functions for users to build custom approximations suited to their use-case.

```zig
// Returns the value of sin(x), exact to the bit-width.
// If hardware support is unavailable, uses @sinPoly with an
// internally specified order, then iterates from an initial guess using
// Newton's method.
const @sinExact = fn(value: var) var { ... };

// Returns the approximate sin of value, via a polynomial of the
// specified order.
// See Prior Art #5 and #6 for example implementations
const @sinPoly = fn(comptime order: comptime_int, value: var) var { ... };
```

This pattern could be expanded to other expensive functions, like:

```zig
// Returns the quotient of dividend and divisor.
// Caller guarantees denominator != 0 and
// @divTrunc(numerator, denominator) * denominator == numerator.
// If hardware support is unavailable, computes via @divPoly with an
// internally specified order, then iterates from an initial guess using
// Newton's method.
// See Prior Art #7 for a hardware-accelerated example implementation
const @divExact = fn(dividend: var, divisor: var) var { ... };

// Returns the approximate quotient of dividend and divisor, via
// polynomials of the specified order.
// See Prior Art #7 for a hardware-accelerated example implementation
const @divPoly = fn(comptime order: comptime_int, dividend: var, divisor: var) var { ... };
```

While mirroring the reality of nontrivial fixedpoint numerics, and immensely practical for many situations, this increases API surface area and may feel a bit too 'batteries included' for Zig.

### Alternative Formats

Fixedpoint numbers are the most performance-efficient alternative approximation of real numbers beside hardware floats. They are not the only alternative, however.
If we don't implement fixedpoint into the language, userspace libraries will rise to the challenge, but be hindered by Zig's lack of operator overloading. As the elementary math operations for fixedpoint numbers use trivially more operations than hardware integer mathematics (an occasional shift), it is unlikely that any alternative formats would warrant language implementation, as they are too complex to be practical without hardware support. However, if hardware support is implemented for other alternative formats, such as Posits and Valids, implementing fixedpoint puts us well over the edge of the slippery slope to implementing those as well. A reasonable alternative that lets us avoid the need to futureproof the compiler and our technical debt against any practical future number format is to implement operator overloading.

### Alternative Social Constructs

Aside from syntax and overall grooviness, nothing we're doing here is really specific to Zig. Our software implementation could be upstreamed into LLVM for other languages to benefit from.

## Prior art
## Unresolved questions

None. Absolutely none. If you've read this far, how can you doubt?

## Future possibilities

Fixedpoint numbers' small size, cross-platform support, enforced determinism, and computational simplicity put them in an ideal niche that floats don't touch. The main barrier to wider adoption has been lack of ease of use: even when it makes more sense to use fixedpoint, having to find a good fixedpoint library, or write one's own, makes users "just use floats" unless they absolutely have to. Making these numerics first-class citizens in Zig with the power of comptime can open up a whole class of application development to less-specialized programmers. As Zig takes over the world, this will encourage hardware vendors to implement more fixedpoint functions in silicon, bringing us to a bright and happy future of
Also, basic integer types could be thought of as special cases of fixed-point numbers: i32 is equivalent to i32_0 in the proposed syntax.
Interesting point. Operations like

Edit: Updated the RFC accordingly.
@floopfloopfloopfloopfloop any suggestions on how this might interact with #3806?

I agree wholeheartedly with @floopfloopfloopfloopfloop's proposal, although I'd suggest one change to the syntax:

Actually, one more: I'd suggest having
@daurnimator I think that proposal will work as expected, but extended to four parameters rather than two? A builder struct seems like it'd become useful at that point. I don't have time to read the proposal deeply atm.

@EleanorNB Having used libraries with both

As for platform-sized ints, I disagree with names like

As for the concept of platform-sized fractional bitcounts, it can already either be solved with
Just going to state something obvious here, because it wasn't explicitly mentioned above: it would make it possible to write a function that accepts both floats and fixedpoints.

I'm currently writing a library that generates random noise. It generates deterministic u32 noise, but then has functionality for normalizing that to a number in 0..1 and doing operations on that. Currently I'm generating an FBM-based heightmap for a terrain. Some games will need this to be deterministic, and as we all know with floats, "it's complicated". I already support different float types with:

```zig
pub fn noise_fbm_2d(pos_x: anytype, pos_y: anytype, seed: u32, octave_count_c: u32, gain: @TypeOf(pos_x), feature_size: @TypeOf(pos_x)) @TypeOf(pos_x) {
```
@Srekel "I already support different float types" What does this mean? There are many representations. Do you support both binary or decimal floating-point types or even more? Here is another fixpoint library by @geemili (discord game-dev). You might want to read this comment as well: |
Given that I'm still learning Zig, there may be a better way to do what I'm doing. But as far as I understand, since I use anytype, it'll accept anything that doesn't cause a compile error; that is, anything that I can 1) do general math operations on (+-*/) and 2) cast to an int, currently with
Another use-case would be fonts and image formats which encode and compute using fixed-point representations.

Just ran into this recently while implementing my own OpenType/TrueType parser; I planned on implementing it in userspace unless this gets accepted/resolved.
FYI:
Trivia: This is why Nasdaq doesn't trade Berkshire Hathaway. The price exceeds 6 digits.
See
libgcc includes bespoke runtime routines, which form the basis of fixed-point number support.

@TwoClocks What is the minimal subset of these runtime routines in compiler_rt needed to provide something usable? I would assume the conversions can be generalized at the cost of efficiency (DSPs use custom hardware).
In my use case I write a 2D platformer game on wasm4 with a screen resolution of 160x160, so even a single pixel error is noticeable on the screen. Initially I used all

Imagine a situation where the actor's coordinate is

In the process of moving towards fixedpoint I have this example, a camera update function. First the status quo with

```zig
pub fn update(self: *Camera, hero: Rectangle, scrolloff: Scrolloff, map_size: Rectangle) void {
    const desired_min_x = hero.x - scrolloff.x;
    const desired_max_x = hero.x + hero.w + scrolloff.x;
    const allowed_min_x = map_size.x + self.offset.x;
    const allowed_max_x = map_size.x + map_size.w - self.offset.x;
    if (desired_min_x < self.target.x - self.offset.x) {
        self.target.x = std.math.clamp(desired_min_x + self.offset.x, allowed_min_x, allowed_max_x);
    } else if (desired_max_x > self.target.x + self.offset.x) {
        self.target.x = std.math.clamp(desired_max_x - self.offset.x, allowed_min_x, allowed_max_x);
    }
    const desired_min_y = hero.y - scrolloff.y;
    const desired_max_y = hero.y + hero.h + scrolloff.y;
    const allowed_min_y = map_size.y + self.offset.y;
    const allowed_max_y = map_size.y + map_size.h - self.offset.y;
    if (desired_min_y < self.target.y - self.offset.y) {
        self.target.y = std.math.clamp(desired_min_y + self.offset.y, allowed_min_y, allowed_max_y);
    } else if (desired_max_y > self.target.y + self.offset.y) {
        self.target.y = std.math.clamp(desired_max_y - self.offset.y, allowed_min_y, allowed_max_y);
    }
}
```

Then the same with a user-space fixedpoint, thanks to @MasterQ32 for the implementation idea:

```zig
fn FixedPoint(comptime T: type, comptime scaling: comptime_int) type {
    return struct {
        const FP = @This();

        raw: T,

        pub fn init(v: i32) FP {
            return .{ .raw = scaling * v };
        }
        pub fn initFromFloat(v: f32) FP {
            return .{ .raw = @floatToInt(T, scaling * v) };
        }
        pub fn unscale(fp: FP) i32 {
            return fp.raw / scaling;
        }
        pub fn add(a: FP, b: FP) FP {
            return .{ .raw = a.raw + b.raw };
        }
        pub fn sub(a: FP, b: FP) FP {
            return .{ .raw = a.raw - b.raw };
        }
        pub fn mul(a: FP, b: FP) FP {
            return .{ .raw = (a.raw * b.raw) / scaling };
        }
        pub fn div(a: FP, b: FP) FP {
            return .{ .raw = (scaling * a.raw) / b.raw };
        }
        pub fn eq(a: FP, b: FP) bool {
            return a.raw == b.raw;
        }
        pub fn lt(a: FP, b: FP) bool {
            return a.raw < b.raw;
        }
        pub fn gt(a: FP, b: FP) bool {
            return a.raw > b.raw;
        }
        pub fn clamp(val: FP, lower: FP, upper: FP) FP {
            assert(lower.raw <= upper.raw);
            return max(lower, min(val, upper));
        }
        pub fn min(a: FP, b: FP) FP {
            return if (a.raw < b.raw) a else b;
        }
        pub fn max(a: FP, b: FP) FP {
            return if (a.raw > b.raw) a else b;
        }
    };
}

pub const WorldCoordinate = FixedPoint(i32, 1000);
const WC = WorldCoordinate;

pub const WorldPosition = struct {
    x: WorldCoordinate,
    y: WorldCoordinate,
};

pub fn update(self: *Camera, hero: Rectangle, scrolloff: WorldPosition, map_size: Rectangle) void {
    const desired_min_x = hero.x.sub(scrolloff.x);
    const desired_max_x = hero.x.add(hero.w).add(scrolloff.x);
    const allowed_min_x = map_size.x.add(self.offset.x);
    const allowed_max_x = map_size.x.add(map_size.w).sub(self.offset.x);
    if (desired_min_x.lt(self.target.x.sub(self.offset.x))) {
        self.target.x = desired_min_x.add(self.offset.x).clamp(allowed_min_x, allowed_max_x);
    } else if (desired_max_x.gt(self.target.x.add(self.offset.x))) {
        self.target.x = desired_max_x.sub(self.offset.x).clamp(allowed_min_x, allowed_max_x);
    }
    const desired_min_y = hero.y.sub(scrolloff.y);
    const desired_max_y = hero.y.add(hero.h).add(scrolloff.y);
    const allowed_min_y = map_size.y.add(self.offset.y);
    const allowed_max_y = map_size.y.add(map_size.h).sub(self.offset.y);
    if (desired_min_y.lt(self.target.y.sub(self.offset.y))) {
        self.target.y = desired_min_y.add(self.offset.y).clamp(allowed_min_y, allowed_max_y);
    } else if (desired_max_y.gt(self.target.y.add(self.offset.y))) {
        self.target.y = desired_max_y.sub(self.offset.y).clamp(allowed_min_y, allowed_max_y);
    }
}
```

Some subjective opinions:
When I imagine doing the same for collision detection, I feel pain. Have you ever mixed

Other solutions to the problem instead of fixedpoint:
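The core of the `FixedPoint(i32, 1000)` idea above translates to any language without operator overloading; here is an illustrative C reduction (my names), which keeps the arithmetic deterministic across platforms by staying in integers with a decimal scale of 1000:

```c
#include <assert.h>
#include <stdint.h>

/* World coordinate: i32 scaled by 1000, so 1.5 is stored as 1500. */
typedef int32_t wc;

static wc wc_mul(wc a, wc b) {
    /* widen to 64 bits, then divide the doubled scale back out */
    return (wc)(((int64_t)a * b) / 1000);
}
static wc wc_div(wc a, wc b) {
    /* pre-scale the dividend so the quotient keeps the scale */
    return (wc)(((int64_t)a * 1000) / b);
}
static wc wc_clamp(wc v, wc lo, wc hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}
```

The clamp/min/max comparisons need no adjustment at all, which is the point the comment makes: ordering on the raw integers is ordering on the values.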
Not useful for financial. Fixed-point still uses a binary fraction, as diagrammed further down in the post. Financial systems expect a decimal fraction.

There is no hardware support for decimal floating-point in commonly available CPUs used in most computers these days. Binary floating-point won due to being easier to implement in hardware, so once again speed won over usefulness. Fixed-point is not any more or less accurate than floating-point; it is just more limited in range for a given number of bits used to represent the values.

Providing decimal floating-point support in Zig would be more useful than binary fixed-point. Providing programmers with a decimal floating-point type might get them to actually consider (or become aware of) the difference between decimal and binary floating point, and we could all sleep a little better at night.

Edit: https://en.wikipedia.org/wiki/Fixed-point_arithmetic

Using fixed-point in the sense that, for example, 1.23 would be stored with an integer (i32) as 1230, with an implicit scaling factor of 1000, works for things like financial and other such calculations. However, if the fixed-point representation is being used like a binary floating-point format, i.e. base-2 exponents in the fraction, then the benefit of using fixed-point for fractional-decimal representation goes away.

It is unclear to me which method of "fixed point" is being advocated for in the previous posts, so my comments may be just noise. However, the long post on April 27, 2020 shows a base-2 exponent, so that would not be suited for decimal fractions.
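The "1.23 stored as 1230" scheme from the comment can be demonstrated directly (illustrative C, my helper names): repeated addition of a decimal-scaled integer stays exact, while the same sum in binary floating point drifts.

```c
#include <assert.h>

/* Sum `times` copies of a step stored as a scaled integer (e.g. 0.1 => 100
   at scale 1000). Integer addition never loses a digit. */
static long scaled_sum(int times, long step_scaled) {
    long s = 0;
    for (int i = 0; i < times; i++) s += step_scaled;
    return s;
}

/* The same sum with a binary double: 0.1 has no exact base-2 encoding. */
static double double_sum(int times, double step) {
    double s = 0.0;
    for (int i = 0; i < times; i++) s += step;
    return s;
}
```

Ten additions of 0.1 at scale 1000 give exactly 1000 (i.e. 1.0), whereas ten additions of the double `0.1` do not compare equal to `1.0`.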
To be fair, decimal floating point has not yet been optimized for performance and memory representation, so this would be a research area: https://hal.archives-ouvertes.fr/hal-01021928v2/document "Comparison between binary and decimal

You really don't want to unpack the numbers for every comparison, if possible.
This depends on the use-case. Ignoring overflow and other special cases, fixed-point addition is always exact, while floating-point addition has an accuracy of On the other hand, fixed-point can be poorly suited for division. Something like
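The division caveat mentioned above can be illustrated with a decimal-scaled sketch (illustrative C, my names; the comment's own example was truncated in this copy of the thread): at scale 1000, 1/3 truncates to 0.333, and multiplying back by 3 yields 0.999 rather than 1.

```c
#include <assert.h>
#include <stdint.h>

typedef int32_t fx; /* value scaled by 1000 */

static fx fx_div(fx a, fx b) { return (fx)(((int64_t)a * 1000) / b); }
static fx fx_mul(fx a, fx b) { return (fx)(((int64_t)a * b) / 1000); }
```

Addition of two in-range values at the same scale, by contrast, is always exact; only operations that produce digits beyond the fixed scale (division, and some multiplications) must round.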
I assume you meant "decimal fixed-point" and not "decimal floating-point"? GnuCash is an example of financial software that uses the former, and @TwoClocks mentioned that Nasdaq uses a form of fixed-point as well.
This really should not be a language problem, IMO, but if supporting complex data as native types in Zig is being considered, decimal floating point would be a good one. There are a lot of mistakes made in software because programmers do not understand what a float or double really is, and reach for those when doing financial kinds of computing, or calculating metric values with fractional components, etc.

There are decimal floating-point representations that have been optimized for performance as well as anyone is willing to make the effort. For example, one such library: https://www.speleotrove.com/decimal/ There are also mainframe systems that have native hardware (and software) support for decimal floating point, mainly because (thankfully) those kinds of computers are used for large financial applications.

Performance always seems to take precedence over everything, to the point of doing things incorrectly. Commodity computer hardware does a crappy job of providing any kind of support for decimal floating point, so for now it will have to be a 100% software solution. Yes, it is slower than binary floating point, but that is the mess we have created for ourselves (as people making computers), and using a software lib with slower decimal floating-point computations is simply the cost of doing financial and other calculations correctly.
Nope, I was saying I would rather have/see "decimal floating point" support in Zig, over "binary fixed point" support.
I have not looked at how GnuCash uses fixed point, so I cannot comment other than to say: just because something is implemented does not mean it is correct. I'm not saying GnuCash has done anything wrong; there are ways to do financial with integers and such (tracking thousandths of a penny as integers, for example). And when you implement your own fixed-point functions, you get to decide what the fractional exponent represents, so in the case of GnuCash maybe it tracks the fractions in base-10 rather than base-2. However, if I were considering using any lib for any serious financial application, I would audit the code to make sure I understood what it was doing before I used it.
I would not make any assumptions without seeing the whole Nasdaq specification. An unsigned 32-bit value cannot represent an entire 6.4 (10-digit) value, so I would be careful with how that quote is being interpreted. It seems more like a spec for data exchange where the data is being imported or exported as text representations of the numeric data, rather than describing how Nasdaq stores internal values or performs calculations on its financial values.
@dnotq I'm a little confused as to what exactly you are advocating. When you say "floating-point decimal" do you mean a fixed-width type like LLVM's and GCC's

If you mean the former, there is already an issue for this (see #4221), and rounding errors are still an issue re: cumulative sums. (sidenote: this is part of why GnuCash switched to fixed-point integers in version 1.6)

If you mean the latter, then I think your suggestion would be somewhat at odds with the Zig selling point of "no hidden allocations." Usually these kinds of types would be provided in

Regardless, fixed-point integers are used for applications other than finance, so just because a decimal type would be better in some use-cases does not negate the need for fixed-point.
It is both the price format for real-time communication to/from the exchange and the internal representation the exchange uses. It isn't just Nasdaq: most US equity venues use some base-10 implied-decimal format that is either 32 or 64 bits long. For 64-bit formats, it's usually an implied "8.11" format, which has enough resolution after the decimal for accurate bond prices. Most FX as well, but sometimes 8 whole digits isn't enough for some currency pairs.

You are correct that it doesn't cover the whole range, but it doesn't need to. Nasdaq, and most other exchanges, do not accept prices above 200K. So implied 6.4 works fine.

The dirty secret of most exchanges: they don't do any math at all on prices. They just need to compare prices. As long as it compares like an int, the exchange doesn't really care. It's just a presentation/downstream issue. If you need to do * or / you either convert to a 64-bit format or use something like Java's BigDecimal. Floating point is a non-starter.

Most libraries use n bits for the implied decimal, like Rust's "fixed" crate. I was just pointing out that there is a common base-10 use case as well. I'm not sure what adding it to the language gets you, though. Seems fine for a user-land lib, unless I'm missing something.
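An implied "6.4" price of this kind can be sketched in a few lines (illustrative C, my names): four decimal places are packed into a `uint32_t`, and because prices stay below the ~429,496 ceiling of a 32-bit unsigned integer (well above the 200K cap mentioned above), ordinary integer comparison orders prices correctly, which is all the matching engine needs.

```c
#include <assert.h>
#include <stdint.h>

/* Implied 6.4 decimal: `whole` dollars and a 4-digit decimal part.
   199999.9999 packs to 1999999999, still inside uint32_t range. */
static uint32_t price_6_4(uint32_t whole, uint32_t frac4) {
    return whole * 10000u + frac4;
}
```

No arithmetic beyond the packing is required on the hot path; converting back to a display string or to a wider format for * and / is a downstream concern, exactly as the comment describes.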
@ryanavella I made an edit above in my original post, since it was unclear to me after re-reading the OP what was being asked for to begin with.

All I'm mostly trying to stress is that the term "fixed point" is overloaded and does not always mean that it is safe to use for accurate decimal fractions, since it really depends on the implementation. The post on April 27, 2020 says fixed-point is useful for financial systems, but then goes on to show a base-2 exponent example for the fractional part of the number, which is not useful for financial systems.

In my experience binary floating point is very misunderstood and used incorrectly in places where accurate decimal fractions are important (and sometimes critical). If a language provides a type that "looks like decimal", yet does not behave like decimal, it can be detrimental. Even though binary floating point is easy and fast to implement in hardware (which is why it was chosen), it was the wrong choice IMO, and software as a whole suffers for that decision.
Agreed. Although I'm not clear on whether Zig is trying to represent more complex non-CPU data types in the language or not. If Zig is still primarily "a better C", then I would say leave something like this to a lib. On the other hand, IIRC, C is getting native decimal floating-point support in the spec.
Upstream tracking issue for compiler_rt implementation is #15678. My (deleted from upstream due to maintenance churn) docs explain when to use what in https://github.com/matu3ba/matu3ba.github.io/blob/master/crt/crt_unofficial_zig_docs.md
We plan to be a full alternative, which includes fixed-point number support in compiler_rt and for symbol compatibility also decimal ones in both binary and decimal packed representation. Might take a while to get there.
Yes, C23 has them. GCC already has complete compiler_rt support and LLVM/Clang folks are working on it. See this very good overview: https://discourse.llvm.org/t/rfc-decimal-floating-point-support-iso-iec-ts-18661-2-and-c23/62152. No blog writers really cared to mention them though (or can you mention some?), which makes me think that it's more a superfluous "nice to have" to add to the C language.
I discovered Zig yesterday and have spent pretty much all my time since then delving into it and reading various materials on it. Loving it so far.
One feature that I feel is missing currently is built-in support for fixed-point numbers. As the language doesn't allow operator overloading, this isn't something I could implement in user code and use like any other numeric type. For reference, the Embedded C standard specifies support for such numbers (refer to Annex A), although I'm not aware of any C compiler that supports them for a desktop target.
I found it interesting that Zig offers support for variable-width integers through `i#` and `u#`. I wonder if this could be applied to fixed-point numbers by supporting some form of Q-notation, perhaps?