Right now error sets are hardcoded to codegen to a u16.
Ideally, zig would choose an appropriately sized integer based on the total number of error values in the compilation, but the user may call @sizeOf(SomeStruct) on a struct that has an error set type as a field before the entire compilation finishes analysis. This would create a circular dependency.
One idea is to give the error set integer type a default of u32, but in the same way that you can override the default panic handler in the root source file, you can override the error set integer type in the root source file by doing:
pub const ErrorIntegerType = u8;
Then you'd get a compile error if zig ran out of bits when choosing error values.
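A minimal sketch of why the circular dependency arises, assuming a hypothetical FileError set and SomeStruct (neither is from the original):

```zig
// Hypothetical example: answering @sizeOf here requires knowing the
// width of the error set integer type, but that width ideally depends
// on the total number of error values, which isn't known until the
// whole compilation has been analyzed.
const FileError = error{ NotFound, AccessDenied };

const SomeStruct = struct {
    err: FileError, // backed by the error set integer type
};

comptime {
    // Forces layout of SomeStruct before analysis is complete.
    _ = @sizeOf(SomeStruct);
}
```

With an overridable ErrorIntegerType as proposed, the width is fixed up front, so @sizeOf can be answered at any point during analysis; the trade-off is the compile error described above when the compilation defines more error values than the chosen integer can represent.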