Replace [u]intN_t with [u]int_leastN_t #122
Comments
👍 Given that the specification is already rife with types of implementation-defined lengths (such as …
Without precisely sized objects, how would ABI compatibility be guaranteed? The enums used as members of structs or function parameters should be the uint32_t typedefs (…
@baldurk: You're right about …
I'm not sure I understand how it would be well-defined. Say for the sake of argument that compiler A decides to implement the …
Keep in mind that …
@baldurk
That would be the case with int_fastN_t: the compiler would try to select a faster type that is at least N bits long. But we are talking about int_leastN_t.
Actually there are some, such as VkImageViewType, that are passed as enums.
But it's really the task of the binding layer to define how the structures are laid out and to provide any translation from one to the other. Currently it's defined in C terms, in whatever way the platform's default ABI sets them up.
I will be very surprised if anyone attempts to implement Vulkan on an architecture on which uint{8,16,32}_t cannot be supported as native types. The last commercial CPU architecture that didn't have a power-of-two word size was created about 20 years before OpenGL 1.0 and didn't have any staying power (I'll grant that DEC-20s lasted into the mid-80s, and there are some weird DSP architectures out there, but they are very, very niche). Several years ago we punted on OpenGL and made it implementable only on such architectures as well. The memory aliasing implicit in mapping between CPU and GPU memory, and the need for intelligible data layouts on both sides, make this even more challenging: there would be no plausible way to write Vulkan code that would actually run on both types of architectures in an intelligible and performant fashion. Abstract concerns about architectures that aren't actually relevant to us shouldn't force us to jump through hoops no implementation is going to care about. So my personal take is that we should WONTFIX this and make an explicit statement somewhere in the spec about the class of architectures/word sizes it is supportable on.

Certainly what the vendors care about on the CPU side is (a) x86, (b) ARM, and (c) maybe, possibly, in some very specialized markets, MIPS - and their GPUs are designed to work with those CPUs.
I'm not sure how hard it is to jump through a hoop that essentially boils down to …

You should expect these kinds of discussions to pop up from time to time. The problem is that, contrary to popular belief, C is "portable but also not portable™" (seriously, this is part of the working group's design rationale). If you specify an API using C and expect ABI compatibility, you also invite all the edge cases (or rather, the things other compilers do that you didn't think of) into the discussion. If you don't go either fully portable (pick only types that have guaranteed behaviour) or fully non-portable (make explicit requirements of the C implementation), you'll end up with an API that's just as confused as C itself.
Effectively what we're doing is deferring to the platform vendor's ABI choices. Google will define the Android ABI for Vulkan, Microsoft + vk_platform.h will define it for Windows, and the standard Linux compiler distributions (or I guess the LSB, if the LSB is still relevant) + vk_platform.h will define it for desktop Linux - because that's what the implementers will use to write their drivers. If there's some case where this approach would actually result in real undefined behavior on a platform of interest, that's a concern we need to address. The 16/32-bit enum thing is an (extremely unlikely) example of such an edge case that we'll probably actually do something about, but uint32_t not existing on a Vulkan deployment platform is orders of magnitude less likely than that.

Thank heavens we were able to limit support to compilers recent enough to actually have stdint.h - some of the contortions we went through for the GL header files boggle the mind.
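The "make explicit requirements of the C implementation" route mentioned above can be sketched as a handful of compile-time checks. This is illustrative only, not taken from any real Vulkan header; the names are invented, and the final enum line shows the same sizing trick the real headers use via their *_MAX_ENUM = 0x7FFFFFFF enumerators:

```c
#include <limits.h>
#include <stdint.h>

/* Illustrative only: make the platform assumptions explicit, so a
 * nonconforming platform fails to compile rather than silently
 * changing the ABI. */
#if CHAR_BIT != 8
#error "This API assumes 8-bit bytes."
#endif

/* Trivially true wherever these types exist, but the asserts document
 * the assumption in code. */
_Static_assert(sizeof(uint32_t) == 4, "uint32_t must be 4 bytes");
_Static_assert(sizeof(uint64_t) == 8, "uint64_t must be 8 bytes");

/* The 16/32-bit enum concern: a large enumerator value forces a
 * conforming C compiler to pick a type of at least 32 bits. */
typedef enum DummyEnum { DUMMY_MAX_ENUM = 0x7FFFFFFF } DummyEnum;
_Static_assert(sizeof(DummyEnum) >= 4, "enums must hold 32-bit values");

int vk_platform_checks_passed(void) { return 1; }
```

Failing at compile time is the point of the sketch: an ABI mismatch discovered at runtime would be far harder to diagnose.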
Not having a power-of-two word or byte size is not the only thing that would make an architecture unable to implement Vulkan. Having a byte size bigger than 8 bits (even 16 bits, as on several Texas Instruments DSPs) means uint8_t cannot be provided at all (an example of such use is VkPhysicalDeviceProperties). Moreover, a two's complement representation is needed to provide the signed exact-width types.

Another reason to avoid uintN_t is semantics: it says you need a type of exactly N bits with no padding, whereas uint_leastN_t says you just need a type of at least N bits. Since there is no need for exact-width types in the specification, I believe [u]int_leastN_t would be a better fit. Not to mention that [u]intN_t imposes requirements that may limit the platforms on which Vulkan is implementable (especially considering that [u]intN_t is optional in the C standard).

Anyhow, I personally see nothing negative about this change: it does not break the API or the ABI on any of the platforms currently supporting Vulkan, since on those platforms [u]intN_t already has the same size as [u]int_leastN_t, and for that very reason programs and drivers already written for Vulkan will not have to change anything.
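The semantic difference between the two families can be seen directly in a pair of hypothetical declarations (the Vk* names below are invented for illustration, not part of the actual API):

```c
#include <stdint.h>

/* Hypothetical typedefs, invented for illustration.
 *
 * uint32_t demands an exact 32-bit type with no padding bits and is
 * optional in C11 (7.20.1.1): this line may fail to compile on an
 * exotic implementation. */
typedef uint32_t       VkFlagsExact;

/* uint_least32_t demands only "at least 32 bits" and is mandatory
 * (7.20.1.2): this line compiles on every conforming implementation,
 * but the resulting type may be wider than 32 bits. */
typedef uint_least32_t VkFlagsLeast;
```

On every platform that currently ships Vulkan the two typedefs name the same underlying type, which is why the proposed change would be ABI-neutral there.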
Often we do require types of exactly the specified sizes. The most blatant case is the VkDraw/Dispatch*IndirectCommand structures, which are CPU structures corresponding exactly to data packed in GPU memory for consumption by the GPU. But there's a more general requirement that data can be freely shared between CPU and GPU memory mappings and be manipulated easily on both sides, including vertex buffers, data laid out for consumption or production by shaders in uniform buffers, various kinds of textures, the representation of signed and unsigned two's complement integers, and so on. IEEE FP is not strictly required, but to make a non-IEEE format usable you would have to have a CPU and GPU co-designed to share the same representation. I'll grant that whether a parameter of a Vulkan command is a uint32_t or a uint36_t - or a uint8_t / uint16_t - probably doesn't …
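Concretely, VkDrawIndirectCommand (reproduced below as it appears in the Vulkan headers) is a 16-byte record the GPU reads straight out of a buffer, and the layout checks after it are exactly the guarantees that would be lost under uint_least32_t (the _Static_assert lines are illustrative, not from the headers):

```c
#include <stddef.h>
#include <stdint.h>

/* VkDrawIndirectCommand as defined in the Vulkan headers: four exact
 * 32-bit fields, consumed byte-for-byte by the GPU from buffer memory. */
typedef struct VkDrawIndirectCommand {
    uint32_t vertexCount;
    uint32_t instanceCount;
    uint32_t firstVertex;
    uint32_t firstInstance;
} VkDrawIndirectCommand;

/* With uint32_t these hold by definition wherever the type exists at
 * all; with uint_least32_t a conforming compiler could legally pick
 * wider members and break both checks. */
_Static_assert(sizeof(VkDrawIndirectCommand) == 16,
               "struct must match the GPU's 16-byte record layout");
_Static_assert(offsetof(VkDrawIndirectCommand, firstInstance) == 12,
               "member offsets must match the GPU record layout");
```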
We added some text to the fundamentals chapter in the 1.0.7 spec update making clear that the range of architectures Vulkan is designed for includes only those which do in fact have natural 8/16/32/64-bit types addressable at those granularities. If anyone has a DEC-20 or an obscure DSP without 8-bit integers and wants to run Vulkan on it, they are welcome to join Khronos and lobby to change this situation, but for the time being we don't really care about supporting every conceivable architecture in the world that has a C compiler, and we're going to keep the [u]intN_t types in the API as discussed above.
The reason for this is that unlike the [u]int_leastN_t types, the [u]intN_t types are optional if the implementation can't "provide integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two’s complement representation". Also see section 7.20.1.1 of the C11 standard.
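That optionality is observable from portable code: C11 defines the corresponding limit macros exactly when the exact-width types exist, while the least-width types need no guard at all. A small sketch of what section 7.20 guarantees:

```c
#include <stdint.h>

/* int_least8_t is unconditionally required (C11 7.20.1.2), so this
 * declaration compiles on every conforming implementation... */
int_least8_t always_present = -1;

/* ...whereas int8_t exists only when the implementation has an 8-bit,
 * no-padding, two's complement type, which it advertises by defining
 * INT8_MAX (C11 7.20.2.1). */
#ifdef INT8_MAX
int8_t present_only_sometimes = -1;
#endif
```

On a 16-bit-byte TI DSP of the kind mentioned above, the #ifdef branch would simply vanish, which is precisely why an API that names uint8_t in its signatures cannot be declared there at all.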