Integer packing and arbitrary precision integers #184

Closed
bernhardmgruber opened this issue Mar 22, 2021 · 3 comments · Fixed by #427
Labels: enhancement (New feature or request)

Comments

bernhardmgruber (Member) commented Mar 22, 2021

LLAMA could allow packing integers into fewer bits than their usual size. Memory-footprint-sensitive data layouts frequently use such types to save memory.
Another approach is supporting arbitrary-precision integer types, which are also common in FPGA code, e.g. ap_int<N>: https://www.xilinx.com/html_docs/xilinx2020_2/vitis_doc/use_arbitrary_precision_data_type.html

For example, three 12-bit integers forming an RGB value:

using RecordDim = llama::Record<
    llama::Field<R, llama::Int<12>>,
    llama::Field<G, llama::Int<12>>,
    llama::Field<B, llama::Int<12>>>;

An open design point is how a reference to such an object is formed, since a mapping may not place these objects on a byte boundary, so their locations might not be addressable. A solution could be a proxy reference, as used e.g. by std::bitset<N>.
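A minimal sketch of the proxy-reference idea, independent of LLAMA's actual API (the name BitPackedRef, the 32-bit backing word, and the fixed 12-bit width are assumptions for illustration): reads and writes go through a small object that extracts or inserts the bits, since the packed value has no addressable byte location.

#include <cstdint>
#include <iostream>

// Hypothetical proxy reference to a 12-bit unsigned integer stored at an
// arbitrary bit offset inside a 32-bit word. Not LLAMA's actual interface.
struct BitPackedRef {
    std::uint32_t* word;  // backing storage word
    unsigned bitOffset;   // where the 12-bit value starts (0..20)

    static constexpr std::uint32_t mask = (1u << 12) - 1u;

    // read: extract the 12 bits and widen to a normal integer
    operator std::uint32_t() const {
        return (*word >> bitOffset) & mask;
    }

    // write: clear the 12-bit slot and insert the new value
    BitPackedRef& operator=(std::uint32_t value) {
        *word = (*word & ~(mask << bitOffset)) | ((value & mask) << bitOffset);
        return *this;
    }
};

int main() {
    std::uint32_t storage = 0;
    BitPackedRef r{&storage, 0};
    BitPackedRef g{&storage, 12};
    r = 0xABC;
    g = 0x123;
    std::cout << std::hex << static_cast<std::uint32_t>(r) << ' '
              << static_cast<std::uint32_t>(g) << '\n'; // prints: abc 123
}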

bernhardmgruber (Member Author) commented:
On a related note, here is a compiler solution for quantized simulations: https://www.youtube.com/watch?v=0jdrAQOxJlY

bernhardmgruber added the enhancement label on Jul 9, 2021
bernhardmgruber (Member Author) commented:
By adopting N2709, C23 will likely get such an integer type built-in: _BitInt(N), where N is the number of bits.
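A minimal illustration of such a type (hedged: this assumes a compiler that already exposes _BitInt, e.g. recent Clang, which also accepts it in C++ mode as an extension):

#include <cstdio>

int main() {
    // C23 bit-precise integer: exactly 12 value bits (range 0..4095).
    unsigned _BitInt(12) r = 4095;
    r += 1; // the result is converted back modulo 2^12, so r becomes 0
    std::printf("%u\n", static_cast<unsigned>(r));
}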

bernhardmgruber linked a pull request on Dec 2, 2021 that will close this issue
bernhardmgruber (Member Author) commented:
With #420 we will get something similar that is actually better: the proposed bitpacking mappings leave the record dimension as is and only change the storage representation, which is the part LLAMA should actually touch. The type in the record dimension is what is used for computation, and those types should stick to the fundamental types of the language.
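A sketch of that separation (hedged: the commented-out mapping name and its parameters are assumptions for illustration, not necessarily the interface from #420): the record dimension keeps fundamental types, and only the mapping decides that each field occupies fewer bits in storage.

#include <cstdint>
#include <llama/llama.hpp>

// Tag types for the record dimension fields.
struct R{};
struct G{};
struct B{};

// The record dimension sticks to fundamental types; computation happens on std::uint16_t.
using RGB = llama::Record<
    llama::Field<R, std::uint16_t>,
    llama::Field<G, std::uint16_t>,
    llama::Field<B, std::uint16_t>>;

// A bitpacked mapping would then store each field in e.g. 12 bits, while views
// created on top of it still load and store std::uint16_t values. Hypothetical spelling:
// using Mapping = llama::mapping::BitPackedIntSoA</* array extents */, RGB, /* bits = */ 12>;

int main() {
    // Nothing to execute; this only checks that the record dimension compiles.
}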

bernhardmgruber linked a pull request on Dec 9, 2021 that will close this issue