Also, somewhere in the docs we'll need a description of the mappings between yardl types and C++ (and other target languages). In particular, I believe you generate your own multi-dimensional array type, as sadly there still doesn't seem to be a std container for this.
It could be useful to support a few existing multi-dimensional arrays to avoid copies in client code (Boost.MultiArray and https://amypad.github.io/CuVec/ come to mind), but I can see that becoming very difficult. (If a mapping to a flat array is exposed somewhere, it would need to be stated whether row-major or column-major order is used.)
These are good points. We currently use xtensor types for multidimensional arrays, which we alias here. These have a `.data()` method that exposes the raw flat array.
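For context, the xtensor containers are contiguous and row-major by default, so a zero-copy hand-off of the flat buffer is already possible. A minimal sketch, where the `consume` function is purely illustrative and not part of the generated API:

```cpp
// Minimal sketch: passing the flat buffer behind an xtensor container to
// C-style code without copying. xtensor containers are row-major by default.
#include <xtensor/xtensor.hpp>
#include <cstddef>

void consume(const float* flat, std::size_t n);  // hypothetical consumer expecting a flat array

void pass_without_copy() {
    xt::xtensor<float, 2>::shape_type shape = {512, 256};
    xt::xtensor<float, 2> img(shape);
    // .data() exposes the contiguous row-major buffer; no copy is made.
    consume(img.data(), img.size());
}
```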
I think we have some choices for this problem:

1. We implement our own ndarray types that provide the minimum API surface and aim to make interop with other libraries "easy" (see the sketch after this list).
2. We support a number of different libraries and generate different code depending on a setting in the `_package.yaml`.
3. Implement both of the above, since they are not mutually exclusive, with (1) being the default.
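Whichever option we pick, the practical requirement behind "easy interop" is exposing a raw pointer plus shape (and layout), so that existing libraries can wrap the buffer rather than copy it. As a hedged illustration of that direction (not current yardl behaviour), xtensor can already adopt an externally owned buffer, e.g. one allocated by Boost.MultiArray or CuVec:

```cpp
// Sketch: adopting an externally owned buffer as an xtensor adaptor without copying.
#include <xtensor/xadapt.hpp>
#include <cstddef>
#include <vector>

void fill_external_buffer(float* external, std::size_t rows, std::size_t cols) {
    std::vector<std::size_t> shape = {rows, cols};
    // no_ownership: the adaptor reads and writes through the pointer but never frees it.
    auto view = xt::adapt(external, rows * cols, xt::no_ownership(), shape);
    view(0, 0) = 1.0f;  // writes go straight to the caller's buffer
}
```

The reverse direction (handing a yardl/xtensor buffer to Boost.MultiArray via `boost::multi_array_ref`) works the same way, as long as the row-major assumption is documented.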
Related problem: in some instances, perhaps the memory should be allocated on the GPU. Should this be a property on the `!array` in yardl?
I don't think so. Client code needs to decide what to optimise, not the data spec. However, you might want to have that as an option in the (generated) "read" code, i.e. return an array that lives on the GPU (or at least is CUDA-shared).
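To make that concrete, here is a hedged sketch of what such a read-side option could look like: allocate CUDA managed (unified) memory, deserialize into it, and return it wrapped in an adaptor so host and device code can both use it. This is not an existing yardl feature; the function name is illustrative and error handling is omitted.

```cpp
// Hypothetical sketch only: a reader that returns data in CUDA managed memory.
// Error handling and cudaFree (ownership) are omitted for brevity.
#include <cuda_runtime.h>
#include <xtensor/xadapt.hpp>
#include <cstddef>
#include <vector>

auto read_image_cuda_shared(std::size_t rows, std::size_t cols) {
    float* buffer = nullptr;
    cudaMallocManaged(&buffer, rows * cols * sizeof(float));  // unified (CUDA-shared) memory
    // ... deserialize the stream directly into `buffer` here ...
    std::vector<std::size_t> shape = {rows, cols};
    return xt::adapt(buffer, rows * cols, xt::no_ownership(), shape);
}
```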
Creating a separate issue based on #20 opened by @KrisThielemans