For data where the metadata itself is large (> 10000 fields), a full in-memory reconstruction of a record batch may be impractical if the user's goal is to randomly access only a small subset of the batch's fields.

I propose adding an API that enables "cheap" inspection of the record batch metadata and selective reconstruction of individual fields.

Because the buffer and field metadata are flattened, the cost of random field access currently scales linearly with the number of fields. In the future we may devise strategies to mitigate this (e.g. storing a pre-computed buffer/field lookup table in the schema).
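As a rough illustration of the access pattern (not Arrow's actual implementation), the toy model below flattens field and buffer metadata the way the IPC format does; `FieldNode`, `BufferRef`, and the helper functions are invented for this sketch. Locating a field's buffers requires summing the buffer counts of all preceding fields, which is why random access scales with the field count; a one-time prefix-sum index, as proposed above, restores O(1) lookup:

```cpp
// Sketch only: a toy model of flattened record batch metadata, not
// Arrow's real classes. It shows why random field access is O(#fields)
// and how a precomputed lookup table makes it O(1).
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical flattened field metadata: each field records its length,
// null count, and how many buffers it owns in the flat buffer list.
struct FieldNode {
  int64_t length;
  int64_t null_count;
  int num_buffers;  // e.g. 2 for primitive types (validity + values)
};

struct BufferRef {
  int64_t offset;  // position in the message body
  int64_t size;
};

// Without an index: finding field i's first buffer means scanning all
// preceding field nodes and summing their buffer counts -- O(i) per access.
int FirstBufferLinear(const std::vector<FieldNode>& fields, int i) {
  int pos = 0;
  for (int j = 0; j < i; ++j) pos += fields[j].num_buffers;
  return pos;
}

// Proposed mitigation: precompute a prefix-sum lookup table once, so each
// subsequent random access is O(1).
std::vector<int> BuildBufferIndex(const std::vector<FieldNode>& fields) {
  std::vector<int> index(fields.size());
  int pos = 0;
  for (size_t j = 0; j < fields.size(); ++j) {
    index[j] = pos;
    pos += fields[j].num_buffers;
  }
  return index;
}

int main() {
  std::vector<FieldNode> fields(10000, FieldNode{1024, 0, 2});
  std::vector<BufferRef> buffers(2 * fields.size(), BufferRef{0, 4096});

  // Linear scan vs. precomputed index, locating the 9000th field's buffers.
  int linear = FirstBufferLinear(fields, 9000);
  std::vector<int> index = BuildBufferIndex(fields);
  std::cout << "linear: " << linear << ", indexed: " << index[9000] << "\n";
  return 0;
}
```

Nested types complicate this slightly (each child field contributes its own nodes and buffers), but the prefix-sum idea carries over unchanged.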
Reporter: Wes McKinney / @wesm
Note: This issue was originally created as ARROW-567. Please see the migration documentation for further details.