Thinking about the reader architecture: have you considered an Arrow-native path via ADBC alongside the existing ODBC reader?
Reasoning: Polars is already the internal DataFrame format, so an `AdbcReader` could hand Arrow record batches to Polars zero-copy, avoiding the row-by-row accumulation in `OdbcReader`'s `ColumnBuilder`. This would complement rather than replace ODBC, since ADBC driver coverage is narrower, but for backends that ship ADBC drivers it's a cleaner path.
This builds off another feature request (see [#330](#330)). Exasol ships `exarrow-rs`, an ADBC-compatible Rust driver, on crates.io, so an `AdbcReader` would unlock Exasol immediately without needing a dialect-specific module. Other warehouses are on the same trajectory (Snowflake, DuckDB, and Postgres all ship ADBC drivers). Either way I'd love to contribute, but want to align with your team's technical direction!