I'm writing a CLI that uses turbodbc's Arrow support to dump Arrow arrays to disk as Parquet files. I'm interested in contributing support for fetching batches of K rows from large tables; this would allow handling tables larger than memory. Would there be an easy path forward for adding this feature? Maybe I'm missing something, but it seems like there should be a fetcharrowbatch() method?
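For context, the whole-table path looks roughly like the sketch below, using turbodbc's existing fetchallarrow() together with pyarrow.parquet; the DSN and query are placeholders:

```python
# Current approach (sketch): fetchallarrow() materializes the entire result
# set as a single pyarrow.Table, so the whole table must fit in memory
# before it can be written out as Parquet.
import pyarrow.parquet as pq
from turbodbc import connect

connection = connect(dsn="my_dsn")            # placeholder DSN
cursor = connection.cursor()
cursor.execute("SELECT * FROM large_table")   # placeholder query
table = cursor.fetchallarrow()                # whole result set in memory
pq.write_table(table, "large_table.parquet")
```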
Hi there! That sounds like a useful project, and the feature you suggest makes perfect sense. There should be a fetcharrowbatches() method, similar to the existing fetchnumpybatches(), that you could use for your tool.
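To illustrate the idea, here is a minimal sketch of how such a method could be used to stream a large result set into a single Parquet file. It assumes fetcharrowbatches() yields one pyarrow.Table per buffered batch, mirroring the fetchnumpybatches() generator; the method name and yield type are the proposal above, not an existing turbodbc API, while the Parquet side uses pyarrow.parquet as it exists today:

```python
# Hypothetical usage of the proposed fetcharrowbatches(); the method name
# and its per-batch pyarrow.Table yield type are assumptions based on the
# discussion above.
import pyarrow.parquet as pq
from turbodbc import connect

def dump_query_to_parquet(dsn, query, path):
    connection = connect(dsn=dsn)
    cursor = connection.cursor()
    cursor.execute(query)
    writer = None
    try:
        for batch in cursor.fetcharrowbatches():  # proposed API, not yet in turbodbc
            if writer is None:
                # Open the writer lazily so the schema comes from the first batch.
                writer = pq.ParquetWriter(path, batch.schema)
            writer.write_table(batch)  # append one batch; memory use stays bounded
    finally:
        if writer is not None:
            writer.close()
        cursor.close()
        connection.close()
```

Writing through a single ParquetWriter keeps all batches in one file with one schema, while only one batch needs to be resident in memory at a time.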