Hi,
how is the buffer mechanism intended to work? Is the buffer a chunk that gets refilled so reading continues, or is it actually the maximum file size that can be read?
I noticed two things:
a) Reading stopped partway through a larger file (>30.000 lines), without any hint that EOF was not reached. I am using "a-csv" via the npm module "read-csv-json" --> ... --> "a-csv" (on Windows 7).
b) The CLI tool "a-csv" seems to ignore the "-l" parameter; changing it does not make a difference.
c) After increasing the "default buffer" size to 256*1024 I was able to read my large file, but of course on even larger files that might fail again. Hence the question of how the "buffer" is intended to be used.
Thank you very much for your great tool!
Best Regards
Michael