The latest SVN snapshot of xar includes a xar_extract_tobuffer() function, which is very nice. However, if a very large file, say 1 GB, is stored inside a xar archive, it could be impractical to extract the entire file to a buffer all at once. For this reason, it would be nice if there were a way to read a fixed number of bytes at a time into a buffer, looping until the entire file has been read. This would make it much easier to, for example, pipe the data in a file to another task.
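A hypothetical interface for this might look like the sketch below. The xar_stream_open/xar_stream_read/xar_stream_close names and signatures are illustrative only and are not part of the current xar API; the point is the fixed-size-chunk read loop.

```c
#include <stdio.h>
#include <sys/types.h>
#include <xar/xar.h>

/* Illustrative only: a hypothetical chunked extraction API.
 * xar_stream_open()  would prepare file f for incremental extraction,
 * xar_stream_read()  would fill buf with up to bufsize decoded bytes and
 *                    return the number of bytes written (0 at end of file,
 *                    -1 on error),
 * xar_stream_close() would release any per-file state. */
int32_t xar_stream_open(xar_t x, xar_file_t f, void **state);
ssize_t xar_stream_read(void *state, void *buf, size_t bufsize);
int32_t xar_stream_close(void *state);

/* Example use: pipe a (possibly very large) archived file to stdout
 * in 64 KiB chunks instead of materializing it in a single buffer. */
static int pipe_file(xar_t x, xar_file_t f)
{
	char buf[65536];
	void *state = NULL;
	ssize_t n;

	if (xar_stream_open(x, f, &state) != 0)
		return -1;

	while ((n = xar_stream_read(state, buf, sizeof(buf))) > 0) {
		if (fwrite(buf, 1, (size_t)n, stdout) != (size_t)n) {
			xar_stream_close(state);
			return -1;
		}
	}

	xar_stream_close(state);
	return (n < 0) ? -1 : 0;
}
```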
Original issue reported on code.google.com by Charle...@gmail.com on 21 Jan 2007 at 12:14
Yeah, something similar to how the bzip and zlib callbacks work would be good. I'll need to look back at the xar io code to see how practical this would be.
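For reference, the zlib-style pattern alluded to here is roughly the following fixed-buffer inflate loop (a plain zlib sketch writing to stdout, not xar code): the caller owns a small output buffer and repeatedly asks the library to refill it, so the full decoded data never has to exist in memory at once.

```c
#include <stdio.h>
#include <string.h>
#include <zlib.h>

/* Decompress a zlib stream from `in` to stdout using fixed-size buffers. */
static int inflate_to_stdout(FILE *in)
{
	unsigned char inbuf[16384], outbuf[16384];
	z_stream zs;
	int ret = Z_OK;

	memset(&zs, 0, sizeof(zs));
	if (inflateInit(&zs) != Z_OK)
		return -1;

	do {
		/* Feed the library one input chunk at a time. */
		zs.avail_in = (uInt)fread(inbuf, 1, sizeof(inbuf), in);
		if (ferror(in)) { inflateEnd(&zs); return -1; }
		if (zs.avail_in == 0)
			break;
		zs.next_in = inbuf;

		/* Drain all output produced from this input chunk. */
		do {
			zs.avail_out = sizeof(outbuf);
			zs.next_out = outbuf;
			ret = inflate(&zs, Z_NO_FLUSH);
			if (ret == Z_STREAM_ERROR || ret == Z_DATA_ERROR ||
			    ret == Z_MEM_ERROR || ret == Z_NEED_DICT) {
				inflateEnd(&zs);
				return -1;
			}
			fwrite(outbuf, 1, sizeof(outbuf) - zs.avail_out, stdout);
		} while (zs.avail_out == 0);
	} while (ret != Z_STREAM_END);

	inflateEnd(&zs);
	return (ret == Z_STREAM_END) ? 0 : -1;
}
```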