
[RFC] Wrapper/Proxy for Stream #55

Closed
ejguan opened this issue Oct 13, 2021 · 7 comments

Comments

@ejguan
Contributor

ejguan commented Oct 13, 2021

🚀 Feature

Discussed with @NivekT about the wrapper class for all streams:
Pros:

  • We can add a __del__ method to close the file stream automatically when the wrapper's refcount drops to 0. This would eliminate all the unclosed-stream warnings.
  • A wrapper class can unify the reading API for file streams. (For OnDiskCache, I would prefer a unified API to read a stream; otherwise I have to handle all the different cases.)
    • For a local file stream, we can use read() to read everything into memory.
    • When we set stream=True for a large file, the requests.Response doesn't support read. It only supports iter_content or __iter__ to read chunk by chunk.

Cons:

  • As @NivekT pointed out, it needs extra care around magic methods.

Reference: #35 (comment), #65 (comment)

cc: @VitalyFedyunin
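A minimal sketch of the idea (names and details here are illustrative, not the actual torchdata implementation): unknown attributes are delegated to the underlying stream via __getattr__, and __del__ closes the stream when the wrapper is collected. Note that magic methods are looked up on the type, not the instance, so they are not covered by __getattr__ and must be defined explicitly — this is exactly the "extra care" mentioned in the Cons above.

```python
class StreamWrapper:
    """Hypothetical proxy over an arbitrary stream object."""

    def __init__(self, stream):
        self.stream = stream

    def __getattr__(self, name):
        # Delegate regular attribute access (read, seek, iter_content, ...)
        # to the wrapped stream. Only called when `name` is not found on
        # the wrapper itself.
        return getattr(self.stream, name)

    def __iter__(self):
        # Magic methods bypass __getattr__, so iteration is forwarded
        # explicitly to support chunked readers like requests.Response.
        yield from self.stream

    def __del__(self):
        # Close the underlying stream automatically once the wrapper's
        # refcount drops to zero, silencing unclosed-resource warnings.
        try:
            self.stream.close()
        except Exception:
            pass
```

Wrapping an `io.BytesIO` or a local file handle this way leaves `read()` working as before, while the stream is closed for free when the wrapper goes out of scope.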

@ejguan
Contributor Author

ejguan commented Oct 19, 2021

@NivekT

IIUC, HTTPRespond and GDriveRespond should also be wrapped by StreamWrapper

@NivekT
Contributor

NivekT commented Oct 19, 2021

> @NivekT
>
> IIUC, HTTPRespond and GDriveRespond should also be wrapped by StreamWrapper

Yes, I believe they should, and they should still work as intended, given that StreamWrapper delegates the method calls to the underlying object.

@ejguan
Contributor Author

ejguan commented Oct 22, 2021

Another thing I want to mention: a unified API would help us implement downstream DataPipes:

  • Downloader
  • Extractor

These DataPipes are going to take a file handle (stream) as input and yield either a filename or a new file stream. A unified API would help us implement functions that read data from streams.

@NivekT
Contributor

NivekT commented Oct 22, 2021

> Another thing I want to mention: a unified API would help us implement downstream DataPipes:
>
>   • Downloader
>   • Extractor
>
> These DataPipes are going to take a file handle (stream) as input and yield either a filename or a new file stream. A unified API would help us implement functions that read data from streams.

Doesn't Saver already take a stream (IOBase) as input and yield a filename? How would Downloader be different?
Maybe Downloader can be the combination of HttpReader and Saver (URL -> filename)?

For Extractor, will it be similar to what Decompressor does in TorchVision?

https://github.com/pytorch/vision/blob/7b1b68d7142aa2070a5592d7c1a3bff3485b5ec1/torchvision/prototype/datasets/utils/_internal.py#L243

@ejguan
Contributor Author

ejguan commented Oct 22, 2021

You are right about Saver. I am talking about reading data from a file stream. The current workflow is:

```python
urls = IterableWrapper([URL])
fd = urls.open_url()  # file handles
data = fd.map(fn=lambda x: x.read(), input_col=1)  # <- this is what I think of as Downloader
file = data.save_to_disk()
```

We can let users apply any map function to download data from a file handle. But if we are going to implement a DataPipe that does the same thing, we need to make sure all file handles (streams) sent to that DataPipe can be read.

The reason I prefer a read method is that the stream type varies:

  • String: "".join(fd)
  • Bytes: b"".join(fd)
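To illustrate the problem, here is a hypothetical read_all helper (the name and the heuristics are my own, not a proposed API) that papers over both the read()-style and the iterate-chunks-style interfaces, and over str vs. bytes content:

```python
import io


def read_all(stream):
    """Read an entire stream regardless of its interface (illustrative only)."""
    if hasattr(stream, "read"):
        # Local files, BytesIO/StringIO, and other file-like objects.
        return stream.read()
    # Chunked readers, e.g. requests.Response with stream=True, only
    # support iteration; join the chunks with the matching empty value.
    chunks = list(stream)
    if chunks and isinstance(chunks[0], str):
        return "".join(chunks)
    return b"".join(chunks)
```

A DataPipe built on top of such a helper would not have to care which kind of handle an upstream pipe produced.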

@ejguan
Contributor Author

ejguan commented Oct 22, 2021

In the case of Extractor or Decompressor, the yielded file handle should also be wrapped by the StreamWrapper, to make sure the fd is closed automatically and to provide a unified API like read.

@ejguan
Contributor Author

ejguan commented Dec 9, 2021

Closing this Issue as the Wrapper has landed.

@ejguan ejguan closed this as completed Dec 9, 2021