Support for multiple objects with storage_multiupload #51

Closed

anandkkumar opened this issue May 8, 2020 · 5 comments
Labels
enhancement (New feature or request)

Comments

@anandkkumar

First off, I wanted to say that this package is great. Nice work.

Looking at the documentation, it appears I can upload individual objects with rawConnection and storage_upload, or upload multiple files with storage_multiupload.

Is it possible to use storage_multiupload to upload multiple objects that are in memory in R to blob/ADLS storage? If not, is this something that could be added as an enhancement, if it's technically feasible?

I know I can write the objects to temp storage and then multi-upload the individual files, but that incurs unnecessary file I/O and can be inefficient, especially for the large objects we have in memory.

Thank you very much for your time.

anand

@hongooi73
Collaborator

Hi, thanks for the comments!

This is a rather tricky task, mostly because of how to design the interface. Note that there is no UI to upload a single object directly to storage either: if you have, say, a dataframe, you have to serialise it first to a connection object or file, and then upload that.
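
For reference, a minimal sketch of what that single-object workflow looks like today (the endpoint URL, access key, container and blob names below are placeholders):

```r
library(AzureStor)

# placeholder account details -- substitute your own endpoint and key
endp <- storage_endpoint("https://mystorage.blob.core.windows.net", key = "<access-key>")
cont <- storage_container(endp, "mycontainer")

df <- data.frame(x = 1:10, y = rnorm(10))

# serialise the object to a raw vector, wrap it in a connection, then upload
con <- rawConnection(serialize(df, connection = NULL))
storage_upload(cont, src = con, dest = "df.rds")
```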

I'm also not convinced of the utility of multiple uploads of objects. Even if you create a tempfile on disk, your Internet connection is much more likely to be the bottleneck. The exception would be if you are doing wholly within-Azure transfers (e.g. from a VM to a storage account) and everything is in the same region, but then your absolute transfer times are going to be pretty good in any case.

As for object size, you're limited by available memory so there's only so much you can upload at once anyway.

Feel free to continue iterating on this; I'd be interested in knowing more about your particular use case if you believe this is warranted.

@hongooi73 added the enhancement label on May 11, 2020
@anandkkumar
Author

Hi, thanks for the quick response, and sorry for the delay in getting back to you.

At our firm, we run little to nothing on our laptops. All VMs (including the one hosting RStudio Pro) run on Azure, so yes, we are looking at the case of within-Azure transfers where network bandwidth is not a problem (in my experience this is fairly typical of large enterprises with a single cloud provider).

We prefer to keep the disk sizes on our VMs small, so our /tmp storage is not that big. However, we have fairly sizable RAM, fast clock speeds and several cores, and we spin VMs up and down as needed. So memory and CPU utilization are not typical bottlenecks, but I prefer to avoid unnecessary disk I/O.

The use case is being able to take objects that we already have in memory (for example, we generate several covariance matrices using mclapply across all the cores) and then upload them to Azure blob storage in parallel.

Currently, we have to either

  • write all of them to /tmp (if we add space), resulting in unnecessary disk I/O, and then use storage_multiupload
  • write some sort of apply loop ourselves and use storage_upload with rawConnection for each object (roughly as sketched below)
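
For concreteness, a rough sketch of that second option; the upload_objects wrapper is purely illustrative (not part of AzureStor), and cont is assumed to be an existing storage container:

```r
library(AzureStor)
library(parallel)

# illustrative wrapper, not part of AzureStor: serialise each object in a named
# list to a rawConnection and upload the connections in parallel
upload_objects <- function(container, objects)
{
    invisible(mcmapply(
        function(obj, dest)
        {
            con <- rawConnection(serialize(obj, connection = NULL))
            storage_upload(container, src = con, dest = dest)
        },
        objects, names(objects),
        mc.cores = min(length(objects), detectCores())
    ))
}

# usage: the list names supply the destination blob names
# upload_objects(cont, list("cov_a.rds" = cov_a, "cov_b.rds" = cov_b))
```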

It would instead be beneficial to have something already built into your package, similar to storage_multiupload for parallel uploads, but taking a collection (a list?) of objects that we provide instead of file paths.

Hope that clarifies things. Let me know your thoughts.

Thanks

@hongooi73
Collaborator

While you may have small disks on the VMs, I'd still be surprised if you don't have far more disk space than memory, so that shouldn't be a constraint. Note that even if /tmp is a limited filesystem, R's tempfile() lets you choose the directory, so you could write your files to, say, tempfile(tmpdir="~") to put them in your home dir. Or if you have a data disk mounted, you can write there.
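
To illustrate, a minimal sketch of that route; the scratch path is a placeholder, objs is assumed to be a named list of your objects, and cont an existing container:

```r
library(AzureStor)

scratch <- "~/scratch"   # or a mounted data disk, e.g. "/mnt/data"

# write each in-memory object to the scratch area, then upload the batch in one call
for (nm in names(objs))
    saveRDS(objs[[nm]], file.path(scratch, paste0(nm, ".rds")))

storage_multiupload(cont, src = file.path(scratch, "*.rds"), dest = "uploaded")
```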

Similarly, the network may be fast for within-Azure transfers, but assuming you're using SSDs, writing to local storage should still be much faster. So any slowdown from writing to a tempfile should not be a major factor. In particular, any blob upload involves at least 2 API calls with associated latency, so there is a lower limit to how fast things can get.

If you are using mclapply to parallelise your computations, you can also insert the storage_upload as part of the call, rather than doing it separately after the compute is finished. That would save having to wait on the slowest job.
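
Something along these lines, for instance (the cov() call, datasets list and blob names are stand-ins for your actual workload):

```r
library(AzureStor)
library(parallel)

# compute and upload inside the same parallel call: each worker pushes its
# result to blob storage as soon as it finishes, rather than waiting for
# the slowest job before a separate upload step
results <- mclapply(seq_along(datasets), function(i)
{
    m <- cov(datasets[[i]])   # stand-in for the real computation
    con <- rawConnection(serialize(m, connection = NULL))
    storage_upload(cont, src = con, dest = sprintf("cov_%03d.rds", i))
    m
}, mc.cores = detectCores())
```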

@anandkkumar
Author

Yes, I am familiar with all of your points and know of all those possibilities. My only suggestion was to have the package do all of this (parallel uploads et al.) instead of the user implementing it themselves. If you don't think this is something that will be implemented anytime soon, or at all, I can just build something out myself.

Thanks for listening. I really like this package and thanks for your efforts on it.

@hongooi73
Collaborator

Well, it's on the to-do list now, but my current focus for AzureR is on getting the Table storage/CosmosDB package done. So I wouldn't expect any major changes for AzureStor in the short term.
