
handling large-scale datasets with distributed dataloaders for iterative datasets #109

Open
rabeehk opened this issue Nov 7, 2020 · 0 comments

Comments


rabeehk commented Nov 7, 2020

Hi,
I have multiple large-scale datasets in TFDS format which need to be converted to iterative (iterable-style) datasets, and I want to train a large-scale T5 model on them on TPUs. For this I need a distributed dataloader that can handle iterative datasets efficiently with PyTorch XLA. Here is an example of what I do when the datasets are not iterative:

```python
return DistributedSampler(dataset, num_replicas=xm.xrt_world_size(), rank=xm.get_ordinal())
```
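
For reference, this is roughly how that sampler plugs into a loader for the map-style case; a minimal sketch, where the dummy TensorDataset and batch size are placeholders for my real preprocessed data:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# placeholder for the real preprocessed dataset
train_dataset = TensorDataset(torch.randint(0, 32000, (1000, 128)))

sampler = DistributedSampler(
    train_dataset,
    num_replicas=xm.xrt_world_size(),  # number of TPU cores / processes
    rank=xm.get_ordinal())             # index of the current process

loader = DataLoader(train_dataset, batch_size=8, sampler=sampler)
# MpDeviceLoader feeds each batch to the local TPU core
device_loader = pl.MpDeviceLoader(loader, xm.xla_device())
```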

I would appreciate examples of how to handle large-scale TFDS datasets with a distributed dataloader so that I can train models with your library.
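
To make the question concrete, something along these lines is what I have in mind for the iterative case: a rough sketch that shards the underlying tf.data pipeline by process rank (the dataset name and the direct torch.as_tensor conversion are placeholders; my real data would be tokenized text for T5):

```python
import tensorflow_datasets as tfds
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
from torch.utils.data import DataLoader, IterableDataset


class TFDSIterableDataset(IterableDataset):
    """Streams a TFDS split, sharded so each TPU process sees a disjoint slice."""

    def __init__(self, name, split, world_size, rank):
        self.name = name
        self.split = split
        self.world_size = world_size
        self.rank = rank

    def __iter__(self):
        ds = tfds.load(self.name, split=self.split)
        # keep every world_size-th example, offset by this process's rank
        ds = ds.shard(num_shards=self.world_size, index=self.rank)
        for example in tfds.as_numpy(ds):
            # placeholder conversion; real text data would be tokenized here
            yield {k: torch.as_tensor(v) for k, v in example.items()}


# inside the per-process training function spawned by xmp.spawn
world_size, rank = xm.xrt_world_size(), xm.get_ordinal()
train_ds = TFDSIterableDataset("mnist", "train", world_size, rank)
loader = DataLoader(train_ds, batch_size=8)  # no sampler for IterableDataset
device_loader = pl.MpDeviceLoader(loader, xm.xla_device())
```

Is something like this the right pattern, or does the library already provide a distributed loader for iterative datasets?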

Thanks.
Best
Rabeeh
