--num-processes causes build to error #5278
Parallelization is done by processing each config in a separate process (see `tensorflow_datasets/scripts/cli/build.py`, lines 314 to 315 at 69e781f). This means there's no reason to set `--num-processes` higher than the number of distinct configs for your dataset.
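The per-config fan-out can be sketched roughly as follows. This is a paraphrase, not the actual `build.py` code, and `build_config` is a hypothetical stand-in for preparing one config:

```python
# A rough paraphrase (NOT the actual TFDS build.py code) of how the CLI
# fans configs out to worker processes: one config per process, so
# workers beyond the number of configs never receive any work.
from multiprocessing import Pool

def build_config(config_name):
    # Hypothetical stand-in for "download and prepare one config".
    return f"built {config_name}"

def build_all(configs, num_processes):
    # Capping the pool size makes the idle-worker point explicit.
    with Pool(processes=min(num_processes, len(configs))) as pool:
        return pool.map(build_config, configs)
```

So with three configs, `--num-processes 12` behaves the same as `--num-processes 3`.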
Because of the way multiprocessing works in Python, each builder gets pickled (including references to your builder code), and the child processes may not be able to resolve those references. You could make your dataset builder code discoverable by Python (e.g. by including it in TFDS and reinstalling the library locally). Or perhaps you could consider implementing parallelization in your dataset builder, or use Beam.

#5279 should make child processes aware of your dataset builder.
I can now run with the `--num-processes` flag, which is pretty cool. Thank you for fixing that!

Is it possible to effectively split my dataset into different configs, then merge them together when done?

My code unfortunately isn't the slow part. It's my abuse of TFDS, due to me having 138 features in my dataset. My code takes 2.5 seconds per 3000 examples, meanwhile my generation happens at 80 examples per second.
Great, really glad that it worked for you! I'll be closing this issue then.

TFDS doesn't natively support mixing datasets, but you can use some other tools for that, e.g. https://github.com/google/seqio

It's usually very straightforward to parallelize example generation (see `tensorflow_datasets/core/dataset_builders/huggingface_dataset_builder.py`, lines 436 to 447 at 38727f7). Or with Beam (see `tensorflow_datasets/datasets/beir/beir_dataset_builder.py`, lines 302 to 320 at 38727f7).
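As a rough illustration of the thread-based approach in the linked huggingface builder code: convert examples on a thread pool and yield `(key, example)` pairs. The names below (`convert`, `generate_examples`) are hypothetical stand-ins, not the TFDS API:

```python
# Hypothetical sketch of thread-parallel example generation, similar in
# spirit to the linked huggingface_dataset_builder code. The slow
# per-example work runs on worker threads; results are yielded in order.
from concurrent import futures

def convert(record):
    # Stand-in for the slow per-example conversion step.
    return record["id"], {"value": record["value"] * 2}

def generate_examples(records, num_workers=8):
    with futures.ThreadPoolExecutor(max_workers=num_workers) as pool:
        # pool.map preserves input order, so keys stay deterministic.
        yield from pool.map(convert, records)
```

With Beam the same idea scales out further: as the linked beir builder shows, a Beam-based `_generate_examples` can return a `PCollection` and let the Beam runner handle the parallelism.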
Ok. Thanks for your help with this. I'll look into parallel example generation and see if I can get that working with my dataset.
Hi, I have unfortunately encountered the same issues and had no luck with my attempts to fix them. However, I think I managed to bypass this using:

```python
class Builder(tfds.core.GeneratorBasedBuilder, skip_registration=True):
  VERSION = tfds.core.Version("1.0.0")
  BUILDER_CONFIGS: ClassVar[list[tfds.core.BuilderConfig]] = [
      tfds.core.BuilderConfig(name=str(group)) for group in range(1, 11)
  ]
```

then running the CLI per group, one by one:

```
tfds build my_dataset --config 1
```

Later we can load and combine the datasets as below:

```python
ds1 = tfds.load('my_dataset/1', split='train')
ds2 = tfds.load('my_dataset/2', split='train')
ds = tf.data.Dataset.sample_from_datasets([ds1, ds2])
```

I tested locally, and it does seem to work. Please let me know if there are any hidden traps 🙌 If not, I hope it can help others who are facing the same issues!
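One caveat with the snippet above: `tf.data.Dataset.sample_from_datasets` interleaves elements by random sampling, so if "merge" is meant as a deterministic end-to-end join, `Dataset.concatenate` may be closer to the intent:

```python
import tensorflow as tf

ds1 = tf.data.Dataset.from_tensor_slices([1, 2])
ds2 = tf.data.Dataset.from_tensor_slices([3, 4])

# Deterministic: all of ds1, then all of ds2.
ds = ds1.concatenate(ds2)
print(list(ds.as_numpy_iterator()))  # → [1, 2, 3, 4]
```

Either way, downstream consumers see a single `tf.data.Dataset`, so the choice only affects element order.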
What I need help with / What I was wondering
So I am trying to run `tfds build myDataset` with multiprocessing, since the dataset is pretty wide and my bottleneck seems to be TensorFlow itself. When using `--num-processes 12` it hangs, with the following error being printed to the terminal. I haven't a clue in the slightest what is called before that to cause the issue. My dataset does build without the flag, it just takes for friggen ever.
What I've tried so far

- Running it again.
- Using `tfds-nightly`.
- Restarting my computer.
- Using `tensorflow-datasets`.
- Yelling at it.
- Asking nicely.
- Reading and failing to understand the source code.
- Complaining on the internet.
It would be nice if...
I may have missed it, but it appears there is no documentation for `--num-processes`. Everything else I have found indicates that TFDS local builds are single-core only. Is it not fully implemented?
Environment information (if applicable)

- `tensorflow-datasets` version: 4.9.4
- `tensorflow` version: 2.14.0

If y'all need any more information, please let me know.