Use data generator to federated framework when train on large dataset #793
Comments
Thanks for asking on SO! Dropping a link here, will close when there is an accepted answer.
Hi! Thanks so much for your reply on SO! Unfortunately I still can't work it out. Do you know any way to change the model training process on local clients?
@zm17943 could you take a look at the example in this StackOverflow answer? This does not load all clients at once; only the clients in one round of computation are used at a time, and then the
Thank you! I have looked into the StackOverflow answer, and adjusted my code to load one client at a time. However, I am still confused about the use of real-time data augmentation. For example, can I use `tf.data.Dataset.from_generator` to load data into Federated?
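For what it's worth, wrapping a single client's data in `tf.data.Dataset.from_generator` is straightforward. Here is a minimal sketch; the synthetic in-memory arrays, the 28x28 image shape, and the `make_client_dataset` helper are assumptions standing in for real per-client files read from disk:

```python
import numpy as np
import tensorflow as tf

def make_client_dataset(client_images, client_labels, batch_size=32):
    """Wrap one client's examples in a tf.data pipeline built from a generator."""
    def gen():
        for x, y in zip(client_images, client_labels):
            yield x, y  # generator exhausts naturally, so the dataset is finite

    ds = tf.data.Dataset.from_generator(
        gen,
        output_signature=(
            tf.TensorSpec(shape=(28, 28), dtype=tf.float32),
            tf.TensorSpec(shape=(), dtype=tf.int32),
        ),
    )
    return ds.batch(batch_size)

# Synthetic stand-in for one client's on-disk data.
images = np.random.rand(100, 28, 28).astype(np.float32)
labels = np.random.randint(0, 10, size=100).astype(np.int32)
client_ds = make_client_dataset(images, labels, batch_size=20)
```

A list of such per-client datasets (one per sampled client) is the shape of input that a TFF simulation round expects.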
Hi, I tried to use `tf.data.Dataset.from_generator` to train a federated model, but this step took forever.
I tried to reduce the batch size and the number of trainable parameters to speed it up, but it is still slow. I was wondering how to diagnose the training process?
I have the exact same issue!
One thing that I might investigate here: try adding a `take` call to bound the dataset. If TFF is given an infinite dataset, a training round will never finish. I am thinking this way because if your generator never raises `StopIteration`, there is no point at which the dataset ends, so iterating over it runs forever.
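The bounding idea above can be sketched as follows; `infinite_gen` is a hypothetical stand-in for a generator that never raises `StopIteration`:

```python
import itertools
import tensorflow as tf

# An endless generator: without a stopping condition, any loop that tries
# to exhaust a dataset built from it will run forever.
def infinite_gen():
    for i in itertools.count():
        yield float(i % 10)

ds = tf.data.Dataset.from_generator(
    infinite_gen, output_signature=tf.TensorSpec(shape=(), dtype=tf.float32)
)

# Bounding the stream with take() gives each training round a finite epoch.
finite_ds = ds.take(50).batch(10)
num_batches = sum(1 for _ in finite_ds)  # 5 batches of 10 elements
```

If training hangs, checking whether every client dataset terminates when iterated in plain Python is a quick way to rule this cause out.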
Yes, I am using
We seem to be getting this question along multiple channels, so for ease of discoverability we would prefer to consolidate on StackOverflow. Please see the discussion here, and open a question there if that does not suit your needs. Thanks!
Hi!
I was very glad to customize my own data and model to the federated interfaces, and the training converged!
Now I am confused about an issue: in an image classification task, the whole dataset is extremely large, so it can neither be stored in a single `federated_train_data` nor imported into memory at one time. So I need to load the dataset from the hard disk into memory in batches in real time, and use Keras `model.fit_generator` instead of `model.fit` during training, which is the approach people use to deal with large data. I suppose that in the `iterative_process` shown in the image classification tutorial, the model is fitted on a fixed set of data. Is there any way to adjust the code to let it fit a data generator? I have looked into the source code but am still quite confused. I would be incredibly grateful for any hints.