This repository has been archived by the owner on Jul 22, 2024. It is now read-only.

Running FedMA with large input data shape #5

Open
jefersonf opened this issue Jul 31, 2020 · 0 comments
@jefersonf

Hi @hwang595, a few weeks ago I asked some questions in another issue thread about a problem I had when trying to train a model with an input image shape greater than or equal to 224x224. Since then, I tried reducing my problem to the default size, i.e. 32x32, and it worked well! But when I run with 224x224, I'm still stuck in this training part.

So I'm gonna ask my questions here again:

  • Is there a relationship between the training input size and the FedMA communication process? If so, what can we do about it?
  • When adding a different model, which parts of the code should I take care of, besides changing, for example, the input dimensions to 1x224x224?

Note: As I'm working with medical images, it is critical to resize them.

Thanks for the great work!
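For context on the second question: one thing that concretely changes with the input size is the flattened feature dimension feeding the first fully connected layer, which in turn changes the weight-matrix shapes that FedMA has to match. The sketch below is only an illustration of that arithmetic, assuming a VGG-style stack of five 2x2 max-pools; the function names and layer list are hypothetical and not part of FedMA's actual code.

```python
def conv_out_size(size, kernel, stride=1, padding=0):
    """Spatial output size of one conv/pool layer along one axis."""
    return (size + 2 * padding - kernel) // stride + 1

def flattened_features(input_size, layers, channels_out):
    """Flattened feature count after a stack of conv/pool layers.

    layers: list of (kernel, stride, padding) tuples applied in order.
    channels_out: channel count of the last conv layer.
    """
    s = input_size
    for k, st, p in layers:
        s = conv_out_size(s, k, st, p)
    return channels_out * s * s

# Five 2x2 max-pools with stride 2, as in a VGG-style network
pools = [(2, 2, 0)] * 5

print(flattened_features(32, pools, 512))   # 512 * 1 * 1 = 512
print(flattened_features(224, pools, 512))  # 512 * 7 * 7 = 25088
```

So with a 224x224 input, the first fully connected layer would need roughly 49 times as many input features as with a 32x32 input, which is one place a hard-coded dimension could cause a hang or mismatch.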
