
Conversation

@ProgramadorArtificial (Contributor) commented May 30, 2024

@manujosephv. We talked some time ago in PR #387, and you said that changing batch_size is important and that you wanted to keep this feature. I did not understand how to do it, so you committed a change to my fork to allow changing batch_size in the tuner. I have now tested it, but it is not working. :)

First, it is necessary to remove the ValueError:
[screenshot]

But even after that change it is still not working. I tried to train a Node model with batch sizes [64, 2048, 4096]; 64 is feasible, while 2048 and 4096 were supposed to cause an OOM error. That did not happen: every trial trained normally, using the batch size configured in TrainerConfig.

Can you check it for me? Otherwise, we could either do it the way I did (which you said is slower) or remove this feature from the tuner.


📚 Documentation preview 📚: https://pytorch-tabular--449.org.readthedocs.build/en/449/

@manujosephv (Collaborator)

Hmmm... Holding back the PR for batch_size doesn't make sense. I say let's go ahead without batch_size as an allowable parameter. Let's raise an error if "batch_size" is included as a parameter.
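The guard being proposed could look something like the sketch below. Note this is a hypothetical illustration, not pytorch-tabular's actual API: the function name `validate_search_space` and the dict-based search space are assumptions made for the example.

```python
# Hypothetical sketch of the proposed guard: reject "batch_size" from a
# tuner search space up front, since changing it per-trial is not
# supported. Names here are illustrative, not pytorch-tabular's API.

def validate_search_space(search_space: dict) -> dict:
    """Raise ValueError if 'batch_size' is among the tuned parameters."""
    if any("batch_size" in key for key in search_space):
        raise ValueError(
            "'batch_size' cannot be tuned by the tuner; "
            "set it in TrainerConfig instead."
        )
    return search_space


# A space that tries to tune batch_size is rejected before tuning starts.
try:
    validate_search_space({"trainer_config__batch_size": [64, 2048, 4096]})
except ValueError as e:
    print(f"Rejected: {e}")

# A space without batch_size passes through unchanged.
validate_search_space({"optimizer_config__lr": [1e-3, 1e-4]})
```

Failing fast like this avoids the silent behavior reported above, where the tuner appears to accept a batch_size grid but every trial actually runs with the TrainerConfig value.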

@ProgramadorArtificial (Contributor, Author)

Great, I will review it and remove the examples that use it.

@ProgramadorArtificial (Contributor, Author)

@manujosephv. I updated the tutorial without the trainer_config.

@manujosephv changed the title from "Change batch_size in tuner" to "Add tuner return best model>>Change batch_size in tuner" on Jun 6, 2024
@manujosephv changed the title from "Add tuner return best model>>Change batch_size in tuner" to "Fix to Tuner change trainer and optimizer configs>>Change batch_size in tuner" on Jun 6, 2024
@manujosephv merged commit 01d4003 into pytorch-tabular:main on Jun 6, 2024
@manujosephv (Collaborator)

merged 🥳
