Any thoughts or intuitions on why the training data is split in half, rather than the more common practice of holding out 15–30 percent for validation? Also, does the sampling have to come from the training set, or can it come from a different distribution of validation data, as in a standard train/validation/test split?
Feel free to close after answering.
Regards.
We didn't try other ways of splitting the data. I think any kind of sampling/split would be fine as long as we don't touch the test set. Arguably, it would be easier to distinguish the generalization ability of different architectures with a relatively small training-to-validation ratio.
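For concreteness, a minimal sketch of the split being discussed: the original training pool is divided 50/50 into train and validation subsets, while the test set is held out separately and never touched. The pool size and random seed here are hypothetical, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training pool of 1000 examples; the test set is assumed to be
# held out elsewhere and is never sampled from.
n_pool = 1000
indices = rng.permutation(n_pool)

# 50/50 train/validation split (vs. a more common 80/20 split, which would
# use e.g. n_train = int(0.8 * n_pool) instead).
half = n_pool // 2
train_idx, val_idx = indices[:half], indices[half:]

print(len(train_idx), len(val_idx))  # 500 500
```

Swapping `half` for a different cutoff reproduces any other ratio; the key constraint from the reply above is only that the split stays within the original training pool.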