@benanne First of all, thank you for sharing your code! I am especially interested in your online (real-time) data augmentation module, which I would like to use to prevent overfitting. However, I am completely lost in realtime_augmentation.py. My main question is: if I have loaded the training dataset X_train (a numpy.array), how can I get augmented data X_augment using your functions? Thank you!
It's a bit of a mess because the code grew organically as the competition progressed. The code is written specifically for the galaxy challenge dataset, and it loads the JPEGs on the fly every time to save memory. If you have an array with training datapoints already loaded, you will have to modify this. It's definitely possible, but unfortunately I don't have much time right now to provide guidance.
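For what it's worth, here is a minimal pure-numpy sketch of what on-the-fly augmentation of an already-loaded array could look like. Note this does not use the repo's realtime_augmentation.py functions at all; `augment_batch` is a hypothetical helper, and it assumes X_train is a float array of shape (N, H, W) of single-channel images. It only applies random flips and small circular shifts, not the full affine pipeline the repo implements.

```python
import numpy as np

def augment_batch(X, max_shift=2, rng=None):
    """Randomly flip and shift each image in a batch (illustrative sketch only).

    X: numpy array of shape (N, H, W). This is NOT the repo's augmentation
    pipeline; it just shows the idea of producing X_augment from an
    in-memory X_train each time a batch is drawn.
    """
    rng = np.random.default_rng() if rng is None else rng
    X_aug = X.copy()
    for i in range(len(X_aug)):
        if rng.random() < 0.5:                      # random horizontal flip
            X_aug[i] = X_aug[i][:, ::-1]
        if rng.random() < 0.5:                      # random vertical flip
            X_aug[i] = X_aug[i][::-1, :]
        # small random circular translation (shape-preserving)
        dy = int(rng.integers(-max_shift, max_shift + 1))
        dx = int(rng.integers(-max_shift, max_shift + 1))
        X_aug[i] = np.roll(X_aug[i], shift=(dy, dx), axis=(0, 1))
    return X_aug

# Usage: call once per training batch so every epoch sees different variants.
# X_augment = augment_batch(X_train)
```

In a training loop you would call this on each minibatch rather than augmenting the whole dataset up front, which is what makes the augmentation "real-time".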