Running net2 without a GPU #1
Update 2: Okay, the issue is that train_test_split splits the incoming data, and after that split the number of samples still needs to be a multiple of the batch size. For regression, the KFold cross-validator from sklearn is used, so setting the neural net's eval_size appropriately fixes the problem. For classification, however, StratifiedKFold is used, which makes the size of the train/test split unpredictable; for that case I see no easy fix right now.

Update 1: The issue seems to be more complicated now. The remainder depends on the train/test split and therefore behaves somewhat unpredictably. Still investigating...

I had a similar problem once, so maybe the same solution applies. Back then, the layer expected all batches to have the same size, but when the iterator runs out of samples, it yields however many are left (in your case 48 -- change the batch size and check whether this number changes accordingly).
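For concreteness, here is a minimal sketch of the arithmetic behind that leftover batch. The helper is hypothetical (not nolearn API), and nolearn's then-default eval_size of 0.2 is an assumption:

```python
# Hypothetical helper illustrating the remainder problem described above:
# after the train/eval split, the last batch holds whatever samples remain.
def last_batch_size(n_samples, eval_size, batch_size):
    n_train = int(n_samples * (1 - eval_size))  # size of the training split
    remainder = n_train % batch_size
    return remainder if remainder else batch_size

# With the tutorial's 2140 samples, an eval_size of 0.2 and batch_size=128,
# the training split has 1712 samples, so the final batch contains only
# 48 -- the number that appears in the ConvOp error below.
print(last_batch_size(2140, 0.2, 128))  # -> 48
```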
Hope this helps. A fix in Lasagne/nolearn would be much appreciated, Daniel :)
@BenjaminBossan Thanks for your reply. Since this tutorial has 2140 training samples, I set eval_size to 0.1 and changed input_shape to input_shape=(214, 1, 96, 96). In addition, I changed batch_iterator=BatchIterator(batch_size=128) to batch_iterator=BatchIterator(batch_size=214) in nolearn/lasagne.py. After that, net2 could start training.
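As a sanity check on those numbers (a standalone sketch, not tutorial code): with eval_size=0.1 and batch_size=214, both splits divide evenly into batches, which is why training could start.

```python
# Verify the workaround above: 2140 samples with eval_size=0.1 leave
# 1926 training samples (exactly 9 batches of 214) and an evaluation
# split of exactly one batch of 214.
n_samples, eval_size, batch_size = 2140, 0.1, 214
n_train = int(n_samples * (1 - eval_size))
assert n_train % batch_size == 0                # 1926 == 9 * 214
assert (n_samples - n_train) % batch_size == 0  # 214 == 1 * 214
```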
Anyone feel like making a pull request for |
There's now a flag called |
Forget about
First, I really appreciate your effort in making this tutorial and the nolearn wrappers. With nolearn I can tweak deep neural networks as easily as with scikit-learn.
While going through your tutorial I ran into a problem fitting net2, i.e. when calling net2.fit(X, y): I got the error message "CudaNdarrayType only supports dtype float32 for now. Tried using dtype float64 for variable None". I don't have a GPU on my Mac, so I modified the GPU-specific parts of the code to run on the CPU instead.
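Presumably the modification swapped Lasagne's GPU-only cuda_convnet layers for their generic counterparts; a sketch, with the original imports assumed rather than quoted from the tutorial:

```python
# Assumed original (GPU) imports -- Lasagne's cuda_convnet layers
# require a CUDA device and do not work on a CPU-only Mac:
#   from lasagne.layers.cuda_convnet import Conv2DCCLayer as Conv2DLayer
#   from lasagne.layers.cuda_convnet import MaxPool2DCCLayer as MaxPool2DLayer
# CPU-only replacement: Lasagne's generic layer implementations.
from lasagne.layers import Conv2DLayer, MaxPool2DLayer
```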
Unfortunately a new error occurred: "the batch size in the image (48) at run time is different than at build time (128) for the ConvOp."
I had already set the batch size to 128, yet a strange batch size of 48 appears in the error message.
Could you please give me some advice about this problem? Should I give up running the code on my Mac and find a GPU?