4 channel test: Training dataset has fewer elements than batch size #1114
Comments
Please share the full command that you're using to run this. Also, can you look at the zip files in

Another thing I would suggest is to change

```python
chip_options = SemanticSegmentationChipOptions(
    window_method=SemanticSegmentationWindowMethod.random_sample,
    chips_per_scene=10)
```

to

```python
chip_options = SemanticSegmentationChipOptions(
    window_method=SemanticSegmentationWindowMethod.sliding)
```

and see if that helps.
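To see why the window method matters for this error, here is a hypothetical sketch (not Raster Vision's actual implementation) of the difference in chip counts: `random_sample` with `chips_per_scene=10` caps the training set at 10 chips per scene, while `sliding` tiles the whole scene deterministically. The function name and numbers below are illustrative only.

```python
# Illustrative only: how a sliding window over one scene determines the
# number of training chips, versus a fixed random-sample count.
def sliding_chip_count(img_h, img_w, chip_sz, stride):
    """Number of full chips a sliding window produces over one scene."""
    rows = (img_h - chip_sz) // stride + 1
    cols = (img_w - chip_sz) // stride + 1
    return rows * cols

# A 650x650 scene chipped at 300px with stride 300 yields a 2x2 grid:
print(sliding_chip_count(650, 650, 300, 300))  # 4
```

With very few scenes, either method can still produce fewer chips than the batch size, which is the error reported here.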
I am running

then I changed to

Looking in /opt/data/output/chip/valid/img/: it contains 50 .npy objects. End of run before crash:
Can you share the output log for the train command? That is, everything after
Thank you Adeel for superfast feedback! :-)
The error seems to be occurring because the train stage is incorrectly assuming that there are only 3 channels and is therefore looking for .png files instead of .npy files. We can specify the correct number of channels explicitly by changing

```python
data=SemanticSegmentationImageDataConfig(),
```

to

```python
data=SemanticSegmentationImageDataConfig(img_channels=len(channel_order)),
```

I think this should fix the problem.
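As a minimal sketch of why the channel count matters here: a 4-band chip (R, G, B, elevation) round-trips through .npy intact, whereas PNG-style readers are limited to 3 (or 4 RGBA) display channels. The file path below is illustrative.

```python
import os
import tempfile
import numpy as np

# A 4-channel chip such as the ones produced by the chip stage
# (R, G, B, elevation); values are placeholders.
chip = np.zeros((256, 256, 4), dtype=np.float32)

path = os.path.join(tempfile.mkdtemp(), "chip.npy")
np.save(path, chip)          # .npy preserves all 4 bands
loaded = np.load(path)
print(loaded.shape)          # (256, 256, 4)
```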
Should I define something more in my code? Getting
You seem to have run up against an edge case in plotting samples from the dataset, caused by batch size = 1. Good catch, this is another bug. Increasing the batch size should fix this particular error. I also notice that you are setting channel_display_groups but not passing it in. Change it to:

```python
data=SemanticSegmentationImageDataConfig(
    img_channels=len(channel_order),
    channel_display_groups=channel_display_groups),
```
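Batch size 1 is a classic edge case in plotting code: squeezing singleton axes out of a batch of one also collapses the batch dimension. The sketch below shows the general class of failure, not necessarily the exact bug in Raster Vision's plotting code.

```python
import numpy as np

batch = np.ones((1, 4, 64, 64))   # batch_size=1, 4 channels, 64x64 chips
squeezed = batch.squeeze()        # intended: drop only "extra" singleton axes
print(squeezed.shape)             # (4, 64, 64) -- the batch axis is gone too

# Plotting code that iterates "one subplot per sample" now sees 4
# "samples" (really channels), or fails indexing a length-1 axes array.
# Keeping the batch axis explicit avoids this:
safe = batch.reshape(batch.shape[0], -1, 64, 64)
print(safe.shape)                 # (1, 4, 64, 64)
```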
Seems to work now.
You need to enable GPU usage when running Docker. Depending on your Docker version, you will need to pass in either
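For context, a hedged sketch of the two common forms of GPU-enabled `docker run` (the image name is a placeholder, and which flag applies depends on your Docker version; this is not necessarily the exact command the maintainer meant):

```shell
# Docker 19.03+ has built-in GPU support via the --gpus flag:
docker run --gpus all --rm -it <your-raster-vision-image> bash

# Older Docker versions with nvidia-docker2 installed use --runtime instead:
docker run --runtime=nvidia --rm -it <your-raster-vision-image> bash
```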
Getting the GPU working in Docker is a bit cumbersome on Windows (working on it). But I updated the RAM, so the pipeline is running now. But now I get
You are setting img_sz but not passing it in. Change

```python
data=SemanticSegmentationImageDataConfig(img_channels=len(channel_order)),
```

to

```python
data=SemanticSegmentationImageDataConfig(img_channels=len(channel_order), img_sz=img_sz),
```
Works now. Thanks!
Now, when running this script, I get `UnpicklingError: unpickling stack underflow`. Why is that?
Maybe the download didn't complete?
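An incomplete download fits the error well: "unpickling stack underflow" is what `pickle` raises when the byte stream is malformed, e.g. a STOP opcode appears without the data it should finalize. A minimal reproduction:

```python
import pickle

# b"." is just pickle's STOP opcode with nothing on the stack -- the
# same shape of corruption a partially downloaded file can produce.
try:
    pickle.loads(b".")
except pickle.UnpicklingError as e:
    print(e)  # unpickling stack underflow
```

Re-downloading the file (and checking its size against the source) is the usual fix.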
Hi!
I am trying to train on 4 channels (R, G, B, elevation). I am using the master branch in a Docker image with local data.
After many tries I get the same error when the run reaches the train command: 'Training dataset has fewer elements than batch size.'
I tried to set the batch size to 1 and increase the number of epochs; I also tried to both train and validate on image 2 instead of image 3.
But I get the same error every time.
Can't figure out if it's something in my code or the data I have to change?
Message:

```
File "/opt/src/rastervision_pytorch_learner/rastervision/pytorch_learner/learner.py", line 541, in setup_data
    'Training dataset has fewer elements than batch size.')
```
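The traceback points at a guard in setup_data; what such a check amounts to can be sketched as follows (assumed from the error message, not the actual learner.py source):

```python
def check_dataset(train_ds_len: int, batch_size: int) -> None:
    # Sketch of the guard implied by the traceback above (assumed,
    # not the actual Raster Vision source).
    if train_ds_len < batch_size:
        raise ValueError(
            'Training dataset has fewer elements than batch size.')

check_dataset(50, 8)  # ok: 50 chips, batch size 8
# check_dataset(0, 8) would raise -- e.g. when the chip stage produced
# no training windows at all for the training scenes.
```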
My data:
https://drive.google.com/drive/folders/1ed0NpcjWOdkiSEuliszkDmytuLqVrdO5?usp=sharing
Image 1
Image 2
Image 3