I wonder if I could get some help with my own RGB input #11
Comments
Hi. To test with your own images, simply read them in as a 4-dimensional PyTorch floating-point tensor of size 1x224x224x3. The raw RGB values in [0, 255] should be divided by a constant factor of 255.0, such that all pixel values fall in the range [0, 1].
I tried doing this, but I got this error: "RuntimeError: Given groups=1, weight of size 32 3 3 3, expected input[1, 244, 244, 3] to have 3 channels, but got 244 channels instead". My image is 244x244 and I'm giving the right format, as you can see here:
So I don't know why this error is occurring, if I'm giving the exact same format.
The PyTorch conv2d function assumes inputs to be in 'NCHW' format, meaning that the tensor you feed into the network should be of shape [1, 3, 224, 224]. From your code snippet, you may be using 'NHWC' format -- try permuting the tensor dimensions to change to 'NCHW'.
Also, the correct image size is 224 x 224, not 244 x 244.
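Putting the two corrections together, a minimal preprocessing sketch might look like the following (PIL for loading, the file name, and the `model` variable are assumptions, not part of the repo):

```python
# Sketch: load an RGB image, scale to [0, 1], and convert HWC -> NCHW for conv2d.
import numpy as np
import torch
from PIL import Image

img = Image.open("my_image.png").convert("RGB").resize((224, 224))  # 224 x 224, not 244 x 244
rgb = np.asarray(img, dtype=np.float32) / 255.0      # raw [0, 255] values scaled to [0, 1]
tensor = torch.from_numpy(rgb)                       # shape [224, 224, 3] (HWC)
tensor = tensor.permute(2, 0, 1).unsqueeze(0)        # shape [1, 3, 224, 224] (NCHW)

with torch.no_grad():
    pred = model(tensor)                             # `model`: the loaded FastDepth network (assumed)
```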
Have you divided the input RGB values by 255.0, as in this line? Line 56 in b1266da
Not exactly like this.
I believe it should be a permutation of dimensions here, rather than reshaping (which breaks the data ordering). Please try permuting instead.
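To illustrate the difference on a toy tensor (not from the repo): reshape keeps the flat memory order and just relabels dimensions, so the resulting "channels" are not the original R/G/B planes, whereas permute moves whole axes:

```python
import torch

x = torch.arange(2 * 2 * 3).reshape(2, 2, 3)  # toy HWC tensor, H = W = 2, C = 3

reshaped = x.reshape(3, 2, 2)   # relabels memory order: mixes pixels across channels
permuted = x.permute(2, 0, 1)   # moves the channel axis: each slice is one original channel

print(reshaped[0])  # tensor([[0, 1], [2, 3]])  -- not channel 0
print(permuted[0])  # tensor([[0, 3], [6, 9]])  -- channel-0 value of every pixel
```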
Thanks for the work @dwofk @fangchangma.
The output from the first iteration looks good, but at each iteration the output is different from the output of other iterations, even with the same input image (see the pic below). I printed the pred values and found that they do differ from the previous iteration, even with the same input image and the same model. Is there anything I missed when using the model?
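One way to rule out the PyTorch side before suspecting the deployment stack is to run the same tensor through the model twice in eval mode (a sketch; `model` and `tensor` are assumed to be the loaded network and the preprocessed input):

```python
import torch

model.eval()                       # disable dropout / freeze batch-norm statistics
with torch.no_grad():
    out1 = model(tensor)
    out2 = model(tensor)

print(torch.allclose(out1, out2))  # True here suggests the variation comes from elsewhere (e.g. TVM)
```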
@GustavoCamargoRL Do you have the same issue?
@mathmax12 Have you done this using Apache TVM?
@LulaSan It turns out this was caused by TVM; the latest TVM release solved it.
@mathmax12 OK, thank you. Can I ask how you visualize the results? By using their visualize.py code?
You can save the results as in https://github.com/dwofk/fast-depth/blob/master/main.py#L98
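If you only need a quick look at the prediction, an alternative to the repo's utilities is to dump a colormapped PNG directly (a sketch, assuming `pred` is the model's [1, 1, 224, 224] output tensor):

```python
import matplotlib.pyplot as plt

depth = pred.squeeze().cpu().numpy()              # -> [224, 224] depth map
plt.imsave("pred_depth.png", depth, cmap="viridis")
```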
I'm trying to test with my own inputs, but I'm not quite sure how to do it.
I thought it was in the dataloader.py code, but when I tried debugging it, apparently that class is for the NYU dataset, right?
If you could explain how to properly do it, it would be very helpful.
Thanks!