Weird boundaries and fixed output size #33
Comments
net.blobs['data'].reshape(1, 3, 200, 200) should work. Also look up Caffe net surgery for changing the output.
Thanks for your reply. The code I use is basically from the tutorial. Both your suggestion and resizing the image in Python changed the size, but neither removes the image boundary. In fact, I have realized that the output is displaced by exactly 32 pixels to the bottom and right and then cropped, regardless of the image size.
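The displacement described above can be reproduced and worked around in plain numpy. This is only a sketch of the reported symptom, not the network itself: the 32-pixel offset is taken from the observation in this thread, and the "ideal" map stands in for whatever the network should have produced.

```python
import numpy as np

OFFSET = 32  # displacement reported in this thread (pixels, down and right)

h, w = 200, 200
# Stand-in for the edge map the network should produce, aligned to the input.
ideal = np.arange(h * w, dtype=np.float32).reshape(h, w)

# Simulate the reported artifact: the output is shifted OFFSET pixels
# down and to the right, then cropped back to the input size, leaving
# a spurious boundary along the top and left.
shifted = np.zeros_like(ideal)
shifted[OFFSET:, OFFSET:] = ideal[:h - OFFSET, :w - OFFSET]

# Post-hoc workaround: slice off the displaced border. What remains
# matches the corresponding region of the aligned map exactly.
recovered = shifted[OFFSET:, OFFSET:]
assert np.array_equal(recovered, ideal[:h - OFFSET, :w - OFFSET])
```

This only trims the misaligned border after the fact; fixing the crop offsets inside the network (as discussed later in the thread) is the proper solution.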
I had the same problem when running the code.
I have exactly the same problem.
I also have the same problem. The output edge map is not aligned to the input image. |
I am also seeing the same problem.
Same problem. |
Just found a solution: set crop_param { … } (the rest of the snippet is missing here). Thanks to this post: https://medium.com/@s1ddok/holistically-nested-edge-detection-on-ios-with-coreml-and-swift-e45df264cf66
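Since the snippet above is truncated, here is a hedged sketch of what a Crop layer with an explicit offset looks like in a Caffe deploy prototxt. The layer and blob names are hypothetical placeholders, and the offset of 32 is an assumption taken from the 32-pixel displacement reported earlier in this thread; the linked post should be consulted for the exact values used there.

```protobuf
layer {
  name: "crop-dsn"            # hypothetical layer name
  type: "Crop"
  bottom: "upscore-dsn"       # hypothetical: upsampled side output to crop
  bottom: "data"              # reference blob giving the target spatial size
  top: "upscore-dsn-crop"
  crop_param {
    axis: 2                   # crop the spatial dimensions (H, W)
    offset: 32                # assumed from the 32 px shift reported above
  }
}
```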
I'm running into the same problem. Could you share your solution?
Hi,
I am running your provided model on arbitrary images, but I get weird boundaries on the top and left sides. I could obviously just crop them out, but the errors seem to propagate to lower resolution levels:
Do you know how to fix this problem? Furthermore, is it possible to change the output size of a network without retraining? I noticed that changing the input size of the image in the prototxt file does not change anything.