The architecture is fully convolutional, so it should be possible to scale the input up by powers of 2 and still use the provided weights. If you do so, you have to recompute the geometry related to the prior boxes; the easiest way is to create a new instance of the PriorUtility. If you want to scale the model input down by powers of 2, you have to remove an appropriate number of layers and prediction paths from the end of the architecture. In general, an arbitrary input size requires changing the architecture and retraining the model.
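To see why the prior-box geometry must be recomputed, here is a generic sketch of how SSD-style anchor centers depend on the input size. This is not the repo's PriorUtility implementation, just an illustration of the relationship between feature-map cells and image coordinates:

```python
import numpy as np

def prior_centers(map_size, image_size):
    # One anchor center per feature-map cell, placed at the cell's
    # midpoint in image pixel coordinates.
    step = image_size / map_size
    coords = (np.arange(map_size) + 0.5) * step
    cx, cy = np.meshgrid(coords, coords)
    return np.stack([cx.ravel(), cy.ravel()], axis=-1)

# With a 512 px input and a 4x4 feature map, each cell covers 128 px,
# so the first center sits at (64, 64). Doubling the input to 1024 px
# while keeping the same map size doubles the step, moving every
# center -- which is why the priors must be regenerated.
print(prior_centers(4, 512)[0])   # first center for a 512 px input
print(prior_centers(4, 1024)[0])  # same cell, 1024 px input
```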
You can always resize your image to the model's input size of 512x512 and scale the predicted bounding boxes back to the original image size. Also, consider padding the image to maintain an aspect ratio that matches the features learned from the training data.
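A minimal sketch of the resize-and-scale-back approach, assuming the model outputs pixel-coordinate boxes in `[xmin, ymin, xmax, ymax]` format (the function names are hypothetical, not part of the repo):

```python
import numpy as np

def letterbox_params(orig_w, orig_h, target=512):
    # Scale so the longer side fits the target square, then pad the
    # shorter side equally on both ends to preserve the aspect ratio.
    scale = target / max(orig_w, orig_h)
    new_w, new_h = round(orig_w * scale), round(orig_h * scale)
    pad_x = (target - new_w) / 2
    pad_y = (target - new_h) / 2
    return scale, pad_x, pad_y

def boxes_to_original(boxes, orig_w, orig_h, target=512):
    # boxes: (N, 4) array of [xmin, ymin, xmax, ymax] in the padded
    # 512x512 model coordinates; returns original-image coordinates.
    scale, pad_x, pad_y = letterbox_params(orig_w, orig_h, target)
    boxes = np.asarray(boxes, dtype=float).copy()
    boxes[:, [0, 2]] = (boxes[:, [0, 2]] - pad_x) / scale
    boxes[:, [1, 3]] = (boxes[:, [1, 3]] - pad_y) / scale
    # Clip to the original image bounds in case a box overlaps padding.
    boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, orig_w)
    boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, orig_h)
    return boxes

# A 1024x768 image scaled by 0.5 becomes 512x384, padded with 64 px
# of vertical border; a full-frame box maps back to the full image.
print(boxes_to_original([[0, 64, 512, 448]], 1024, 768))
```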
Can I evaluate on arbitrarily sized input? Is there a way? The model seems to raise an error when I set the input shape of the TBPP model to `None`.