Ideally we should be able to simplify the logic of learn.summary by checking for shape changes rather than for specific layer types; depending on what comes of that, we can then check for other attributes (trainable params, frozen state, etc.). Brent on the forums had some other thoughts that should also be taken into consideration here.
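A minimal sketch of what "checking for shape changes" could look like: show a layer's Output Shape only when it differs from the previous layer's. The function name `summary_rows` and its input format are illustrative assumptions, not the fastai API.

```python
# Hypothetical sketch of the proposed simplification: decide which rows of
# the summary print an Output Shape by comparing each layer's shape to the
# previous one, instead of hard-coding a list of "interesting" layer types.
def summary_rows(layer_shapes):
    """layer_shapes: list of (layer_name, output_shape) tuples.
    Returns (name, shape-or-None) rows; None means "don't repeat the shape"."""
    rows, prev = [], None
    for name, shape in layer_shapes:
        shown = shape if shape != prev else None  # only print changed shapes
        rows.append((name, shown))
        prev = shape
    return rows
```

The same pass could later be extended to compare trainable/frozen status between consecutive layers in the same way.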
Also, I wonder if summary() could provide a more useful diagnosis of model errors rather than just crashing. Size mismatches are a persistent issue on the forums. Something like: "The layer [layer string] expected shape 1x3x50x50 but the previous layer produced 1x2x50x50. You might fix this by ...." It could help people diagnose their own models.
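One way such a message could be built, as a sketch only: a small formatter that turns the two shapes into the plain-language diagnostic suggested above. `explain_mismatch` is a hypothetical helper name, not anything in fastai.

```python
# Hypothetical helper: when a forward pass fails on a size mismatch,
# report both shapes in plain language instead of a raw traceback.
def explain_mismatch(layer_repr, expected, actual):
    fmt = lambda shape: "x".join(str(d) for d in shape)
    return (f"The layer {layer_repr} expected shape {fmt(expected)} "
            f"but the previous layer produced {fmt(actual)}.")
```

In practice summary() could catch the RuntimeError from the dummy forward pass and re-raise it with this kind of message attached.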
Would it make sense to print the full Input shape on the first line?
```
DynamicUnet (Input shape: 8)
============================================================================
Layer (type)         Output Shape         Param #    Trainable
============================================================================
                     8 x 64 x 48 x 64
```
Within the table, the first Output Shape that is printed is the first "changed" shape, so having `DynamicUnet (Input shape: 8)` list the full shape would be useful for seeing the original image size, if that info isn't available in some other part of the summary that I have not understood yet.
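For concreteness, the header change being suggested could be as simple as the sketch below. `format_header` and the example input shape (8 x 3 x 96 x 128) are made up for illustration.

```python
# Hypothetical sketch: print the full input shape in the summary header
# rather than only the batch size.
def format_header(model_name, input_shape):
    shape_str = " x ".join(str(d) for d in input_shape)
    return f"{model_name} (Input shape: {shape_str})"
```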