Hi,
I'm curious how the other experiments performed; it would be great to add those results to the README if possible.
For example, have you tried using the entire image as input instead of small patches? This is basically the idea of the U-Net, and it has worked well on other tasks.
Thanks again for open-sourcing this!
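For context, by "small patches" I mean the usual patch-based training setup, where square crops are sampled at random from each training image instead of feeding the whole image. A minimal sketch (the function name, patch size, and DRIVE image resolution are my own assumptions, not taken from this repo):

```python
import numpy as np

def extract_random_patches(image, patch_size=48, n_patches=16, rng=None):
    """Sample square patches uniformly at random from a 2-D image."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    patches = []
    for _ in range(n_patches):
        # Top-left corner chosen so the patch stays inside the image.
        y = rng.integers(0, h - patch_size + 1)
        x = rng.integers(0, w - patch_size + 1)
        patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# Example: a dummy image at DRIVE resolution (584 x 565).
img = np.zeros((584, 565), dtype=np.float32)
batch = extract_random_patches(img, patch_size=48, n_patches=16, rng=0)
print(batch.shape)  # (16, 48, 48)
```

Patch sampling like this multiplies the effective number of training examples, which is why it is attractive on a 20-image dataset, at the cost of losing global context.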
Hi Martin, we're currently running the model on the STARE dataset. We'll be publishing results soon.
As for the entire image, we would have gladly tried it, but the DRIVE dataset consists of only 20 images (and STARE is not much larger). I believe we'd need a much larger annotated dataset for that to be successful.
We have developed a ladder network / U-Net hybrid internally, which (in theory) helps with semi-supervised segmentation tasks. It could take advantage of an unannotated dataset to build a robust whole-image net; we'll eventually go this route.
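In case it helps the discussion, here is a rough sketch of the kind of objective such a semi-supervised hybrid might optimize: a supervised segmentation loss on labeled images plus a ladder-network-style reconstruction term that unlabeled images can also contribute to. Everything here (names, the cross-entropy choice, the weighting `lam`) is my own assumption, not the internal implementation:

```python
import numpy as np

def semi_supervised_loss(seg_pred, seg_target, recon_pred, recon_target, lam=0.1):
    """Supervised binary cross-entropy plus an unsupervised reconstruction
    (denoising) term. Pass seg_target=None for unlabeled images, which then
    contribute only the reconstruction term."""
    # Unsupervised term: mean squared reconstruction error.
    recon = np.mean((recon_pred - recon_target) ** 2)
    if seg_target is None:
        return lam * recon
    # Supervised term: pixel-wise binary cross-entropy.
    eps = 1e-7
    bce = -np.mean(seg_target * np.log(seg_pred + eps)
                   + (1 - seg_target) * np.log(1 - seg_pred + eps))
    return bce + lam * recon
```

The point is that every image, annotated or not, produces a gradient signal through the reconstruction path, which is what would let an unannotated dataset help train a whole-image net.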
Stay tuned for the next batch of results and feel free to contribute additional experiments.