Threshold and cells #9
(1) It doesn't recognize cell bodies, just fibers. In some cases, when the cell bodies are very bright, it can impact the ability to identify the most local fibers nearby, but I suspect this could be adjusted with a little extra training. Future versions will hopefully have multi-class output for cells and fibers from the same channel.
Thanks for the replies. Very instructive and clear. I have been testing it on my own dataset and facing some issues which might be related to the acquisition:
If you prefer I can open a new issue for this, but since I mentioned using my dataset: I just came across a weird pattern in the segmented output of the network. In both test images, the raw data (inverted LUT) is on the left and the corresponding segmented plane is on the right. What is that squared pattern visible in the segmented frames? It looks as if the network fails to properly align the batches back together (each "square" is 36 by 36 pixels). I can see the checkerboard pattern in every single image of the segmented stack. No idea what is going on.
Looking at your examples and hearing your description, I would agree that it’s too dense for the current model. The barrel cortex projections we show in Fig. 4 were imaged at higher optical zoom. As with the layer I cortical axons in Fig. 4, if a “fiber type” is not covered in the training set, it won’t be segmented. While we have good coverage of brain regions and background textures, different cell types will inevitably have different appearances in their terminals, necessitating some transfer learning.
Yes, that’s the skeletonization function we use. It’s up to you if you want to use our weighted design or not. Small disconnected objects can be identified and later categorized and removed with something like MATLAB’s
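Since the MATLAB function name didn't survive above, here is a comparable step sketched in Python instead, assuming a recent scikit-image where `skeletonize` handles 3-D input (it is the successor to `skeletonize_3d`). The `min_voxels` value is illustrative, not a recommendation from the authors:

```python
# Hypothetical sketch: skeletonize a binary volume, then drop small
# disconnected fragments, analogous to the MATLAB-based cleanup mentioned.
import numpy as np
from skimage.morphology import skeletonize, remove_small_objects

def clean_skeleton(binary_volume, min_voxels=10):
    """Skeletonize, then remove fragments smaller than min_voxels.

    connectivity=3 treats diagonally touching voxels (26-connectivity)
    as one component, matching how 3-D skeletons stay connected.
    """
    skel = skeletonize(binary_volume) > 0  # boolean skeleton
    return remove_small_objects(skel, min_size=min_voxels, connectivity=3)
```

Anything below `min_voxels` in size (stray speckles, broken fragments) is discarded while long fiber skeletons survive.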
That checkerboard pattern can appear when an object or texture dominates a single cube to the point that the network "gives up" on finding axons in the region. In your examples, the very dense barrels are not being identified as axons and are dominating nearby voxels. Some transfer learning with denser labeling or higher-mag imaging might help.
Thanks for your reply @dfriedma |
It's all at 4.0625 um/vx, but in practice, augmentation allows for axons of different apparent size. It's likely that there just has to be a little separation between the fibers in order to identify them as fibers rather than splotches of background. Your clearing quality looks nice! It'll work out. I'm working on getting a new model for higher-density regions now as well, but haven't finalized things yet.
Sounds good, looking forward!
Reopening this, because I came up with another question: |
We use this function, too, but are currently looking for an alternative due to this bug: scikit-image/scikit-image#3757 Do you think this could influence the results here? |
Very sorry for such a late reply, I only saw you reopened this now, but I thought it's worth answering in case anyone has the same question! We did not normalize the volumes because we lost too much information (raw intensity is important for axon identification). For example, a noisy image of very low intensity would begin to look like axons if you normalized it. Instead, we took our training set and augmented it with random scaling and constant additions. This created a training set that was diverse enough in axon intensity to perform well on our validation set. For quantification, if you see that skeletonize_3d is leading to disconnected components, you can lower the confidence threshold from 0.5 to, say, 0.3. If you are still unhappy with the results, you can also use some classical post-processing filters, such as removing small components or filtering out shapes. Let me know if you have any more questions!
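The augmentation described above (random scaling plus a constant addition, in place of normalization) can be sketched as follows. This is a minimal illustration, assuming float32 volumes; the ranges are placeholders, not the authors' actual values:

```python
# Sketch of intensity augmentation: randomly rescale and shift intensities
# so training covers both dim and bright axons without normalizing away
# the raw-intensity signal. Ranges below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def augment_intensity(volume, scale_range=(0.5, 2.0), offset_range=(0.0, 0.1)):
    """Apply a random multiplicative scale and additive constant to a volume."""
    scale = rng.uniform(*scale_range)
    offset = rng.uniform(*offset_range)
    return volume.astype(np.float32) * scale + offset
```

Each training sample then appears at a different overall brightness, which is what lets the model cope with intensity variation at inference time without per-volume normalization.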
Hi Albert, thanks for the reply! If you could also reply to the second question, that would be great:
Thanks! |
I can speak to the quantification strategy-- |
Dear @AlbertPun and @dfriedma ,
These are more questions than issues, I hope it fits here.
-I was wondering how the network behaves in the presence of neuronal cell bodies in the images. Is it able to remove them, or should we clean them up before inference?
-The threshold variable is set at a default of 0.01. What does it represent? As far as I understand, if the chunk to be processed by the network has a maximum value higher than threshold, it will be added to the queue. Fine. But what does this value represent? Whatever shade that is not pitch black, right? Is it in float32 or int16? Thanks!
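My reading of that check, sketched below as an assumption rather than the actual implementation: chunks whose maximum intensity exceeds threshold are queued for inference, and near-black chunks are skipped. For float32 data scaled to [0, 1], 0.01 indeed means "anything above near-black"; int16 data would need the threshold rescaled accordingly. The function name and 36-voxel chunk size here are hypothetical:

```python
# Hypothetical sketch of a max-intensity chunk filter: only chunks whose
# brightest voxel exceeds `threshold` are yielded for network inference.
# Assumes a float32 volume scaled to [0, 1].
import numpy as np

def chunks_to_process(volume, chunk=36, threshold=0.01):
    """Yield (z, y, x) origins of chunks worth running through the network."""
    for z in range(0, volume.shape[0], chunk):
        for y in range(0, volume.shape[1], chunk):
            for x in range(0, volume.shape[2], chunk):
                block = volume[z:z + chunk, y:y + chunk, x:x + chunk]
                if block.max() > threshold:
                    yield (z, y, x)
```

With this logic, an entirely dark volume produces no work at all, and a single bright voxel queues only the one chunk that contains it.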