Allow using other image types (than phase contrast) for tracking #20
I assume that data obtained with a ubiquitously expressed fluorescence signal would just work. I'd be happy to give it a try in case such datasets are readily available.
Great! We will get you this very soon.
Yes, this is really great!
So, I took a quick look at the data we already shared. The following datasets have uniform cytoplasmic fluorescence on channel 2: http://swissregulon.unibas.ch/video/Missing_hypothesis/20150630_pos0_GL05 — if you need more GLs, let me know. Importantly, one GL with uniform fluorescence at lower illumination (hence a lower signal-to-noise ratio) is available here (I could share more if you need):
As mentioned above, another type of images that would be very interesting for us to use is correlation images. We prepared a small dataset (http://swissregulon.unibas.ch/video/20160917_sent/20160914_Pos0.tar.gz) with the following channels:
It would be very interesting if MoMA were able to use any of these image types for tracking. Please let me know if you need more info / longer datasets… NB: this has lower priority than being able to use fluorescence images for tracking, but correlation images should still be more reliable than standard phase contrast.
Also, if it becomes possible to use different image types, it would be very convenient to be able to tell MoMA which channel of the image dataset should be used for tracking (cf. #2 (comment)).
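As a hypothetical sketch of what such a channel option could look like upstream of segmentation — `select_tracking_channel` and the `(channels, height, width)` layout are illustrative assumptions, not MoMA's actual API:

```python
import numpy as np

def select_tracking_channel(stack, channel):
    """Return the single channel to be used for tracking.

    stack:   array of shape (channels, height, width)
    channel: 0-based index of the channel carrying the tracking signal
             (e.g. uniform cytoplasmic fluorescence)
    """
    if not 0 <= channel < stack.shape[0]:
        raise ValueError(f"channel {channel} out of range for {stack.shape[0]} channels")
    return stack[channel]

# Toy 3-channel stack: pretend channel 1 carries uniform fluorescence.
stack = np.zeros((3, 4, 4))
stack[1] += 7.0
tracking_image = select_tracking_channel(stack, 1)
```

Whether channels are 0- or 1-based in the real datasets is an open question the config option would have to settle.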
This has come up over and over in emails but never made it into an issue (probably because it was not required for the first method paper and is not a minor request…).
Because MoMA produces a set of nested segmentation hypotheses and then handles them as a graph, it should in principle be possible to generate these segmentation hypotheses from other types of images, such as fluorescence (uniform cytoplasmic tagging) or more sensitive phase-reconstructed images (such as correlation images).
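A rough illustration of the nested-hypotheses idea applied to a fluorescence signal — this is a sketch under assumed inputs (a 1-D intensity profile along the growth channel and a set of threshold levels), not MoMA's actual hypothesis generator:

```python
def nested_hypotheses(profile, thresholds):
    """Generate nested segmentation hypotheses from a 1-D intensity profile
    (e.g. summed fluorescence across the growth channel) by thresholding at
    several levels. Lower thresholds yield larger components that contain
    the components found at higher thresholds, so the hypotheses nest.
    Returns a list of (threshold, [(start, end), ...]) pairs."""
    hypotheses = []
    for t in sorted(thresholds):
        mask = [v > t for v in profile]
        runs, start = [], None
        for i, on in enumerate(mask):
            if on and start is None:
                start = i                  # a component begins
            elif not on and start is not None:
                runs.append((start, i))    # a component ends
                start = None
        if start is not None:
            runs.append((start, len(mask)))
        hypotheses.append((t, runs))
    return hypotheses

# Toy profile: two bright regions of different intensity.
result = nested_hypotheses([0, 5, 5, 0, 3, 0], thresholds=[1, 4])
```

The nesting is what would let the same graph-based optimization run unchanged on top of fluorescence- or correlation-derived hypotheses.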
Importantly, this should be designed, as far as possible, so that users can add support for new image types themselves (by defining a config file and, e.g., training a classifier)…
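For instance, user-supplied support for a new image type could be described by a small config file. The keys below are purely hypothetical, meant to illustrate the idea rather than any existing MoMA format:

```
# hypothetical image-type definition (no such file exists in MoMA today)
image_type        = fluorescence_uniform
tracking_channel  = 2
classifier_model  = path/to/user_trained_model
normalization     = per_frame
```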
This should also help improve the following: