Training data structure #106
Great question! We have indeed changed the directory structure and style
of the training data. Each raw image (phase.tif) now has one
corresponding annotation file. We no longer use the cell edge as
training data; instead, we generate it from a uniquely labeled cell
mask. Rather than keeping cell boundary and cell interior in separate
label files, we use a single file in which each cell is uniquely labeled.
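To illustrate the idea that edge labels can be derived from a uniquely labeled mask rather than stored separately, here is a small NumPy sketch (the `edges_from_labels` helper is my own illustration, not a deepcell function): a pixel counts as an edge pixel if it belongs to a cell and any 4-connected neighbor carries a different label.

```python
import numpy as np

def edges_from_labels(labels):
    """Derive a binary cell-edge mask from a uniquely labeled cell mask.

    A pixel is an edge pixel if it belongs to a cell (label > 0) and any
    of its 4-connected neighbors carries a different label (background
    or a touching neighbor cell).
    """
    padded = np.pad(labels, 1, mode="edge")
    center = padded[1:-1, 1:-1]
    edges = np.zeros_like(center, dtype=bool)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        neighbor = padded[1 + dy : padded.shape[0] - 1 + dy,
                          1 + dx : padded.shape[1] - 1 + dx]
        edges |= (center != neighbor)
    return edges & (center > 0)

# Two touching cells labeled 1 and 2 on a background of 0.
labels = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 2, 0],
    [0, 1, 1, 2, 0],
    [0, 0, 0, 0, 0],
])
edge_mask = edges_from_labels(labels)
```

Note that the boundary between cells 1 and 2 is picked up as edge even though neither side is background, which is exactly the information a separate edge-label file used to carry.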
An example structure would look like this:
```
HeLa
├── set1
│   ├── annotated
│   │   └── feature.png
│   └── raw
│       └── phase.png
└── set2
    ├── annotated
    │   └── feature.png
    └── raw
        └── phase.png
```
(The *annotated* and *raw* directory names can be set in the
*make_training_data* function call, or ignored by passing empty
strings as the *raw_dir* and *annotation_dir* parameters.)
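As a rough sketch of how such a tree can be consumed, here is a self-contained example (the `collect_training_data` helper and the `X`/`y` .npz keys are my own illustration, not the actual *make_training_data* implementation; `.npy` files stand in for the PNG images so the sketch needs no image-reading library):

```python
import os
import tempfile
import numpy as np

def collect_training_data(root, raw_dir="raw", annotation_dir="annotated",
                          loader=np.load):
    """Walk a HeLa-style tree (root/set*/raw, root/set*/annotated) and
    stack the raw and annotation images into X and y arrays."""
    X, y = [], []
    for set_name in sorted(os.listdir(root)):
        set_path = os.path.join(root, set_name)
        if not os.path.isdir(set_path):
            continue
        raw_path = os.path.join(set_path, raw_dir)
        ann_path = os.path.join(set_path, annotation_dir)
        for fname in sorted(os.listdir(raw_path)):
            X.append(loader(os.path.join(raw_path, fname)))
        for fname in sorted(os.listdir(ann_path)):
            y.append(loader(os.path.join(ann_path, fname)))
    return np.stack(X), np.stack(y)

# Build a tiny example tree with .npy stand-ins for phase.png/feature.png.
root = tempfile.mkdtemp()
for set_name in ("set1", "set2"):
    for sub, fname in (("raw", "phase.npy"), ("annotated", "feature.npy")):
        d = os.path.join(root, set_name, sub)
        os.makedirs(d)
        np.save(os.path.join(d, fname), np.zeros((8, 8)))

X, y = collect_training_data(root)
np.savez(os.path.join(root, "training.npz"), X=X, y=y)
```

Check the deepcell source for the actual signature of *make_training_data*; the point here is only the one-raw-image-to-one-annotation pairing within each set directory.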
Excellent, it's now working as expected. Thanks!
Great, glad it's working. I will close this issue.
I've used older versions of deepcell-tf in the past, with some degree of success, to segment HeLa cells from brightfield images. Back then, the training data for sampling had two features per image: cell edge and cell interior.
I've understood from the source code that this structure has been updated. I've tried running the provided Jupyter notebooks to get a feel for how the training data .npz files are constructed, but as I don't have access to the original raw data, I cannot replicate the training data structure with my own data. What's the preferred structure of raw images/annotated images to properly generate training data?