[Project] add Bactteria_Dataset project in dev-1.x #2568
Conversation
projects/Bactteria_Dataset/configs/Bactteria_Det_unet_0.0001_CrossEntropyLoss.py
projects/Bactteria_Dataset/README.md
### Bactteria detection with darkfield microscopy Dataset

| Method | Backbone | Crop Size | lr | mIoU | mDice | config |

It would be more convincing to provide some training logs. If you don't have a server, you can put them on GitHub, something like https://github.com/Ezra-Yu/MY_STORE/releases/tag/v0.0.1.
projects/Bactteria_Dataset/README.md
### Dataset preparing

Prepare the `Bactteria detection with darkfield microscopy Dataset` dataset in the following format.

It is unclear whether this directory listing shows the data before or after processing with the script, since `random_split.py` is shown below it.
...teria_detection/configs/fcn-unet-s5-d16_unet_1xb16-0.0001-20k_bactteria-detection-512x512.py
projects/bactteria_detection/datasets/bactteria_detection_dataset.py
projects/medical/2d_image/microscopy_images/bactteria_detection/README.md
To train on multiple GPUs, e.g. 8 GPUs, run the following command:

```shell
mim train mmseg ./configs/${CONFIG_PATH} --launcher pytorch --gpus 8
```

Judging from the config file name (`1xb16`), isn't a single GPU enough, with no need to train on 8 GPUs?
```python
all_imgs = glob.glob('data/bactteria_detection/Bacteria_detection_with_\
darkfield_microscopy_datasets/images/*' + img_suffix)  # noqa
```

Suggested change:

```python
all_imgs = glob.glob('data/bactteria_detection/Bacteria_detection_with_darkfield_microscopy_datasets/images/*' + img_suffix)  # noqa
```
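Alternatively, the long path could be assembled with `os.path.join`, which avoids both the backslash continuation and the `# noqa`. A minimal sketch — the `.png` suffix here is an assumption for illustration, not taken from the actual dataset class:

```python
import glob
import os

# Assumed suffix for illustration; the real value comes from the dataset class.
img_suffix = '.png'

# Build the long dataset path piece by piece instead of one over-long literal,
# so no backslash line continuation (and no `# noqa`) is needed.
data_root = os.path.join(
    'data', 'bactteria_detection',
    'Bacteria_detection_with_darkfield_microscopy_datasets')
pattern = os.path.join(data_root, 'images', '*' + img_suffix)
all_imgs = sorted(glob.glob(pattern))
```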
...teria_detection/configs/fcn-unet-s5-d16_unet_1xb16-0.0001-20k_bactteria-detection-512x512.py
projects/medical/2d_image/microscopy_images/bactteria_detection/tools/prepare_dataset.py
projects/medical/2d_image/microscopy_images/bactteria_detection/README.md
### Training commands

```shell
mim train mmseg ./configs/${CONFIG_PATH}
```

Suggested change:

```shell
mim train mmseg ./configs/${CONFIG_FILE}
```

Isn't `FILE` more precise than `PATH`, since `./configs/` is the path?
### Bactteria detection with darkfield microscopy

| Method | Backbone | Crop Size | lr | mIoU | mDice | config |

Do you share the pre-trained weights with other users?
```python
                 reduce_zero_label=False,
                 **kwargs) -> None:
        super().__init__(
            img_suffix=img_suffix,
            seg_map_suffix=seg_map_suffix,
            reduce_zero_label=reduce_zero_label,
```

If you really want to fix `reduce_zero_label`, just hard-code it:

```python
                 **kwargs) -> None:
        super().__init__(
            img_suffix=img_suffix,
            seg_map_suffix=seg_map_suffix,
            reduce_zero_label=False,
```
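For context, a runnable sketch of the hard-coded variant, using a dummy stand-in for mmseg's `BaseSegDataset` (the stand-in's attribute storage is an assumption to make the sketch self-contained, not the actual mmseg implementation):

```python
class BaseSegDataset:
    """Dummy stand-in for mmseg's BaseSegDataset, just enough to run."""

    def __init__(self, img_suffix='.jpg', seg_map_suffix='.png',
                 reduce_zero_label=False, **kwargs):
        self.img_suffix = img_suffix
        self.seg_map_suffix = seg_map_suffix
        self.reduce_zero_label = reduce_zero_label


class BactteriaDetectionDataset(BaseSegDataset):
    """Hard-codes reduce_zero_label instead of accepting it as an argument."""

    def __init__(self, img_suffix='.png', seg_map_suffix='.png',
                 **kwargs) -> None:
        super().__init__(
            img_suffix=img_suffix,
            seg_map_suffix=seg_map_suffix,
            reduce_zero_label=False,  # hard-coded: label 0 is a real class here
            **kwargs)
```

With this change, callers can no longer accidentally override `reduce_zero_label` through the config.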
```python
        reduce_zero_label (bool): Whether to mark label zero as ignored.
            Default to False.
```

Suggested change: delete these docstring lines, since the `reduce_zero_label` argument is removed.
projects/medical/2d_image/microscopy_images/bactteria_detection/tools/prepare_dataset.py
projects/medical/2d_image/microscopy_images/bactteria_detection/README.md
- download dataset from [here](https://tianchi.aliyun.com/dataset/94411) and decompress data to path `'data/'`.
- run script `"python tools/prepare_dataset.py"` to format data and change folder structure as below.
- run script `"python ../../tools/split_seg_dataset.py"` to split dataset and generate `train.txt`, `val.txt` and `test.txt`. If the label of official validation set and test set can't be obtained, we generate `train.txt` and `val.txt` from the training set randomly.

```none
mmsegmentation
```

I think it should add more details about:

- the dataset path after decompressing, as you just hard-code the `data_root` in `tools/prepare_dataset.py`
- the relationship between the table of contents below and these commands
- the dataset split ratio
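On the split-ratio point, a minimal sketch of what such a random split could look like — the 80/20 ratio, the fixed seed, and the function name are assumptions for illustration, not taken from the actual `split_seg_dataset.py`:

```python
import random


def split_dataset(image_names, train_ratio=0.8, seed=0):
    """Randomly split image names into train/val lists (reproducible via seed)."""
    names = sorted(image_names)
    random.Random(seed).shuffle(names)
    n_train = int(len(names) * train_ratio)
    return names[:n_train], names[n_train:]


# Example: 10 dummy image names split 8/2.
train, val = split_dataset([f'img_{i:03d}' for i in range(10)])
```

Stating the ratio (and the seed, if any) in the README would let users reproduce the same `train.txt`/`val.txt`.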
### Dataset preparing

- download dataset from [here](https://tianchi.aliyun.com/dataset/94411) and decompress data to path `'data/'`.
- run script `"python tools/prepare_dataset.py"` to format data and change folder structure as below.

Suggested change:

- run script `python tools/prepare_dataset.py` to format data and change folder structure as below.
- download dataset from [here](https://tianchi.aliyun.com/dataset/94411) and decompress data to path `'data/'`.
- run script `"python tools/prepare_dataset.py"` to format data and change folder structure as below.
- run script `"python ../../tools/split_seg_dataset.py"` to split dataset and generate `train.txt`, `val.txt` and `test.txt`. If the label of official validation set and test set can't be obtained, we generate `train.txt` and `val.txt` from the training set randomly.

Suggested change:

- run script `python ../../tools/split_seg_dataset.py` to split dataset and generate `train.txt`, `val.txt` and `test.txt`. If the label of official validation set and test set can't be obtained, we generate `train.txt` and `val.txt` from the training set randomly.
projects/medical/2d_image/microscopy_images/bactteria_detection/README.md
...edical/2d_image/microscopy_images/bactteria_detection/configs/bactteria-detection_512x512.py
projects/medical/2d_image/microscopy_images/bactteria_detection/tools/prepare_dataset.py
## Results

### Bactteria detection with darkfield microscopy

Are these results from the random test split or from the official validation set?
Thanks for your contribution and we appreciate it a lot. The following instructions will make your pull request healthier and more likely to get feedback. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

### Motivation

Please describe the motivation of this PR and the goal you want to achieve through it.

### Modification

Please briefly describe what modification is made in this PR.

### BC-breaking (Optional)

Does the modification introduce changes that break the backward compatibility of downstream repos? If so, please describe how it breaks the compatibility and how downstream projects should modify their code to keep compatibility with this PR.

### Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

### Checklist