
[Project] add Bactteria_Dataset project in dev-1.x #2568

Merged
Merged 26 commits into open-mmlab:dev-1.x on May 6, 2023

Conversation

tianbinli
Contributor

Thanks for your contribution and we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

Motivation

Please describe the motivation of this PR and the goal you want to achieve through this PR.

Modification

Please briefly describe what modification is made in this PR.

BC-breaking (Optional)

Does the modification introduce changes that break the backward-compatibility of the downstream repos?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

Checklist

  1. Pre-commit or other linting tools are used to fix the potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
  3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDet or MMDet3D.
  4. The documentation has been modified accordingly, like docstring or example tutorials.


### Bactteria detection with darkfield microscopy Dataset

| Method | Backbone | Crop Size | lr | mIoU | mDice | config |

It would be more convincing to provide some logs, if you don't have a server, you can put them on GitHub, something like https://github.com/Ezra-Yu/MY_STORE/releases/tag/v0.0.1


### Dataset preparing

Prepare the `Bactteria detection with darkfield microscopy Dataset` in the following format.

It is unclear whether this directory shows the structure before or after running the script, since `random_split.py` appears below it.

@MeowZheng MeowZheng changed the title add Bactteria_Dataset project in dev-1.x [Project] add Bactteria_Dataset project in dev-1.x Feb 20, 2023
projects/bactteria_detection/README.md (3 outdated review threads, resolved)
Comment on lines 67 to 71
To train on multiple GPUs, e.g. 8 GPUs, run the following command:

```shell
mim train mmseg ./configs/${CONFIG_PATH} --launcher pytorch --gpus 8
```
Collaborator

Judging from this config file name, is there no need to use 8 GPUs to train?

Comment on lines 14 to 15
all_imgs = glob.glob('data/bactteria_detection/Bacteria_detection_with_\
darkfield_microscopy_datasets/images/*' + img_suffix) # noqa
Collaborator

Suggested change
all_imgs = glob.glob('data/bactteria_detection/Bacteria_detection_with_\
darkfield_microscopy_datasets/images/*' + img_suffix) # noqa
all_imgs = glob.glob('data/bactteria_detection/Bacteria_detection_with_darkfield_microscopy_datasets/images/*' + img_suffix) # noqa
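An alternative to both versions, avoiding the backslash continuation (which forces the `# noqa`) while keeping the line length in check, is Python's implicit string concatenation. This is a sketch; the `.png` suffix is an assumption for illustration:

```python
import glob

# Adjacent string literals are concatenated at compile time, so the long
# path can be split across lines without a backslash continuation and
# without a noqa marker.
img_suffix = '.png'  # assumed suffix for illustration
all_imgs = glob.glob(
    'data/bactteria_detection/'
    'Bacteria_detection_with_darkfield_microscopy_datasets/'
    'images/*' + img_suffix)
```

Unlike the backslash form, implicit concatenation also cannot accidentally embed the next line's indentation inside the path string.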

### Training commands

```shell
mim train mmseg ./configs/${CONFIG_PATH}
```
Collaborator

Suggested change
mim train mmseg ./configs/${CONFIG_PATH}
mim train mmseg ./configs/${CONFIG_FILE}

Is FILE more precise than PATH, since `./configs/` is already a path?


### Bactteria detection with darkfield microscopy

| Method | Backbone | Crop Size | lr | mIoU | mDice | config |
Collaborator

Do you share the pre-trained weights with other users?

Comment on lines 24 to 29
reduce_zero_label=False,
**kwargs) -> None:
super().__init__(
img_suffix=img_suffix,
seg_map_suffix=seg_map_suffix,
reduce_zero_label=reduce_zero_label,
Collaborator

If you really want to fix `reduce_zero_label`, just hard-code it.

Suggested change
reduce_zero_label=False,
**kwargs) -> None:
super().__init__(
img_suffix=img_suffix,
seg_map_suffix=seg_map_suffix,
reduce_zero_label=reduce_zero_label,
**kwargs) -> None:
super().__init__(
img_suffix=img_suffix,
seg_map_suffix=seg_map_suffix,
reduce_zero_label=False,
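The reviewer's point, sketched with a minimal stand-in for mmseg's base class (the stub `BaseSegDataset` and the class/suffix names here are assumptions for illustration, not the actual mmseg API): if `reduce_zero_label` must always be `False` for this dataset, hard-code it in the `super().__init__` call rather than exposing it as a parameter that callers might override.

```python
class BaseSegDataset:
    # Minimal stand-in for mmseg's BaseSegDataset, just enough to show
    # where the hard-coded value ends up.
    def __init__(self, img_suffix='.jpg', seg_map_suffix='.png',
                 reduce_zero_label=False, **kwargs):
        self.img_suffix = img_suffix
        self.seg_map_suffix = seg_map_suffix
        self.reduce_zero_label = reduce_zero_label


class BactteriaDataset(BaseSegDataset):
    def __init__(self, img_suffix='.png', seg_map_suffix='.png',
                 **kwargs) -> None:
        super().__init__(
            img_suffix=img_suffix,
            seg_map_suffix=seg_map_suffix,
            # Hard-coded: this dataset keeps label 0, so the value is
            # fixed here instead of being a constructor parameter.
            reduce_zero_label=False,
            **kwargs)
```

Dropping the parameter also removes the need for the docstring entry that a later comment suggests deleting.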

Comment on lines 16 to 17
reduce_zero_label (bool): Whether to mark label zero as ignored.
Default to False.
Collaborator

Suggested change
reduce_zero_label (bool): Whether to mark label zero as ignored.
Default to False.

Comment on lines 54 to 59
- download dataset from [here](https://tianchi.aliyun.com/dataset/94411) and decompress data to path `'data/'`.
- run script `"python tools/prepare_dataset.py"` to format data and change folder structure as below.
- run script `"python ../../tools/split_seg_dataset.py"` to split dataset and generate `train.txt`, `val.txt` and `test.txt`. If the label of official validation set and test set can't be obtained, we generate `train.txt` and `val.txt` from the training set randomly.

```none
mmsegmentation
```
Collaborator

I think more details should be added about:

  1. the dataset path after decompression, since `data_root` is hard-coded in `tools/prepare_dataset.py`
  2. the relationship between the directory tree below and these commands
  3. the dataset split ratio

### Dataset preparing

- download dataset from [here](https://tianchi.aliyun.com/dataset/94411) and decompress data to path `'data/'`.
- run script `"python tools/prepare_dataset.py"` to format data and change folder structure as below.
Collaborator

Suggested change
- run script `"python tools/prepare_dataset.py"` to format data and change folder structure as below.
- run script `python tools/prepare_dataset.py` to format data and change folder structure as below.


- download dataset from [here](https://tianchi.aliyun.com/dataset/94411) and decompress data to path `'data/'`.
- run script `"python tools/prepare_dataset.py"` to format data and change folder structure as below.
- run script `"python ../../tools/split_seg_dataset.py"` to split dataset and generate `train.txt`, `val.txt` and `test.txt`. If the label of official validation set and test set can't be obtained, we generate `train.txt` and `val.txt` from the training set randomly.
Collaborator

Suggested change
- run script `"python ../../tools/split_seg_dataset.py"` to split dataset and generate `train.txt`, `val.txt` and `test.txt`. If the label of official validation set and test set can't be obtained, we generate `train.txt` and `val.txt` from the training set randomly.
- run script `python ../../tools/split_seg_dataset.py` to split dataset and generate `train.txt`, `val.txt` and `test.txt`. If the label of official validation set and test set can't be obtained, we generate `train.txt` and `val.txt` from the training set randomly.
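A minimal sketch of what a random split script like `split_seg_dataset.py` typically does. The 80/20 ratio and fixed seed are assumptions for illustration only; the actual ratio used by the script is exactly what a reviewer asks to have documented:

```python
import random


def split_dataset(stems, train_ratio=0.8, seed=0):
    """Randomly split image filename stems into train and val lists.

    train_ratio and seed are illustrative assumptions, not the values
    used by the real split_seg_dataset.py.
    """
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = stems[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_ratio)
    return shuffled[:n_train], shuffled[n_train:]


train, val = split_dataset([f'img_{i:03d}' for i in range(100)])
# train.txt / val.txt would then hold one stem per line, e.g.:
# open('train.txt', 'w').write('\n'.join(train))
```

Writing plain stems (no extension, no directory) to `train.txt`/`val.txt` matches the usual mmseg split-file convention of listing one sample identifier per line.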


## Results

### Bactteria detection with darkfield microscopy
Collaborator

@MeowZheng MeowZheng Mar 7, 2023

Are these results for the random test split or the official val dataset?

@xiexinch xiexinch merged commit b299120 into open-mmlab:dev-1.x May 6, 2023
9 of 11 checks passed
nahidnazifi87 pushed a commit to nahidnazifi87/mmsegmentation_playground that referenced this pull request Apr 5, 2024