Questions about the polygon annotation in the dataset #1823
Comments
Hi, thanks for your interest in MMPose.
1. It should not matter whether the keypoints are inside the polygons. The polygon annotation is not used in most algorithms in MMPose, except for Associative Embedding, where the polygon is used to generate masks of invalid instances (instances with no labeled keypoints).
2. The keypoint detection model usually takes the bounding-box area (top-down) or the whole image (bottom-up) as the input. As mentioned above, the polygon information is not used in inference.
3. Could you please further clarify your needs? Do you need to use MMPose with customized data formats other than COCO?
|
Thank you for your answer. For the third question, I will use the COCO format but with custom point and polygon class names. My point class names are "apex" and "base", and my polygon class name is "polygon".
|
In this case, I think you can directly use TopDownCocoDataset (https://github.com/open-mmlab/mmpose/blob/master/mmpose/datasets/datasets/top_down/topdown_coco_dataset.py) for mmpose 0.x, or CocoDataset (https://github.com/open-mmlab/mmpose/blob/1.x/mmpose/datasets/datasets/body/coco_dataset.py) for mmpose 1.0. You would probably need a metainfo config to describe the keypoint definition of your data (e.g. the metainfo of coco: https://github.com/open-mmlab/mmpose/blob/1.x/configs/_base_/datasets/coco.py). |
Pardon, my polygon class is "points".
here is the beginning of my COCO file :
{"categories":[{"id":1,"name":"points","supercategory":"points","keypoints":["apex","base"],"skeleton":[[0,1]]}],"images":[{"id":1,"file_name":"P7243290.JPG",
|
OK, so I change these two files and it should work? And are there any special parameters to pass for training and testing, please?
|
You needn't change the dataset class (if your annotation file is in standard COCO format), but only the metainfo file. And you will need to modify the config file to correctly set the path of the metainfo file and the annotation file, the model output channel number (usually equal to the number of points), and (only in mmpose 0.x) other data-related parameters, like here: https://github.com/open-mmlab/mmpose/blob/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py#L20-L28 |
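For a two-keypoint ("apex", "base") dataset like the one in this thread, a metainfo file might look like the sketch below. It follows the layout of configs/_base_/datasets/coco.py; the dataset name, colors, joint_weights, and sigmas are placeholder values, not settings confirmed in this thread:

```python
# Hypothetical metainfo sketch for a 2-keypoint dataset, modeled on the
# layout of configs/_base_/datasets/coco.py in MMPose.
dataset_info = dict(
    dataset_name='my_points_dataset',  # placeholder name
    keypoint_info={
        0: dict(name='apex', id=0, color=[255, 0, 0], type='', swap=''),
        1: dict(name='base', id=1, color=[0, 255, 0], type='', swap=''),
    },
    skeleton_info={
        0: dict(link=('apex', 'base'), id=0, color=[0, 0, 255]),
    },
    joint_weights=[1.0, 1.0],  # equal loss weight per keypoint
    sigmas=[0.05, 0.05],       # placeholder values for COCO-style evaluation
)
```

The model output channel number mentioned above would then be 2, matching the number of entries in keypoint_info.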
OK, your code lacks documentation. "And you will need to modify the config file to correctly set the path of the metainfo file and the annotation file" >> what config file?
And what are the python commands for train and test, please?
|
And I see in your config files that the size of the images at the input of the neural network is about 224x224; that is not sufficiently precise for me, I need 1500x1500.
|
Actually, we have documentation for these questions (assuming you are using mmpose 0.29; documents for other versions can also be found at readthedocs):
- About the config file: https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html
- About the training/testing: https://mmpose.readthedocs.io/en/latest/get_started.html#
As for the input size, 224x224 (for general objects) or 192x256 (for persons) are typical settings for top-down models, where the input is the bounding-box area of a single object. If you would like to use a bottom-up method, where you can input the entire image, please refer to the configs of Associative Embedding, CID, or DEKR (they can be found here: https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img). |
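For reference, the basic train/test invocations covered by those docs look roughly like this (the config and checkpoint paths below are placeholders, to be replaced with your own files):

```shell
# Train with a chosen config file (path is a placeholder).
python tools/train.py configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py

# Test a trained checkpoint (both paths are placeholders).
python tools/test.py configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py \
    work_dirs/hrnet_w32_coco_256x192/latest.pth
```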
So to change the image size I have to change:
data_cfg = dict(
    image_size=512,
    base_size=256,
to
data_cfg = dict(
    image_size=1500,
    base_size=750,
isn't it?
|
In a bottom-up config, the image_size is the length of the shorter edge. Please also note that the polygon information is used in bottom-up methods to generate ignoring masks, as mentioned above. So it's better to reformat it to COCO style in the annotation files. |
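As a sanity check on the "shorter edge" rule, here is a toy calculation. It mirrors the idea only; it is not MMPose's actual resize code, and the function name is made up for illustration:

```python
def resized_shape(height, width, image_size=512):
    """Toy illustration: scale an image so its shorter edge equals image_size."""
    scale = image_size / min(height, width)
    return round(height * scale), round(width * scale)

# A 1000x2000 image with image_size=512 becomes 512x1024:
print(resized_shape(1000, 2000))  # (512, 1024)
```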
Last questions:
with your program, are the points associated with their bbox or object, or not?
how do I specify to run prediction on the CPU?
what version of CUDA is needed?
are the coordinates in COCO files for the original size or for the "image_size"?
"In a bottom-up config, the image_size is the length of the shorter edge" > so I can have images of size 512x1024 pixels with image_size=512?
I don't understand the sentence "Please also note that the polygon information is used in bottom-up methods to generate ignoring masks, as mentioned above. So it's better to reformat it to coco style in the annotation files."
thank you very much
|
> with your program the points are associated with their bbox or object or not?
Yes.
> how to specify to do prediction with CPU
Please check the argument --device=cpu of the demo scripts, as described in the documentation: https://mmpose.readthedocs.io/en/latest/demo.html#d-human-pose-demo
> what version of Cuda is needed
It depends on the PyTorch version. MMPose 0.29.0 works with PyTorch>=1.5.0.
> the coordinates in COCO files are for the original size or the "image_size"?
The original size. The image will be resized to image_size and the keypoint coordinates will be transformed accordingly.
As for the polygon, in the bottom-up dataset, it will try to generate a mask of the invalid area (see https://github.com/open-mmlab/mmpose/blob/master/mmpose/datasets/datasets/base/kpt_2d_sview_rgb_img_bottom_up_dataset.py#L136-L157), which will be ignored in loss computation. For example, COCO has instances of "crowd people" where the polygon is available but the keypoints are not. In this case, we usually do not compute loss from this area, by using a mask generated from the polygon. If there is no such invalid object instance in your data, it should be fine to leave the 'polygon' annotation as it is, because it will not be used. Otherwise, it's better to convert the 'polygon' annotation into COCO format (the key should be "segmentation" and the content should be RLE or polygon as in https://cocodataset.org/#format-data). |
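A small sketch of such a conversion: the custom 'polygon' key and the helper name are assumptions for illustration; only the target key "segmentation" (a list of [x1, y1, x2, y2, ...] rings in COCO's polygon format) is the COCO standard:

```python
def to_coco_segmentation(ann):
    """Rename a custom 'polygon' field to COCO's 'segmentation' key.

    COCO's polygon format is a list of rings, each [x1, y1, x2, y2, ...].
    """
    ann = dict(ann)  # copy so the input annotation is left untouched
    if 'polygon' in ann:
        ann['segmentation'] = [ann.pop('polygon')]
    return ann

ann = {'id': 1, 'polygon': [10.0, 10.0, 50.0, 10.0, 50.0, 40.0]}
converted = to_coco_segmentation(ann)
print(converted['segmentation'])  # [[10.0, 10.0, 50.0, 10.0, 50.0, 40.0]]
```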
OK, thank you very much. I will finish my annotations and test your program; if I encounter bugs or questions I will ask again.
|
I tested, but I got this error:

2022-11-23 13:18:44,373 - mmpose - INFO - workflow: [('train', 1)], max: 140 epochs
2022-11-23 13:18:44,374 - mmpose - INFO - Checkpoints will be saved to C:\mmpose-master\output by HardDiskBackend.
C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
  warnings.warn(
Traceback (most recent call last):
  File "tools/train.py", line 201, in <module>
    main()
  File "tools/train.py", line 190, in main
    train_model(
  File "c:\mmpose-master\mmpose\apis\train.py", line 213, in train_model
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 136, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 49, in train
    for i, data_batch in enumerate(self.data_loader):
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\utils\data\dataloader.py", line 517, in __next__
    data = self._next_data()
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\utils\data\dataloader.py", line 1199, in _next_data
    return self._process_data(data)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\utils\data\dataloader.py", line 1225, in _process_data
    data.reraise()
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\_utils.py", line 429, in reraise
    raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\utils\data\_utils\worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "c:\mmpose-master\mmpose\datasets\datasets\base\kpt_2d_sview_rgb_img_bottom_up_dataset.py", line 189, in __getitem__
    return self.prepare_train_img(idx)
  File "c:\mmpose-master\mmpose\datasets\datasets\base\kpt_2d_sview_rgb_img_bottom_up_dataset.py", line 170, in prepare_train_img
    results = copy.deepcopy(self._get_single(idx))
  File "c:\mmpose-master\mmpose\datasets\datasets\bottom_up\bottom_up_coco.py", line 96, in _get_single
    mask = self._get_mask(anno, idx)
  File "c:\mmpose-master\mmpose\datasets\datasets\base\kpt_2d_sview_rgb_img_bottom_up_dataset.py", line 150, in _get_mask
    elif obj['num_keypoints'] == 0:
KeyError: 'num_keypoints'
In addition, what are the values that I must put in joint_weights and sigmas in the file configs/_base_/datasets/coco.py, please?
|
Another question: how to choose the model type (resnet50, resnet101, ...)?
|
The bug is here: in the file mmpose\datasets\datasets\base\kpt_2d_sview_rgb_img_bottom_up_dataset.py |
I verified I must put iscrowd to 0, I think the line : |
The definition of the COCO format, including the description of 'iscrowd', can be found at: https://cocodataset.org/#format-data
Crowd annotations (iscrowd=1) are used to label large groups of objects (e.g. a crowd of people).
To prepare data for MMPose, if there are objects that have segmentation (or polygon) annotation but no keypoint annotation, it's better to set these objects as iscrowd==1, and they will be ignored in loss computation to avoid misleading supervision signals. |
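A sketch of that preparation step (the helper name and field handling are assumptions for illustration; 'keypoints' uses COCO's x, y, visibility triplets, where v > 0 means the keypoint is labeled):

```python
def mark_invalid_instances(annotations):
    """Set num_keypoints, and mark objects with no labeled keypoints as crowd.

    In COCO keypoint triplets (x, y, v), v > 0 means the keypoint is labeled.
    """
    for ann in annotations:
        visibilities = ann.get('keypoints', [])[2::3]
        ann['num_keypoints'] = sum(1 for v in visibilities if v > 0)
        if ann['num_keypoints'] == 0 and 'segmentation' in ann:
            ann['iscrowd'] = 1  # ignored in loss computation
        else:
            ann.setdefault('iscrowd', 0)
    return annotations

anns = [{'keypoints': [5, 5, 2, 9, 9, 2], 'segmentation': [[0, 0, 1, 0, 1, 1]]},
        {'keypoints': [0, 0, 0, 0, 0, 0], 'segmentation': [[2, 2, 3, 2, 3, 3]]}]
anns = mark_invalid_instances(anns)
print([a['iscrowd'] for a in anns])  # [0, 1]
```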
As I understand from another website, iscrowd=1 is for objects which are composed of several polygons and iscrowd=0 is for objects made of one polygon.
You didn't answer:
In addition, what are the values that I must put in joint_weights and sigmas in the file configs/_base_/datasets/coco.py, please?
and
Another question: how to choose the model type (resnet50, resnet101, ...)?
Your program works only if iscrowd=1 or num_keypoints=0, but how to make it work with iscrowd=0?
|
So, because I have two keypoints per object, can I put 0.5, 0.5 in joint_weights?
I don't understand sigmas.
But in mmpose-master\mmpose\datasets\datasets\base\kpt_2d_sview_rgb_img_bottom_up_dataset.py:

def _get_mask(self, anno, idx):
    """Get ignore masks to mask out losses."""
    coco = self.coco
    img_info = coco.loadImgs(self.img_ids[idx])[0]
    m = np.zeros((img_info['height'], img_info['width']),
                 dtype=np.float32)
    for obj in anno:
        print(obj)
        if 'segmentation' in obj:
            if obj['iscrowd']:
                rle = xtcocotools.mask.frPyObjects(obj['segmentation'],
                                                   img_info['height'],
                                                   img_info['width'])
                m += xtcocotools.mask.decode(rle)
            elif obj['num_keypoints'] == 0:
                rles = xtcocotools.mask.frPyObjects(
                    obj['segmentation'], img_info['height'],
                    img_info['width'])
                for rle in rles:
                    m += xtcocotools.mask.decode(rle)
    return m < 0.5
The mask is created only if obj['iscrowd'] = 1 or obj['num_keypoints'] = 0, so it crashed for me because obj['iscrowd'] = 0 and there is no num_keypoints entry.
Do I have to change the line:
if obj['iscrowd']:
to
if obj['iscrowd'] == 0:
?
thank you
best regards
On Fri, 25 Nov 2022 at 04:35, Yining Li wrote:
> 1. joint_weights is used to control the loss weights of keypoints. You can set them according to your task.
> 2. sigmas is obtained from the annotation and used for computing COCO AP/AR. More information can be found at the COCO official website: https://cocodataset.org/#keypoints-eval. If sigmas are not available in your case, you can set them to arbitrary values and use metrics other than COCO AP/AR, e.g. NME.
> 3. The model selection should depend on the requirements and constraints (model size, speed, accuracy, ...). We provide model performance benchmarks in our documentation for your reference: https://mmpose.readthedocs.io/en/dev-1.x/model_zoo_papers/algorithms.html
> 4. In the dataset, an instance should be either valid (it has at least one labeled keypoint) or invalid (no keypoint label, and marked as iscrowd==1).
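For a two-keypoint dataset like the one described in this thread (keypoints « apex » and « base »), the metainfo-style entries might look like the sketch below. The dataset name, colors, weights, and sigmas are placeholder assumptions, not values shipped with MMPose:

```python
# Hypothetical dataset_info fragment for a custom 2-keypoint dataset.
# The keypoint names ("apex", "base") come from this thread; everything
# else is an illustrative placeholder you would tune for your own task.
dataset_info = dict(
    dataset_name='custom_2kpt',  # hypothetical name
    keypoint_info={
        0: dict(name='apex', id=0, color=[255, 0, 0], type='', swap=''),
        1: dict(name='base', id=1, color=[0, 255, 0], type='', swap=''),
    },
    skeleton_info={},
    # Per-keypoint loss weights (the question above: two keypoints -> two entries).
    joint_weights=[1.0, 1.0],
    # Arbitrary sigmas are acceptable if you evaluate with NME instead of COCO AP/AR.
    sigmas=[0.05, 0.05],
)

# Sanity check: one weight and one sigma per keypoint.
assert len(dataset_info['joint_weights']) == len(dataset_info['keypoint_info'])
assert len(dataset_info['sigmas']) == len(dataset_info['keypoint_info'])
```

The exact field layout should be checked against the linked coco.py metainfo; this sketch only shows how few entries a 2-keypoint dataset needs.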
|
If iscrowd==0 and num_keypoints==0, the mask will be generated by: https://github.com/open-mmlab/mmpose/blob/master/mmpose/datasets/datasets/base/kpt_2d_sview_rgb_img_bottom_up_dataset.py#L150-L155. If it crashes at this part, it's likely that your data format differs from the standard COCO format. We suggest users organize their data in COCO format to use MMPose's COCO dataset; otherwise they may have to implement a custom dataset class. |
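The branch logic under discussion can be sketched without xtcocotools: an object contributes to the ignore mask only when it is a crowd region or has a segmentation but zero labeled keypoints. This is a simplified illustration of the decision in _get_mask, not the actual MMPose code:

```python
def contributes_to_ignore_mask(obj):
    """Return True if this annotation would be added to the ignore mask.

    Mirrors the two branches of _get_mask: crowd regions, and objects
    that have a segmentation but zero labeled keypoints, are masked out.
    """
    if 'segmentation' not in obj:
        return False
    if obj['iscrowd']:
        return True
    # A KeyError on the next line is exactly the crash described in this
    # thread when 'num_keypoints' is missing from the annotation.
    return obj['num_keypoints'] == 0

# A crowd region is masked; a normally labeled object is not.
print(contributes_to_ignore_mask(
    {'segmentation': [[0, 0, 1, 0, 1, 1]], 'iscrowd': 1}))                     # True
print(contributes_to_ignore_mask(
    {'segmentation': [[0, 0, 1, 0, 1, 1]], 'iscrowd': 0, 'num_keypoints': 2}))  # False
```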
For me iscrowd=0, but num_keypoints is not defined since it is not required, and in any case it would not be equal to zero, because there are keypoints.
|
I don't get you: you said that if there aren't any keypoints we must set iscrowd to 1, and the second case is iscrowd=0 and num_keypoints=0, so where is the case iscrowd=0 and num_keypoints != 0?!
|
In this case, no mask should be generated. You may need to either add num_keypoints in your annotation file, which should be the number of visible keypoints of each object, or modify this line https://github.com/open-mmlab/mmpose/blob/master/mmpose/datasets/datasets/base/kpt_2d_sview_rgb_img_bottom_up_dataset.py#L150 to manually calculate the visible keypoint number from obj['keypoints']. |
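The suggestion to compute the visible keypoint number from obj['keypoints'] could be sketched as follows. The helper name is hypothetical; in COCO format each keypoint is stored as a flat (x, y, v) triple, with v > 0 meaning the keypoint is labeled:

```python
def count_labeled_keypoints(keypoints):
    """Count keypoints with visibility flag v > 0 in a flat COCO
    [x1, y1, v1, x2, y2, v2, ...] list."""
    return sum(1 for v in keypoints[2::3] if v > 0)

# Two keypoints, both labeled (v == 2):
print(count_labeled_keypoints([10, 20, 2, 30, 40, 2]))  # 2
# One labeled, one missing (v == 0):
print(count_labeled_keypoints([10, 20, 2, 0, 0, 0]))    # 1
```

With such a helper, annotations lacking the field could fall back to the computed count, e.g. obj.get('num_keypoints', count_labeled_keypoints(obj['keypoints'])) — an assumption about how one might patch the check, not the actual MMPose code.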
OK, but why should no mask be generated? If no mask is generated, won't keypoint detection fail?
|
As mentioned above, the mask is used in loss computation to ignore invalid regions of the image, where there are objects but no ground-truth keypoint labels. (Because of the absence of ground-truth keypoint labels, the loss would be incorrect and could cause performance degradation.) If you have an object and also its keypoint label, the loss will be computed normally and no mask is needed. |
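The role of the ignore mask in the loss can be shown with a tiny numpy sketch (an illustration of the idea, not MMPose's actual loss code): pixels where the mask is False contribute nothing to the loss, so unlabeled regions cannot produce a misleading supervision signal.

```python
import numpy as np

# Toy 2x2 heatmaps: prediction, ground truth, and a validity mask
# (True = region with keypoint labels, False = ignore).
pred = np.array([[0.9, 0.1], [0.5, 0.2]])
gt   = np.array([[1.0, 0.0], [0.0, 0.0]])
mask = np.array([[True, True], [False, True]])

# MSE restricted to valid pixels: the wrong 0.5 prediction in the
# masked-out cell does not contribute to the loss.
loss = ((pred - gt) ** 2 * mask).mean()
print(round(float(loss), 4))  # 0.015
```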
OK, thank you.
Now I have this error:
after_run:
(VERY_LOW ) TextLoggerHook
--------------------
2022-11-25 10:37:41,142 - mmpose - INFO - workflow: [('train', 1)], max:
140 epochs
2022-11-25 10:37:41,143 - mmpose - INFO - Checkpoints will be saved to
C:\mmpose-master\output by HardDiskBackend.
C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\__init__.py:20:
UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will
remove components related to the training process and add a data
transformation module. In addition, it will rename the package names mmcv
to mmcv-lite and mmcv-full to mmcv. See
https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for
more details.
warnings.warn(
Traceback (most recent call last):
  File "tools/train.py", line 201, in <module>
    main()
  File "tools/train.py", line 190, in main
    train_model(
  File "c:\mmpose-master\mmpose\apis\train.py", line 213, in train_model
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 136, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 53, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 31, in run_iter
    outputs = self.model.train_step(data_batch, self.optimizer,
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\parallel\data_parallel.py", line 77, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "c:\mmpose-master\mmpose\models\detectors\base.py", line 104, in train_step
    losses = self.forward(**data_batch)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\runner\fp16_utils.py", line 119, in new_func
    return old_func(*args, **kwargs)
  File "c:\mmpose-master\mmpose\models\detectors\cid.py", line 137, in forward
    return self.forward_train(img, multi_heatmap, multi_mask,
  File "c:\mmpose-master\mmpose\models\detectors\cid.py", line 183, in forward_train
    output = self.backbone(img)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "c:\mmpose-master\mmpose\models\backbones\hrnet.py", line 577, in forward
    y_list = self.stage2(x_list)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\nn\modules\container.py", line 119, in forward
    input = module(input)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "c:\mmpose-master\mmpose\models\backbones\hrnet.py", line 200, in forward
    x[i] = self.branches[i](x[i])
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\nn\modules\container.py", line 119, in forward
    input = module(input)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "c:\mmpose-master\mmpose\models\backbones\resnet.py", line 124, in forward
    out = _inner_forward(x)
  File "c:\mmpose-master\mmpose\models\backbones\resnet.py", line 107, in _inner_forward
    out = self.conv1(x)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\nn\modules\conv.py", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\nn\modules\conv.py", line 395, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA out of memory. Tried to allocate 240.00 MiB (GPU 0; 24.00 GiB total capacity; 22.75 GiB already allocated; 0 bytes free; 23.10 GiB reserved in total by PyTorch)
I think it is because I set image_size to 1024. How do I decrease the number of images per batch, please?
thank you very much for your help
|
The batch size can be set in the config file: https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html |
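In mmpose 0.x configs, the per-GPU batch size is the samples_per_gpu field of the data dict. A sketch, with the value 4 matching what is tried below and the nested dataset entries elided as placeholders:

```python
# Fragment of an mmpose 0.x config: samples_per_gpu is the per-GPU
# batch size; halving it roughly halves activation memory, which is
# the usual first fix for a CUDA out-of-memory error during training.
data = dict(
    samples_per_gpu=4,   # reduce further if CUDA OOM persists
    workers_per_gpu=2,
    # train=dict(...), val=dict(...), test=dict(...) elided in this sketch
)
print(data['samples_per_gpu'])  # 4
```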
OK, I set samples_per_gpu to 4;
now I have this error:
C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\__init__.py:20:
UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will
remove components related to the training process and add a data
transformation module. In addition, it will rename the package names mmcv
to mmcv-lite and mmcv-full to mmcv. See
https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for
more details.
warnings.warn(
Traceback (most recent call last):
  File "tools/train.py", line 201, in <module>
    main()
  File "tools/train.py", line 190, in main
    train_model(
  File "c:\mmpose-master\mmpose\apis\train.py", line 213, in train_model
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 136, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 53, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 31, in run_iter
    outputs = self.model.train_step(data_batch, self.optimizer,
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\parallel\data_parallel.py", line 77, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "c:\mmpose-master\mmpose\models\detectors\base.py", line 104, in train_step
    losses = self.forward(**data_batch)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\mmcv\runner\fp16_utils.py", line 119, in new_func
    return old_func(*args, **kwargs)
  File "c:\mmpose-master\mmpose\models\detectors\cid.py", line 137, in forward
    return self.forward_train(img, multi_heatmap, multi_mask,
  File "c:\mmpose-master\mmpose\models\detectors\cid.py", line 190, in forward_train
    cid_losses = self.keypoint_head(output, labels)
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "c:\mmpose-master\mmpose\models\heads\cid_head.py", line 92, in forward
    return self.forward_train(features, forward_info)
  File "c:\mmpose-master\mmpose\models\heads\cid_head.py", line 103, in forward_train
    multi_heatmap_loss = self.heatmap_loss(pred_multi_heatmap,
  File "C:\Users\MASTER\.conda\envs\openmmlab\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "c:\mmpose-master\mmpose\models\losses\heatmap_loss.py", line 117, in forward
    pos_loss = torch.log(pred) * torch.pow(1 - pred, self.alpha) * pos_inds
RuntimeError: The size of tensor a (18) must match the size of tensor b (3) at non-singleton dimension 1
|
Here are lines 26 to 30 of my
\configs\body\2d_kpt_sview_rgb_img\cid\coco\hrnet_w48_coco_512x512.py file:
data_cfg = dict(
    image_size=1024,
    base_size=512,
    base_sigma=2,
    heatmap_size=[256],
|
Please check that the keypoint number is consistent between your data and the config, including the dataset metainfo (like https://github.com/open-mmlab/mmpose/blob/master/configs/_base_/datasets/coco.py), the channel_cfg (like https://github.com/open-mmlab/mmpose/blob/master/configs/body/2d_kpt_sview_rgb_img/associative_embedding/coco/higherhrnet_w32_coco_512x512.py#L21-L28) and the model config (like https://github.com/open-mmlab/mmpose/blob/master/configs/body/2d_kpt_sview_rgb_img/associative_embedding/coco/higherhrnet_w32_coco_512x512.py#L90). Descriptions of these configurations can be found in the doc: https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html |
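A quick sanity check of the consistency described above could look like this sketch. The value 2 is illustrative for the two-keypoint dataset in this thread, and the field names follow the style of the linked 0.x configs; a mismatch like "tensor a (18) vs tensor b (3)" typically means one of these still holds the COCO defaults:

```python
# Illustrative config fragments for a 2-keypoint dataset. Deriving all
# the keypoint-dependent fields from a single constant makes the kind
# of mismatch seen in the error above impossible.
num_keypoints = 2

channel_cfg = dict(
    num_output_channels=num_keypoints,
    dataset_joints=num_keypoints,
    dataset_channel=[list(range(num_keypoints))],
    inference_channel=list(range(num_keypoints)),
)

keypoint_head = dict(num_joints=num_keypoints)  # model-side setting

# The data side and the model side must agree on the keypoint count.
assert channel_cfg['num_output_channels'] == keypoint_head['num_joints']
print('consistent:', channel_cfg['dataset_joints'])  # consistent: 2
```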
Do I have to add lines to:
extra=dict(
    stage1=dict(
        num_modules=1,
        num_branches=1,
        block='BOTTLENECK',
        num_blocks=(4, ),
        num_channels=(64, )),
    stage2=dict(
        num_modules=1,
        num_branches=2,
        block='BASIC',
        num_blocks=(4, 4),
        num_channels=(48, 96)),
    stage3=dict(
        num_modules=4,
        num_branches=3,
        block='BASIC',
        num_blocks=(4, 4, 4),
        num_channels=(48, 96, 192)),
    stage4=dict(
        num_modules=3,
        num_branches=4,
        block='BASIC',
        num_blocks=(4, 4, 4, 4),
        num_channels=(48, 96, 192, 384),
        multiscale_output=True)),
and what else, please?
|
The discussion has gone beyond the original scope of this issue. I would suggest opening separate issues, each for a specific topic, so other users can find useful information when they encounter similar problems. This issue will be closed for now. |
Thank you very much, the training is starting; I had forgotten to set num_joints.
|
Hello,
I would like to know whether the keypoints in the COCO-formatted JSON training file must lie inside their polygons, or whether it does not matter. I would also like to know whether keypoint detection takes into account the neighborhood of the point on the one hand, and the neighborhood of its polygon on the other. Finally, I would like to know how to customize my point and polygon classes in your software.
Thanks
Cordially