I met a bug during evaluation #6

Open · lisaner000 opened this issue Apr 11, 2024 · 30 comments
@lisaner000

```
Traceback (most recent call last):
  File "evaluate.py", line 94, in <module>
    model.load_state_dict(checkpoint['gen_state_dict'])
  File "/root/miniconda3/envs/yzy_staf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1498, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for SAFM:
    Missing key(s) in state_dict: "nonlocalblock.attention.fc.weight", "nonlocalblock.attention.fc.bias", "nonlocalblock.attention.attention.0.weight", "nonlocalblock.attention.attention.0.bias", "nonlocalblock.attention.attention.2.weight", "nonlocalblock.attention.attention.2.bias", "nonlocalblock.attention.attention.4.weight", "nonlocalblock.attention.attention.4.bias".
    Unexpected key(s) in state_dict: "points_grid", "deconv_layers.0.weight", "deconv_layers.1.weight", "deconv_layers.1.bias", "deconv_layers.1.running_mean", "deconv_layers.1.running_var", "deconv_layers.1.num_batches_tracked", "deconv_layers.3.weight", "deconv_layers.4.weight", "deconv_layers.4.bias", "deconv_layers.4.running_mean", "deconv_layers.4.running_var", "deconv_layers.4.num_batches_tracked", "deconv_layers.6.weight", "deconv_layers.7.weight", "deconv_layers.7.bias", "deconv_layers.7.running_mean", "deconv_layers.7.running_var", "deconv_layers.7.num_batches_tracked", "maf_extractor.0.Dmap", "maf_extractor.0.conv0.weight", "maf_extractor.0.conv0.bias", "maf_extractor.0.conv1.weight", "maf_extractor.0.conv1.bias", "maf_extractor.0.conv2.weight", "maf_extractor.0.conv2.bias", "maf_extractor.1.Dmap", "maf_extractor.1.conv0.weight", "maf_extractor.1.conv0.bias", "maf_extractor.1.conv1.weight", "maf_extractor.1.conv1.bias", "maf_extractor.1.conv2.weight", "maf_extractor.1.conv2.bias", "maf_extractor.2.Dmap", "maf_extractor.2.conv0.weight", "maf_extractor.2.conv0.bias", "maf_extractor.2.conv1.weight", "maf_extractor.2.conv1.bias", "maf_extractor.2.conv2.weight", "maf_extractor.2.conv2.bias", "regressor.0.init_pose", "regressor.0.init_shape", "regressor.0.init_cam", "regressor.0.fc1.weight", "regressor.0.fc1.bias", "regressor.0.fc2.weight", "regressor.0.fc2.bias", "regressor.0.decpose.weight", "regressor.0.decpose.bias", "regressor.0.decshape.weight", "regressor.0.decshape.bias", "regressor.0.deccam.weight", "regressor.0.deccam.bias", "regressor.0.smpl.betas", "regressor.0.smpl.global_orient", "regressor.0.smpl.body_pose", "regressor.0.smpl.faces_tensor", "regressor.0.smpl.v_template", "regressor.0.smpl.shapedirs", "regressor.0.smpl.J_regressor", "regressor.0.smpl.posedirs", "regressor.0.smpl.parents", "regressor.0.smpl.lbs_weights", "regressor.0.smpl.J_regressor_extra", "regressor.0.smpl.vertex_joint_selector.extra_joints_idxs", "regressor.1.init_pose", "regressor.1.init_shape", "regressor.1.init_cam", "regressor.1.fc1.weight", "regressor.1.fc1.bias", "regressor.1.fc2.weight", "regressor.1.fc2.bias", "regressor.1.decpose.weight", "regressor.1.decpose.bias", "regressor.1.decshape.weight", "regressor.1.decshape.bias", "regressor.1.deccam.weight", "regressor.1.deccam.bias", "regressor.1.smpl.betas", "regressor.1.smpl.global_orient", "regressor.1.smpl.body_pose", "regressor.1.smpl.faces_tensor", "regressor.1.smpl.v_template", "regressor.1.smpl.shapedirs", "regressor.1.smpl.J_regressor", "regressor.1.smpl.posedirs", "regressor.1.smpl.parents", "regressor.1.smpl.lbs_weights", "regressor.1.smpl.J_regressor_extra", "regressor.1.smpl.vertex_joint_selector.extra_joints_idxs", "regressor.2.init_pose", "regressor.2.init_shape", "regressor.2.init_cam", "regressor.2.fc1.weight", "regressor.2.fc1.bias", "regressor.2.fc2.weight", "regressor.2.fc2.bias", "regressor.2.decpose.weight", "regressor.2.decpose.bias", "regressor.2.decshape.weight", "regressor.2.decshape.bias", "regressor.2.deccam.weight", "regressor.2.deccam.bias", "regressor.2.smpl.betas", "regressor.2.smpl.global_orient", "regressor.2.smpl.body_pose", "regressor.2.smpl.faces_tensor", "regressor.2.smpl.v_template", "regressor.2.smpl.shapedirs", "regressor.2.smpl.J_regressor", "regressor.2.smpl.posedirs", "regressor.2.smpl.parents", "regressor.2.smpl.lbs_weights", "regressor.2.smpl.J_regressor_extra", "regressor.2.smpl.vertex_joint_selector.extra_joints_idxs", "safm.nonlocalblock.attention.fc.weight", "safm.nonlocalblock.attention.fc.bias", "safm.nonlocalblock.attention.attention.0.weight", "safm.nonlocalblock.attention.attention.0.bias", "safm.nonlocalblock.attention.attention.2.weight", "safm.nonlocalblock.attention.attention.2.bias", "safm.nonlocalblock.attention.attention.4.weight", "safm.nonlocalblock.attention.attention.4.bias", "tcfm.nonlocalblock.conv_phi.weight", "tcfm.nonlocalblock.conv_theta.weight", "tcfm.nonlocalblock.conv_g.weight", "tcfm.nonlocalblock.conv_mask.weight", "tcfm.nonlocalblock.conv_mask_forR.weight".
```
Can you give me some advice?

@yw0208 (Owner) commented Apr 11, 2024

I'm very sorry about that. The evaluation scripts are validate.py, validate_h36m.py, and validate_mpii3d.py. If you want to evaluate STAF, you need to prepare a few things first (see the linked issue), because my code runs on features that have already been extracted.
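
For anyone hitting the same RuntimeError, here is a small generic PyTorch sketch (not code from this repo) that prints exactly where a checkpoint and a model disagree; the 'gen_state_dict' key is taken from the traceback above:

```python
import torch

def diff_state_dict(model, ckpt_path, key='gen_state_dict'):
    """Print exactly which keys a checkpoint and a model disagree on."""
    checkpoint = torch.load(ckpt_path, map_location='cpu')
    ckpt_keys = set(checkpoint[key].keys())
    model_keys = set(model.state_dict().keys())
    print('missing from checkpoint:', sorted(model_keys - ckpt_keys))
    print('unexpected in checkpoint:', sorted(ckpt_keys - model_keys))
```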

@lisaner000 (Author)

This issue has been solved. However, I find that the results differ greatly from those reported in your paper. Is the reason related to the datasets you processed?

@yw0208 (Owner) commented Apr 11, 2024

How much do the results differ? And can you show me how you extracted the features?

@lisaner000 (Author)

At present I haven't added any feature-extraction code; I'm using the data preprocessed by VIBE.

@yw0208 (Owner) commented Apr 11, 2024

Although both our model and VIBE use ResNet-50 as the backbone, the weights differ. You should load the weights from ./data/base_model.pt and extract the features yourself.
Note that STAF takes features of shape [batch_size, 9, 2048, 7, 7] as input, but as I recall, the data preprocessed by VIBE are 2048-dim feature vectors. So how did you get your evaluation results? There must be something wrong.
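
For illustration, a minimal sketch of such an extraction step, assuming base_model.pt holds ResNet-50 weights and that STAF consumes the spatial 7×7 maps before global pooling (the actual loading code in this repo may name keys differently):

```python
import torch
import torchvision

# Sketch: ResNet-50 truncated before global average pooling, so each
# 224x224 crop yields a [2048, 7, 7] feature map.
backbone = torchvision.models.resnet50()
state = torch.load('./data/base_model.pt', map_location='cpu')
backbone.load_state_dict(state, strict=False)  # assumption: key names may need remapping
extractor = torch.nn.Sequential(*list(backbone.children())[:-2])
extractor.eval()

with torch.no_grad():
    frames = torch.randn(9, 3, 224, 224)  # one 9-frame window
    feats = extractor(frames)             # [9, 2048, 7, 7]
    feats = feats.unsqueeze(0)            # [1, 9, 2048, 7, 7], as STAF expects
```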

@lisaner000 (Author)

So you mean that I should first load the weights and change the code in "dataset".py, as well as self.generator and the related code in trainer.py, to extract features, so that they go from 2048-dim vectors to [batch_size, 9, 2048, 7, 7]. Is my understanding right?

@yw0208 (Owner) commented Apr 11, 2024

Yes, you are right.

@lisaner000 (Author)

Can you give me some advice on extracting the features?

@yw0208 (Owner) commented Apr 11, 2024

No further advice is needed; with that, you should get the right results. Just make sure the image preprocessing is correct when extracting features. You can refer to how I do it in demo.py.
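
For reference, a typical crop-and-normalize pipeline looks like the sketch below; the 224×224 size and ImageNet statistics are assumptions, so confirm them against the transforms demo.py actually applies:

```python
from torchvision import transforms

# Assumed preprocessing for ResNet-50 features: 224x224 crops normalized
# with ImageNet statistics. Verify both against demo.py.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```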

@lisaner000 (Author)

Are the base datasets the same as TCMR's preprocessed data? And should I refer to demo.py when changing the related code to implement the feature extraction?

@yw0208 (Owner) commented Apr 11, 2024

Yes, then these features can be sent to STAF.

@lisaner000 (Author)

So the extracted features will then be served by the data loaders during training, right? In that case, should I add a feature-extraction function to trainer.py and call it in the fit() function?

@yw0208 (Owner) commented Apr 11, 2024

I think the best way is to make the feature extractor a part of STAF, i.e., instantiate the feature extractor here, and change the dataloader to load images rather than features. The related code in trainer.py also needs to be changed.
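
A rough sketch of that structure (the class and variable names are illustrative, not the actual STAF code):

```python
import torch.nn as nn

# Illustrative wrapper: the backbone lives inside the model, so the
# dataloader can feed raw image sequences instead of features.
class STAFWithBackbone(nn.Module):
    def __init__(self, extractor, staf):
        super().__init__()
        self.extractor = extractor  # ResNet-50 trunk, outputs [N, 2048, 7, 7]
        self.staf = staf            # the original feature-based model

    def forward(self, images):                        # images: [B, T, 3, H, W]
        b, t = images.shape[:2]
        feats = self.extractor(images.flatten(0, 1))  # [B*T, 2048, 7, 7]
        feats = feats.view(b, t, *feats.shape[1:])    # [B, T, 2048, 7, 7]
        return self.staf(feats)
```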

@lisaner000 (Author) commented Apr 12, 2024

About changing the dataloader to load images: is it complex? And could you provide the related code?

@yw0208 (Owner) commented Apr 12, 2024

This should be easy. I have shown an example, the CropDataset class, in _dataset_demo.py. All you need to do is adapt that class to fit your training dataset. As for the sequence split, you can still follow "dataset".py in ./lib/dataset/.
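
For example, something along these lines; the constructor arguments mirror the CropDataset call used elsewhere in this thread, and the folder path and annotation variables are placeholders you must fill in:

```python
# Placeholder sketch: build one CropDataset per training video from your
# dataset's own annotations instead of the demo's detector output.
dataset = CropDataset(
    image_folder='data/h36m/S1/video_0/',  # hypothetical frame folder
    frames=frames,      # frame indices belonging to this sequence
    bboxes=bboxes,      # per-frame person bounding boxes
    joints2d=None,      # or 2D joints, if you crop from joints instead
    scale=1.1,          # bbox enlargement factor
)
```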

@lisaner000 (Author) commented Apr 12, 2024

My understanding of your advice is that I should refer here and modify ./lib/dataset/_loaders.py so that it loads images, right?

@yw0208 (Owner) commented Apr 12, 2024

Right.

@lisaner000 (Author)

I have another question: the function get_data_loaders() in ./lib/dataset/_loaders.py builds the 2D/3D datasets first, and then I want to add the CropDataset code. But I find that CropDataset crops a single image, whereas the 2D/3D datasets are created before it. I'm not sure whether some parameters, such as frames, bbox, and joints2d, can be deleted?

@yw0208 (Owner) commented Apr 12, 2024

get_data_loaders() is based on "dataset".py, and "dataset".py inherits from _dataset_2d.py or _dataset_3d.py. So please find the function get_img_sequence(); you can change it to load image sequences following CropDataset. It should be pretty easy.

@lisaner000 (Author)

I changed the function get_img_sequence() in ./lib/dataset/_dataset_2d.py as follows:

```python
def get_img_sequence(self, img_paths):
    # imgs_tensor_list = []
    bbox_scale = 1.1
    for path in img_paths:
        bboxes = joints2d = None
        img_data = joblib.load(path)
        # img = img_data['s_feat']
        bboxes = img_data['bbox']
        frames = img_data['frames']
        dataset = CropDataset(
            image_folder=path,
            frames=frames,
            bboxes=bboxes,
            joints2d=joints2d,
            scale=bbox_scale,
        )
        bboxes = dataset.bboxes
        frames = dataset.frames
        # imgs_tensor_list.append(img)
    return dataset
```
Could you please help me see if there is any problem?

@yw0208 (Owner) commented Apr 13, 2024

Sorry, but this is a basic programming problem. Maybe you should find some online lessons to learn how to use PyTorch.
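
That said, a minimal corrected sketch of the loop above, fixing the two obvious issues (only the last dataset is returned, and the joblib file path is passed as image_folder), might look like this; img_folder_from() is a hypothetical helper you would still need to write:

```python
import joblib

def get_img_sequence(self, img_paths):
    datasets = []
    bbox_scale = 1.1
    for path in img_paths:
        img_data = joblib.load(path)
        datasets.append(CropDataset(
            image_folder=img_folder_from(path),  # hypothetical: derive the frame folder from the .pt path
            frames=img_data['frames'],
            bboxes=img_data['bbox'],
            joints2d=None,
            scale=bbox_scale,
        ))
    return datasets  # one CropDataset per sequence, not just the last one
```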

@lisaner000 (Author)

When I changed the code, I encountered an error indicating that there is no original image file. So I need to download the original MPI-INF-3DHP and Human3.6M datasets, right?

@yw0208 (Owner) commented Apr 15, 2024

Yes, you do.

@lisaner000 (Author)

Could you share your processed training datasets? I sincerely hope to use your datasets to implement my method. Thank you very much!

@yw0208 (Owner) commented Apr 16, 2024

Do you mean the extracted features? They take up almost 14 TB of storage, so I would have trouble uploading and sharing them. I don't recommend this approach either; it is more convenient to load the images directly.

@lisaner000 (Author)

Thank you very much! I have another question: which parameter decides the second dimension of target_3d['w_3d'] and target['w_smpl']?

@yw0208 (Owner) commented Apr 18, 2024

'w_3d' and 'w_smpl' mean "with 3D joint labels" and "with SMPL labels". So when your dataset has 3D joint labels and SMPL labels, 'w_3d' and 'w_smpl' are 1; if not, they are 0.
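
Schematically (a sketch only; the actual tensors in the repo are built per frame, e.g. with shape [seqlen, 1]):

```python
import torch

seqlen = 16
has_3d_joints = True   # the dataset ships 3D joint labels
has_smpl = False       # the dataset ships SMPL parameters

# One flag per frame: 1 = label available, 0 = not available.
w_3d = torch.ones(seqlen, 1) if has_3d_joints else torch.zeros(seqlen, 1)
w_smpl = torch.ones(seqlen, 1) if has_smpl else torch.zeros(seqlen, 1)
```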

@lisaner000 (Author)

If I use the off-the-shelf datasets provided by TCMR and I want to change target_3d['w_3d'] from [13, 3] to [13, 16], can you give me some advice?

@yw0208 (Owner) commented Apr 18, 2024

Sorry, I made a mistake earlier. I don't know why you want to change it, but there are two ways to do so.
First, you can repeat the flag column like
target_3d['w_3d'] = target_3d['w_3d'][:, :1].repeat(1, 16)
(slicing with [:, :1] keeps the shape [13, 1], so the repeat yields [13, 16]).
Second, you can just set SEQLEN to 16 in repr_table4_h36m_mpii3d_model.yaml.
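
A quick check of the shape arithmetic for the first option, in plain PyTorch:

```python
import torch

w_3d = torch.ones(13, 3)          # original shape [13, 3]
w_3d = w_3d[:, :1].repeat(1, 16)  # [:, :1] keeps [13, 1]; repeat -> [13, 16]
print(w_3d.shape)                 # torch.Size([13, 16])
```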

@yw0208 (Owner) commented Apr 18, 2024

Which method to use depends on what you want to achieve.
