
Inference on internet videos still needs the h36m dataset??? #15

Closed
lucasjinreal opened this issue Jan 28, 2022 · 14 comments
@lucasjinreal

import os
import os.path as osp

import torch
from torch.utils.tensorboard import SummaryWriter


class BaseAdaptor():
    def __init__(self, options):
        self.options = options
        self.exppath = osp.join(self.options.expdir, self.options.expname)
        os.makedirs(self.exppath+'/mesh', exist_ok=True)
        os.makedirs(self.exppath+'/image', exist_ok=True)
        os.makedirs(self.exppath+'/result', exist_ok=True)
        self.summary_writer = SummaryWriter(self.exppath)
        self.device = torch.device('cuda')
        # set seed
        self.seed_everything(self.options.seed)

        self.options.mixtrain = self.options.lower_level_mixtrain or self.options.upper_level_mixtrain

        if self.options.retrieval:
            # load basemodel's features
            self.load_h36_cluster_res()

        # if self.options.retrieval:
        #     self.h36m_dataset = SourceDataset(datapath='data/retrieval_res/h36m_random_sample_center_10_10.pt')

        # set model

For inference only, why is the training dataset still needed?

@syguan96
Owner

Hi, thanks for your interest.
Yes, H36M is used to train the source model. But the point is how to generalize a model trained on Human3.6M to out-of-distribution (OOD) data, such as 3DPW.

There is a misunderstanding if you think it's a trick. You can refer to Sec. 4.2 of our paper for more details.

@lucasjinreal
Author

@syguan96 I don't care much about whether it's a trick or not. I just care that, if I deploy this model, I still need the training dataset — that doesn't make sense.

@syguan96
Owner

I'm not sure whether by "inference" you mean adaptation or testing.
For adaptation, using H36M has been shown to be effective. As for making it more efficient, there is more work to do; indeed, we propose a scheme to reduce the storage cost, and I believe more efficient schemes will be found.

If you care about the runtime but don't want to try other schemes, you can remove this part.

@lucasjinreal
Author

@syguan96 Can you provide a demo that doesn't use H36M for testing on internet videos? How much does the performance drop?

@syguan96
Owner

syguan96 commented Feb 8, 2022

Sure, I just finished the Spring Festival holiday. I will add this later, please stay tuned.

@syguan96 syguan96 closed this as completed Feb 8, 2022
@Len-Li

Len-Li commented May 14, 2022

Sure, I just finished the Spring Festival holiday. I will add this later, please stay tuned.

Hi, what's the command to infer a demo video without h36m datasets? Really appreciate your effort.

@syguan96
Owner

syguan96 commented May 14, 2022

Hi @Len-Li, adding these flags should work:

--lower_level_mixtrain 0 --upper_level_mixtrain 0 --mixtrain 0 --labelloss_weight 0

@Len-Li

Len-Li commented May 14, 2022

Hi @syguan96, thanks for your timely reply. However, I still encountered a dataset issue after adding the flags above.
My script:

CUDA_VISIBLE_DEVICES=0 python dynaboa_internet.py --expdir exps --expname internet --dataset internet \
                                            --motionloss_weight 0.8 \
                                            --retrieval 1 \
                                            --dynamic_boa 1 \
                                            --optim_steps 7 \
                                            --cos_sim_threshold 3.1e-4 \
                                            --shape_prior_weight 2e-4 \
                                            --pose_prior_weight 1e-4 \
                                            --save_res 1 \
                                            --lower_level_mixtrain 0 \
                                            --upper_level_mixtrain 0 \
                                            --mixtrain 0 \
                                            --labelloss_weight 0

The error:

Traceback (most recent call last):
  File "dynaboa_internet.py", line 183, in <module>
    adaptor.excute()
  File "dynaboa_internet.py", line 83, in excute
    self.adaptation(batch)
  File "dynaboa_internet.py", line 104, in adaptation
    lower_level_loss, _ = self.lower_level_adaptation(image, gt_keypoints_2d, h36m_batch, learner)
  File "/home/leheng.li/my_nerf/obj_prior/human/DynaBOA/base_adaptor.py", line 276, in lower_level_adaptation
    h36m_batch = self.retrieval(init_features[5])
  File "/home/leheng.li/my_nerf/obj_prior/human/DynaBOA/base_adaptor.py", line 89, in retrieval
    h36mdata_list.append(self.get_h36m_data(x))
  File "/home/leheng.li/my_nerf/obj_prior/human/DynaBOA/base_adaptor.py", line 71, in get_h36m_data
    item_i = self.h36m_dataset[indice]
  File "/home/leheng.li/my_nerf/obj_prior/human/DynaBOA/base_adaptor.py", line 500, in __getitem__
    img = self.read_image(imgname)
  File "/home/leheng.li/my_nerf/obj_prior/human/DynaBOA/base_adaptor.py", line 541, in read_image
    imgname)
FileNotFoundError: [Errno 2] No such file or directory: '/data/syguan/human_datasets/Human3.6M/human36m_full_raw/images/S8_Greeting.55011271_000111.jpg'

@syguan96
Owner

Sorry, I forgot to remind you to set --retrieval 0.
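For reference, combining all of the fixes from this thread, an H36M-free run would look like the sketch below (flag values are simply the ones from the script quoted above, not recommendations):

```shell
# Sketch: the internet-video command from above, with H36M retrieval and all
# mix-training / label-loss terms disabled, per this thread.
CUDA_VISIBLE_DEVICES=0 python dynaboa_internet.py \
    --expdir exps --expname internet --dataset internet \
    --motionloss_weight 0.8 \
    --retrieval 0 \
    --dynamic_boa 1 \
    --optim_steps 7 \
    --cos_sim_threshold 3.1e-4 \
    --shape_prior_weight 2e-4 \
    --pose_prior_weight 1e-4 \
    --save_res 1 \
    --lower_level_mixtrain 0 \
    --upper_level_mixtrain 0 \
    --mixtrain 0 \
    --labelloss_weight 0
```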

@syguan96 syguan96 reopened this May 14, 2022
@Len-Li

Len-Li commented May 15, 2022

Sorry, I forgot to remind you to set --retrieval 0.

This solved my problem! But now I hit a pyrender problem when rendering the result: ValueError: Invalid device ID (0)

Traceback (most recent call last):
  File "dynaboa_internet.py", line 183, in <module>
    adaptor.excute()
  File "dynaboa_internet.py", line 86, in excute
    self.inference(batch, self.model)
  File "dynaboa_internet.py", line 168, in inference
    self.save_results(pred_vertices, pred_cam, image, batch['imgname'], batch['bbox'], prefix='Pred')
  File "/home/leheng.li/my_nerf/obj_prior/human/DynaBOA/base_adaptor.py", line 481, in save_results
    renderer = Renderer(resolution=(ori_w, ori_h), orig_img=True, wireframe=False)
  File "/home/leheng.li/my_nerf/obj_prior/human/DynaBOA/render_demo.py", line 69, in __init__
    point_size=1.0
  File "/home/leheng.li/miniconda3/envs/p4d/lib/python3.7/site-packages/pyrender/offscreen.py", line 31, in __init__
    self._create()
  File "/home/leheng.li/miniconda3/envs/p4d/lib/python3.7/site-packages/pyrender/offscreen.py", line 137, in _create
    egl_device = egl.get_device_by_index(device_id)
  File "/home/leheng.li/miniconda3/envs/p4d/lib/python3.7/site-packages/pyrender/platforms/egl.py", line 83, in get_device_by_index
    raise ValueError('Invalid device ID ({})'.format(device_id, len(devices)))
ValueError: Invalid device ID (0)

I am using a remote server. It seems that OpenGL doesn't recognize my graphics card. Do you have any advice?

@syguan96
Owner

Try adding os.environ['PYOPENGL_PLATFORM'] = 'egl' at the top line of dynaboa_internet.py?
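PyOpenGL selects its windowing backend when it is first imported, so the environment variable has to be set before any pyrender/OpenGL import runs anywhere in the process — a minimal sketch:

```python
import os

# Must execute before the first `import pyrender` / `import OpenGL` in the
# process: PyOpenGL reads PYOPENGL_PLATFORM once, at import time.
os.environ['PYOPENGL_PLATFORM'] = 'egl'

# import pyrender  # now safe: pyrender uses the EGL offscreen backend
```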

@Len-Li

Len-Li commented May 15, 2022

try to add os.environ['PYOPENGL_PLATFORM'] = 'egl' at the top line of dynaboa_internet.py?

I tried adding os.environ['PYOPENGL_PLATFORM'] = 'egl', but it doesn't help.

@syguan96
Owner

It looks like your GPU cannot be found. Could you switch to the OSMesa backend? Make sure you have installed all of Pyrender's dependencies.
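A minimal sketch of the OSMesa switch — assuming the OSMesa system libraries from Pyrender's offscreen-rendering guide are installed on the server (that installation is an assumption about your environment, not something this repo sets up):

```python
import os

# Select the pure-software OSMesa backend: no GPU, display, or EGL device needed.
# Requires the OSMesa libraries described in Pyrender's offscreen-rendering docs
# (e.g. libosmesa6 plus the patched PyOpenGL they recommend).
os.environ['PYOPENGL_PLATFORM'] = 'osmesa'

# import pyrender  # safe now: OffscreenRenderer will render on the CPU
```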

@Len-Li

Len-Li commented May 15, 2022

Thanks for your help. I will try to use OSMesa.
