How to get center location and scale when inference? #23

Open
lucasjinreal opened this issue Mar 18, 2019 · 6 comments

Comments

@lucasjinreal

I need to get the final result, which it seems I can obtain by calling this function:

def get_final_preds(config, batch_heatmaps, center, scale):

But it needs a center and a scale. How do I get those?

@Calmost

Calmost commented Mar 19, 2019

Hi, have you gotten the final result yet?

@Calmost

Calmost commented Mar 19, 2019

I got the center and scale by using a YOLO detector.
Could you show me your demo file for running inference on a given image?
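
For reference, here is a minimal sketch of turning a detector box into the (center, scale) pair that get_final_preds expects. The pixel_std of 200 and the 1.25 enlargement follow the COCO loaders in this repo; box_to_center_scale is just my own helper name, not an API from this codebase:

import numpy as np

def box_to_center_scale(box, model_image_width=192, model_image_height=256):
    """box: (x, y, w, h) of a person in pixels, e.g. from a YOLO detector."""
    x, y, w, h = box
    center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)

    # pad the box to the network's input aspect ratio before scaling
    aspect_ratio = model_image_width / model_image_height
    if w > aspect_ratio * h:
        h = w / aspect_ratio
    elif w < aspect_ratio * h:
        w = h * aspect_ratio

    # scale is the padded box size divided by pixel_std (the COCO
    # convention), enlarged by 1.25 the way the dataset loaders do
    pixel_std = 200
    scale = np.array([w / pixel_std, h / pixel_std], dtype=np.float32) * 1.25
    return center, scale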

@lucasjinreal
Author

@Calmost I don't think this two-stage pose detector is worth pursuing further for inference; it doesn't run in real time.

@njustczr

I have the same problem...

@savan77

savan77 commented May 29, 2019

I wrote a quick and ugly script to run inference, but I'm stuck at center and scale. Apparently we need to work these out on our own.


from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import os
import pprint
from PIL import Image
import torch
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms

import _init_paths
from config import cfg
from config import update_config
from core.loss import JointsMSELoss
from core.function import validate
from utils.utils import create_logger
from core.inference import get_final_preds
from utils.vis import save_batch_heatmaps
import numpy as np
import dataset
import models


parser = argparse.ArgumentParser()
parser.add_argument("--image",
                    help="path to image",
                    default="tools/imgs/1.jpg")
parser.add_argument('--cfg',
                    help='experiment configure file name',
                    required=True,
                    type=str)
parser.add_argument('opts',
                    help="Modify config options using the command-line",
                    default=None,
                    nargs=argparse.REMAINDER)
parser.add_argument('--modelDir',
                    help='model directory',
                    type=str,
                    default='')
parser.add_argument('--logDir',
                    help='log directory',
                    type=str,
                    default='')
parser.add_argument('--dataDir',
                    help='data directory',
                    type=str,
                    default='')
parser.add_argument('--prevModelDir',
                    help='prev model directory',
                    type=str,
                    default='')

args = parser.parse_args()
update_config(cfg, args)
cudnn.benchmark = cfg.CUDNN.BENCHMARK
torch.backends.cudnn.deterministic = cfg.CUDNN.DETERMINISTIC
torch.backends.cudnn.enabled = cfg.CUDNN.ENABLED

model = eval('models.'+cfg.MODEL.NAME+'.get_pose_net')(
    cfg, is_train=False
)

if cfg.TEST.MODEL_FILE:
    print('=> loading model from {}'.format(cfg.TEST.MODEL_FILE))
    model.load_state_dict(torch.load(cfg.TEST.MODEL_FILE), strict=False)
    model.cuda()

criterion = JointsMSELoss(
    use_target_weight=cfg.LOSS.USE_TARGET_WEIGHT
).cuda()

normalize = transforms.Normalize(
    mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
)

transform = transforms.Compose([
        transforms.Resize((256,192)),
        transforms.ToTensor(),
        normalize,
    ])

img = Image.open(args.image)
image_tensor = transform(img) 
image_tensor = image_tensor.unsqueeze(0)

model.eval()

num_samples = 1
all_preds = np.zeros(
    (num_samples, cfg.MODEL.NUM_JOINTS, 3),
    dtype=np.float32
)

with torch.no_grad():

    # compute output heatmaps
    outputs = model(image_tensor.type(torch.cuda.FloatTensor))
    if isinstance(outputs, list):
        output = outputs[-1]
    else:
        output = outputs
    print(output.size())  # heatmap of shape (1, num_joints, height, width)
    # now we need to call get_final_preds(config, batch_heatmaps, center, scale),
    # but we don't have center and scale.

    # uncomment the following line if you have center and scale.
    ### preds, maxvals = get_final_preds(cfg, output.clone().cpu().numpy(), center, scale)

    ## save_batch_heatmaps(image_tensor, output, "test.jpg",
    ##                     normalize=True)
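
If you just want to smoke-test the script without a detector, one rough fallback (my own assumption, not something this repo prescribes) is to treat the whole image as the person box. Continuing the script above:

# assumption: the full image is the person box; pixel_std = 200 is the
# COCO convention this codebase uses for scale
w, h = img.size  # PIL returns (width, height)
center = np.array([[w / 2.0, h / 2.0]], dtype=np.float32)
scale = np.array([[w / 200.0, h / 200.0]], dtype=np.float32)
preds, maxvals = get_final_preds(cfg, output.clone().cpu().numpy(), center, scale)
print(preds)  # (1, num_joints, 2) joint coordinates in original-image pixels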

@eng100200

@savan77 Hello, did you resolve this?
