pickle file has only det_bboxes #616

Closed
godwinrayanc opened this issue Jul 17, 2022 · 6 comments

@godwinrayanc

Hello,
I have run testing on my custom dataset for VID and saved the results to a .pkl file. However, the pickle file seems to contain only the det_bboxes and not the det_labels.
Is there any way to add det_labels too? Any tips would be helpful!

@dyhBUPT (Collaborator) commented Jul 18, 2022

Hi, the `simple_test` method outputs detection results for all classes.
You can refer to

```python
outs = [
    bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes)
    for det_bboxes, det_labels in bbox_list
]
```

and

https://github.com/open-mmlab/mmdetection/blob/56e42e72cdf516bebb676e586f408b98f854d84c/mmdet/models/roi_heads/standard_roi_head.py#L253-L260
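
In other words, the labels are not lost: `bbox2result` groups the detections by class, so the class label is given by the index in the per-class list. A minimal sketch of recovering `det_bboxes`/`det_labels` from the saved results (assuming the .pkl stores the per-image `bbox2result` lists under a `det_bboxes` key; the file name is a placeholder):

```python
import pickle
import numpy as np

# Assumption: results.pkl stores a dict whose 'det_bboxes' entry is a list with
# one item per image, each item being the bbox2result output, i.e. a list with
# one (N, 5) array of [x1, y1, x2, y2, score] per class.
with open('results.pkl', 'rb') as f:
    results = pickle.load(f)

for per_image in results['det_bboxes']:
    # The class label is simply the index of the per-class array.
    det_bboxes = np.vstack(per_image)  # (sum of N over classes, 5)
    det_labels = np.concatenate([
        np.full(len(cls_bboxes), cls_id, dtype=np.int64)
        for cls_id, cls_bboxes in enumerate(per_image)
    ])
```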

Best wishes.

@godwinrayanc (Author)

Hello,
I am using SELSA for VID and am not sure how to use this `simple_test` function for testing. Is there an example?

My goal is to get the number of true positives, false positives, and false negatives on my dataset. Is there a way to get them before the mAP metric is calculated, since they are what the AP and AR calculations are based on?

@dyhBUPT (Collaborator) commented Jul 20, 2022

Oh, `simple_test` is called during testing, so you probably don't need to modify it. What I wanted to show you here is the structure of the detection results.

For your question, the VID metrics are calculated based on cocoapi, referring to

`super_eval_results = super().evaluate(`

So if you want to evaluate the TP/FP/FN, you need to modify the `evaluate` method. Please refer to the source code of cocoapi: https://github.com/cocodataset/cocoapi/blob/8c9bcc3cf640524c4c20a9c40e89cb6a2f2fa0e9/PythonAPI/pycocotools/cocoeval.py#L10
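
For orientation, the plain cocoapi flow looks roughly like this (a minimal sketch, not the mmtracking code; `annotations.json` and `detections.json` are placeholder COCO-format files):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder file names -- substitute your own COCO-format ground truth / detections.
coco_gt = COCO('annotations.json')
coco_dt = coco_gt.loadRes('detections.json')

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()    # per-image matching, stored in coco_eval.evalImgs
coco_eval.accumulate()  # precision/recall tables; this is where TP/FP counting happens
coco_eval.summarize()   # prints the mAP/AR summary
```

The matching itself happens in `evaluate()` and the TP/FP bookkeeping in `accumulate()`, so those are the places to hook into.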

@godwinrayanc (Author) commented Jul 21, 2022

```python
# Modified section of COCOeval.accumulate() (pycocotools/cocoeval.py).
# T, R, K, A, M, p, recall, precision, scores are defined earlier in accumulate().
num_tp = np.zeros((T, K, A, M))
num_fp = np.zeros((T, K, A, M))
num_fn = np.zeros((T, K, A, M))

# create dictionary for future indexing
_pe = self._paramsEval
catIds = _pe.catIds if _pe.useCats else [-1]
setK = set(catIds)
setA = set(map(tuple, _pe.areaRng))
setM = set(_pe.maxDets)
setI = set(_pe.imgIds)
# get inds to evaluate
k_list = [n for n, k in enumerate(p.catIds) if k in setK]
m_list = [m for n, m in enumerate(p.maxDets) if m in setM]
a_list = [n for n, a in enumerate(map(lambda x: tuple(x), p.areaRng)) if a in setA]
i_list = [n for n, i in enumerate(p.imgIds) if i in setI]
I0 = len(_pe.imgIds)
A0 = len(_pe.areaRng)
# retrieve E at each category, area range, and max number of detections
for k, k0 in enumerate(k_list):
    Nk = k0 * A0 * I0
    for a, a0 in enumerate(a_list):
        Na = a0 * I0
        for m, maxDet in enumerate(m_list):
            E = [self.evalImgs[Nk + Na + i] for i in i_list]
            E = [e for e in E if e is not None]
            if len(E) == 0:
                continue
            dtScores = np.concatenate([e['dtScores'][0:maxDet] for e in E])

            # different sorting methods generate slightly different results;
            # mergesort is used to be consistent with the Matlab implementation
            inds = np.argsort(-dtScores, kind='mergesort')
            dtScoresSorted = dtScores[inds]

            dtm = np.concatenate([e['dtMatches'][:, 0:maxDet] for e in E], axis=1)[:, inds]
            dtIg = np.concatenate([e['dtIgnore'][:, 0:maxDet] for e in E], axis=1)[:, inds]
            gtIg = np.concatenate([e['gtIgnore'] for e in E])
            npig = np.count_nonzero(gtIg == 0)
            if npig == 0:
                continue
            tps = np.logical_and(dtm, np.logical_not(dtIg))
            fps = np.logical_and(np.logical_not(dtm), np.logical_not(dtIg))

            # cumulative TP/FP counts over detections sorted by score
            tp_sum = np.cumsum(tps, axis=1).astype(dtype=float)
            fp_sum = np.cumsum(fps, axis=1).astype(dtype=float)
            for t, (tp, fp) in enumerate(zip(tp_sum, fp_sum)):
                tp = np.array(tp)
                fp = np.array(fp)
                nd = len(tp)
                rc = tp / npig
                pr = tp / (fp + tp + np.spacing(1))
                q = np.zeros((R,))
                ss = np.zeros((R,))

                if nd:
                    recall[t, k, a, m] = rc[-1]
                else:
                    recall[t, k, a, m] = 0

                # numpy is slow without cython optimization for accessing elements;
                # using python lists gives a significant speed improvement
                pr = pr.tolist()
                q = q.tolist()

                for i in range(nd - 1, 0, -1):
                    if pr[i] > pr[i - 1]:
                        pr[i - 1] = pr[i]

                inds = np.searchsorted(rc, p.recThrs, side='left')
                try:
                    for ri, pi in enumerate(inds):
                        q[ri] = pr[pi]
                        ss[ri] = dtScoresSorted[pi]
                except IndexError:
                    pass

                precision[t, :, k, a, m] = np.array(q)
                scores[t, :, k, a, m] = np.array(ss)
                # tp/fp are cumulative sums, so the last element is the total
                # count at this IoU threshold / category / area range / maxDet
                num_tp[t, k, a, m] = tp[-1] if nd else 0
                num_fp[t, k, a, m] = fp[-1] if nd else 0
                num_fn[t, k, a, m] = npig - num_tp[t, k, a, m]
```

Hello, I modified the cocoapi code as shown above.
However, I am not sure how to get the final numbers of TP, FP, and FN.

@dyhBUPT (Collaborator) commented Jul 22, 2022

Sorry, I'm not sure. In my opinion, you can only get the TP/FP/FN at a specified IoU threshold, e.g. TP at IoU 0.5.
For more details, maybe you can submit an issue at https://github.com/cocodataset/cocoapi/issues
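
If it helps, here is a rough sketch (a hypothetical helper, not part of cocoapi) that counts TP/FP/FN at a single IoU threshold from the per-image matches that `COCOeval.evaluate()` stores in `evalImgs`:

```python
import numpy as np

def count_tp_fp_fn(coco_eval, iou_thr=0.5):
    """Hypothetical helper: total TP/FP/FN at one IoU threshold, over all classes."""
    p = coco_eval.params
    # index of the requested IoU threshold (default thresholds are 0.5:0.05:0.95)
    t = int(np.argmin(np.abs(np.asarray(p.iouThrs) - iou_thr)))
    area_all = list(p.areaRng[0])  # keep only the 'all' area range to avoid double counting
    tp = fp = fn = 0
    for img in coco_eval.evalImgs:
        if img is None or list(img['aRng']) != area_all:
            continue
        dtm = np.asarray(img['dtMatches'])[t]            # matched gt id (0 = unmatched)
        dt_ig = np.asarray(img['dtIgnore'])[t].astype(bool)
        gtm = np.asarray(img['gtMatches'])[t]
        gt_ig = np.asarray(img['gtIgnore']).astype(bool)
        tp += int(np.count_nonzero((dtm > 0) & ~dt_ig))
        fp += int(np.count_nonzero((dtm == 0) & ~dt_ig))
        fn += int(np.count_nonzero((gtm == 0) & ~gt_ig))
    return tp, fp, fn
```

For example, `tp, fp, fn = count_tp_fp_fn(coco_eval, iou_thr=0.5)` after `coco_eval.evaluate()` has run (detections beyond `params.maxDets[-1]` per image are already dropped at that point).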

Best wishes.

@godwinrayanc (Author)

Hello, thank you for the reply. I have figured out the solution. Thank you!
