Thanks for sharing your work. Is there code for the "Training-Free Confidence Scoring Mechanism"? I cloned the repository and only found eval/run_llava.py for running a demo. Also, is there evaluation code for COCO?
Hi @CongHan0808 ,
Thank you for your interest in our work. In the initial version of this repo, we only provide the demo code. We'll release the training and batch-inference code later. Until then, you can modify the demo code as follows:
1. Set `return_dict_in_generate=True` and `output_scores=True` in the `model.generate` call.
2. Get the output sequence and the corresponding score of each output token.
3. Calculate the bbox score and category score of each instance according to the paper (our implementation matches the paper's description).
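The steps above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes the scores come back as one logit array per generated token (as `generate` returns with `output_scores=True`), and the helper names and span boundaries are hypothetical — in practice the bbox/category token spans are found by parsing the decoded sequence.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the vocabulary axis
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def token_confidences(step_scores, generated_ids):
    # step_scores: one (vocab_size,) logit array per generated token,
    # i.e. the `scores` tuple returned when output_scores=True
    # generated_ids: the token ids actually decoded at each step
    return [softmax(s)[tid] for s, tid in zip(step_scores, generated_ids)]

def span_score(confidences, start, end):
    # average token confidence over one span of the output, e.g. the
    # bbox tokens or the category-name tokens of a single instance
    # (span boundaries here are placeholders)
    return float(np.mean(confidences[start:end]))
```

For example, after `outputs = model.generate(..., return_dict_in_generate=True, output_scores=True)`, you would pass `outputs.scores` and the newly generated portion of `outputs.sequences` into `token_confidences`, then average over the parsed bbox and category spans.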
Later, we'll release the full code, including batch inference for REC/REG/Detection/Counting/Phrase Grounding.