
RefCOCO/+/g json files #25

Open
nankepan opened this issue Sep 23, 2022 · 8 comments


@nankepan

Thanks for your great work!
Can you provide the RefCOCO/+/g json files? I didn't find them in this repository.


@wjn922
Owner

wjn922 commented Sep 23, 2022

Hi, you can refer to this file to generate the json files yourself.

We may release the generated json files in a few days, since we are busy with other projects right now. Thanks for your understanding.
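The conversion the maintainer points at is not shown here, but the general idea is flattening RefCOCO-style referring expressions into one json record per expression. A minimal sketch of that shape, assuming a `refs` structure like the one returned by the refer toolkit's `loadRefs` (the field names `image_id`, `ann_id`, `ref_id`, `sentences`/`sent` follow that toolkit; the output schema here is an illustration, not the repository's actual format):

```python
# Hypothetical sketch: flatten RefCOCO-style ref entries into one
# json record per referring expression. Field names are assumptions
# modeled on the refer toolkit, not this repository's exact schema.
import json


def refs_to_json(refs):
    """Return one flat record per sentence in the given ref entries."""
    records = []
    for ref in refs:
        for sent in ref["sentences"]:
            records.append({
                "image_id": ref["image_id"],
                "ann_id": ref["ann_id"],
                "ref_id": ref["ref_id"],
                "expression": sent["sent"],
            })
    return records


if __name__ == "__main__":
    # Tiny made-up example in the style of loadRefs output.
    refs = [{
        "image_id": 1, "ann_id": 10, "ref_id": 100,
        "sentences": [{"sent": "the dog on the left"},
                      {"sent": "leftmost dog"}],
    }]
    print(json.dumps(refs_to_json(refs), indent=2))
```

Dumping `records` with `json.dump` then gives one self-contained annotation file per split.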

@nankepan nankepan closed this as not planned Sep 23, 2022
@nankepan
Author

Thank you for your response!

@nankepan nankepan reopened this Sep 23, 2022
@wjn922
Owner

wjn922 commented Oct 5, 2022

@nankepan We have uploaded the pre-processed refcoco/+/g files in OneDrive.

@itruonghai

@wjn922 Could you release the joint training script?

@wjn922
Owner

wjn922 commented Oct 14, 2022

@itruonghai The script is like this, but note that we use 32 V100 GPUs for the joint training:

python3 -m torch.distributed.launch --nproc_per_node=8  --use_env \
main_joint.py  --dataset_file joint --binary --with_box_refine \
--batch_size 1 --num_frames 5 \
--epochs 12 --lr_drop 8 10 \
--freeze_text_encoder \
[backbone]
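Since the command above uses `--nproc_per_node=8` but the maintainer mentions 32 V100s, the full run presumably spans 4 nodes. A hedged sketch of the multi-node variant, using the standard `torch.distributed.launch` flags (`--nnodes`, `--node_rank`, `--master_addr`, `--master_port`); `$NODE_RANK` and `$MASTER_IP` are placeholders, and the per-node launch is otherwise identical to the command above:

```shell
# Hypothetical 4-node launch (8 GPUs/node = 32 V100s total).
# --nnodes/--node_rank/--master_addr/--master_port are standard
# torch.distributed.launch options; $NODE_RANK is 0..3 per node,
# $MASTER_IP is the address of the rank-0 node.
python3 -m torch.distributed.launch --nproc_per_node=8 \
    --nnodes=4 --node_rank=$NODE_RANK \
    --master_addr=$MASTER_IP --master_port=29500 --use_env \
    main_joint.py --dataset_file joint --binary --with_box_refine \
    --batch_size 1 --num_frames 5 \
    --epochs 12 --lr_drop 8 10 \
    --freeze_text_encoder \
    [backbone]
```

This is a launch-configuration fragment; the same command must be run once on each of the 4 nodes with its own `$NODE_RANK`.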

@itruonghai

@wjn922 Could you also release the training log? It would help me reproduce the results efficiently, since I do not have that much compute (32 V100 GPUs).

@wjn922
Owner

wjn922 commented Oct 14, 2022

Sure. Please see log.txt.

@itruonghai

@wjn922 Could you share all the log files, including the joint training across models, as well as the finetuning stages? Thanks for your time.
