Reproduce result of usa-airports #3

Closed

larry2020626 opened this issue Jun 29, 2020 · 5 comments

@larry2020626
Hi, thank you for releasing your code. I am currently trying to reproduce the result of the node classification experiment on the US-Airport dataset, but I can't get accuracy as high as 68.3%. Are there any techniques I can use to get higher accuracy? Thanks!

@qibinc
Collaborator

qibinc commented Jun 29, 2020

Hi @larry2020626 ,

Glad to help you reproduce the result. Please kindly provide the following information:

  1. Which model are you evaluating? (moco/e2e, downloaded/pretrained);
  2. Your device (cpu/cuda);
  3. Your pytorch/dgl version;
  4. Your obtained result on airport.

In addition, did you try other datasets/tasks? Please feel free to provide more datapoints/screenshots.
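For reference, a minimal snippet (just a sketch, not part of the GCC scripts) for collecting the version information requested above:

import torch
import dgl

# Print the environment details requested above.
print("pytorch:", torch.__version__)
print("cuda available:", torch.cuda.is_available())
print("dgl:", dgl.__version__)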

Thanks!

@larry2020626
Author

Thanks very much for your reply!

I adopted the E2E model, running on CUDA 10.0.
My device:
40 Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz
3b:00.0 3D controller: NVIDIA Corporation Device 1eb8 (rev a1)
5e:00.0 3D controller: NVIDIA Corporation Device 1eb8 (rev a1)
pytorch version: 1.4.0+cu100
dgl version: 0.4.3post2
Current result: {'Micro-F1': 0.6218}

Here is what I do:

1. Download the small.bin pretraining dataset:

python scripts/download.py --url https://cloud.tsinghua.edu.cn/f/b37eed70207c468ba367/?dl=1 --path data --fname small.bin

2. Pretrain:

bash scripts/pretrain.sh 0 --batch-size 256

3. For downstream tasks:

python scripts/download.py --url https://cloud.tsinghua.edu.cn/f/2535437e896c4b73b6bb/?dl=1 --path data --fname downstream.tar.gz
bash scripts/generate.sh 0 Pretrain_moco_False_dgl_gin_layer_5_lr_0.005_decay_1e-05_bsz_256_hid_64_samples_2000_nce_t_0.07_nce_k_32_rw_hops_256_restart_prob_0.8_aug_1st_ft_False_deg_16_pos_32_momentum_0.999 usa_airport
bash scripts/node_classification/ours.sh saved/Pretrain_moco_False_dgl_gin_layer_5_lr_0.005_decay_1e-05_bsz_256_hid_64_samples_2000_nce_t_0.07_nce_k_32_rw_hops_256_restart_prob_0.8_aug_1st_ft_False_deg_16_pos_32_momentum_0.999 64 usa_airport

Thanks!
I will also try evaluating the downloaded models on the usa-airports dataset, to check whether the problem is that my own pre-training is not good enough.
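For context, the freeze-style evaluation in step 3 essentially fits a simple classifier on the frozen embeddings. A rough sketch of that kind of evaluation (assuming the embeddings and labels are exported as NumPy arrays; the file names below are placeholders, not the repo's actual outputs):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Placeholder files: frozen GCC node embeddings and the usa_airport labels.
emb = np.load("usa_airport_embeddings.npy")
labels = np.load("usa_airport_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    emb, labels, test_size=0.2, random_state=0, stratify=labels
)

# Train a linear classifier on the frozen embeddings and report Micro-F1.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)
print({"Micro-F1": round(f1_score(y_test, pred, average="micro"), 4)})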

@qibinc
Collaborator

qibinc commented Jun 29, 2020

Hi @larry2020626 ,

Thanks for your reply. The main reason is that your experiments correspond to the GCC (E2E, freeze) model, while "68.3" is the result of GCC (E2E, full). For the difference, please see the Freezing vs. full fine-tuning paragraph at the beginning of page 6.

To obtain the "full" (fine-tuning) result, please run:

python train.py --exp FT --model-path saved --tb-path tensorboard --tb-freq 5 --gpu 0 --dataset usa_airport --finetune --epochs 30 --resume saved/Pretrain_moco_False_dgl_gin_layer_5_lr_0.005_decay_1e-05_bsz_256_hid_64_samples_2000_nce_t_0.07_nce_k_32_rw_hops_256_restart_prob_0.8_aug_1st_ft_False_deg_16_pos_32_momentum_0.999/current.pth --cv

This will start fine-tuning the model on usa_airport over 10 splits, and the final mean/std of the accuracy will be reported. The result on my side is 66.7 ± 4.1.
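To illustrate how that mean/std over the 10 splits is reported (a generic sketch, not the repo's exact code; the accuracy values below are made up for illustration only):

import numpy as np

# Made-up per-split accuracies for 10 cross-validation splits (illustration only).
split_acc = np.array([0.61, 0.64, 0.67, 0.70, 0.63, 0.66, 0.62, 0.68, 0.65, 0.69])

# Report in the same "mean ± std" format as above, in percent.
print(f"{100 * split_acc.mean():.1f} ± {100 * split_acc.std():.1f}")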

Besides, you can use the batch size of 1024 from the paper instead of 256, but pretraining will then take even longer. In that case, the result is 68.3 ± 2.8.
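Concretely, that just means rerunning the pretraining script from step 2 above with the larger batch size (same command, only the flag value changes):

bash scripts/pretrain.sh 0 --batch-size 1024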

@larry2020626
Author

larry2020626 commented Jun 30, 2020

Thanks @qibinc, I really appreciate your help!

@qibinc
Collaborator

qibinc commented Jul 1, 2020

Hi Larry, I'm glad it worked for you. Feel free to raise more issues if you encounter other problems. Closing this.

qibinc closed this as completed Jul 1, 2020