Does that mean I must take some categories for pre-training and then use other categories for fine-tuning?
I wonder whether I can fine-tune the GPT-GNN model directly, like fine-tuning a pre-trained BERT: just modify the data to fit the input format and then fine-tune for the downstream task.
Thanks.
Of course, you can directly pre-train and fine-tune on the same graph. The reason I adopt that split in the experiments is to make the task harder and more realistic (in a real setting, we sometimes cannot see the fine-tuning data during pre-training).
Since you mentioned BERT: the pre-training corpus and the fine-tuning corpus of BERT usually come from different domains, yet BERT still shows a really impressive generalization ability. That is why we evaluate our model in the three different transfer settings. But of course, you can directly fine-tune on the same graph.
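To make the "pre-train, then fine-tune on the same graph" idea concrete, here is a minimal sketch in plain PyTorch. It is not the GPT-GNN codebase itself; the encoder class, checkpoint path, feature tensors, and class count below are all hypothetical placeholders, only the overall workflow (load pre-trained encoder weights, attach a fresh task head, fine-tune end-to-end) reflects what is discussed above.

```python
# Minimal sketch of pre-train -> fine-tune on the same graph.
# GNNEncoder, "pretrained_gnn.pt", node_feats, and labels are hypothetical.
import torch
import torch.nn as nn

class GNNEncoder(nn.Module):
    # Stand-in for the GNN backbone pre-trained with the generative objective;
    # a single linear layer is used here purely for illustration.
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)

    def forward(self, node_feats):
        return torch.relu(self.proj(node_feats))

# 1. Build the encoder and load the weights saved during pre-training
#    (hypothetical checkpoint path; uncomment if such a file exists).
encoder = GNNEncoder(in_dim=128, hid_dim=64)
# encoder.load_state_dict(torch.load("pretrained_gnn.pt"))

# 2. Attach a fresh head for the downstream task (e.g. node classification)
#    and fine-tune both parts on the same graph used for pre-training.
clf_head = nn.Linear(64, 10)  # 10 downstream classes, assumed
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(clf_head.parameters()), lr=1e-4
)
loss_fn = nn.CrossEntropyLoss()

node_feats = torch.randn(1000, 128)     # placeholder node features
labels = torch.randint(0, 10, (1000,))  # placeholder downstream labels

for epoch in range(5):
    optimizer.zero_grad()
    logits = clf_head(encoder(node_feats))
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
```

In other words, nothing forces the label split used in the paper's transfer experiments; as long as you map your downstream data into the model's input format, fine-tuning on the same graph works the same way as fine-tuning BERT on a new task.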
hi @acbull