Thanks for sharing the code. However, I notice that in pretrain.config the file path points to data/all, but there is no such folder or corpus data for pre-training in the repo, which leads to a significant drop in results.
Could you update the README with instructions for pre-processing the corpus, or upload the pre-trained weights, so we can reproduce the results in the paper? Many thanks.
As described in the paper, we use tables from both the WikiTables and WebQueryTable datasets for pretraining. Those two datasets are publicly available. I'll put uploading the model weights on my to-do list.
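For anyone trying to rebuild the missing data/all directory in the meantime, a minimal sketch of merging the two public table dumps into one corpus folder might look like the following. Note this is an assumption: the actual file format (JSON lines here), the helper name merge_table_files, and the directory layout are hypothetical, since the repo does not document what pretrain.config expects.

```python
# Hypothetical sketch: assemble a pretraining corpus under data/all
# by concatenating JSON-lines table dumps from each source dataset.
# File format and layout are assumptions, not the repo's documented format.
import json
import tempfile
from pathlib import Path

def merge_table_files(sources, out_dir):
    """Concatenate JSON-lines table files from each source corpus
    into a single file in out_dir, returning the number of tables."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    merged = out / "tables.jsonl"
    count = 0
    with merged.open("w", encoding="utf-8") as w:
        for src in sources:
            for line in Path(src).open(encoding="utf-8"):
                line = line.strip()
                if line:
                    w.write(line + "\n")
                    count += 1
    return count

# Usage: write two tiny example dumps, then merge them into data/all.
tmp = Path(tempfile.mkdtemp())
a = tmp / "wikitables.jsonl"
b = tmp / "webquerytable.jsonl"
a.write_text(json.dumps({"id": "wt-1", "rows": [["x"]]}) + "\n")
b.write_text(json.dumps({"id": "wq-1", "rows": [["y"]]}) + "\n")
n = merge_table_files([a, b], tmp / "data" / "all")
print(n)  # 2
```

Once the merged corpus exists, the path in pretrain.config would be pointed at the resulting data/all directory.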
Hi, have you already trained the model? If it's convenient, could you please share it? Thanks a lot!