Hi Devendra, thanks for open-sourcing this great project! I want to apply your code to my own Chinese dataset, but I am confused about how to process my data into files like the pre-tokenized evidence passages and their titles, and the Wikipedia evidence passages from the DPR paper, which you provide for the open-domain QA tasks. Could you give me some advice on how to build them? Thanks in advance.
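For reference, here is a minimal sketch of the kind of preprocessing I have in mind, assuming the passage file follows the DPR `psgs_w100.tsv` layout (one `id <tab> text <tab> title` row per roughly 100-word passage). The use of `jieba` for Chinese word segmentation and all of the function names here are just my own assumptions, not anything from your repo:

```python
# Sketch: split raw Chinese documents into ~100-word passages and write them
# in the DPR psgs_w100.tsv layout (id <tab> text <tab> title).
# Assumptions (not from this repo): jieba is used for word segmentation,
# and the output header row matches the one in DPR's psgs_w100.tsv.
import csv
import jieba  # hypothetical choice of segmenter for Chinese text


def split_into_passages(text, words_per_passage=100):
    """Yield consecutive, non-overlapping ~100-word chunks of `text`."""
    words = list(jieba.cut(text))
    for start in range(0, len(words), words_per_passage):
        # Chinese text has no spaces between words, so join with "".
        yield "".join(words[start:start + words_per_passage])


def write_dpr_tsv(documents, out_path="my_psgs_w100.tsv"):
    """Write (title, full_text) pairs as DPR-style passage rows."""
    with open(out_path, "w", encoding="utf-8", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["id", "text", "title"])  # header as in psgs_w100.tsv
        pid = 1
        for title, full_text in documents:
            for passage in split_into_passages(full_text):
                writer.writerow([pid, passage, title])
                pid += 1


if __name__ == "__main__":
    docs = [("示例标题", "这里是一篇很长的中文文档……")]
    write_dpr_tsv(docs)
```

Character-level chunking instead of `jieba` segmentation might also be reasonable for Chinese; I am not sure which the retriever expects, so please correct me if the pre-tokenized files need a different format.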