Hi, I am working with your code to solve a QA task.
I have a question.
Currently my dataset consists of context, question, and answer (each question has a paired answer and context).
The questions are very short (mostly 6 to 7 tokens after tokenizing with 'bert-base-multilingual'), so I add padding tokens to make the model run, because the default model setting requires 'chunk_size = 64'.
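For reference, here is a minimal sketch of that padding step. It assumes a pad token id of 0 (as in BERT vocabularies) and the chunk_size of 64 mentioned above; the function name and example ids are illustrative, not taken from the repository's actual code.

```python
# Minimal sketch: right-pad short token-id sequences up to a fixed chunk size.
# Assumes pad token id 0 (standard for BERT vocabularies); pad_chunk and the
# example ids below are hypothetical, for illustration only.

def pad_chunk(token_ids, chunk_size=64, pad_id=0):
    """Right-pad a list of token ids with pad_id until it is chunk_size long."""
    if len(token_ids) > chunk_size:
        raise ValueError("sequence longer than chunk_size; truncate first")
    return token_ids + [pad_id] * (chunk_size - len(token_ids))

# A 6-token question becomes a 64-token chunk that is mostly padding.
question_ids = [101, 1923, 2050, 1029, 3437, 102]  # e.g. [CLS] ... [SEP]
chunk = pad_chunk(question_ids)
```

With inputs this short, 58 of the 64 positions in each chunk end up being [PAD], which is why an attention mask is normally passed alongside the ids so the model can ignore them.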
Here is my question.
I think the [PAD] token is just added to fill the empty space, which means it should carry no meaning logically.
During training, I made two chunks and fed them in. The first chunk is "question: blahblahblah" and the second is "answer: blahblah" (both sentences are very short, as I mentioned earlier).
When I feed an input such as "question: blahblahblah? answer: " into the checkpoint model, it spits out [CLS] question : blahblahblah? answer: [SEP] [PAD] [PAD] [PAD] … (every remaining position is [PAD]).
I have no clue why my model stays dumb...