How to prepare two sequences as input for bert-multitask-learning? #33
Comments
Now you reminded me... Sorry, it's not implemented.
Sorry, I misread your question. You can prepare something like:

@preprocessing_fn
def proc_fn(params, mode):
    return [{'a': ["Everyone", "should", "be", "happy", "."], 'b': ["you're", "right"]}], ['true']
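For context, here is a slightly fuller sketch of the same idea: a preprocessing function that returns sentence-pair examples in the dict format above. The import path for the preprocessing_fn decorator and the PAIRS / pair_cls names are assumptions for illustration, not the library's documented API, so adjust them to the version you have installed.

```python
# A minimal sketch, assuming the preprocessing_fn decorator can be imported
# from the bert_multitask_learning package (the import path is an assumption;
# check your installed version).
from bert_multitask_learning import preprocessing_fn

# Hypothetical pre-tokenized sentence pairs, each with one label.
PAIRS = [
    (["Everyone", "should", "be", "happy", "."], ["you're", "right"], "true"),
    (["Everyone", "should", "not", "be", "happy", "."], ["you're", "right"], "false"),
]

@preprocessing_fn
def pair_cls(params, mode):
    # Each input is a dict with keys 'a' and 'b' holding the two token
    # sequences; labels are returned as a parallel list. Judging from the
    # maintainer's example above, [CLS]/[SEP] are not inserted by hand here.
    inputs = [{"a": a, "b": b} for a, b, _ in PAIRS]
    targets = [label for _, _, label in PAIRS]
    return inputs, targets
```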
I prepared two sequences following your format. Here's an example:
({'a': ['Everyone', 'should', 'not', 'be', 'happy', '.'], 'b': ["you're", 'right']}, 'some label')
Maybe it's a bug. Could you confirm?
After adding this print, this is what I found. But when the mode is 'infer', right before printing the accuracies of the particular task, there is no print of 'example', and the tokens end up like this ->
This is a bug. I'll fix it later.
That's weird. Maybe it's caused by another bug. Could you provide more info?
Sorry, accidentally closed. Reopening now.
Hi, I have a dataset that involves 2 sequences and the task is classifying the sequence pair. I am not sure how to prepare the input in this case. So far, I have been working with only one sequence where I used the following format:
["Everyone", "should", "be", "happy", "."]
How do I extend this for 2 sequences? Do I have to insert a "SEP" token myself?