[Feature Request]: SLURP #2006
Unanswered · zhengqing187 asked this question in Q&A
Replies: 1 comment
Hi @zhengqing187, sorry I just saw this. Our SLU is treated as a seq2seq token-generation task: we train the model to decode strings like … and during evaluation we only need to reformat the decoded string to match the accuracy calculation.
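A minimal sketch of that idea: intent and slots are serialized into one target string for seq2seq training, and the decoded string is parsed back for scoring. The `intent: … ; slots: …` format and the function names here are illustrative assumptions, not the repository's actual scheme.

```python
# Hypothetical illustration of SLU as seq2seq generation:
# the serialization format below is assumed, not the project's real one.

def to_target(intent, slots):
    """Serialize an intent and slot/value pairs into one decodable string."""
    slot_str = " ".join(f"{k}={v}" for k, v in slots.items())
    return f"intent: {intent} ; slots: {slot_str}"

def from_target(decoded):
    """Reformat a decoded string back into (intent, slots) for accuracy."""
    intent_part, slot_part = decoded.split(" ; slots: ")
    intent = intent_part.removeprefix("intent: ")
    slots = dict(kv.split("=", 1) for kv in slot_part.split() if "=" in kv)
    return intent, slots

target = to_target("set_alarm", {"time": "7am", "day": "monday"})
# Round-tripping recovers the structured prediction for evaluation.
assert from_target(target) == ("set_alarm", {"time": "7am", "day": "monday"})
```

Because both tasks live in one output string, a single token-level generation loss covers them; no separate intent and slot losses need to be summed.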
🚀 The feature
Hello,
I would like to ask about the experiment on the SLURP dataset. Do you combine the two tasks of intent detection and slot filling? Is the training loss the sum of the losses for the two tasks?
Solution outline
Learn more about the model.
Additional context
No response