Multi Task Learning implementation and example #778
Conversation
Incredible. You are doing a little sweep through all of FARM and it is much cleaner than before :) Thanks a lot @johann-petrak
I had a quick look at your code and I think your changes make sense. I haven't checked whether they break anything around NaturalQuestions, but the NQ inference part was already broken before and we will need to refactor it anyway.
I believe the order of the predictions should reflect the order of your prediction heads. However, I can totally see that for multiple classification heads (especially with very similar predictions) you want a clear identification in the results. We could probably add a field "task_name" to the dict at the above location in the code by reading it from the prediction head's task name.
My main concern is that it is a bit confusing how the keys/parameter names "task", "task_type" and "task_name" are used. Ideally everything would follow the convention that "task" is a task object, "task_name" is the name of a task, and "task_type" is the type, but this is not always the case. (I know I can be a bit too OCD about such things.) Of course this is hard to change now, as it would break backwards compatibility, so I have no strong opinion :) Adding a field "task_name" to the prediction would definitely help one immediately realize that "task" is not meant to be the task name. Should we make this part of this PR or open a separate issue for it?
Hey @johann-petrak I agree that the task setup could be improved - inconsistent use of naming is very confusing to users (and also to us, looking at code we wrote a year ago and just shaking our heads), so we also appreciate rigorous improvement on this end. In general, tasks is a nested dict looking like:

```python
self.tasks[name] = {
    "label_list": label_list,
    "metric": metric,
    "label_tensor_name": label_tensor_name,
    "label_name": label_name,
    "label_column_name": label_column_name,
    "text_column_name": text_column_name,
    "task_type": task_type,
}
```

and I like the suggestion to add the task's name as "task_name" to the prediction heads' output. People might want to have n classification heads that would only be distinguishable through this task_name. So let's add this "task_name" in this PR and improve the general naming of tasks in a separate PR?
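For illustration, here is a minimal sketch of how two tasks might be registered so that this nested dict ends up with two entries. The task names, label lists, and column names are hypothetical, and the calls are assumed to follow FARM's `Processor.add_task` API:

```python
# Hypothetical multi-task setup; "sentiment"/"topic" and the column
# names are made up, and the add_task signature is assumed here.
from farm.data_handler.processor import TextClassificationProcessor

processor = TextClassificationProcessor(
    tokenizer=tokenizer,        # assumed to be created beforehand
    max_seq_len=128,
    data_dir="data/multitask",  # hypothetical data directory
)

# Each call adds one entry to processor.tasks, keyed by `name`.
processor.add_task(
    name="sentiment",
    label_list=["negative", "positive"],
    metric="acc",
    label_column_name="sentiment_label",
    task_type="classification",
)
processor.add_task(
    name="topic",
    label_list=["sports", "politics", "tech"],
    metric="acc",
    label_column_name="topic_label",
    task_type="classification",
)
```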
Hey @johann-petrak any progress on the task_name field? I think the PR is already in good shape and could be merged soon.
Sorry, I did not forget about this; I have been crazy busy with other stuff and am planning to get back to this in the next few days.
OK, this is now ready from my side: the prediction heads for text classification, regression and a few others now include the key "task_name" in the formatted prediction result. The key "task" contains the type, as before. This does not address or test using multiple heads of different types, e.g. classification and NER. It is probably better to open or re-use a separate issue for that, as it will also need a good dataset for testing.
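To make the change concrete, here is an illustrative (not verbatim) sketch of the shape a formatted prediction could take with the new key; all field values are invented, and the task names match the hypothetical ones above:

```python
# Sketch of formatted Inferencer output after this PR; values are made up.
# "task" still carries the task type, while the new "task_name"
# identifies which prediction head produced the result.
predictions = [
    {
        "task": "text_classification",
        "task_name": "sentiment",  # new key added in this PR
        "predictions": [{"label": "positive", "probability": 0.93}],
    },
    {
        "task": "text_classification",
        "task_name": "topic",
        "predictions": [{"label": "tech", "probability": 0.81}],
    },
]
```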
Looking good. Let's only remove the diary.md before merging. I also started a quick benchmark run to validate that the changes didn't impact performance.
Sorry, that was my insanity; that file should not have been added. Thanks for spotting it!
This reverts commit 577e6f3.
Had some problems executing the benchmark, but now the results are here: #797 (comment) -> Ready to merge. Benchmarks: QA Accuracy ✔️ QA Accuracy Benchmark Passed
This fixes #408 for me. I did some testing with two classification heads so far and almost everything works fine: I am getting proper evaluations for each head during training, and the inferencer provides a list of two predictions for each instance (see the sketch of such a two-head setup below).
Still to check:
Help with checking these things, and with how best to test this in general, would be appreciated.
Timoeller edit: fixes #742
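For reference, a minimal sketch of the two-head setup described above; it assumes FARM's `AdaptiveModel` and `TextClassificationHead` APIs, and the task names and label counts are the hypothetical ones used earlier:

```python
# Hypothetical two-head model; task_name ties each head to a registered task.
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.language_model import LanguageModel
from farm.modeling.prediction_head import TextClassificationHead

language_model = LanguageModel.load("bert-base-uncased")
sentiment_head = TextClassificationHead(num_labels=2, task_name="sentiment")
topic_head = TextClassificationHead(num_labels=3, task_name="topic")

model = AdaptiveModel(
    language_model=language_model,
    prediction_heads=[sentiment_head, topic_head],
    embeds_dropout_prob=0.1,
    lm_output_types=["per_sequence", "per_sequence"],  # one entry per head
    device="cpu",
)
```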