Metrics and EvalProtocol API #60
Comments
Yes, that would be ideal!
Actually, it was more of a question than a proposal. I still don't have a clear idea of how to solve this problem.
I think the
As for the metrics, as soon as we have all the metrics implemented we can define a unique signature for the `compute` method.
@vlomonaco Can I work on this? It would also be helpful if you could suggest some initial metrics that are needed. Also, is issue #51 solved?
#51 is closed. Regarding the metrics, maybe we can use Flows to define how and when to compute each metric? The problem we have right now is that each metric requires different computations and arguments. This could be easily solved with a callback system, like the flows used for training and test. Take the memory usage MU as an example: it should be printed only once, but instead it is printed everywhere because the EvaluationProtocol does not know how to print it.
Each metric will probably need to implement only a couple of these methods, while the others will stay empty, so implementing new metrics should not be more complex than it is right now.
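As a rough sketch of this callback idea (the class, method, and attribute names below are invented for illustration, not Avalanche's actual API), a metric could inherit a set of no-op hooks, override only the ones it needs, and delegate printing to `__str__`:

```python
class Metric:
    """Hypothetical base metric: every callback is a no-op, so a concrete
    metric only overrides the hooks it actually needs."""

    def after_training_step(self, strategy):
        pass

    def after_test_step(self, strategy):
        pass

    def after_test(self, strategy):
        pass

    def __str__(self):
        return ""


class AverageAccuracy(Metric):
    """Needs to update after every test step and report once at the end."""

    def __init__(self):
        self.correct, self.total = 0, 0

    def after_test_step(self, strategy):
        # 'correct_predictions' and 'num_examples' are made-up attributes
        # standing in for whatever state the strategy exposes.
        self.correct += strategy.correct_predictions
        self.total += strategy.num_examples

    def __str__(self):
        acc = self.correct / self.total if self.total else 0.0
        return f"ACC = {acc:.4f}"


class MemoryUsage(Metric):
    """Only measured (and therefore printed) once, after the whole test phase."""

    def __init__(self):
        self.mu = 0

    def after_test(self, strategy):
        # 'replay_buffer_bytes' is a made-up attribute used for illustration.
        self.mu = strategy.replay_buffer_bytes

    def __str__(self):
        return f"MU = {self.mu} bytes"
```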
Hi @AntonioCarta, you are totally right. Indeed, the metrics are now called through the EvaluationPlugin! Since it is a plugin, it can implement all the callbacks independently of the main strategy (all the plugin methods are called before the main strategy methods). This also means that the calls it makes can be fine-tuned to specific metric needs. Does that make sense, or am I missing something?
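For completeness, a minimal sketch of the dispatch order being described (again with made-up names rather than the real implementation): the strategy fires every plugin callback before running its own hook for that phase.

```python
class EvaluationPlugin:
    """Hypothetical plugin: it would hold the metrics and forward each
    callback to them, overriding only the hooks it cares about."""

    def before_training_step(self, strategy):
        pass

    def after_test_step(self, strategy):
        pass


class BaseStrategy:
    """Hypothetical strategy skeleton showing the call order only."""

    def __init__(self, plugins=None):
        self.plugins = plugins or []

    def _before_training_step(self):
        pass  # the strategy's own per-step logic would live here

    def training_step(self, batch):
        for p in self.plugins:
            p.before_training_step(self)  # plugin callbacks run first ...
        self._before_training_step()      # ... then the strategy's own hook
        # forward pass / backward pass / optimizer update would follow here
```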
Metrics and EvalProtocol are a little bit unclear to me. Right now, each metric prints its results inside its `compute` method. Each time we add a new metric, we also have to add a new if case inside EvalProtocol's `get_results`. I would prefer a generic EvalProtocol that controls printing and logging and only delegates the computations to the metrics (e.g. instead of printing inside `compute`, EvalProtocol calls the `__str__` method). I would also prefer to be able to choose where to print the metrics (output file, tensorboard, stdout).
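One possible shape for the generic EvalProtocol requested here, sketched with hypothetical names (and reusing the callback-style metrics from the sketch above): the protocol only triggers metric updates and forwards `str(metric)` to whatever loggers were configured, so stdout, a log file, or a TensorBoard writer are just different loggers passed in at construction time.

```python
import sys


class EvalProtocol:
    """Hypothetical protocol: knows nothing about individual metrics beyond
    their callbacks and their string representation."""

    def __init__(self, metrics, loggers=None):
        self.metrics = metrics
        # Each logger is just a callable taking a string; stdout by default.
        self.loggers = loggers or [lambda msg: print(msg, file=sys.stdout)]

    def after_test_step(self, strategy):
        for m in self.metrics:
            m.after_test_step(strategy)

    def after_test(self, strategy):
        for m in self.metrics:
            m.after_test(strategy)  # metric updates itself
            self._log(str(m))       # printing is delegated to __str__

    def _log(self, msg):
        for logger in self.loggers:
            logger(msg)


# Choosing where results go is then just a configuration detail, e.g.:
#   logfile = open("metrics.log", "a")
#   protocol = EvalProtocol(
#       metrics=[AverageAccuracy(), MemoryUsage()],
#       loggers=[print, lambda msg: logfile.write(msg + "\n")],
#   )
```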