
call evaluation_hooks in Evaluator #122

Merged

Conversation

@keisuke-nakata (Member) commented Jan 18, 2021

Call evaluation_hooks in {Async,}Evaluator instead of in train_agent{,_batch,_async}.

This PR is preparation for unifying the use_tensorboard option into evaluation_hooks.
(Interactions with TensorBoard happen inside {Async,}Evaluator.)

I also added support_train_agent{,_batch,_async} attributes to the EvaluationHook class in order to clarify which hooks are eligible for each training function (train_agent_with_evaluation, train_agent_batch_with_evaluation, train_agent_async).


NOTE: This PR breaks backward compatibility of the EvaluationHook.__call__ API. It no longer has an eval_score argument. Use eval_stats['mean'] instead (eval_score is equivalent to it).
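
For example, a user-defined hook under the new API might look like the following minimal sketch (the import path and hook name here are illustrative assumptions, not part of this PR):

```python
# Minimal sketch of a user-defined hook under the new EvaluationHook API.
# The import path and hook name are illustrative assumptions.
from pfrl.experiments.evaluation_hooks import EvaluationHook


class PrintMeanReturnHook(EvaluationHook):
    # New attributes declaring which training functions this hook supports.
    support_train_agent = True         # train_agent_with_evaluation
    support_train_agent_batch = True   # train_agent_batch_with_evaluation
    support_train_agent_async = False  # train_agent_async

    def __call__(self, env, agent, evaluator, step, eval_stats, agent_stats, env_stats):
        # eval_score is gone; eval_stats["mean"] is its equivalent.
        print(f"step={step} mean eval return={eval_stats['mean']:.2f}")
```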

@pfn-ci-bot

Successfully created a job for commit 073f688:

@pfn-ci-bot

Successfully created a job for commit ac05424:

@pfn-ci-bot

Successfully created a job for commit 9bb3e19:

@keisuke-nakata marked this pull request as ready for review on January 19, 2021 07:13
@muupan self-requested a review on January 20, 2021 03:16
@muupan (Member) commented Jan 27, 2021

I'd like you to clarify the plan behind this PR.

  • Are you planning to replace use_tensorboard with another evaluation hook, which will be implemented after this PR?
  • Are other evaluation hooks also planned, e.g. one for mlflow?
  • The change of arguments is necessary for those tensorboard and/or mlflow hooks, correct?

@keisuke-nakata (Member, Author) commented Jan 27, 2021

  • Are you planning to replace use_tensorboard with another evaluation hook, which will be implemented after this PR?
  • Are other evaluation hooks also planned, e.g. one for mlflow?

Yes.

  • The change of arguments is necessary for those tensorboard and/or mlflow hooks, correct?

I unified the arguments of EvaluationHook.__call__ to match what the tensorboard hook needs.
I believe the arguments required by mlflow are included in these (namely, env, agent, evaluator, step, eval_stats, agent_stats, env_stats). If these arguments do not suffice for mlflow, I'd like to add more in a later PR.

So, remaining tasks are:

  • Implement the tensorboard option as an EvaluationHook (a rough sketch follows this list).
  • Implement the mlflow option as an EvaluationHook (and add arguments if mlflow requires more information).
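
For concreteness, here is a rough sketch of what the tensorboard hook could look like under the new API. This is illustrative only, not the final implementation; the import path, class name, and support_* values are assumptions, and it simply logs eval_stats['mean'] with torch.utils.tensorboard:

```python
# Rough sketch of the planned tensorboard hook (illustrative, not the final implementation).
from torch.utils.tensorboard import SummaryWriter

from pfrl.experiments.evaluation_hooks import EvaluationHook


class TensorboardEvaluationHook(EvaluationHook):
    support_train_agent = True
    support_train_agent_batch = True
    support_train_agent_async = False  # illustrative; async support is a separate question

    def __init__(self, log_dir=None):
        self.writer = SummaryWriter(log_dir=log_dir)

    def __call__(self, env, agent, evaluator, step, eval_stats, agent_stats, env_stats):
        # eval_stats["mean"] corresponds to the old eval_score.
        self.writer.add_scalar("eval/mean", eval_stats["mean"], step)
```

An mlflow hook would be analogous, e.g. calling mlflow.log_metric("eval/mean", eval_stats["mean"], step=step) inside __call__.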

@keisuke-nakata force-pushed the call-evaluation_hooks-in-evaluator branch from 9bb3e19 to 22b7c57 on January 27, 2021 05:14
@pfn-ci-bot

Successfully created a job for commit 22b7c57:

@keisuke-nakata force-pushed the call-evaluation_hooks-in-evaluator branch from 22b7c57 to 43c2fd4 on January 27, 2021 13:44
@pfn-ci-bot

Successfully created a job for commit 43c2fd4:

@keisuke-nakata (Member, Author)

We decided to break backward compatibility for a cleaner API.
EvaluationHook.__call__ no longer has the argument eval_score; instead it receives more general information as eval_stats, agent_stats, and env_stats (eval_score is equivalent to eval_stats['mean']).

@muupan merged commit 284ed1f into pfnet:master on Feb 3, 2021
@keisuke-nakata deleted the call-evaluation_hooks-in-evaluator branch on February 3, 2021 02:56
@muupan added this to the v0.3.0 milestone on Jul 7, 2021
@muupan added the "enhancement" label on Jul 7, 2021