Conversation
Force-pushed d226ddf to 220a1c9
torchbenchmark/util/model.py (comment on an outdated diff)
        model.train(train)

    def check_results(self):
I'm wondering how this is intended to be used. If it's used mainly locally by JIT developers, it won't affect all users of the benchmark... Should we make it slightly less intrusive by not requiring a `check_results` override for models that don't jit? Is it sufficient to just try/except NotImplementedError inside this func to skip the ones that aren't supported?
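The try/except suggestion could look roughly like this; a minimal sketch, where `BenchmarkModel`, `JitModel`, and `run_check` are hypothetical names for illustration, not code from the PR:

```python
class BenchmarkModel:
    """Hypothetical stand-in for the benchmark's model base class."""

    def check_results(self):
        # Default: models without JIT result checking raise, so the
        # caller can skip them instead of every model needing an override.
        raise NotImplementedError


class JitModel(BenchmarkModel):
    """A model that does support result checking (hypothetical)."""

    def check_results(self):
        return "checked"


def run_check(model):
    # The reviewer's suggestion: try/except NotImplementedError so
    # unsupported models are skipped rather than failing the run.
    try:
        return model.check_results()
    except NotImplementedError:
        return None  # model doesn't support check_results; skip it
```

With this shape, only models that opt in implement `check_results`, and the harness degrades gracefully for the rest.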
Force-pushed 220a1c9 to 4343589
    ]

    if model_name in model_blacklist:
        warnings.warn(UserWarning(f"{model_name}.get_module() doesn't support `check_results` yet!"))
Is this just WIP (that you have both the blacklist and the try/except), or do you need both for some reason?
@wconstab unfortunately, for some benchmarks the results don't match, while others are set up in a way that I can't get at their jitted module to reset the optimization version and recompile it without optimizations.
I added some comments.
Eventually, I'm hoping to enable results checking for as many models as I can.
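Putting the two skip mechanisms from this exchange together might look like the sketch below. The function name and the blacklist entry are placeholders; only the first warning message mirrors the diff shown above:

```python
import warnings


def check_opt_vs_noopt_jit(model_name, model):
    # Models whose optimized vs. non-optimized JIT results are known
    # not to match; skipped explicitly. (Placeholder entry below.)
    model_blacklist = ["example_mismatching_model"]

    if model_name in model_blacklist:
        warnings.warn(UserWarning(
            f"{model_name}.get_module() doesn't support `check_results` yet!"))
        return

    try:
        # Models that can't expose their jitted module raise here,
        # which is why both the blacklist and the try/except exist.
        model.check_results()
    except NotImplementedError:
        warnings.warn(UserWarning(
            f"{model_name} does not implement check_results; skipping"))
```

The blacklist handles models whose results are known to diverge, while the try/except handles models that simply never implemented the hook.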
Force-pushed 4343589 to fbcca56
wconstab left a comment
LGTM, other than one nit: you could add a comment and/or rename the `check_results` function to explain that the idea is verifying the optimized JIT behaves the same as the baseline JIT.
(Disambiguate from the also-important but not-addressed case of making sure the model is producing correct results.)
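The idea being named here, comparing the optimized jitted module against a non-optimized baseline run, can be sketched as follows. This is an illustration under stated assumptions (`torch.jit.optimized_execution` and `torch.testing.assert_close`), not the PR's actual implementation:

```python
import torch


def check_opt_vs_noopt(module, example_input):
    scripted = torch.jit.script(module)

    # Baseline: run with JIT optimizations disabled.
    with torch.jit.optimized_execution(False):
        noopt_out = scripted(example_input)

    # Optimized: warm up a few times so the profiling executor
    # actually applies its optimizations before the final run.
    with torch.jit.optimized_execution(True):
        for _ in range(3):
            opt_out = scripted(example_input)

    # The optimized and baseline paths should produce matching results.
    torch.testing.assert_close(opt_out, noopt_out)
```

Note this checks opt-vs-noopt consistency only; it says nothing about whether the model's outputs are correct in the first place, which is exactly the distinction the reviewer asks to call out.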
Changed the title from "check_results option for inference runs" to "check_opt_vs_noopt_jit option to check that results between baseline and optimized jitted versions match"
add `check_opt_vs_noopt_jit` option to check that results between baseline and optimized jitted versions match