Update on "[inductor] Fix logging for run_and_get_cpp_code"
Summary: Found during testing with remote caching: use the same output-code logger object in graph.py and codecache.py, since `run_and_get_cpp_code` patches that logger. Sharing one object lets us capture any logging produced from the codecache path when using `run_and_get_cpp_code`. I'm also fixing a few tests that were passing only by mistake, because the expected logging was missing.
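The fix hinges on both modules holding one logger object, because `run_and_get_cpp_code` captures output by patching that logger's handlers. A minimal sketch of the pattern, where the module stand-ins and the `demo.output_code` logger name are invented for illustration and are not PyTorch's actual code:

```python
import io
import logging

# Hypothetical stand-ins for graph.py and codecache.py. The point: both call
# sites must reference the SAME logger object, or a handler attached by the
# test helper will only see one of them.
shared_log = logging.getLogger("demo.output_code")
shared_log.setLevel(logging.DEBUG)
shared_log.propagate = False  # keep demo output away from the root logger

def emit_from_graph() -> None:
    # graph.py-style call site
    shared_log.debug("Output code: ...from graph...")

def emit_from_codecache() -> None:
    # codecache.py-style call site
    shared_log.debug("Output code: ...from codecache...")

def run_and_capture(fn) -> str:
    """Sketch of the run_and_get_cpp_code pattern: temporarily attach a
    handler to the shared logger, run fn, and return what was logged."""
    buf = io.StringIO()
    handler = logging.StreamHandler(buf)
    shared_log.addHandler(handler)
    try:
        fn()
    finally:
        shared_log.removeHandler(handler)
    return buf.getvalue()

captured = run_and_capture(lambda: (emit_from_graph(), emit_from_codecache()))
# Both call sites land in the capture because they share one logger object.
```

Had codecache used a separately configured logger, the capture above would miss its messages entirely, which is how the tests fixed here could pass without the expected output.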

[ghstack-poisoned]
masnesral committed Jun 19, 2024
2 parents 9589b99 + 08edb4b commit f5dd0e7
Showing 1 changed file with 11 additions and 1 deletion.

test/inductor/test_cpu_repro.py

@@ -1920,6 +1920,8 @@ def _internal_check(
                 FileCheck().check(_target_code_check).run(code)
             if _target_code_check_not:
                 FileCheck().check_not(_target_code_check_not).run(code)
+                # Verify that the output isn't empty
+                FileCheck().check("Output code:").run(code)
 
             self.assertEqual(
                 _fn(*_inps),
@@ -1934,7 +1936,15 @@ def _internal_check(
 
         if "ATen parallel backend: OpenMP" in torch.__config__.parallel_info():
             with set_num_threads(1):
-                _internal_check(fn, inps, "aten.scatter_reduce_")
+                # When running with a single thread, we expect the aten.scatter will go
+                # into the cpp backend codegen instead of a fallback to aten.scatter_reduce_.
+                # Avoid the inductor cache so we don't serve an entry compiled above.
+                with config.patch(
+                    {"fx_graph_cache": False, "fx_graph_remote_cache": False}
+                ):
+                    _internal_check(
+                        fn, inps, _target_code_check_not="aten.scatter_reduce_"
+                    )
 
             with config.patch({"cpp.dynamic_threads": True}), set_num_threads(1):
                 _internal_check(fn, inps, "aten.scatter_reduce_")
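The cache-disabling part of the diff follows a standard temporary-override pattern: flip the flags inside a context manager, restore them on exit. A rough sketch using a plain dict and `unittest.mock.patch.dict` in place of inductor's real `config.patch` (the `CONFIG` dict here is hypothetical, not the actual `torch._inductor.config` object):

```python
from unittest import mock

# Hypothetical config dict standing in for torch._inductor.config; inductor's
# config.patch(...) behaves analogously: override inside the block, restore on exit.
CONFIG = {"fx_graph_cache": True, "fx_graph_remote_cache": True}

with mock.patch.dict(
    CONFIG, {"fx_graph_cache": False, "fx_graph_remote_cache": False}
):
    # Inside the block both caches are off, so the compile under test cannot
    # be served from an entry produced earlier in the same test run.
    assert CONFIG["fx_graph_cache"] is False
    assert CONFIG["fx_graph_remote_cache"] is False

# On exit the original values are restored automatically.
assert CONFIG["fx_graph_cache"] is True
```

Without an override like this, the second `_internal_check` call could hit a cache entry compiled earlier in the test and never exercise the single-thread codegen path it is meant to verify.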
