Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Summary of Changes

Hello @ConvolutedDog, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request primarily focuses on code hygiene and improving the debugging experience within the project. It removes a significant amount of conditional debug print statements and commented-out code, leading to a cleaner and more maintainable codebase.
Code Review
This pull request focuses on cleaning up the codebase by removing a significant amount of debugging code, such as print statements and commented-out blocks. It also introduces verbosity flags (--verbose_cuda_code, --verbose_irmodule) to control debug output in test scripts, and improves the formatting of some log messages.
My review identifies a few areas for improvement:
- A potentially important warning in `op/Op.py` has been commented out. It would be good to clarify the reason for this change.
- There are some inconsistencies in the test scripts (`test_op.py` and `test_op_mp.py`) regarding the use of the new verbosity flags and string formatting for print statements. Aligning these would improve code maintainability.
Overall, the changes are positive and improve the cleanliness of the code. Addressing the points above will further enhance consistency and ensure no important warnings are missed.
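The verbosity flags mentioned above could be wired up along these lines. This is a minimal sketch, not code from the pull request: the flag names come from the PR description, but the parser setup and the gating logic are illustrative assumptions.

```python
import argparse

# Hypothetical parser setup mirroring the PR's flag names.
parser = argparse.ArgumentParser()
parser.add_argument("--verbose_cuda_code", action="store_true",
                    help="Print generated CUDA source for inspection.")
parser.add_argument("--verbose_irmodule", action="store_true",
                    help="Print the lowered IRModule for inspection.")
args = parser.parse_args([])  # empty list: simulate no flags passed

# Debug output stays silent unless explicitly requested.
if args.verbose_cuda_code:
    print("...generated CUDA code would be printed here...")
if args.verbose_irmodule:
    print("...lowered IRModule would be printed here...")
```

With `action="store_true"` the flags default to `False`, so existing invocations of the test scripts keep their quieter behavior unchanged.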
```python
# if len(self.unpad_outs) > 0:
#     import warnings
#     warnings.warn(
#         "The unpad_outs length > 0, please check here."
#     )
```
This warning is being commented out. If the condition `len(self.unpad_outs) > 0` indicates a potential issue that developers should be aware of, it might be better to keep this warning. Could you provide some context on why this is being removed? If it's for temporary debugging, it might be better to use a more structured logging approach or a debug flag.
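One structured alternative could look like the sketch below. None of this is code from the PR: the logger name, the `DEBUG_UNPAD` environment variable, and the `check_unpad_outs` wrapper are all illustrative assumptions; the point is only that the diagnostic stays in the source and is toggled at runtime instead of being commented out.

```python
import logging
import os

logger = logging.getLogger("welder.op")  # logger name is illustrative

def check_unpad_outs(unpad_outs):
    # Same condition as the commented-out block, but routed through
    # logging so it can be silenced or enabled without editing code.
    if len(unpad_outs) > 0:
        logger.warning("The unpad_outs length > 0, please check here.")

# Opt in via an environment variable: DEBUG_UNPAD=1 enables the
# diagnostic; otherwise the logger level suppresses it.
logging.basicConfig()
logger.setLevel(logging.WARNING if os.environ.get("DEBUG_UNPAD")
                else logging.ERROR)
check_unpad_outs([1, 2])  # silent unless DEBUG_UNPAD is set
```

Setting the level on the module logger (rather than the root logger) keeps the toggle scoped to this one diagnostic.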
```python
print("Eval top {} configs".format(topx))
print("Compilation time: {}s".format(eval_results["compilation time"]))
print("Best time: {}ms".format(eval_results["best time"]))
print("Best config: {}".format(eval_results["best config"]))
print("Best idx: {}".format(eval_results["best idx"]))
```
For consistency with other changes in this file (e.g., line 491) and for better readability, consider using f-strings for these print statements.
```suggestion
print(f"Eval top {topx} configs")
print(f"Compilation time: {eval_results['compilation time']}s")
print(f"Best time: {eval_results['best time']}ms")
print(f"Best config: {eval_results['best config']}")
print(f"Best idx: {eval_results['best idx']}")
```
```diff
@@ -406,8 +411,6 @@ def get_tvm_source(
     codegen_input_reg_tiling=args.codegen_input_reg_tiling,
 )
 if LatestTVM:
     print("Top1 time: {} ms".format(top1_time))
     print("Top10 time: {} ms".format(best_time))
     print("Best idx: {}".format(best_idx))
     print("Best config: {}".format(rprogs[best_idx].Dump()))
     print("Top1 compile time: {} s".format(emit_time))
     print("Top10 compile time: {} s".format(eval_time))
```
For consistency and better readability, consider using f-strings for these print statements.
```suggestion
print(f"Top1 time: {top1_time} ms")
print(f"Top10 time: {best_time} ms")
print(f"Best idx: {best_idx}")
print(f"Best config: {rprogs[best_idx].Dump()}")
print(f"Top1 compile time: {emit_time} s")
print(f"Top10 compile time: {eval_time} s")
```
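As a quick sanity check that the suggested rewrite is behavior-preserving (the values below are illustrative, not from the PR), `str.format` and f-strings render identically. The one wrinkle is that dictionary keys inside an f-string need the opposite quote style, which is required on Python versions before 3.12 and clearer in any version:

```python
# Illustrative values; the real variables come from the tuning script.
eval_results = {"best time": 1.23}
top1_time = 0.98

old_style = "Top1 time: {} ms".format(top1_time)
new_style = f"Top1 time: {top1_time} ms"
assert old_style == new_style  # behavior-preserving rewrite

# Single quotes for the key inside the double-quoted f-string.
assert f"Best time: {eval_results['best time']}ms" == "Best time: 1.23ms"
print(new_style)  # → Top1 time: 0.98 ms
```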
No description provided.