Automated Integration Test Goldens Update from CI #6136
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request contains an automated update to the integration test golden files, keeping the test data synchronized with the latest changes in the application's output.
Code Review
This pull request updates the test data configuration in chart_config.json by adding a new classification entry that includes a quantity filter. A review comment points out that the floating-point value used (2.2250738585072014e-308) is likely an uninitialized minimum value and suggests replacing it with 0.0 to ensure test robustness and follow precision guidelines.
| "idx": 0, | ||
| "qval": { | ||
| "cmp": "GE", | ||
| "val": 2.2250738585072014e-308 |
The value 2.2250738585072014e-308 is suspicious. This value is equivalent to sys.float_info.min, the smallest positive normalized double-precision number. Its presence here strongly suggests a potential bug in the quantity detection logic, where an uninitialized or default value might be leaking into the results.
According to our guidelines, floating-point values in test data should be rounded to a lower precision to make tests more robust. This extremely precise and small number is likely unintended.
If the goal is to filter for values greater than or equal to zero, 0.0 would be a much clearer and more stable choice.
| "val": 2.2250738585072014e-308 | |
| "val": 0.0 |
References
- Round floating-point scores in test data to a consistent, lower precision. This makes tests more robust against insignificant model output variations and reduces review noise.
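The rounding guideline above could be applied mechanically before golden files are written. A minimal sketch (the helper name and the 4-digit precision are assumptions, not project conventions) that recursively rounds every float in a JSON-like structure:

```python
def round_scores(obj, ndigits=4):
    # Recursively round all floats in a JSON-like structure before
    # serializing golden files. ndigits=4 is an assumed precision,
    # not one mandated by the guideline quoted above.
    if isinstance(obj, float):
        return round(obj, ndigits)
    if isinstance(obj, dict):
        return {k: round_scores(v, ndigits) for k, v in obj.items()}
    if isinstance(obj, list):
        return [round_scores(v, ndigits) for v in obj]
    return obj

# The suspicious value from this PR rounds to the reviewer's
# suggested replacement.
print(round_scores({"qval": {"cmp": "GE", "val": 2.2250738585072014e-308}}))
# → {'qval': {'cmp': 'GE', 'val': 0.0}}
```

Applying such a pass in the goldens-update pipeline would both suppress insignificant model-output jitter and make leaked sentinels like `sys.float_info.min` visible as plain `0.0` in review diffs.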
This pull request updates the golden files automatically via Cloud Build. Please review the changes carefully. Cloud Build Log