Update raw_score output for v2 scoring #37
Conversation
Code Review
The pull request effectively addresses the issue of raw_score output for v2 scoring by returning the logit value instead of the raw_score. The documentation updates clearly explain this change and the difference between v1 and v2 raw_score definitions. The code change is minimal and directly targets the identified issue, improving the coherence of the returned values.
```diff
  "trust_level": "HIGH",
  "normalized_score": 0.82,
- "raw_score": 0.0045,
+ "raw_score": 0.2871,
```
**v2:** The normalized graph score is combined with merge rate and account age into a logit, then passed through a sigmoid:

```
logit = intercept + graph_score_weight * graph_score
      + merge_rate_weight * merge_rate
      + account_age_weight * log(account_age_days + 1)

normalized = 1 / (1 + e^(-logit))
```

`raw_score` in the v2 output contains the pre-sigmoid logit value.
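As a sanity check, the v2 formula above can be sketched in Python. The coefficient values here are made-up placeholders for illustration, not the project's real weights:

```python
import math

# Hypothetical coefficients, for illustration only; the real values
# live in the project's v2 scoring configuration.
INTERCEPT = -1.5
GRAPH_SCORE_WEIGHT = 2.0
MERGE_RATE_WEIGHT = 1.0
ACCOUNT_AGE_WEIGHT = 0.25

def v2_score(graph_score: float, merge_rate: float, account_age_days: int):
    """Return (logit, normalized) per the v2 formula above."""
    logit = (
        INTERCEPT
        + GRAPH_SCORE_WEIGHT * graph_score
        + MERGE_RATE_WEIGHT * merge_rate
        + ACCOUNT_AGE_WEIGHT * math.log(account_age_days + 1)
    )
    normalized = 1 / (1 + math.exp(-logit))  # sigmoid
    return logit, normalized

# The logit is what v2 now returns as raw_score; it can be negative.
logit, normalized = v2_score(graph_score=0.3, merge_rate=0.5, account_age_days=400)
```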
Please don’t use the page rank terminology.
fixed |
The prior assertion (raw_score > 0) passed incidentally but didn't validate the raw_score/normalized_score relationship. Since raw_score is now the logit, it can be negative.
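A test along these lines can validate the sigmoid relationship instead of the sign. This is a sketch only; the helper name and tolerance are illustrative, not the repo's actual test code:

```python
import math

def assert_v2_raw_score_consistent(raw_score: float, normalized_score: float) -> None:
    # v2's raw_score is the pre-sigmoid logit, so it may be negative;
    # assert the sigmoid relationship rather than raw_score > 0.
    assert math.isclose(1 / (1 + math.exp(-raw_score)), normalized_score, rel_tol=1e-9)

# Example usage with an arbitrary (negative) logit.
raw = -0.75
assert_v2_raw_score_consistent(raw, 1 / (1 + math.exp(-raw)))
```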
jeffreyksmithjr left a comment
Yeah, all seems reasonable. Made one tiny change to testing.
Fixes #36
The raw score returned by the v2 method should be the pre-normalization value of the normalized score. For the v2 method that value is held in the `logit` variable, not `raw_score`. This PR updates the returned value:
```diff
 return TrustScore(
     user_login=login,
     context_repo=context_repo,
-    raw_score=raw_score,
+    raw_score=logit,
     normalized_score=normalized,
     trust_level=trust_level,
     account_age_days=user_data.profile.account_age_days,
```

Note that this does not affect the computation of the score (which is working correctly); it updates the returned values to be more coherent. The docs are updated to explain the difference between the v1 and v2 `raw_score` definitions.
Diff of the output change when running v2
Note that sigmoid(-0.2917947228989848) = 0.42756454607181854.
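That value can be reproduced directly, confirming that the new `raw_score` and `normalized_score` in the v2 output are related by the sigmoid:

```python
import math

raw_score = -0.2917947228989848  # logit reported in the v2 output above
normalized = 1 / (1 + math.exp(-raw_score))
# normalized is approximately 0.4275645..., matching normalized_score.
```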
Linting and tests pass