This repository has been archived by the owner on Dec 18, 2023. It is now read-only.

Add test-reports workflow for develop and main with badges #284

Merged
merged 5 commits into develop from feature/add-badges on Dec 9, 2022

Conversation

@nate-credoai (Contributor) commented Dec 8, 2022

Describe your changes

The test-reports.yml workflow will run on pushes to develop and main. It can also be triggered for other branches, but the badges will not be updated. Badges are updated by pushing a generated badge to S3, which is then referenced in the README.md. The README.md references the badges for the main branch, but the URL to retrieve the badges for develop can be discovered easily.

The permissions to write to the S3 bucket are limited to this workflow running on the develop and main branches. A sketch of the mechanism follows below.
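The merged workflow file isn't reproduced in this thread, but a minimal sketch of the mechanism described above might look like the following. The badge tool (genbadge), bucket name, and object paths are assumptions for illustration, not the actual contents of test-reports.yml:

```yaml
# Hypothetical sketch of test-reports.yml; tool choice, bucket, and paths are assumed.
name: test-reports

on:
  push:
    branches: [develop, main]   # badge-updating runs
  workflow_dispatch:            # manual runs on other branches; badges are not updated

jobs:
  test-reports:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Run tests with coverage
        run: |
          pip install pytest pytest-cov "genbadge[tests,coverage]"
          pytest --junitxml=reports/junit.xml --cov=credoai --cov-report=xml

      - name: Generate badges
        run: |
          genbadge tests -i reports/junit.xml -o tests.svg
          genbadge coverage -i coverage.xml -o coverage.svg

      # S3 write permissions are scoped so only develop/main runs can publish.
      - name: Publish badges to S3 (develop and main only)
        if: github.ref == 'refs/heads/develop' || github.ref == 'refs/heads/main'
        run: |
          aws s3 cp tests.svg "s3://<badge-bucket>/credoai_lens/${GITHUB_REF_NAME}/tests.svg"
          aws s3 cp coverage.svg "s3://<badge-bucket>/credoai_lens/${GITHUB_REF_NAME}/coverage.svg"
```

Credentials for the `aws s3 cp` step are assumed to come from repository secrets or an OIDC role configured elsewhere in the workflow.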

Examples of the badges generated for the last run on this branch before lockdown in AWS: [Tests badge] [Coverage badge]

Issue ticket number and link

This was a request in Slack.

Known outstanding issues that are not fully accounted for

Checklist before requesting a review

  • I have performed a self-review of my code
  • I have built basic tests for new functionality (particularly new evaluators)
  • If new libraries have been added, I have checked that readthedocs API documentation is constructed correctly
  • Will this be part of a major product update? If yes, please write one phrase about this update.

Extra-mile Checklist

  • I have thought expansively about edge cases and written tests for them

@github-actions (bot) commented Dec 8, 2022

Coverage Report

| File | Stmts | Miss | Cover | Missing |
| --- | ---: | ---: | ---: | --- |
| **credoai** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| **credoai/artifacts** | | | | |
| `__init__.py` | 7 | 0 | 100% | |
| **credoai/artifacts/data** | | | | |
| `__init__.py` | 0 | 0 | 100% | |
| `base_data.py` | 105 | 12 | 89% | 55, 153, 156, 171, 178, 185, 189, 193, 197, 209, 212, 219 |
| `comparison_data.py` | 63 | 13 | 79% | 53, 60, 71, 76, 81, 90, 96, 100, 105, 114, 147, 153, 156 |
| `tabular_data.py` | 40 | 6 | 85% | 52, 73, 77, 96, 98, 105 |
| **credoai/artifacts/model** | | | | |
| `__init__.py` | 0 | 0 | 100% | |
| `base_model.py` | 36 | 2 | 94% | 52, 84 |
| `classification_model.py` | 20 | 0 | 100% | |
| `comparison_model.py` | 11 | 0 | 100% | |
| `regression_model.py` | 11 | 4 | 64% | 43–45, 48 |
| **credoai/evaluators** | | | | |
| `__init__.py` | 15 | 0 | 100% | |
| `data_fairness.py` | 147 | 12 | 92% | 83–90, 205, 260–261, 287, 311, 334–340, 356 |
| `data_profiler.py` | 34 | 2 | 94% | 57, 60 |
| `deepchecks.py` | 40 | 3 | 92% | 113–122 |
| `equity.py` | 153 | 31 | 80% | 78, 181–184, 204, 230–257, 281–296, 307–309, 358–359 |
| `evaluator.py` | 70 | 8 | 89% | 50, 58, 61, 80, 106, 126, 174, 181 |
| `fairness.py` | 145 | 12 | 92% | 117, 238, 246–251, 310–319, 321, 333–336 |
| `feature_drift.py` | 59 | 1 | 98% | 66 |
| `identity_verification.py` | 112 | 2 | 98% | 144–145 |
| `model_profiler.py` | 74 | 12 | 84% | 128–131, 145–148, 165, 182–183, 192–193, 231 |
| `performance.py` | 117 | 14 | 88% | 110, 137–143, 230–239, 241, 258–261 |
| `privacy.py` | 118 | 4 | 97% | 410, 447–449 |
| `ranking_fairness.py` | 134 | 14 | 90% | 136–137, 157, 178, 184–185, 382–404, 409–439 |
| `security.py` | 96 | 1 | 99% | 297 |
| `shap.py` | 87 | 14 | 84% | 119, 127–128, 138–144, 170–171, 253–254, 284–292 |
| `survival_fairness.py` | 67 | 50 | 25% | 29–33, 36–48, 53–64, 67–78, 81–99, 102, 105, 108 |
| **credoai/evaluators/utils** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `fairlearn.py` | 18 | 2 | 89% | 46, 59 |
| `utils.py` | 8 | 1 | 88% | 9 |
| `validation.py` | 80 | 28 | 65% | 14, 34–35, 37–39, 46, 67–74, 80–86, 89, 95–98, 105, 108, 111, 114–115, 119–121 |
| **credoai/governance** | | | | |
| `__init__.py` | 1 | 0 | 100% | |
| **credoai/lens** | | | | |
| `__init__.py` | 2 | 0 | 100% | |
| `lens.py` | 189 | 12 | 94% | 173–174, 210–215, 272, 314, 338, 420, 435, 439, 451 |
| `pipeline_creator.py` | 60 | 12 | 80% | 20–21, 37, 79–91 |
| `utils.py` | 39 | 28 | 28% | 20–27, 49–52, 71–82, 99, 106–109, 128–135 |
| **credoai/modules** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `constants_deepchecks.py` | 2 | 0 | 100% | |
| `constants_metrics.py` | 17 | 0 | 100% | |
| `constants_threshold_metrics.py` | 3 | 0 | 100% | |
| `metric_utils.py` | 24 | 18 | 25% | 15–30, 34–55 |
| `metrics.py` | 60 | 7 | 88% | 62, 66, 69–70, 73, 83, 120 |
| `metrics_credoai.py` | 115 | 44 | 62% | 31–40, 45–47, 70–98, 114–117, 144, 168–169, 232–234, 310–316, 352–353, 423–424 |
| `stats.py` | 39 | 28 | 28% | 11–14, 17–22, 25–27, 30–35, 38–52, 55–60 |
| `stats_utils.py` | 5 | 3 | 40% | 5–8 |
| **credoai/utils** | | | | |
| `__init__.py` | 5 | 0 | 100% | |
| `common.py` | 102 | 40 | 61% | 55, 68–69, 75, 84–91, 96–104, 120–126, 131, 136–141, 152–159, 186 |
| `constants.py` | 2 | 0 | 100% | |
| `dataset_utils.py` | 61 | 35 | 43% | 23, 26–31, 50, 54–55, 88–119 |
| `logging.py` | 55 | 13 | 76% | 10–11, 14, 19–20, 23, 27, 44, 58–62 |
| `model_utils.py` | 30 | 11 | 63% | 14–19, 28–29, 34–39 |
| `version_check.py` | 11 | 1 | 91% | 16 |
| **TOTAL** | 2698 | 500 | 81% | |

@esherman-credo (Contributor) commented:

The badges don't seem to render for me on the front page of the branch:

[screenshot]

Is that simply because the branch has been removed from the workflow file?

@nate-credoai (Contributor, Author) commented:

> The badges don't seem to render for me on the front page of the branch: [screenshot] Is that simply because the branch has been removed from the workflow file?

Yes. I updated the README.md to reference the main branch, as that is what it will be when we merge to main. The goal of this is to show the status of the workflow run against the stable/release branch, which is what main is right now. I could entertain the idea that develop is more important to show at this time, as it represents the next release. Unfortunately, markdown doesn't have variable substitution that would allow us to make the badge URL dynamic per branch. As I write that, I wonder if I can do something on the badge-hosting side to serve badges based on CORS...
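For illustration, the README badge reference bakes the branch into the badge URL (bucket name and path are hypothetical), which is why it can't follow the current branch dynamically:

```markdown
![Tests](https://<badge-bucket>.s3.amazonaws.com/credoai_lens/main/tests.svg)
![Coverage](https://<badge-bucket>.s3.amazonaws.com/credoai_lens/main/coverage.svg)
```

Swapping `main` for `develop` in the URL retrieves the develop badges, as noted in the PR description.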

@esherman-credo (Contributor) commented:

Sounds good, just wanted to confirm.

@esherman-credo (Contributor) left a comment:


Lgtm

Review threads: .github/workflows/test-reports.yml (resolved); .github/workflows/test.yml.orig (outdated, resolved)
nate-credoai and others added 2 commits on December 8, 2022, including:
  • team repo-admins needs to be prefixed with our org.
@IanAtCredo IanAtCredo merged commit 2d928b2 into develop Dec 9, 2022
@IanAtCredo IanAtCredo deleted the feature/add-badges branch December 9, 2022 18:53