
Add Watchers/Description Metrics #95

Merged

Conversation

dilanbhalla
Contributor

I wanted to suggest including GitHub Watchers (to help assess popularity) and the GitHub Description (to clarify a project's overall goal). I am currently helping contribute to OSSF's Security Metrics project, where we retrieve several of the GitHub metrics covered in this project but also need the two mentioned above for our overall security assessment. If these can be included via the pull request I have submitted, that would be extremely helpful. Thank you!

@inferno-chromium inferno-chromium merged commit 499174d into ossf:main Aug 4, 2021
@dilanbhalla
Contributor Author

Thank you! Would it be possible to also refresh the Google Cloud Storage link to include this information @inferno-chromium?

@inferno-chromium
Contributor

> Thank you! Would it be possible to also refresh the Google Cloud Storage link to include this information @inferno-chromium?

We haven't productionized this code, so it needs to be run manually for a week to regenerate all the data. I won't have cycles to rerun this, but I'm happy to update the files if you can generate them.

@dilanbhalla
Contributor Author

Hi @inferno-chromium, sure, I can generate the files. A couple of questions before doing so, though.

An example of a command I have been running (as a small-scale test) is `python -u -m criticality_score.generate --count 2 --sample-size 5 --output-dir output`, which seems to do the trick. When I want to generate the full output, can I simply run this command but set the count and sample size to 100,000?
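For reference, a minimal sketch of how I have been invoking it (this assumes a GitHub access token is exported as GITHUB_AUTH_TOKEN per the project README, and that the output directory exists):

```bash
# Assumed setup: a GitHub personal access token exported beforehand, e.g.
#   export GITHUB_AUTH_TOKEN=<your token>
mkdir -p output

# Small-scale test: sample 5 repositories, keep the top 2 by criticality score.
python -u -m criticality_score.generate \
    --count 2 \
    --sample-size 5 \
    --output-dir output
```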

Secondly, why does the link you posted on GitHub also contain "stars" and "license" columns, but the generator script run from the command line does not? Is there a way to include this?

Thank you!

@inferno-chromium
Contributor

> Hi @inferno-chromium, sure, I can generate the files. A couple of questions before doing so, though.

> An example of a command I have been running (as a small-scale test) is `python -u -m criticality_score.generate --count 2 --sample-size 5 --output-dir output`, which seems to do the trick. When I want to generate the full output, can I simply run this command but set the count and sample size to 100,000?

Try higher, like 25K for both. I think running across all languages might take 1-2 weeks. We want to expand the Scorecards list as well, so more repos will help here.
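A hedged sketch of what that scaled-up run might look like (the 25,000 values follow the suggestion above; the actual runtime and rate-limit behavior will depend on the token and machine used):

```bash
# Full-scale run following the suggestion above (25K for both values).
# This makes a large number of GitHub API calls, so it needs a valid
# GITHUB_AUTH_TOKEN and may take on the order of 1-2 weeks to complete.
python -u -m criticality_score.generate \
    --count 25000 \
    --sample-size 25000 \
    --output-dir output
```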

> Secondly, why does the link you posted on GitHub also contain "stars" and "license" columns, but the generator script run from the command line does not? Is there a way to include this?

Probably old code; those are not useful fields, I think. They can be excluded.

> Thank you!

@dilanbhalla
Contributor Author

Hi @inferno-chromium, I actually had a bunch of other work items pop up recently, but there is no rush for our team on this particular metric, so whenever the code is productionized and the data is published, we will retrieve the information. Thanks!
