Is your feature request related to a problem? Please describe.
1. The different implementations of initializing clients in the original Scorecard run and the dependency-diff API (PR #2046):
   - The original one: the Scorecard check runner's initialization logic simply initializes and uses all the clients whenever a valid repoURI is given, even if checks like Fuzzing or CII-Best-Practices are not specified to run.
   - Mine (GetDependencyDiffResult): uses the same API, checker.GetClients, to initialize all clients first, then returns only the clients corresponding to the checks to run. For example, if we only run the License and Maintained checks, the OSSFuzzClient, CIIBestPracticeClient, and VulnsClient will be nil. Scorecard also runs well with this initialization.
2. Refactor the input params of GetDependencyDiffResult. This depends on what other projects might want.
3. Suggested by @laurentsimon: it would be useful to have better tests for getScorecardCheckResults(). There's a bunch of logic in there that would be nice to have tests for; it may require some code refactoring.
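The nil-client behavior described in point 1 can be sketched with a minimal model. All type and function names below (checkClients, initClientsFor, the stand-in client structs) are hypothetical, not Scorecard's actual API:

```go
package main

import "fmt"

// Hypothetical stand-ins for Scorecard's client types; the real ones
// live in the ossf/scorecard clients packages.
type ossFuzzClient struct{}
type ciiClient struct{}
type vulnsClient struct{}

// checkClients holds the clients for one run; clients that no requested
// check needs stay nil, mirroring the GetDependencyDiffResult behavior.
type checkClients struct {
	OSSFuzz *ossFuzzClient
	CII     *ciiClient
	Vulns   *vulnsClient
}

// initClientsFor initializes only the clients required by checksToRun.
func initClientsFor(checksToRun map[string]bool) checkClients {
	var c checkClients
	if checksToRun["Fuzzing"] {
		c.OSSFuzz = &ossFuzzClient{}
	}
	if checksToRun["CII-Best-Practices"] {
		c.CII = &ciiClient{}
	}
	if checksToRun["Vulnerabilities"] {
		c.Vulns = &vulnsClient{}
	}
	return c
}

func main() {
	// Only License and Maintained are requested, so all three stay nil.
	c := initClientsFor(map[string]bool{"License": true, "Maintained": true})
	fmt.Println(c.OSSFuzz == nil, c.CII == nil, c.Vulns == nil) // true true true
}
```

A caller that later dereferences one of the nil clients would panic, which is what motivates the verification concern in point 2 below.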
Describe the solution you'd like
For 1, the best way is to optimize both so that each run initializes only the clients it needs. IMO, neither the original Scorecard run's logic nor mine is ideal. An alternative temporary solution, if we simply want the two paths to share the same initialization logic, is to align mine with the original one or vice versa.
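One way to implement that shared logic is to derive the set of required clients from the check list once, and let both the Scorecard runner and the dependency-diff path consume it. A minimal sketch, with an illustrative mapping table (the check names are real Scorecard checks, but the table contents and function names are assumptions):

```go
package main

import "fmt"

// requiredClients maps a check name to the extra clients it depends on.
// The entries here are illustrative; the real dependencies live in Scorecard.
var requiredClients = map[string][]string{
	"Fuzzing":            {"OSSFuzzClient"},
	"CII-Best-Practices": {"CIIBestPracticeClient"},
	"Vulnerabilities":    {"VulnsClient"},
	"License":            {}, // needs only the base repo client
	"Maintained":         {},
}

// clientsFor returns the deduplicated set of extra clients the given checks
// need, so a shared initializer can create exactly those and nothing more.
func clientsFor(checks []string) map[string]bool {
	need := map[string]bool{}
	for _, check := range checks {
		for _, client := range requiredClients[check] {
			need[client] = true
		}
	}
	return need
}

func main() {
	fmt.Println(clientsFor([]string{"License", "Maintained"})) // map[]
	fmt.Println(clientsFor([]string{"Fuzzing", "Vulnerabilities"}))
}
```

With a single table like this, neither caller has to hard-code which clients to construct, and the two code paths cannot drift apart.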
For 2, IMO, using the client objects as one of the API's input params might not be a good idea. My justifications:
- This is an API for users to call, and the simplest way for users is to specify a list of checks to run (a list of check names, i.e. string types); the API then initializes the corresponding clients.
- This is also the API used by the CLI (PR #2077); there too, a list of check names to run is the most straightforward input.
- We would need an additional module to verify that the input client objects correspond to the checks to run. For example, if the caller specifies the Fuzzing check but fails to initialize the FuzzClient, and we have no verification for this, the API and the Scorecard run will panic. This is unnecessary and redundant IMO.
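To illustrate the last point: if client objects were accepted as input, the API would need a guard like the following to fail gracefully rather than panic later. This is a hypothetical sketch; validateClients and fuzzClient are not Scorecard names:

```go
package main

import "fmt"

// fuzzClient is a hypothetical stand-in for Scorecard's fuzzing client.
type fuzzClient struct{}

// validateClients is the kind of extra module such an API would need: it
// checks that every requested check has its client, returning an error
// instead of letting a later nil-pointer dereference panic the run.
func validateClients(checks []string, fuzz *fuzzClient) error {
	for _, check := range checks {
		if check == "Fuzzing" && fuzz == nil {
			return fmt.Errorf("check %q requested but its client is nil", check)
		}
	}
	return nil
}

func main() {
	// The caller asks for Fuzzing but forgot to initialize the client.
	if err := validateClients([]string{"Fuzzing"}, nil); err != nil {
		fmt.Println("refused to run:", err)
	}
}
```

Taking check names as input makes this whole validation layer unnecessary, since the API constructs the clients itself.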
For 3, I'll note this in my design doc and take a look later. As Laurent suggested, I might use a follow-up PR to refactor it and add e2e test coverage.
Additional context
@azeemshaikh38 @laurentsimon wdut?