Add initial implementation of auto-modeling#2448
Conversation
95b2373 to a88e683
charisk
left a comment
Thanks for taking this over the line! I've not tested it locally yet but we can tweak separately.
starcke
left a comment
Comments from reviewing createAutoModelRequest.
starcke
left a comment
Looks good, just some minor comments/questions.
```ts
const predictedBySignature: Record<string, Method[]> = {};
for (const method of predicted) {
  if (!method.classification) {
    continue;
```
Should we log or warn if some samples come back unclassified?
I don't think so. We should log that on the Turbomodel side; the user can't do anything with this information.
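The grouping step quoted above can be sketched as a self-contained function. This is a minimal sketch, not the PR's actual code: the `Method` shape and the `groupBySignature` name are assumptions for illustration.

```typescript
// Hypothetical Method shape; the real interface in the PR likely has more fields.
interface Method {
  signature: string;
  classification?: string;
}

// Bucket predicted methods by signature, skipping any that came back
// unclassified (per the thread above, logging for those happens server-side).
function groupBySignature(predicted: Method[]): Record<string, Method[]> {
  const predictedBySignature: Record<string, Method[]> = {};
  for (const method of predicted) {
    if (!method.classification) {
      continue; // drop unclassified samples silently
    }
    (predictedBySignature[method.signature] ??= []).push(method);
  }
  return predictedBySignature;
}
```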
```ts
// Order the sinks by the input alphabetically. This will ensure that the first argument is always
// first in the list of sinks, the second argument is always second, etc.
// If we get back "Argument[1]" and "Argument[3]", "Argument[1]" should always be first.
sinks.sort((a, b) => (a.input ?? "").localeCompare(b.input ?? ""));
```
It doesn't matter, I think, but is Argument[10] also going to be sorted properly?
No, it currently doesn't sort properly. I'll change it so it properly sorts them.
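One way to get the numeric-aware ordering discussed above is `localeCompare` with numeric collation, which compares embedded digit runs as numbers so `"Argument[2]"` sorts before `"Argument[10]"`. A minimal sketch (the `Sink` shape and `sortSinks` name are assumptions; the PR may fix this differently):

```typescript
// Hypothetical minimal Sink shape for illustration.
interface Sink {
  input?: string;
}

// Sort sinks by input using numeric collation, so "Argument[2]" precedes
// "Argument[10]" instead of the plain lexicographic order.
function sortSinks(sinks: Sink[]): Sink[] {
  return sinks.sort((a, b) =>
    (a.input ?? "").localeCompare(b.input ?? "", "en", { numeric: true }),
  );
}
```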
This adds an initial implementation of auto-modeling using the `/repos/:owner/:name/code-scanning/codeql/auto-model` endpoint. The button is hidden behind the `codeQL.dataExtensions.llmGeneration` setting.

Checklist

`ready-for-doc-review` label there.