```ts
      break;
    }
    case "setSentToLLMMethods": {
      // TODO: set state
    }
```

We'll need to add this as React state and pass it down through the UI elements to where it's needed, but this can be done in a future PR.
```ts
public addSentToLLMMethods(dbItem: DatabaseItem, sentToLLMMethods: string[]) {
  this.changeSentToLLMMethods(dbItem, (state) => {
    // …
  );
}
```

I decided to follow the existing pattern of having a `changeX` method, even though it's only called from here and we don't have a `removeSentToLLMMethods` method yet. Hopefully that keeps the code more consistent and easier to understand, but let me know if you disagree.
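The `changeX` pattern mentioned above might look roughly like the following. This is a minimal sketch, not the extension's actual `ModelingStore`: the real class has more state, event plumbing, and its own `DatabaseItem` type, all of which are simplified here as assumptions.

```ts
// Hypothetical stand-in for the extension's DatabaseItem type.
interface DatabaseItem {
  databaseUri: string;
}

interface DbModelingState {
  sentToLLMMethods: Set<string>;
}

class ModelingStore {
  // Per-database modeling state, keyed by database URI (an assumption).
  private readonly state = new Map<string, DbModelingState>();

  public addSentToLLMMethods(dbItem: DatabaseItem, sentToLLMMethods: string[]) {
    this.changeSentToLLMMethods(dbItem, (state) => {
      // Set semantics mean re-adding an already-sent method is a no-op.
      for (const method of sentToLLMMethods) {
        state.sentToLLMMethods.add(method);
      }
    });
  }

  public getSentToLLMMethods(dbItem: DatabaseItem): Set<string> {
    return this.state.get(dbItem.databaseUri)?.sentToLLMMethods ?? new Set<string>();
  }

  // The changeX pattern: look up (or create) the state for the database,
  // apply the mutation, then notify listeners of the change.
  private changeSentToLLMMethods(
    dbItem: DatabaseItem,
    updateState: (state: DbModelingState) => void,
  ) {
    let state = this.state.get(dbItem.databaseUri);
    if (state === undefined) {
      state = { sentToLLMMethods: new Set<string>() };
      this.state.set(dbItem.databaseUri, state);
    }
    updateState(state);
    // The real store would fire an onSentToLLMMethodsChanged event here.
  }
}
```

Keeping the mutation inside a single `changeX` method means any future `removeSentToLLMMethods` can reuse the same lookup-and-notify logic.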
```ts
// Let the UI know which methods have been sent to the LLM
this.modelingStore.addSentToLLMMethods(
```

I believe the behaviour we want here is to add the methods whenever we get a successful response from the LLM. We don't want to call this until the response comes back, and if there's an error then we don't want to consider them sent, because we might want to send them again.

Therefore I think just adding this method call here is enough and we don't need to worry about catch or finally blocks.
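The control flow described above can be sketched as follows. All names here (`autoModel`, `runLLM`, `Method`) are illustrative stand-ins, not the extension's real API; the point is that the recording line sits after the `await`, so a failed request skips it without any explicit catch or finally.

```ts
// Hypothetical method shape for this sketch.
type Method = { signature: string };

async function autoModel(
  methods: Method[],
  runLLM: (methods: Method[]) => Promise<unknown>,
  addSentToLLMMethods: (signatures: string[]) => void,
): Promise<unknown> {
  // If the LLM call rejects, the error propagates out of this function
  // before the recording line below ever runs.
  const response = await runLLM(methods);

  // Let the UI know which methods have been sent to the LLM. Only reached
  // on success, so a failed batch stays eligible to be sent again.
  addSentToLLMMethods(methods.map((m) => m.signature));
  return response;
}
```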
koesie10 left a comment:
Changes LGTM, but it seems like there's a test that expects a new method to be mocked.
For naming: perhaps `processedByAutoModel` would be better? That hides the implementation detail of the auto modeler using an LLM. An alternative might be `modeledByAutoModel`, but that might be unclear (since "modeled" could also mean that there's at least one model for it).
+1 for `processedByAutoModel`.

I'll change it to `processedByAutoModel`. I originally had it called "processed" because I think that word is clearer, but it's also longer, so I changed to "sent" for being shorter. But since you both suggested it, I'm very happy to switch back to "processed". I wasn't sure whether to use the term "LLM", but indeed we do use it in a few other places. I think "AutoModel" is a better term to use here, and I agree it's good that it could include non-LLM options too.
This PR adds a new `sentToLLMMethods: Set<string>` field to `ModelingStore`, along with all the associated events and plumbing.

Note: very open to suggestions for the naming of fields/methods. Naming is hard 😢

The intention here is to very simply track which methods have been sent to the LLM in the current modeling session (i.e. it'll be reset if the model editor is closed and reopened). This state can be used along with the set of modeled methods to identify the set of positive and negative predictions from the LLM.
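One way to combine the two sets, sketched below with an assumed helper name (`partitionPredictions` is not from the PR): a method that was sent to the LLM and ended up modeled counts as a positive prediction, while one sent but left unmodeled counts as a negative.

```ts
// Partition the sent-to-LLM methods into positive and negative predictions,
// given the set of method signatures that currently have at least one model.
function partitionPredictions(
  sentToLLMMethods: Set<string>,
  modeledMethods: Set<string>,
): { positives: string[]; negatives: string[] } {
  const positives: string[] = [];
  const negatives: string[] = [];
  for (const method of sentToLLMMethods) {
    (modeledMethods.has(method) ? positives : negatives).push(method);
  }
  return { positives, negatives };
}
```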
Methods are added to `sentToLLMMethods` whenever we successfully receive a response from the LLM. We add all methods from that batch, regardless of the modelings included in the LLM response. Currently there's no way to remove methods from `sentToLLMMethods`, because we want the state to persist for the remainder of the session, but there's no reason a way of removing methods from this set couldn't be added.

UI changes for positive and negative LLM predictions will come in future PRs. Here we're just laying the groundwork.