Add sentToLLM to ModelingStore#3374

Merged
robertbrignull merged 3 commits into main from robertbrignull/sentToLLMMethods
Feb 20, 2024

Conversation

@robertbrignull
Contributor

This PR adds a new sentToLLMMethods: Set<string> field to ModelingStore along with all the associated events and plumbing.

Note: very open to suggestions for the naming of fields/methods. Naming is hard 😢

The intention here is to very simply track which methods have been sent to the LLM in the current modeling session (i.e. it'll be reset if the model editor is closed and reopened). This state can be used along with the set of modeled methods to identify the set of positive and negative predictions from the LLM.
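
The positive/negative split mentioned above could be computed along these lines. This is a hedged sketch only, not code from this PR (the UI work is deferred to future PRs): a method that was sent to the LLM and ended up modeled counts as a positive prediction, one sent but left unmodeled as a negative prediction.

```typescript
// Hypothetical helper, not part of this PR. Assumes method signatures are
// plain strings, matching the Set<string> field described above.
function splitPredictions(
  sentToLLM: ReadonlySet<string>,
  modeled: ReadonlySet<string>,
): { positive: string[]; negative: string[] } {
  const positive: string[] = [];
  const negative: string[] = [];
  for (const method of sentToLLM) {
    // Sent and modeled -> positive prediction; sent but not modeled ->
    // negative prediction. Methods never sent are ignored entirely.
    (modeled.has(method) ? positive : negative).push(method);
  }
  return { positive, negative };
}
```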

Methods are added to sentToLLMMethods whenever we successfully receive a response from the LLM. We add all methods from that batch, regardless of the modelings included in the LLM response. Currently there's no way to remove methods from sentToLLMMethods because we want the state to persist for the remainder of the session, but there's no reason a way of removing methods from this set couldn't be added.

UI changes for positive and negative LLM predictions will come in future PRs. Here we're just laying the groundwork.
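
As a rough illustration of the plumbing described above: this is a simplified sketch, not the real implementation. The actual store keys state by DatabaseItem and uses VS Code event emitters; here a plain string key and a listener array stand in for both.

```typescript
// Hedged sketch of the ModelingStore addition. Names follow the PR
// description; the event wiring is deliberately simplified.
type SentToLLMListener = (dbUri: string, methods: ReadonlySet<string>) => void;

class ModelingStore {
  // One set of method signatures per open database, i.e. per modeling
  // session; dropping the entry when the editor closes resets the state.
  private readonly sentToLLMMethods = new Map<string, Set<string>>();
  private readonly listeners: SentToLLMListener[] = [];

  public onDidAddSentToLLMMethods(listener: SentToLLMListener): void {
    this.listeners.push(listener);
  }

  // Methods are only ever added in this PR; a remove method could be
  // introduced later if needed.
  public addSentToLLMMethods(dbUri: string, methods: string[]): void {
    let state = this.sentToLLMMethods.get(dbUri);
    if (state === undefined) {
      state = new Set<string>();
      this.sentToLLMMethods.set(dbUri, state);
    }
    for (const method of methods) {
      state.add(method);
    }
    for (const listener of this.listeners) {
      listener(dbUri, state);
    }
  }

  public getSentToLLMMethods(dbUri: string): ReadonlySet<string> {
    return this.sentToLLMMethods.get(dbUri) ?? new Set<string>();
  }
}
```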

Checklist

  • CHANGELOG.md has been updated to incorporate all user visible changes made by this pull request.
  • Issues have been created for any UI or other user-facing changes made by this pull request.
  • [Maintainers only] If this pull request makes user-facing changes that require documentation changes, open a corresponding docs pull request in the github/codeql repo and add the ready-for-doc-review label there.

@robertbrignull robertbrignull requested review from a team as code owners February 19, 2024 16:55
break;
}
case "setSentToLLMMethods": {
// TODO: set state
Contributor Author

We'll need to add this as React state and pass it down through the UI elements to where it's needed, but this can be done in a future PR.

}

public addSentToLLMMethods(dbItem: DatabaseItem, sentToLLMMethods: string[]) {
this.changeSentToLLMMethods(dbItem, (state) => {
Contributor Author

I decided to follow the existing pattern of having a changeX method, even though it's only called from here and we don't have a removeSentToLLMMethods method yet. Hopefully that keeps the code more consistent and easier to understand, but let me know if you disagree.

);

// Let the UI know which methods have been sent to the LLM
this.modelingStore.addSentToLLMMethods(
Contributor Author

I believe the behaviour we want here is to add the methods whenever we get a successful response from the LLM. We don't want to call this until the response comes back, and if there's an error then we don't want to consider them sent because we might want to send them again.

Therefore I think just adding this method call here is enough and we don't need to worry about catch or finally blocks.
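
The success-only behaviour described here can be sketched as follows. All names here are illustrative stand-ins, not the real extension code; the point is that the recording call sits after the await, so a thrown error skips it and the batch can be re-sent.

```typescript
// Illustrative sketch only; requestModelsFromLLM and modelBatch are
// hypothetical stand-ins for the real extension code.
interface LLMResponse {
  modeledMethods: string[];
}

// Stand-in for this.modelingStore in the real code.
const sentToLLMMethods = new Set<string>();

async function requestModelsFromLLM(batch: string[]): Promise<LLMResponse> {
  // Fake response: pretend half the batch comes back with modelings.
  return { modeledMethods: batch.slice(0, Math.ceil(batch.length / 2)) };
}

async function modelBatch(batch: string[]): Promise<LLMResponse> {
  const response = await requestModelsFromLLM(batch);
  // Only reached on success: record the *whole* batch as sent, regardless
  // of which methods the LLM returned modelings for. If the await throws,
  // we never get here, so no catch/finally block is needed.
  for (const method of batch) {
    sentToLLMMethods.add(method);
  }
  return response;
}
```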

Member

@koesie10 koesie10 left a comment

Changes LGTM, but it seems like there's a test that expects a new method to be mocked.

For naming: Perhaps processedByAutoModel would be better? That hides the implementation detail of the auto modeler using an LLM. An alternative might be modeledByAutoModel, but that might be unclear (since "modeled" could also mean that there's at least 1 model for it).

@charisk
Contributor

charisk commented Feb 20, 2024

+1 for processedByAutoModel or processedByLlm (we've been using the term LLM elsewhere, and automodel could include non-LLM model generation too)

@robertbrignull
Contributor Author

I'll change to processedByAutoModel. I like that a lot better.

I originally called it "processed" because I think that word is clearer, but it's also longer, so I switched to "sent" for brevity. Since you both suggested it, I'm very happy to switch back to "processed".

I wasn't sure whether to use the term "LLM" but indeed we do use it in a few other places. I think "AutoModel" is a better term to use here and I agree it's good that it could include non-LLM options too.

Member

@koesie10 koesie10 left a comment

Thanks, LGTM

@robertbrignull robertbrignull merged commit 3b30f22 into main Feb 20, 2024
@robertbrignull robertbrignull deleted the robertbrignull/sentToLLMMethods branch February 20, 2024 10:05