
Flex conf #15

Merged · 3 commits · Jun 26, 2023

Conversation

@behrica (Contributor) commented Jun 17, 2023

In this PR I "moved" the configuration of the model "up" to the function "gen/copmplete-template" and "complete".
(so it merges the opts from inside the prompt definition and the external opts), external overrides
This has in my view 2 advantages:

  • bosque does not need to decide / assume how the user of bosquet manages his secret keys
  • a bosque prompt definition does not need to specify any model parameter, so this an be done at "usage time".
    and it can be overriden from the prompt definition user (without touching teh prompt definition)
    (this should make the prompt definition completely model agnostic, so allow to use the same prompt definition with any model
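A minimal sketch of the intended merge semantics (illustrative only, not the actual bosquet internals):

;; opts defined inside the prompt definition
(def template-opts {:model "text-davinci-003" :temperature 0.8})
;; opts passed in externally at "usage time"
(def external-opts {:api-key "sk-xxxxx" :temperature 0.2})

;; the external opts win on conflicting keys
(merge template-opts external-opts)
;; => {:model "text-davinci-003", :temperature 0.2, :api-key "sk-xxxxx"}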

I also fixed the Azure OpenAI issue (#9). Now the code works again with OpenAI and Azure OpenAI, provided that the correct model options are given to "complete-template" (or are specified inside the template).

@behrica (Contributor, Author) commented Jun 17, 2023

I was also thinking about the limitations of this approach.
In practice the new "model opts" parameter gets merged / applied for the "llm-generate" tag.
This may become problematic once we have several "tags" which call into models, or when we want different "llm-generate" tags in a complex prompt definition to use different models (and model configurations).
I can see that the template approach you have put in place allows setting up "trees of template definitions", which have "tags" in various places. How can we find a way to "externally" configure the tags in the whole tree?
How do we "identify" them in order to configure them?

@@ -91,7 +91,9 @@ Playwright: This is a synopsis for the above play:
 (def synopsis
   (gen/complete-template
    synopsis-template
-   {:title "Mr. X" :genre "crime"}))
+   {:title "Mr. X" :genre "crime"}
+   {:api-key "sk-xxxxx"}))

@behrica (Contributor, Author) commented:
A usage example of the complete-template function.
The user can then "code" whatever they want to get the key; bosquet does not need to prescribe this.
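For example (a sketch, assuming the key is kept in an environment variable named OPENAI_API_KEY):

(gen/complete-template
  synopsis-template
  {:title "Mr. X" :genre "crime"}
  ;; fetch the secret however the user prefers
  {:api-key (System/getenv "OPENAI_API_KEY")})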

@behrica (Contributor, Author) commented:
One alternative would be to allow "configuring" any "tag" in this way, so from a user perspective:

(gen/complete-template
  synopsis-template
  {:title "Mr. X" :genre "crime"}
  {:llm-generate
   {:api-key "sk-xxxxx"}})

This would allow keeping the same structure in case of more tags (but still assuming that all usages of the same tag in a multi-tag complex prompt get configured in the same way).

Basically this says that all llm-generate tags in a complex prompt need to use the same model (at least when I want to configure it externally).
By "hardcoding" api-keys inside the prompt definition this could still be done differently.

@behrica (Contributor, Author) commented Jun 17, 2023:

A completely different way to achieve the same (external configuration of models) could be to use "global vars" inside a prompt definition, maybe in this way:


(def synopsis-template
  "You are a playwright. Given the play's title and its genre
it is your job to write a synopsis for that play.

Title: {{title}}
Genre: {{genre}}

Playwright: This is a synopsis for the above play:
{% llm-generate model=text-davinci-003
                var-name=play
                api-key={{my-global-var-with-the-api-key}}
%}")

I think Selmer supports this already.
Disadvantage here: the prompt definition would decide which model opts to use, and these might differ per model. So the prompt definitions are less model agnostic.

@@ -22,7 +25,7 @@
   and set it as required inputs for the resolver.
   For the output check if the template is producing generated content
   and if so add a key for it into the output"
-  [the-key template]
+  [the-key template model-opts]

@behrica (Contributor, Author) commented:
The new map of model-opts will be merged with the model opts defined in the template definition.

In practice these are the "llm-generate opts" as used for any "llm-generate" tag.
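A hedged sketch of what this merge might look like inside the resolver (the function and argument names here are assumptions, not the exact bosquet code):

;; Sketch (assumed names): the externally passed model-opts take
;; precedence over the opts parsed from the template's llm-generate tag.
(defn- effective-generation-opts
  [template-tag-opts model-opts]
  (merge template-tag-opts model-opts))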

@behrica (Contributor, Author) commented Jun 17, 2023

I am new to Selmer, so maybe Selmer itself already provides all we need.
But I think the idea of being able to specify model parameters in some form in the prompt definition, which then get merged with an externally passed-in model config map, is important for reaching these 2 goals:

  • bosquet leaves it to the user how to manage the secret keys
  • the template prompt definitions can stay as model agnostic as possible, allowing the user to add, override, and ideally remove any keys specified inside the prompt definition

@behrica (Contributor, Author) commented Jun 17, 2023

I am wondering if and how we should be able to externally configure both calls to completion, see here:

[image: prompt template containing two llm-generate completion calls]

differently (so using a different model, a different key, and so on) without specifying this inside the prompt definition.
-> The user of the template can decide whether to use the same or a different model at each completion call.

@behrica (Contributor, Author) commented Jun 17, 2023

Probably a convention to "configure" nested llm-generate tags in the form of

(gen/complete
  template
  {:title "Mr. X" :genre "crime"}
  {"review.llm-generate"
   {:impl :openai
    :api-key "sk-xxxxx"}
   "synopsis.llm-generate"
   {:impl :azure
    :api-key "azure key"}})

so "prompt-map-key.tag-name" to inject options into the llm-generate call at a certain place in the template (and merge with ev. existing options) could work.

This is in some way what I would like to achieve.. use different models (and their configuration) at different places of a prompt template, without need to configure this inside the prompt template.
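A possible lookup sketch for this convention (the helper name is hypothetical, not part of bosquet):

;; Hypothetical helper: build the "prompt-map-key.tag-name" key and
;; fetch the per-tag opts from the externally passed config map.
(defn opts-for-tag
  [external-opts prompt-key tag-name]
  (get external-opts (str (name prompt-key) "." tag-name) {}))

(opts-for-tag {"review.llm-generate" {:impl :openai}} :review "llm-generate")
;; => {:impl :openai}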

@behrica (Contributor, Author) commented Jun 17, 2023

Maybe we also want to address the case where the same tag appears multiple times in a template, by allowing an index:

(gen/complete
  template
  {:title "Mr. X" :genre "crime"}
  {"review.llm-generate.1"
   {:impl :openai
    :api-key "sk-xxxxx"}
   "review.llm-generate.2"
   {:impl :azure
    :api-key "azure key"}})

@behrica behrica marked this pull request as draft June 20, 2023 18:26

@behrica (Contributor, Author) commented Jun 24, 2023

I am now quite happy with the PR.
I adapted the notebook use_guide.clj, but did not check whether other places (notebooks or tests) need changes as well.

@behrica behrica marked this pull request as ready for review June 24, 2023 10:49

@zmedelis (Owner) left a comment:
Good update! Nice separation of concerns for who manages the config and where.

@zmedelis zmedelis merged commit d150d4f into zmedelis:main Jun 26, 2023