Enable instructions to LLMs, e.g. chat model system prompts #119
Conversation
Codecov Report
Patch coverage:
Additional details and impacted files

```
@@            Coverage Diff             @@
##             main     #119      +/-   ##
==========================================
+ Coverage   77.35%   78.07%   +0.72%
==========================================
  Files          44       48       +4
  Lines        2305     2399      +94
==========================================
+ Hits         1783     1873      +90
- Misses        522      526       +4
```
... and 2 files with indirect coverage changes
this is awesome!! thanks for making the change. 🚀
left some small comments.
Can you add a test for instruction formatting? A standalone unit test in a new test_prompt.py would be great! Alternatively, you could add an integration test similar to test_guard or test_pydantic. Heads up, the integration tests are far more time-consuming to add (and this helper may be useful). A sketch of such a unit test follows below.
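A minimal sketch of what that unit test could look like, assuming the non-chat prompt prep quoted below is exposed as a standalone helper (the import path and the name `prepare_nonchat_prompt` are hypothetical; the real module and signature may differ):

```python
# test_prompt.py -- illustrative sketch; the helper's real name and
# location are assumptions, not taken from this PR.
from guardrails.prompt import prepare_nonchat_prompt  # hypothetical import


def test_instructions_are_prepended():
    result = prepare_nonchat_prompt(
        prompt="Summarize the document.",
        instructions="You are a helpful assistant.",
    )
    # Matches the "\n\n".join([instructions, prompt]) behavior shown below.
    assert result == "You are a helpful assistant.\n\nSummarize the document."


def test_missing_instructions_leave_prompt_unchanged():
    result = prepare_nonchat_prompt(prompt="Summarize the document.", instructions=None)
    assert result == "Summarize the document."
```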
"""Prepare final prompt for nonchat engine.""" | ||
if instructions: | ||
prompt = "\n\n".join([instructions, prompt]) |
My inclination here is to raise a Warning that instructions will be ignored since the LLM API is a non-chat API, so that there are no unexpected surprises in terms of the final prompt that is sent.
Thoughts?
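A sketch of that behavior, purely for illustration (the function name and signature are assumptions; the point is only the warn-and-ignore logic):

```python
import warnings
from typing import Optional


def prepare_nonchat_prompt(prompt: str, instructions: Optional[str] = None) -> str:
    """Prepare final prompt for nonchat engine (illustrative sketch)."""
    if instructions:
        # Surface the suggested Warning instead of silently altering
        # (or dropping) the final prompt for non-chat LLM APIs.
        warnings.warn(
            "`instructions` is ignored for non-chat LLM APIs; "
            "only `prompt` will be sent to the model.",
            UserWarning,
        )
    return prompt
```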
Makes a lot of sense; the only counterargument is that users would then be inclined to create different guards for text and chat models, which may be just fine.
One thing to consider: if there are integrations with vector dbs / embedding endpoints, it may be useful to have only part of the guard embedded (maybe just the prompt), while the full guard (instructions and prompt) is passed to the model. Is that relevant?
That seems reasonable.
Re: embedding parts of the prompt -- I think it should be possible to do this by accessing attributes of the Prompt class.
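A rough sketch of that idea, assuming a Prompt-like object whose pieces are exposed as attributes (the attribute names here are invented for illustration, not taken from the actual Prompt class):

```python
from dataclasses import dataclass


@dataclass
class Prompt:
    """Illustrative stand-in for the real Prompt class; attributes are assumed."""
    source: str              # the task prompt
    instructions: str = ""   # optional system-style instructions

    @property
    def full_text(self) -> str:
        # Same "\n\n" join as the non-chat prompt prep above.
        return "\n\n".join(part for part in (self.instructions, self.source) if part)


prompt = Prompt(
    source="Extract the order id from the email below.",
    instructions="You are a careful data-extraction assistant.",
)

# embed(prompt.source)      # embed just the task prompt for vector-db lookup
# llm(prompt.full_text)     # send instructions + prompt to the model
```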
lgtm! Added a small nit; I can also add a commit to get this closed.
"""Prepare final prompt for nonchat engine.""" | ||
if instructions: | ||
prompt = "\n\n".join([instructions, prompt]) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
That seems reasonable.
Re: embedding parts of the prompt -- I think it should be possible to do this by accessing attributes of the Prompt
class.
This adds `<instructions>` to the Rail spec. LLM calls can use these as they see fit; for example, chat models can pass instructions as system messages, while for "text in, text out" models instructions can simply be concatenated to the prompt. Discussed at #109.
Edit: unit tests still needed.
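For illustration, a rail spec using the new element might look something like this (the output fields and the template variable are invented for the example; only the `<instructions>` element is what this PR adds):

```xml
<rail version="0.1">
  <instructions>
    You are a helpful assistant. Respond only with valid JSON.
  </instructions>

  <prompt>
    Extract the order details from the email below.
    {{email_body}}
  </prompt>

  <output>
    <string name="order_id" />
    <float name="total" />
  </output>
</rail>
```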