Enable instructions to LLMs, e.g. chat model system prompts #119

Merged: 18 commits, Apr 17, 2023

Conversation

@Mikkolehtimaki (Contributor) commented Apr 11, 2023

This adds <instructions> to the Rail spec. LLM calls can use them as they see fit: chat models can pass the instructions as system messages, while for "text in, text out" models the instructions can simply be concatenated with the prompt.

Discussed at #109

Edit: unit tests still needed.
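
A rough sketch of how the new element could be used (illustrative only: the output field, variable syntax, and spec contents here are made up, and the exact API should be checked against the merged code):

```python
import guardrails as gd

# Illustrative .rail spec using the new <instructions> element.
rail_spec = """
<rail version="0.1">
<output>
    <string name="answer" description="The model's answer." />
</output>
<instructions>
You are a helpful assistant that only replies with valid JSON.
</instructions>
<prompt>
Answer the user's question.

Question: {question}
</prompt>
</rail>
"""

guard = gd.Guard.from_rail_string(rail_spec)
# For chat models the instructions can be sent as the system message;
# for plain completion models they are simply prepended to the prompt.
```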

@codecov bot commented Apr 11, 2023

Codecov Report

Patch coverage: 89.01%; project coverage change: +0.72% 🎉

Comparison: base (271d62f) 77.35% vs. head (6f5db8c) 78.07%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #119      +/-   ##
==========================================
+ Coverage   77.35%   78.07%   +0.72%     
==========================================
  Files          44       48       +4     
  Lines        2305     2399      +94     
==========================================
+ Hits         1783     1873      +90     
- Misses        522      526       +4     
| Flag | Coverage Δ |
|------|------------|
| unittests | 78.07% <89.01%> (+0.72%) ⬆️ |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
|----------------|------------|
| guardrails/utils/logs_utils.py | 85.39% <40.00%> (-1.82%) ⬇️ |
| guardrails/llm_providers.py | 57.40% <55.55%> (+2.69%) ⬆️ |
| tests/integration_tests/mock_llm_outputs.py | 69.23% <66.66%> (-2.20%) ⬇️ |
| guardrails/guard.py | 89.55% <83.33%> (+0.84%) ⬆️ |
| guardrails/prompt/__init__.py | 100.00% <100.00%> (ø) |
| guardrails/prompt/base_prompt.py | 88.88% <100.00%> (ø) |
| guardrails/prompt/instructions.py | 100.00% <100.00%> (ø) |
| guardrails/prompt/prompt.py | 100.00% <100.00%> (ø) |
| guardrails/rail.py | 90.90% <100.00%> (+0.61%) ⬆️ |
| guardrails/run.py | 94.73% <100.00%> (+0.35%) ⬆️ |

... and 3 more

... and 2 files with indirect coverage changes


@ShreyaR (Collaborator) left a comment

this is awesome!! thanks for making the change. 🚀

left some small comments.

Can you add a test for instruction formatting? a standalone unit test in a new test_prompt.py would be great! Alternatively, you could add an integration test similar to test_guard or test_pydantic. Heads up, the integration tests are far more time-consuming to add (and this helper may be useful).
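
A rough sketch of what such a standalone unit test could look like (hypothetical: it assumes guardrails.prompt exports an Instructions class with format-style variable substitution like Prompt; the actual constructor and method names should be checked against guardrails/prompt/instructions.py):

```python
# tests/unit_tests/test_prompt.py (hypothetical file)
from guardrails.prompt import Instructions


def test_instructions_substitute_variables():
    # Assumption: Instructions supports the same format-style substitution
    # as Prompt; adjust to the actual API.
    instructions = Instructions("You are a helpful assistant. {style_hint}")
    formatted = instructions.format(style_hint="Reply only with valid JSON.")
    assert "Reply only with valid JSON." in str(formatted)
```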

guardrails/llm_providers.py (outdated, resolved)
"""Prepare final prompt for nonchat engine."""
if instructions:
prompt = "\n\n".join([instructions, prompt])
Collaborator:

My inclination here is to raise a Warning that the instructions will be ignored since the LLM API is a non-chat API, so that there are no unexpected surprises in terms of the final prompt that is sent.

Thoughts?
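
A minimal sketch of that idea (the helper name and signature here are illustrative, not the actual code in guardrails/llm_providers.py):

```python
import warnings
from typing import Optional


def nonchat_prompt(prompt: str, instructions: Optional[str] = None) -> str:
    """Prepare the final prompt for a non-chat engine."""
    if instructions:
        # Surface the behaviour instead of silently merging the two pieces.
        warnings.warn(
            "This LLM API does not support a separate system message; "
            "instructions will be prepended to the prompt."
        )
        prompt = "\n\n".join([instructions, prompt])
    return prompt
```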

Contributor (author):

Makes a lot of sense; the only counterargument is that users would then be inclined to create different guards for text and chat models, which may be just fine.

One thing to consider: if there are integrations with vector DBs / embedding endpoints, it may be useful to embed only part of the guard (probably the prompt) while still passing the full guard (instructions and prompt) to the model. Is that relevant here?

Collaborator:

That seems reasonable.

Re: embedding parts of the prompt -- I think it should be possible to do this by accessing attributes of the Prompt class.
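
A rough sketch of that embedding idea (hypothetical attribute access; whether the guard exposes the parsed Prompt/Instructions objects this way, and what the attribute is called, would need to be verified against the code):

```python
# Continuing from the `guard` object in the earlier sketch.
# Assumption: the parsed Prompt/Instructions objects are reachable from the
# guard and expose their raw template text via a `source` attribute.
prompt_text = guard.prompt.source
instructions_text = guard.instructions.source

# The prompt text alone could then be embedded with any vector DB / embedding
# client, while the LLM call still receives both instructions and prompt.
```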

guardrails/prompt/instructions.py (outdated, resolved)
guardrails/rail.py (outdated, resolved)
guardrails/run.py (outdated, resolved)
@Mikkolehtimaki marked this pull request as ready for review on April 14, 2023, 05:29.
@ShreyaR (Collaborator) left a comment

LGTM! Added a small nit; I can also add a commit to get this closed.

guardrails/llm_providers.py (outdated, resolved)
"""Prepare final prompt for nonchat engine."""
if instructions:
prompt = "\n\n".join([instructions, prompt])
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

That seems reasonable.

Re: embedding parts of the prompt -- I think it should be possible to do this by accessing attributes of the Prompt class.
