prompt ergonomics #7799
Conversation
langchain/schema/messages.py
Outdated
@@ -77,6 +77,13 @@ def lc_serializable(self) -> bool:
        """Whether this class is LangChain serializable."""
        return True

    # Cannot type this since it returns something we can't import
could maybe gate on TYPE_CHECKING but maybe not
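As a hedged sketch of the TYPE_CHECKING gate suggested here (the guarded import and names below are illustrative, not the actual module in question): the import runs only for static type checkers, while at runtime the annotation stays an unevaluated string.

```python
# Sketch: gate a hard-to-import return type behind TYPE_CHECKING.
# With `from __future__ import annotations`, annotations are never
# evaluated at runtime, so the guarded import is only seen by type checkers.
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Stand-in for a type we can't (or don't want to) import at runtime.
    from collections import OrderedDict


def make_mapping() -> "OrderedDict":
    # Import locally at call time instead of at module import time.
    import collections

    return collections.OrderedDict(a=1)
```

At runtime `TYPE_CHECKING` is `False`, so the module-level import never executes; mypy and friends still see the correct return type.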
yeah good call
langchain/prompts/chat.py
Outdated
@@ -164,6 +168,18 @@ class ChatPromptTemplate(BaseChatPromptTemplate, ABC):
    input_variables: List[str]
    messages: List[Union[BaseMessagePromptTemplate, BaseMessage]]

    def __or__(self, other: Any) -> ChatPromptTemplate:
feels weird to use or for this; wouldn't add make more sense? like adding strings
yeah that's fair
For piping it seems like it'd make more sense to do it between a prompt and an LLM (basically decompose an llm chain)
parsed_result = my_prompt | HuggingFace() | my_parser
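A minimal, hypothetical sketch of what that pipe-style composition could look like, assuming each stage wraps a plain callable and `a | b` returns a stage that runs `a` then `b` (the `Runnable` name and `run` method here are illustrative, not LangChain's actual API):

```python
from typing import Any, Callable


class Runnable:
    """Hypothetical pipeline stage wrapping a single callable."""

    def __init__(self, fn: Callable[[Any], Any]) -> None:
        self.fn = fn

    def __or__(self, other: "Runnable") -> "Runnable":
        # `self | other` feeds self's output into other.
        return Runnable(lambda x: other.fn(self.fn(x)))

    def run(self, x: Any) -> Any:
        return self.fn(x)


prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
fake_llm = Runnable(str.upper)       # stand-in for an LLM call
parser = Runnable(str.split)         # stand-in for an output parser

pipeline = prompt | fake_llm | parser
result = pipeline.run("cats")
```

The appeal of `|` here is that it reads left to right in execution order, which matches the reviewer's `my_prompt | HuggingFace() | my_parser` intuition.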
langchain/prompts/prompt.py
Outdated
    input_variables = list(
        set(self.input_variables) | set(other.input_variables)
    )
    template = self.template + "\n\n" + other.template
It feels a bit limiting to always insert the double newline, especially if we can do concatenation with strings directly. I think I'd rather be able to do
prompt = (
PromptTemplate.from_template("Tell me a joke about {topic}")
+ "\n\nmake it funny"
+ ", and in {language}"
)
And be explicit rather than have that be magical or restrictive
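A hedged sketch of that explicit `__add__` behavior, under the assumptions the reviewer suggests: strings are appended verbatim with no implicit separator, and any new `{variables}` in the added text are merged into `input_variables` (the `SimplePrompt` class and its regex-based variable extraction are illustrative, not LangChain's implementation):

```python
import re


class SimplePrompt:
    """Illustrative prompt template supporting + with strings and prompts."""

    def __init__(self, template: str) -> None:
        self.template = template
        # Collect {placeholder} names; sorted for a stable order.
        self.input_variables = sorted(set(re.findall(r"{(\w+)}", template)))

    def __add__(self, other: object) -> "SimplePrompt":
        if isinstance(other, str):
            # Append verbatim: the caller supplies any separator explicitly.
            return SimplePrompt(self.template + other)
        if isinstance(other, SimplePrompt):
            return SimplePrompt(self.template + other.template)
        return NotImplemented

    def format(self, **kwargs: str) -> str:
        return self.template.format(**kwargs)


prompt = (
    SimplePrompt("Tell me a joke about {topic}")
    + "\n\nmake it funny"
    + ", and in {language}"
)
```

Because nothing is inserted between operands, the `"\n\n"` in the example is visible in the code rather than hidden inside the operator, which is exactly the explicitness the comment asks for.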