
Generic LLM wrapper to support chat model interface with configurable chat prompt format #8295

Merged
merged 5 commits into langchain-ai:master on Nov 18, 2023

Conversation

krasserm
Contributor

@krasserm krasserm commented Jul 26, 2023

Update 2023-09-08

This PR now supports further models in addition to Llama-2 chat models. See this comment for further details. The title of this PR has been updated accordingly.

Original PR description

This PR adds a generic Llama2Chat model, a wrapper for LLMs able to serve Llama-2 chat models (like LlamaCPP, HuggingFaceTextGenInference, ...). It implements BaseChatModel, converts a list of chat messages into the required Llama-2 chat prompt format and forwards the formatted prompt as str to the wrapped LLM. Usage example:

from langchain.chains import LLMChain
from langchain.llms import HuggingFaceTextGenInference
from langchain.memory import ConversationBufferMemory
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder
from langchain.schema import SystemMessage
from langchain_experimental.chat_models import Llama2Chat  # final import location after the move to experimental (see below)

# uses a locally hosted Llama2 chat model
llm = HuggingFaceTextGenInference(
    inference_server_url="http://127.0.0.1:8080/",
    max_new_tokens=512,
    top_k=50,
    temperature=0.1,
    repetition_penalty=1.03,
)

# Wrap llm to support Llama2 chat prompt format.
# Resulting model is a chat model
model = Llama2Chat(llm=llm)

messages = [
    SystemMessage(content="You are a helpful assistant."),
    MessagesPlaceholder(variable_name="chat_history"),
    HumanMessagePromptTemplate.from_template("{text}"),
]

prompt = ChatPromptTemplate.from_messages(messages)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = LLMChain(llm=model, prompt=prompt, memory=memory)

# use chat model in a conversation
# ...
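A minimal sketch of how the chain might then be invoked (the question text is illustrative, not taken from the PR):

# memory keeps track of previous conversation turns
response = chain.run(text="What can I see in Vienna? Tell me some highlights.")
followup = chain.run(text="Tell me more about one of them.")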

Also part of this PR are tests and a demo notebook.

  • Tag maintainer: @hwchase17
  • Twitter handle: @mrt1nz


dosubot bot added the Ɑ: memory (Related to memory module) and 🤖:enhancement (A large net-new component, integration, or chain) labels Jul 26, 2023
@krasserm
Contributor Author

Not included yet is support for function calls (i.e. FunctionMessage). I plan to add this in another PR.

baskaryan removed the Ɑ: memory (Related to memory module) label Jul 26, 2023
@krasserm
Contributor Author

krasserm commented Jul 27, 2023

@baskaryan @rlancemartin First of all, apologies for the extra round caused by linting errors from my first commit, which are now resolved by my second commit.

This was related to some issues I had running all make targets locally, for which a fix is now available in #8344. I'm still experiencing the issues mentioned in #6182, though, but was able to work around them by using /usr/bin/make directly.

@rlancemartin
Collaborator

> @baskaryan @rlancemartin First of all, apologies for the extra round caused by linting errors from my first commit, which are now resolved by my second commit.
>
> This was related to some issues I had running all make targets locally, for which a fix is now available in #8344. I'm still experiencing the issues mentioned in #6182, though, but was able to work around them by using /usr/bin/make directly.

Thanks for adding!

Re function calling, have you seen this and do you have a PR in flight to support it? Would be great!

SystemMessage,
)

B_INST, E_INST = "[INST]", "[/INST]"
Collaborator


Nice to see that these tokens are now officially recommended, as you point out.

I was wondering whether they were going to get broadly suggested.

Contributor Author


Inspiring to see how far GPT4 gets deriving a llama2 prompt from analyzing generation.py. Looks good except for the wrongly placed first [INST].
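
For reference, the rendered prompt produced here follows the official Llama-2 chat layout (sketched from the Llama2Chat field values shown later in this thread; the {placeholders} are illustrative):

<s>[INST] <<SYS>>
{system prompt}
<</SYS>>

{first user message} [/INST] {first assistant reply} </s><s>[INST] {next user message} [/INST]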

"id": "2ff99380",
"metadata": {},
"source": [
"This is shown with `HuggingFaceTextGenInference` as an example. A `HuggingFaceTextGenInference` LLM encapsulates access to a [text-generation-inference](https://github.com/huggingface/text-generation-inference) server. In the following example, the inference server hosts a [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) model. It was started with \n",
Collaborator


It would be nice to show this running w/ llama2 locally, as well.

If you don't have it locally, I can add.

Contributor Author


Good idea, I'll give this a try today or tomorrow, extend the notebook and update this PR.

@krasserm
Contributor Author

Thanks for reviewing my PR @rlancemartin! I haven't seen the grammar-based sampling PR, thanks for the pointer. I'll start working on it later this week and submit another PR.

@krasserm
Contributor Author

krasserm commented Aug 1, 2023

@rlancemartin I just added two more commits that include an example of how to use the Llama2Chat wrapper with a local LlamaCpp LLM and further tests, including failure cases. I also tried to improve the documentation in the notebook.

@krasserm
Contributor Author

krasserm commented Aug 2, 2023

@rlancemartin @baskaryan is there anything else you think I should add/improve to get this PR merged?

@yaoaifiling

@krasserm The LLM created by HuggingFaceTextGenInference doesn't support async operation, so it fails in async mode. Have you noticed that?

@krasserm
Contributor Author

krasserm commented Aug 4, 2023

@yaoaifiling async support for HuggingFaceTextGenInference was recently added. Are you using an older LangChain version?

@yaoaifiling

> @yaoaifiling async support for HuggingFaceTextGenInference was recently added. Are you using an older LangChain version?

Thanks! Yes, using an old one, need to update my langchain version.

@baskaryan
Collaborator

> @rlancemartin @baskaryan is there anything else you think I should add/improve to get this PR merged?

apologies for the delay, will take another look shortly!

@avoroshilov

Thank you @krasserm for doing this!
By the way, I tried it out and wanted to see how well it extends. By default it doesn't work with e.g. Vicuna, mostly because the prompt templates are different. But based on https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py#L336 I was able to make it work w/ Vicuna with very simple modifications to _to_llama2_chat_prompt - to make the template look like

{SYS} USER: {message[2n]} ASSISTANT: {message[2n+1]} </s>USER: {message[2(n+1)]} ASSISTANT: {message[2(n+1)+1]} </s>

So, this looks promising as a chat wrapper for some major local llama-based models, if the templating mechanism is modified. I also have an AutoGPTQ pipeline added to LangChain locally (will probably file a PR soon), which allows me to run quantized models easily.

How do you think this should go? This generalization is probably out of scope for this PR? If so, we could potentially duplicate Llama2Chat into something like LlamaLocalChat with generalized templating.

@krasserm
Contributor Author

krasserm commented Aug 8, 2023

Good thoughts @avoroshilov, thank you! I just updated this PR to make the chat wrapper generic. The new base class ChatWrapper can now be configured to support custom chat prompt formats, e.g.

class Llama2Chat(ChatWrapper):
    sys_beg: str = "<s>[INST] <<SYS>>\n"
    sys_end: str = "\n<</SYS>>\n\n"
    ai_n_beg: str = " "
    ai_n_end: str = " </s>"
    usr_n_beg: str = "<s>[INST] "
    usr_n_end: str = " [/INST]"
    usr_0_beg: str = ""
    usr_0_end: str = " [/INST]"


class Llama2Instruct(ChatWrapper):
    sys_beg: str = "### System:\n"
    sys_end: str = "\n\n"
    ai_n_beg: str = "### Assistant:\n"
    ai_n_end: str = "\n\n"
    usr_n_beg: str = "### User:\n"
    usr_n_end: str = "\n\n"


class Vicuna(ChatWrapper):
    sys_beg: str = ""
    sys_end: str = " "
    ai_n_beg: str = "ASSISTANT: "
    ai_n_end: str = " </s>"
    usr_n_beg: str = "USER: "
    usr_n_end: str = " "

These classes are part of the PR (as sample implementations). Llama2Chat is a refactoring of the original implementation. Llama2Instruct is a wrapper for llama-2-*-instruct-v2 models (and also several other Llama-2 derivatives like StableBeluga-2) that require a prompt format like

### System:
{System}

### User:
{User}

### Assistant:
{Assistant}

Vicuna is for the prompt format you posted, i.e.

{SYS} USER: {message[2n]} ASSISTANT: {message[2n+1]} </s>USER: {message[2(n+1)]} ASSISTANT: {message[2(n+1)+1]} </s>

I hope this fits your use cases!
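
For illustration, a hypothetical sketch of how these wrappers would be used (llm stands for any LangChain LLM serving the corresponding model family):

# same pattern as Llama2Chat: wrap an LLM that serves the matching model family
vicuna_model = Vicuna(llm=llm)
instruct_model = Llama2Instruct(llm=llm)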

krasserm changed the title "Generic LLM wrapper to support the Llama2 chat prompt format" to "Generic LLM wrapper to support chat model interface with configurable chat prompt format" Aug 8, 2023
@krasserm
Contributor Author

krasserm commented Aug 8, 2023

@baskaryan @rlancemartin it would be great if you could take another look at this PR. Based on @avoroshilov's thoughts I made this a generic LLM wrapper that now supports a lot more models than just Llama-2 chat models. Models not already covered can now be easily supported by extending the ChatWrapper base class. See this comment for details.

@cstub

cstub commented Aug 8, 2023

@krasserm This looks fantastic! I've been looking for an integration of Llama-2 based chat models in LangChain. Would love to see this merged soon! 👍

@shaw91

shaw91 commented Aug 27, 2023

Hi team, any update on this PR? I don't see llama-2-based chat model support in LangChain yet. So being able to merge this would be really awesome.

@leo-gan
Collaborator

leo-gan commented Sep 19, 2023

Hi @krasserm, could you please resolve the merge conflicts and address the last comments (if needed)? After that, ping me and I'll push this PR for review. Thanks!

@krasserm
Contributor Author

krasserm commented Sep 23, 2023

@leo-gan thanks for your support! I just rebased on master and made some changes to the notebook (support for the latest llama-cpp-python version, ...). I also squashed all commits from this PR.

@leo-gan
Collaborator

leo-gan commented Sep 23, 2023

@baskaryan Please review this PR. TNX

@maziyarpanahi

It would be very helpful to the community using foundational LLMs if we could get this PR merged.

@efriis
Member

efriis commented Nov 13, 2023

Hey folks! Appreciate your patience. Given this is effectively a model-specific prompt template, I would prefer to have this as a reference prompt at smith.langchain.com/hub, or as a LangChain template for working with open source models, instead of a "model" as it's implemented here.

If nobody else picks it up, I can spend some time putting together a little template with alternatives showing how to format this, so people can get started with these LLMs more quickly, without merging these prompts into the models codebase.

efriis assigned efriis and unassigned rlancemartin Nov 13, 2023
@maziyarpanahi

> Hey folks! Appreciate your patience. Given this is effectively a model-specific prompt template, I would prefer to have this as a reference prompt at smith.langchain.com/hub, or as a LangChain template for working with open source models, instead of a "model" as it's implemented here.
>
> If nobody else picks it up, I can spend some time putting together a little template with alternatives showing how to format this, so people can get started with these LLMs more quickly, without merging these prompts into the models codebase.

Hi @efriis

I guess this PR could have been changed so that it doesn't target each LLM model individually, but rather any LLM model that requires specific start/end markers for system, user, AI, etc. messages. However, if we are not going to merge this, I am interested in the "LangChain template for working with open source models" solution, which doesn't require downloading something extra from a hub or installing another dependency just to get something like Llama-2 to work.

@krasserm
Contributor Author

I decided on this design because I see start/end tokens in chat prompts as an implementation detail of a model. This is something a user shouldn't need to care about when designing an application-specific chat prompt template like the one used in the initial example, e.g.

messages = [
    SystemMessage(content="You are a helpful assistant."),
    MessagesPlaceholder(variable_name="chat_history"),
    HumanMessagePromptTemplate.from_template("{text}"),
]

# independent of model-specific chat message start/end tokens
prompt_template = ChatPromptTemplate.from_messages(messages)

Start/end tokens for messages of a chat model are an implementation detail in the same way as a tokenizer is a model-specific implementation detail. I think that's the reason why Hugging Face decided to link chat templates to tokenizers. Their chat templates too render model-independent chat messages into prompts with model-specific start/end tokens (like the wrappers of this PR) and are reusable across tokenizers.
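
As a point of comparison, a minimal sketch of the Hugging Face chat template mechanism referenced here (assumes a recent transformers version with chat template support; the model name is only an example):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
chat = [{"role": "user", "content": "What can I see in Vienna?"}]
# renders model-independent messages into the model-specific prompt string
prompt = tokenizer.apply_chat_template(chat, tokenize=False)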

You can also see the wrapper(s) from this PR as a layer in a chat protocol, translating messages of a conversation into a lower-level, model-specific prompt format, freeing the user from dealing with these details. And these wrappers (or protocol layers) are reusable too. For example, you can reuse the Llama2Chat wrapper with all LLMs that require the Llama-2 chat prompt (and many of them do, not only vanilla Llama-2 chat models).

Even if you do not want concrete subclasses like Llama2Chat to be part of this PR, I think the ChatWrapper base class is still a useful addition to the models codebase as it provides the generic functionality for implementing a uniform chat model interface for many LangChain LLMs. For example, many models require the ChatML prompt format, which can be implemented with ChatWrapper easily like:

class ChatML(ChatWrapper):
    sys_beg: str = "<|im_start|>system\n"
    sys_end: str = "\n<|im_end|>"
    ai_n_beg: str = "<|im_start|>assistant\n"
    ai_n_end: str = "\n<|im_end|>"
    usr_n_beg: str = "<|im_start|>user\n"
    usr_n_end: str = "\n<|im_end|>"

Their wide applicability is one of the main reasons why I thought concrete ChatWrapper implementations should be part of this PR.

@efriis
Member

efriis commented Nov 16, 2023

@krasserm thoughts on merging this into experimental initially? Code can stay almost the same - all the dependencies would still come from langchain, but the new stuff would come from langchain_experimental while we ask people to try it out and let us know their thoughts, and how it plays with the different models.

@krasserm
Contributor Author

@efriis sounds good to me. I'll update the PR later this week.

@efriis
Member

efriis commented Nov 16, 2023

Sounds good. Thanks!

Another (better imo) implementation is as some kind of function that formats ChatPrompts into the format expected by these LLMs, and an output parser that parses the output into a message (or messages).

I would imagine a runnable like

chain = ChatPromptTemplate.from_messages([...]) | new_prompt_formatter | HuggingFaceTextGenInference(...) | new_llama_output_parser
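
A minimal sketch of what such a composition might look like (format_llama2_prompt is a hypothetical stand-in for the new_prompt_formatter above; the formatting logic is illustrative only, not the PR's implementation):

from langchain.llms import HuggingFaceTextGenInference
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableLambda

def format_llama2_prompt(prompt_value):
    # Render the ChatPromptValue messages into a single Llama-2 style prompt string.
    parts = []
    for m in prompt_value.to_messages():
        if m.type == "system":
            parts.append(f"<s>[INST] <<SYS>>\n{m.content}\n<</SYS>>\n\n")
        elif m.type == "human":
            parts.append(f"{m.content} [/INST]")
        elif m.type == "ai":
            parts.append(f" {m.content} </s><s>[INST] ")
    return "".join(parts)

chain = (
    ChatPromptTemplate.from_messages(
        [("system", "You are a helpful assistant."), ("human", "{text}")]
    )
    | RunnableLambda(format_llama2_prompt)
    | HuggingFaceTextGenInference(inference_server_url="http://127.0.0.1:8080/")
    | StrOutputParser()
)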

@maziyarpanahi

> Sounds good. Thanks!
>
> Another (better imo) implementation is as some kind of function that formats ChatPrompts into the format expected by these LLMs, and an output parser that parses the output into a message (or messages).
>
> I would imagine a runnable like
>
> chain = ChatPromptTemplate.from_messages([...]) | new_prompt_formatter | HuggingFaceTextGenInference(...) | new_llama_output_parser

It would be great if that ChatPromptTemplate could have a format that can be applied to every chain, agent, etc. that uses any ChatPrompt messages. This does override the ChatPromptValue for everything else (#9917).

@krasserm
Contributor Author

> Sounds good. Thanks!
>
> Another (better imo) implementation is as some kind of function that formats ChatPrompts into the format expected by these LLMs, and an output parser that parses the output into a message (or messages).
>
> I would imagine a runnable like
>
> chain = ChatPromptTemplate.from_messages([...]) | new_prompt_formatter | HuggingFaceTextGenInference(...) | new_llama_output_parser

So you're basically creating a chain that allows using an LLM as a chat model without an implementation of BaseChatModel. What is the purpose of the BaseChatModel abstraction then?

From my perspective, BaseChatModel implementations should also be responsible for hiding details like serializing chat messages into low-level prompt strings. Users of BaseChatModel implementations don't (and shouldn't) need to care about these details.

For example, when using ChatOpenAI, details like separating chat messages within a lower-level prompt string are also hidden from the user (it is even hidden behind the OpenAI API but that's an implementation detail too).

So why shouldn't LangChain try to follow these design principles of chat model interfaces consistently, i.e. encapsulate details like chat message separators, tokenization, etc. behind chat model interfaces instead of sometimes exposing them to users (like in your proposal) and sometimes not?

Having a common chat model interface with clear semantics (which includes dealing with low-level chat message separators) would simplify application development with LangChain a lot.

@krasserm
Contributor Author

krasserm commented Nov 17, 2023

I just moved everything from langchain to experimental and updated the demo notebook accordingly. I also added a pytest-asyncio dependency to pyproject.toml needed for executing async tests. An updated poetry.lock is included. Commits are rebased on master.

@efriis
Member

efriis commented Nov 17, 2023

Appreciate it!

Re: the previous point about composing the LLM rather than wrapping it in the BaseChatModel - I think this is a discussion of simplicity vs ease of use.

ChatOpenAI makes sense as a purpose-built chat interface because it consistently returns the correct format. With many of these self-hosted LLMs, getting that chat model kind of consistency is difficult, so having simpler abstractions that have to be composed together makes more sense to me.

That being said, these kinds of experiments are what experimental is for! Thanks for moving it.

efriis added the lgtm label (PR looks good. Use to confirm that a PR is ready for merging.) Nov 17, 2023
efriis merged commit 79ed66f into langchain-ai:master Nov 18, 2023
24 checks passed
@efriis
Member

efriis commented Nov 18, 2023

Landed! Thanks @krasserm and appreciate your patience on this one

@krasserm
Contributor Author

Thanks @efriis for discussing and supporting this PR

nicolewhite pushed a commit to autoblocksai/autoblocks-examples that referenced this pull request Nov 20, 2023
[![Mend Renovate logo
banner](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Change | Age | Adoption | Passing | Confidence |
|---|---|---|---|---|---|
| [@autoblocks/client](https://togithub.com/autoblocksai/javascript-sdk)
| [`^0.0.17` ->
`^0.0.20`](https://renovatebot.com/diffs/npm/@autoblocks%2fclient/0.0.17/0.0.20)
|
[![age](https://developer.mend.io/api/mc/badges/age/npm/@autoblocks%2fclient/0.0.20?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/@autoblocks%2fclient/0.0.20?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/@autoblocks%2fclient/0.0.17/0.0.20?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/@autoblocks%2fclient/0.0.17/0.0.20?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
|
[@types/node](https://togithub.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/node)
([source](https://togithub.com/DefinitelyTyped/DefinitelyTyped)) |
[`20.9.0` ->
`20.9.2`](https://renovatebot.com/diffs/npm/@types%2fnode/20.9.0/20.9.2)
|
[![age](https://developer.mend.io/api/mc/badges/age/npm/@types%2fnode/20.9.2?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/@types%2fnode/20.9.2?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/@types%2fnode/20.9.0/20.9.2?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/@types%2fnode/20.9.0/20.9.2?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
| [ai](https://sdk.vercel.ai/docs)
([source](https://togithub.com/vercel/ai)) | [`2.2.22` ->
`2.2.24`](https://renovatebot.com/diffs/npm/ai/2.2.22/2.2.24) |
[![age](https://developer.mend.io/api/mc/badges/age/npm/ai/2.2.24?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/ai/2.2.24?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/ai/2.2.22/2.2.24?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/ai/2.2.22/2.2.24?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
| [eslint](https://eslint.org)
([source](https://togithub.com/eslint/eslint)) | [`8.53.0` ->
`8.54.0`](https://renovatebot.com/diffs/npm/eslint/8.53.0/8.54.0) |
[![age](https://developer.mend.io/api/mc/badges/age/npm/eslint/8.54.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/eslint/8.54.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/eslint/8.53.0/8.54.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/eslint/8.53.0/8.54.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
|
[eslint-config-next](https://nextjs.org/docs/app/building-your-application/configuring/eslint#eslint-config)
([source](https://togithub.com/vercel/next.js)) | [`14.0.2` ->
`14.0.3`](https://renovatebot.com/diffs/npm/eslint-config-next/14.0.2/14.0.3)
|
[![age](https://developer.mend.io/api/mc/badges/age/npm/eslint-config-next/14.0.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/eslint-config-next/14.0.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/eslint-config-next/14.0.2/14.0.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/eslint-config-next/14.0.2/14.0.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
| [langchain](https://togithub.com/langchain-ai/langchain) | `^0.0.335`
-> `^0.0.338` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/langchain/0.0.338?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![adoption](https://developer.mend.io/api/mc/badges/adoption/pypi/langchain/0.0.338?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![passing](https://developer.mend.io/api/mc/badges/compatibility/pypi/langchain/0.0.335/0.0.338?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/langchain/0.0.335/0.0.338?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
| [langchain](https://togithub.com/langchain-ai/langchainjs) |
[`^0.0.186` ->
`^0.0.193`](https://renovatebot.com/diffs/npm/langchain/0.0.186/0.0.193)
|
[![age](https://developer.mend.io/api/mc/badges/age/npm/langchain/0.0.193?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/langchain/0.0.193?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/langchain/0.0.186/0.0.193?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/langchain/0.0.186/0.0.193?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
| [next](https://nextjs.org)
([source](https://togithub.com/vercel/next.js)) | [`14.0.2` ->
`14.0.3`](https://renovatebot.com/diffs/npm/next/14.0.2/14.0.3) |
[![age](https://developer.mend.io/api/mc/badges/age/npm/next/14.0.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/next/14.0.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/next/14.0.2/14.0.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/next/14.0.2/14.0.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
| [openai](https://togithub.com/openai/openai-python) | `1.2.3` ->
`1.3.3` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/openai/1.3.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![adoption](https://developer.mend.io/api/mc/badges/adoption/pypi/openai/1.3.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![passing](https://developer.mend.io/api/mc/badges/compatibility/pypi/openai/1.2.3/1.3.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/openai/1.2.3/1.3.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
| [openai](https://togithub.com/openai/openai-python) | `0.28.1` ->
`1.3.3` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/openai/1.3.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![adoption](https://developer.mend.io/api/mc/badges/adoption/pypi/openai/1.3.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![passing](https://developer.mend.io/api/mc/badges/compatibility/pypi/openai/0.28.1/1.3.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/openai/0.28.1/1.3.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
| [openai](https://togithub.com/openai/openai-node) | [`4.17.4` ->
`4.19.0`](https://renovatebot.com/diffs/npm/openai/4.17.4/4.19.0) |
[![age](https://developer.mend.io/api/mc/badges/age/npm/openai/4.19.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/openai/4.19.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/openai/4.17.4/4.19.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/openai/4.17.4/4.19.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>autoblocksai/javascript-sdk
(@&#8203;autoblocks/client)</summary>

###
[`v0.0.20`](https://togithub.com/autoblocksai/javascript-sdk/compare/0.0.19...0.0.20)

[Compare
Source](https://togithub.com/autoblocksai/javascript-sdk/compare/0.0.19...0.0.20)

###
[`v0.0.19`](https://togithub.com/autoblocksai/javascript-sdk/compare/0.0.18...0.0.19)

[Compare
Source](https://togithub.com/autoblocksai/javascript-sdk/compare/0.0.18...0.0.19)

###
[`v0.0.18`](https://togithub.com/autoblocksai/javascript-sdk/compare/0.0.17...0.0.18)

[Compare
Source](https://togithub.com/autoblocksai/javascript-sdk/compare/0.0.17...0.0.18)

</details>

<details>
<summary>vercel/ai (ai)</summary>

### [`v2.2.24`](https://togithub.com/vercel/ai/releases/tag/ai%402.2.24)

[Compare
Source](https://togithub.com/vercel/ai/compare/ai@2.2.23...ai@2.2.24)

##### Patch Changes

- [`69ca8f5`](https://togithub.com/vercel/ai/commit/69ca8f5): ai/react:
add experimental_useAssistant hook and experimental_AssistantResponse
- [`3e2299e`](https://togithub.com/vercel/ai/commit/3e2299e):
experimental_StreamData/StreamingReactResponse: optimize parsing,
improve types
- [`70bd2ac`](https://togithub.com/vercel/ai/commit/70bd2ac): ai/solid:
add experimental_StreamData support to useChat

Proper documentation for the new features will be ready in the near
future, but in the meantime you can refer to [this
document](https://togithub.com/vercel/ai/blob/main/examples/next-openai/app/api/assistant/assistant-setup.md)
and the accompanying
[example](https://togithub.com/vercel/ai/blob/main/examples/next-openai/app/api/assistant/route.ts)
for the Assistants API, and [this
example](https://togithub.com/vercel/ai/blob/fbda5b20afe33f1e9b73644a1052954b2d2e7602/examples/next-openai/app/api/chat-with-vision)
for working with the new `data` API for vision.

Thanks [@&#8203;lgrammel](https://togithub.com/lgrammel) for the great
work in this release!

### [`v2.2.23`](https://togithub.com/vercel/ai/releases/tag/ai%402.2.23)

[Compare
Source](https://togithub.com/vercel/ai/compare/ai@2.2.22...ai@2.2.23)

##### Patch Changes

- [`5a04321`](https://togithub.com/vercel/ai/commit/5a04321): add
StreamData support to StreamingReactResponse, add client-side data API
to react/use-chat

</details>

<details>
<summary>eslint/eslint (eslint)</summary>

### [`v8.54.0`](https://togithub.com/eslint/eslint/releases/tag/v8.54.0)

[Compare
Source](https://togithub.com/eslint/eslint/compare/v8.53.0...v8.54.0)

#### Features

-
[`a7a883b`](https://togithub.com/eslint/eslint/commit/a7a883bd6ba4f140b60cbbb2be5b53d750f6c8db)
feat: for-direction rule add check for condition in reverse order
([#&#8203;17755](https://togithub.com/eslint/eslint/issues/17755))
(Angelo Annunziata)
-
[`1452dc9`](https://togithub.com/eslint/eslint/commit/1452dc9f12c45c05d7c569f737221f0d988ecef1)
feat: Add suggestions to no-console
([#&#8203;17680](https://togithub.com/eslint/eslint/issues/17680)) (Joel
Mathew Koshy)
-
[`21ebf8a`](https://togithub.com/eslint/eslint/commit/21ebf8a811be9f4b009cf70a10be5062d4fdc736)
feat: update `no-array-constructor` rule
([#&#8203;17711](https://togithub.com/eslint/eslint/issues/17711))
(Francesco Trotta)

#### Bug Fixes

-
[`98926e6`](https://togithub.com/eslint/eslint/commit/98926e6e7323e5dd12a9f016cb558144296665af)
fix: Ensure that extra data is not accidentally stored in the cache file
([#&#8203;17760](https://togithub.com/eslint/eslint/issues/17760))
(Milos Djermanovic)
-
[`e8cf9f6`](https://togithub.com/eslint/eslint/commit/e8cf9f6a524332293f8b2c90a2db4a532e47d919)
fix: Make dark scroll bar in dark theme
([#&#8203;17753](https://togithub.com/eslint/eslint/issues/17753))
(Pavel)
-
[`3cbeaad`](https://togithub.com/eslint/eslint/commit/3cbeaad7b943c153937ce34365cec2c406f2b98b)
fix: Use `cwd` constructor option as config `basePath` in Linter
([#&#8203;17705](https://togithub.com/eslint/eslint/issues/17705))
(Milos Djermanovic)

#### Documentation

-
[`becfdd3`](https://togithub.com/eslint/eslint/commit/becfdd39b25d795e56c9a13eb3e77af6b9c86e8a)
docs: Make clear when rules are removed
([#&#8203;17728](https://togithub.com/eslint/eslint/issues/17728))
(Nicholas C. Zakas)
-
[`05d6e99`](https://togithub.com/eslint/eslint/commit/05d6e99153ed6d94eb30f46c57609371918a41f3)
docs: update "Submit a Pull Request" page
([#&#8203;17712](https://togithub.com/eslint/eslint/issues/17712))
(Francesco Trotta)
-
[`eb2279e`](https://togithub.com/eslint/eslint/commit/eb2279e5148cee8fdea7dae614f4f8af7a2d06c3)
docs: display info about deprecated rules
([#&#8203;17749](https://togithub.com/eslint/eslint/issues/17749))
(Percy Ma)
-
[`d245326`](https://togithub.com/eslint/eslint/commit/d24532601e64714ac5d08507e05aa5c14ecd1d5a)
docs: Correct working in migrating plugin docs
([#&#8203;17722](https://togithub.com/eslint/eslint/issues/17722))
(Filip Tammergård)

#### Chores

-
[`d644de9`](https://togithub.com/eslint/eslint/commit/d644de9a4b593b565617303a095bc9aa69e7b768)
chore: upgrade
[@&#8203;eslint/js](https://togithub.com/eslint/js)[@&#8203;8](https://togithub.com/8).54.0
([#&#8203;17773](https://togithub.com/eslint/eslint/issues/17773))
(Milos Djermanovic)
-
[`1e6e314`](https://togithub.com/eslint/eslint/commit/1e6e31415cc429a3a9fc64b2ec03df0e0ec0c91b)
chore: package.json update for
[@&#8203;eslint/js](https://togithub.com/eslint/js) release (Jenkins)
-
[`6fb8805`](https://togithub.com/eslint/eslint/commit/6fb8805310afe7476d6c404f172177a6d15fcf11)
chore: Fixed grammar in issue_templates/rule_change
([#&#8203;17770](https://togithub.com/eslint/eslint/issues/17770)) (Joel
Mathew Koshy)
-
[`85db724`](https://togithub.com/eslint/eslint/commit/85db7243ddb8706ed60ab64a7ddf604d0d7de493)
chore: upgrade `markdownlint` to 0.31.1
([#&#8203;17754](https://togithub.com/eslint/eslint/issues/17754))
(Nitin Kumar)
-
[`6d470d2`](https://togithub.com/eslint/eslint/commit/6d470d2e74535761bd56dcb1c021b463ef9e8a9c)
chore: update dependency recast to ^0.23.0
([#&#8203;17736](https://togithub.com/eslint/eslint/issues/17736))
(renovate\[bot])
-
[`b7121b5`](https://togithub.com/eslint/eslint/commit/b7121b590d578c9c9b38ee481313317f30e54817)
chore: update dependency markdownlint-cli to ^0.37.0
([#&#8203;17735](https://togithub.com/eslint/eslint/issues/17735))
(renovate\[bot])
-
[`633b9a1`](https://togithub.com/eslint/eslint/commit/633b9a19752b6a22ab4d6c824f27a75ac0e4151b)
chore: update dependency regenerator-runtime to ^0.14.0
([#&#8203;17739](https://togithub.com/eslint/eslint/issues/17739))
(renovate\[bot])
-
[`acac16f`](https://togithub.com/eslint/eslint/commit/acac16fdf8540f7ba86cf637e3c1b253bd35a268)
chore: update dependency vite-plugin-commonjs to ^0.10.0
([#&#8203;17740](https://togithub.com/eslint/eslint/issues/17740))
(renovate\[bot])
-
[`ba8ca7e`](https://togithub.com/eslint/eslint/commit/ba8ca7e3debcba68ee7015b9221cf5acd7870206)
chore: add .github/renovate.json5
([#&#8203;17567](https://togithub.com/eslint/eslint/issues/17567)) (Josh
Goldberg ✨)

</details>

<details>
<summary>vercel/next.js (eslint-config-next)</summary>

###
[`v14.0.3`](https://togithub.com/vercel/next.js/compare/v14.0.2...v14.0.3)

[Compare
Source](https://togithub.com/vercel/next.js/compare/v14.0.2...v14.0.3)

</details>

<details>
<summary>langchain-ai/langchain (langchain)</summary>

###
[`v0.0.338`](https://togithub.com/langchain-ai/langchain/releases/tag/v0.0.338)

[Compare
Source](https://togithub.com/langchain-ai/langchain/compare/v0.0.337...v0.0.338)

#### What's Changed

- Override Keys Option by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchain/pull/13537](https://togithub.com/langchain-ai/langchain/pull/13537)
- Neptune graph updates by [@&#8203;3coins](https://togithub.com/3coins)
in
[https://github.com/langchain-ai/langchain/pull/13491](https://togithub.com/langchain-ai/langchain/pull/13491)
- WebResearchRetriever error handling in urls with connection error by
[@&#8203;pedro-inf-custodio](https://togithub.com/pedro-inf-custodio) in
[https://github.com/langchain-ai/langchain/pull/13401](https://togithub.com/langchain-ai/langchain/pull/13401)
- Add execution time by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchain/pull/13542](https://togithub.com/langchain-ai/langchain/pull/13542)
- Generic LLM wrapper to support chat model interface with configurable
chat prompt format by [@&#8203;krasserm](https://togithub.com/krasserm)
in
[https://github.com/langchain-ai/langchain/pull/8295](https://togithub.com/langchain-ai/langchain/pull/8295)
- Use random seed by [@&#8203;hinthornw](https://togithub.com/hinthornw)
in
[https://github.com/langchain-ai/langchain/pull/13544](https://togithub.com/langchain-ai/langchain/pull/13544)
- Fix typo/line break in the middle of a word by
[@&#8203;marks](https://togithub.com/marks) in
[https://github.com/langchain-ai/langchain/pull/13314](https://togithub.com/langchain-ai/langchain/pull/13314)
- Adds support for new OctoAI endpoints by
[@&#8203;AI-Bassem](https://togithub.com/AI-Bassem) in
[https://github.com/langchain-ai/langchain/pull/13521](https://togithub.com/langchain-ai/langchain/pull/13521)
- fixed `openai_assistant` namespace by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13543](https://togithub.com/langchain-ai/langchain/pull/13543)
- move streaming stdout by
[@&#8203;hwchase17](https://togithub.com/hwchase17) in
[https://github.com/langchain-ai/langchain/pull/13559](https://togithub.com/langchain-ai/langchain/pull/13559)
- update multi index templates by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13569](https://togithub.com/langchain-ai/langchain/pull/13569)
- bump 338, exp 42 by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13564](https://togithub.com/langchain-ai/langchain/pull/13564)

#### New Contributors

- [@&#8203;pedro-inf-custodio](https://togithub.com/pedro-inf-custodio)
made their first contribution in
[https://github.com/langchain-ai/langchain/pull/13401](https://togithub.com/langchain-ai/langchain/pull/13401)
- [@&#8203;marks](https://togithub.com/marks) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/13314](https://togithub.com/langchain-ai/langchain/pull/13314)

**Full Changelog**:
https://github.com/langchain-ai/langchain/compare/v0.0.337...v0.0.338

###
[`v0.0.337`](https://togithub.com/langchain-ai/langchain/releases/tag/v0.0.337)

[Compare
Source](https://togithub.com/langchain-ai/langchain/compare/v0.0.336...v0.0.337)

#### What's Changed

- Make pirate-speak-configurable template not require env vars for alte…
by [@&#8203;nfcampos](https://togithub.com/nfcampos) in
[https://github.com/langchain-ai/langchain/pull/13395](https://togithub.com/langchain-ai/langchain/pull/13395)
- Fix a link in docs by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchain/pull/13423](https://togithub.com/langchain-ai/langchain/pull/13423)
- updated `clickup` example by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13424](https://togithub.com/langchain-ai/langchain/pull/13424)
- DOCS: rag nit by [@&#8203;baskaryan](https://togithub.com/baskaryan)
in
[https://github.com/langchain-ai/langchain/pull/13436](https://togithub.com/langchain-ai/langchain/pull/13436)
- callback refactor by
[@&#8203;hwchase17](https://togithub.com/hwchase17) in
[https://github.com/langchain-ai/langchain/pull/13372](https://togithub.com/langchain-ai/langchain/pull/13372)
- updated `Activeloop DeepMemory` notebook by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13428](https://togithub.com/langchain-ai/langchain/pull/13428)
- updated `semadb` example by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13431](https://togithub.com/langchain-ai/langchain/pull/13431)
- Bagatur/chain of note by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13470](https://togithub.com/langchain-ai/langchain/pull/13470)
- Update multi-modal RAG cookbook by
[@&#8203;rlancemartin](https://togithub.com/rlancemartin) in
[https://github.com/langchain-ai/langchain/pull/13429](https://togithub.com/langchain-ai/langchain/pull/13429)
- Update chain of note README.md by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13473](https://togithub.com/langchain-ai/langchain/pull/13473)
- docs: `integrations/text_embeddings/` cleanup by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13476](https://togithub.com/langchain-ai/langchain/pull/13476)
- fix for `integratons/document_loaders` sidebar by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13471](https://togithub.com/langchain-ai/langchain/pull/13471)
- Add ahandle_event to *all* by
[@&#8203;eyurtsev](https://togithub.com/eyurtsev) in
[https://github.com/langchain-ai/langchain/pull/13469](https://togithub.com/langchain-ai/langchain/pull/13469)
- Astra DB: minor improvements to docstrings and demo notebook by
[@&#8203;hemidactylus](https://togithub.com/hemidactylus) in
[https://github.com/langchain-ai/langchain/pull/13449](https://togithub.com/langchain-ai/langchain/pull/13449)
- Use List instead of list by
[@&#8203;ifduyue](https://togithub.com/ifduyue) in
[https://github.com/langchain-ai/langchain/pull/13443](https://togithub.com/langchain-ai/langchain/pull/13443)
- updated `memory` Titles by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13435](https://togithub.com/langchain-ai/langchain/pull/13435)
- BUG Fix app_name in cli app new by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/13482](https://togithub.com/langchain-ai/langchain/pull/13482)
- Lock pydantic v1 in app template, cli 0.0.18 by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/13485](https://togithub.com/langchain-ai/langchain/pull/13485)
- Add serialisation arguments to Bedrock and ChatBedrock by
[@&#8203;dqbd](https://togithub.com/dqbd) in
[https://github.com/langchain-ai/langchain/pull/13465](https://togithub.com/langchain-ai/langchain/pull/13465)
- add input_type to VoyageEmbeddings by
[@&#8203;thomas0809](https://togithub.com/thomas0809) in
[https://github.com/langchain-ai/langchain/pull/13488](https://togithub.com/langchain-ai/langchain/pull/13488)
- Bugfix: OpenAIFunctionsAgentOutputParser doesn't handle functions with
no args by [@&#8203;chrisaffirm](https://togithub.com/chrisaffirm) in
[https://github.com/langchain-ai/langchain/pull/13467](https://togithub.com/langchain-ai/langchain/pull/13467)
- Allow openai v1 in all templates that require it by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/13489](https://togithub.com/langchain-ai/langchain/pull/13489)
- updated `async-faiss` example by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13434](https://togithub.com/langchain-ai/langchain/pull/13434)
- docs `integrations/vectorstores/` cleanup by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13487](https://togithub.com/langchain-ai/langchain/pull/13487)
- Add optional arguments to FalkorDBGraph constructor by
[@&#8203;gkorland](https://togithub.com/gkorland) in
[https://github.com/langchain-ai/langchain/pull/13459](https://togithub.com/langchain-ai/langchain/pull/13459)
- updated `data_connection` index page by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13426](https://togithub.com/langchain-ai/langchain/pull/13426)
- Add Wrapping Library Metadata to MongoDB vector store by
[@&#8203;NoahStapp](https://togithub.com/NoahStapp) in
[https://github.com/langchain-ai/langchain/pull/13084](https://togithub.com/langchain-ai/langchain/pull/13084)
- \[LLMonitorCallbackHandler] Various improvements by
[@&#8203;hughcrt](https://togithub.com/hughcrt) in
[https://github.com/langchain-ai/langchain/pull/13151](https://togithub.com/langchain-ai/langchain/pull/13151)
- TEMPLATES: Add multi-index templates by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13490](https://togithub.com/langchain-ai/langchain/pull/13490)
- IMPROVEMENT: update assistants output and doc by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13480](https://togithub.com/langchain-ai/langchain/pull/13480)
- Runnable with message history by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13418](https://togithub.com/langchain-ai/langchain/pull/13418)
- Add VertexAI Chuck Norris template by
[@&#8203;wietsevenema](https://togithub.com/wietsevenema) in
[https://github.com/langchain-ai/langchain/pull/13531](https://togithub.com/langchain-ai/langchain/pull/13531)
- bump 337 by [@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13534](https://togithub.com/langchain-ai/langchain/pull/13534)

#### New Contributors

- [@&#8203;ifduyue](https://togithub.com/ifduyue) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/13443](https://togithub.com/langchain-ai/langchain/pull/13443)
- [@&#8203;chrisaffirm](https://togithub.com/chrisaffirm) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/13467](https://togithub.com/langchain-ai/langchain/pull/13467)
- [@&#8203;wietsevenema](https://togithub.com/wietsevenema) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/13531](https://togithub.com/langchain-ai/langchain/pull/13531)

**Full Changelog**:
https://github.com/langchain-ai/langchain/compare/v0.0.336...v0.0.337

###
[`v0.0.336`](https://togithub.com/langchain-ai/langchain/releases/tag/v0.0.336)

[Compare
Source](https://togithub.com/langchain-ai/langchain/compare/v0.0.335...v0.0.336)

#### What's Changed

- Add new models to openai callback by
[@&#8203;IsakNyberg](https://togithub.com/IsakNyberg) in
[https://github.com/langchain-ai/langchain/pull/13244](https://togithub.com/langchain-ai/langchain/pull/13244)
- Update README.md by
[@&#8203;levalencia](https://togithub.com/levalencia) in
[https://github.com/langchain-ai/langchain/pull/8570](https://togithub.com/langchain-ai/langchain/pull/8570)
- Free knowledge base pod information update by
[@&#8203;mpskex](https://togithub.com/mpskex) in
[https://github.com/langchain-ai/langchain/pull/12813](https://togithub.com/langchain-ai/langchain/pull/12813)
- Update ollama.py by
[@&#8203;glad4enkonm](https://togithub.com/glad4enkonm) in
[https://github.com/langchain-ai/langchain/pull/12895](https://togithub.com/langchain-ai/langchain/pull/12895)
- Improve CSV reader which can't call .strip() on NoneType by
[@&#8203;dennisdegreef](https://togithub.com/dennisdegreef) in
[https://github.com/langchain-ai/langchain/pull/13079](https://togithub.com/langchain-ai/langchain/pull/13079)
- Typo fix to quickstart.mdx by
[@&#8203;marioangst](https://togithub.com/marioangst) in
[https://github.com/langchain-ai/langchain/pull/13178](https://togithub.com/langchain-ai/langchain/pull/13178)
- dalle add model parameter by
[@&#8203;AzeWZ](https://togithub.com/AzeWZ) in
[https://github.com/langchain-ai/langchain/pull/13201](https://togithub.com/langchain-ai/langchain/pull/13201)
- Remove `_get_kwarg_value` function by
[@&#8203;Guillem96](https://togithub.com/Guillem96) in
[https://github.com/langchain-ai/langchain/pull/13184](https://togithub.com/langchain-ai/langchain/pull/13184)
- Update README.md - Added notebook for extraction_openai_tools by
[@&#8203;shauryr](https://togithub.com/shauryr) in
[https://github.com/langchain-ai/langchain/pull/13205](https://togithub.com/langchain-ai/langchain/pull/13205)
- Add dockerfile template by
[@&#8203;langchain-infra](https://togithub.com/langchain-infra) in
[https://github.com/langchain-ai/langchain/pull/13240](https://togithub.com/langchain-ai/langchain/pull/13240)
- added system prompt and template fields to ollama by
[@&#8203;Govind-S-B](https://togithub.com/Govind-S-B) in
[https://github.com/langchain-ai/langchain/pull/13022](https://togithub.com/langchain-ai/langchain/pull/13022)
- Add rag google vertex ai search template by
[@&#8203;juan-calvo-datatonic](https://togithub.com/juan-calvo-datatonic)
in
[https://github.com/langchain-ai/langchain/pull/13294](https://togithub.com/langchain-ai/langchain/pull/13294)
- Add OpenAI API v1 support for ChatAnyscale and fixed a bug with
openai_api_key by [@&#8203;kylehh](https://togithub.com/kylehh) in
[https://github.com/langchain-ai/langchain/pull/13237](https://togithub.com/langchain-ai/langchain/pull/13237)
- Fix typo in timescalevector.ipynb by
[@&#8203;eltociear](https://togithub.com/eltociear) in
[https://github.com/langchain-ai/langchain/pull/13239](https://togithub.com/langchain-ai/langchain/pull/13239)
- docs: align custom_tool document headers by
[@&#8203;edwardzjl](https://togithub.com/edwardzjl) in
[https://github.com/langchain-ai/langchain/pull/13252](https://togithub.com/langchain-ai/langchain/pull/13252)
- chore: bump momento dependency version and refactor search hit usage
by [@&#8203;malandis](https://togithub.com/malandis) in
[https://github.com/langchain-ai/langchain/pull/13111](https://togithub.com/langchain-ai/langchain/pull/13111)
- Add MyScaleWithoutJSON which allows user to wrap columns into
Document's Metadata by [@&#8203;mpskex](https://togithub.com/mpskex) in
[https://github.com/langchain-ai/langchain/pull/13164](https://togithub.com/langchain-ai/langchain/pull/13164)
- Ollama pass kwargs as options instead of top by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/13280](https://togithub.com/langchain-ai/langchain/pull/13280)
- Use endpoint_url if provided with boto3 session for dynamodb by
[@&#8203;chevalmuscle](https://togithub.com/chevalmuscle) in
[https://github.com/langchain-ai/langchain/pull/11622](https://togithub.com/langchain-ai/langchain/pull/11622)
- Add missing filter to max_marginal_relevance_search inner call to
max_marginal_relevance_search_by_vector by
[@&#8203;Frank995](https://togithub.com/Frank995) in
[https://github.com/langchain-ai/langchain/pull/13260](https://togithub.com/langchain-ai/langchain/pull/13260)
- FIX: 'from_texts' method in Weaviate with non-existent kwargs param by
[@&#8203;takatost](https://togithub.com/takatost) in
[https://github.com/langchain-ai/langchain/pull/11604](https://togithub.com/langchain-ai/langchain/pull/11604)
- Refine Weaviate docs and add RAG example by
[@&#8203;iamleonie](https://togithub.com/iamleonie) in
[https://github.com/langchain-ai/langchain/pull/13057](https://togithub.com/langchain-ai/langchain/pull/13057)
- Update error message in evaluation runner by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchain/pull/13296](https://togithub.com/langchain-ai/langchain/pull/13296)
- Fix serialization issue in Matching Engine Vector Store by
[@&#8203;konstantin-spiess](https://togithub.com/konstantin-spiess) in
[https://github.com/langchain-ai/langchain/pull/13266](https://togithub.com/langchain-ai/langchain/pull/13266)
- Self-query template by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/12694](https://togithub.com/langchain-ai/langchain/pull/12694)
- Fix Pinecone cosine relevance score by
[@&#8203;ruiramos](https://togithub.com/ruiramos) in
[https://github.com/langchain-ai/langchain/pull/8920](https://togithub.com/langchain-ai/langchain/pull/8920)
- add: license file to subproject by
[@&#8203;YYYasin19](https://togithub.com/YYYasin19) in
[https://github.com/langchain-ai/langchain/pull/8403](https://togithub.com/langchain-ai/langchain/pull/8403)
- IMPROVEMENT self-query template by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/13305](https://togithub.com/langchain-ai/langchain/pull/13305)
- Cookbook for multi-modal RAG eval by
[@&#8203;rlancemartin](https://togithub.com/rlancemartin) in
[https://github.com/langchain-ai/langchain/pull/13272](https://togithub.com/langchain-ai/langchain/pull/13272)
- Increase flexibility of ElasticVectorSearch by
[@&#8203;mertkayhan](https://togithub.com/mertkayhan) in
[https://github.com/langchain-ai/langchain/pull/6863](https://togithub.com/langchain-ai/langchain/pull/6863)
- add cookbook for RAG with baidu QIANFAN and elasticsearch by
[@&#8203;wemysschen](https://togithub.com/wemysschen) in
[https://github.com/langchain-ai/langchain/pull/13287](https://togithub.com/langchain-ai/langchain/pull/13287)
- IMPROVEMENT redirect root to docs by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/13303](https://togithub.com/langchain-ai/langchain/pull/13303)
- gpt researcher by [@&#8203;hwchase17](https://togithub.com/hwchase17)
in
[https://github.com/langchain-ai/langchain/pull/13062](https://togithub.com/langchain-ai/langchain/pull/13062)
- add retrieval agent by
[@&#8203;hwchase17](https://togithub.com/hwchase17) in
[https://github.com/langchain-ai/langchain/pull/13317](https://togithub.com/langchain-ai/langchain/pull/13317)
- Update main readme by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13298](https://togithub.com/langchain-ai/langchain/pull/13298)
- DOCS: cleanup docs directory by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13301](https://togithub.com/langchain-ai/langchain/pull/13301)
- Move OAI assistants to langchain and add callbacks by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13236](https://togithub.com/langchain-ai/langchain/pull/13236)
- fix litellm openai imports by
[@&#8203;krrishdholakia](https://togithub.com/krrishdholakia) in
[https://github.com/langchain-ai/langchain/pull/13307](https://togithub.com/langchain-ai/langchain/pull/13307)
- arxiv retrieval agent improvement by
[@&#8203;hwchase17](https://togithub.com/hwchase17) in
[https://github.com/langchain-ai/langchain/pull/13329](https://togithub.com/langchain-ai/langchain/pull/13329)
- add more reasonable arxiv retriever by
[@&#8203;hwchase17](https://togithub.com/hwchase17) in
[https://github.com/langchain-ai/langchain/pull/13327](https://togithub.com/langchain-ai/langchain/pull/13327)
- Pgvector template by
[@&#8203;manuel-soria](https://togithub.com/manuel-soria) in
[https://github.com/langchain-ai/langchain/pull/13267](https://togithub.com/langchain-ai/langchain/pull/13267)
- Fix latest message index by
[@&#8203;billytrend-cohere](https://togithub.com/billytrend-cohere) in
[https://github.com/langchain-ai/langchain/pull/13355](https://togithub.com/langchain-ai/langchain/pull/13355)
- CLI interactivity by [@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/13148](https://togithub.com/langchain-ai/langchain/pull/13148)
- cli 0.0.17 by [@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/13359](https://togithub.com/langchain-ai/langchain/pull/13359)
- added `Cookbooks` link by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13078](https://togithub.com/langchain-ai/langchain/pull/13078)
- Bump pyarrow from 13.0.0 to 14.0.1 in /libs/langchain by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/langchain-ai/langchain/pull/13363](https://togithub.com/langchain-ai/langchain/pull/13363)
- feat(llms): support Openai API v1 for Azure OpenAI completions by
[@&#8203;mspronesti](https://togithub.com/mspronesti) in
[https://github.com/langchain-ai/langchain/pull/13231](https://togithub.com/langchain-ai/langchain/pull/13231)
- Lint Python notebooks with ruff. by
[@&#8203;obi1kenobi](https://togithub.com/obi1kenobi) in
[https://github.com/langchain-ai/langchain/pull/12677](https://togithub.com/langchain-ai/langchain/pull/12677)
- Bump all libraries to the latest `ruff` version. by
[@&#8203;obi1kenobi](https://togithub.com/obi1kenobi) in
[https://github.com/langchain-ai/langchain/pull/13350](https://togithub.com/langchain-ai/langchain/pull/13350)
- fmt by [@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13371](https://togithub.com/langchain-ai/langchain/pull/13371)
- more cli interactivity, bugfix by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/13360](https://togithub.com/langchain-ai/langchain/pull/13360)
- fix cli release by [@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/13373](https://togithub.com/langchain-ai/langchain/pull/13373)
- bump openai by [@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13262](https://togithub.com/langchain-ai/langchain/pull/13262)
- `Yi` model from `01.ai`, example by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13375](https://togithub.com/langchain-ai/langchain/pull/13375)
- Update `rag-timescale-conversation` to dependencies without CVEs. by
[@&#8203;obi1kenobi](https://togithub.com/obi1kenobi) in
[https://github.com/langchain-ai/langchain/pull/13364](https://togithub.com/langchain-ai/langchain/pull/13364)
- Update `templates/rag-self-query` with newer dependencies without
CVEs. by [@&#8203;obi1kenobi](https://togithub.com/obi1kenobi) in
[https://github.com/langchain-ai/langchain/pull/13362](https://togithub.com/langchain-ai/langchain/pull/13362)
- Add limit_to_domains to APIChain based tools by
[@&#8203;fielding](https://togithub.com/fielding) in
[https://github.com/langchain-ai/langchain/pull/13367](https://togithub.com/langchain-ai/langchain/pull/13367)
- api doc newlines by [@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/13378](https://togithub.com/langchain-ai/langchain/pull/13378)
- docs integration cards site by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/13379](https://togithub.com/langchain-ai/langchain/pull/13379)
- Add some properties to NotionDBLoader by
[@&#8203;kenta-takeuchi](https://togithub.com/kenta-takeuchi) in
[https://github.com/langchain-ai/langchain/pull/13358](https://togithub.com/langchain-ai/langchain/pull/13358)
- IMPROVEMENT more research-assistant configurability by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/13312](https://togithub.com/langchain-ai/langchain/pull/13312)
- Make it easier to subclass RunnableEach by
[@&#8203;nfcampos](https://togithub.com/nfcampos) in
[https://github.com/langchain-ai/langchain/pull/13346](https://togithub.com/langchain-ai/langchain/pull/13346)
- Agent window management how to by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13033](https://togithub.com/langchain-ai/langchain/pull/13033)
- Bedrock cohere embedding support by
[@&#8203;celmore25](https://togithub.com/celmore25) in
[https://github.com/langchain-ai/langchain/pull/13366](https://togithub.com/langchain-ai/langchain/pull/13366)
- docs: install nit by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13380](https://togithub.com/langchain-ai/langchain/pull/13380)
- Bagatur/update rag use case by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13319](https://togithub.com/langchain-ai/langchain/pull/13319)
- Passthrough kwargs in runnable lambda by
[@&#8203;nfcampos](https://togithub.com/nfcampos) in
[https://github.com/langchain-ai/langchain/pull/13405](https://togithub.com/langchain-ai/langchain/pull/13405)
- PGVector needs to close its connection if it is garbage collected by
[@&#8203;Sumukh](https://togithub.com/Sumukh) in
[https://github.com/langchain-ai/langchain/pull/13232](https://togithub.com/langchain-ai/langchain/pull/13232)
- Fix Runnable Lambda Afunc Repr by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchain/pull/13413](https://togithub.com/langchain-ai/langchain/pull/13413)
- Use secretstr for api keys for javelin-ai-gateway by
[@&#8203;eyurtsev](https://togithub.com/eyurtsev) in
[https://github.com/langchain-ai/langchain/pull/13417](https://togithub.com/langchain-ai/langchain/pull/13417)
- FIX: Infer runnable agent single or multi action by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13412](https://togithub.com/langchain-ai/langchain/pull/13412)
- bump 336, exp 44 by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13420](https://togithub.com/langchain-ai/langchain/pull/13420)
- img update by [@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/13421](https://togithub.com/langchain-ai/langchain/pull/13421)

#### New Contributors

- [@&#8203;IsakNyberg](https://togithub.com/IsakNyberg) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/13244](https://togithub.com/langchain-ai/langchain/pull/13244)
- [@&#8203;glad4enkonm](https://togithub.com/glad4enkonm) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/12895](https://togithub.com/langchain-ai/langchain/pull/12895)
- [@&#8203;dennisdegreef](https://togithub.com/dennisdegreef) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/13079](https://togithub.com/langchain-ai/langchain/pull/13079)
- [@&#8203;marioangst](https://togithub.com/marioangst) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/13178](https://togithub.com/langchain-ai/langchain/pull/13178)
- [@&#8203;AzeWZ](https://togithub.com/AzeWZ) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/13201](https://togithub.com/langchain-ai/langchain/pull/13201)
- [@&#8203;Guillem96](https://togithub.com/Guillem96) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/13184](https://togithub.com/langchain-ai/langchain/pull/13184)
- [@&#8203;shauryr](https://togithub.com/shauryr) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/13205](https://togithub.com/langchain-ai/langchain/pull/13205)
- [@&#8203;langchain-infra](https://togithub.com/langchain-infra) made
their first contribution in
[https://github.com/langchain-ai/langchain/pull/13240](https://togithub.com/langchain-ai/langchain/pull/13240)
- [@&#8203;Govind-S-B](https://togithub.com/Govind-S-B) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/13022](https://togithub.com/langchain-ai/langchain/pull/13022)
-
[@&#8203;juan-calvo-datatonic](https://togithub.com/juan-calvo-datatonic)
made their first contribution in
[https://github.com/langchain-ai/langchain/pull/13294](https://togithub.com/langchain-ai/langchain/pull/13294)
- [@&#8203;chevalmuscle](https://togithub.com/chevalmuscle) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/11622](https://togithub.com/langchain-ai/langchain/pull/11622)
- [@&#8203;Frank995](https://togithub.com/Frank995) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/13260](https://togithub.com/langchain-ai/langchain/pull/13260)
- [@&#8203;takatost](https://togithub.com/takatost) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/11604](https://togithub.com/langchain-ai/langchain/pull/11604)
- [@&#8203;iamleonie](https://togithub.com/iamleonie) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/13057](https://togithub.com/langchain-ai/langchain/pull/13057)
- [@&#8203;konstantin-spiess](https://togithub.com/konstantin-spiess)
made their first contribution in
[https://github.com/langchain-ai/langchain/pull/13266](https://togithub.com/langchain-ai/langchain/pull/13266)
- [@&#8203;ruiramos](https://togithub.com/ruiramos) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/8920](https://togithub.com/langchain-ai/langchain/pull/8920)
- [@&#8203;mertkayhan](https://togithub.com/mertkayhan) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/6863](https://togithub.com/langchain-ai/langchain/pull/6863)
- [@&#8203;kenta-takeuchi](https://togithub.com/kenta-takeuchi) made
their first contribution in
[https://github.com/langchain-ai/langchain/pull/13358](https://togithub.com/langchain-ai/langchain/pull/13358)
- [@&#8203;celmore25](https://togithub.com/celmore25) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/13366](https://togithub.com/langchain-ai/langchain/pull/13366)
- [@&#8203;Sumukh](https://togithub.com/Sumukh) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/13232](https://togithub.com/langchain-ai/langchain/pull/13232)

**Full Changelog**:
https://github.com/langchain-ai/langchain/compare/v0.0.335...v0.0.336

</details>

<details>
<summary>langchain-ai/langchainjs (langchain)</summary>

###
[`v0.0.193`](https://togithub.com/langchain-ai/langchainjs/releases/tag/0.0.193)

[Compare
Source](https://togithub.com/langchain-ai/langchainjs/compare/0.0.192...0.0.193)

#### What's Changed

- Release 0.0.192 by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3311](https://togithub.com/langchain-ai/langchainjs/pull/3311)
- Use .invoke for all agent docs and examples by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3319](https://togithub.com/langchain-ai/langchainjs/pull/3319)
- \[AUTO-GENERATED] Add JSDoc examples to classes. by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3309](https://togithub.com/langchain-ai/langchainjs/pull/3309)
- updated langchain stack img to be svg by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3324](https://togithub.com/langchain-ai/langchainjs/pull/3324)
- \[AUTO-GENERATED] Add JSDoc examples to classes. by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3325](https://togithub.com/langchain-ai/langchainjs/pull/3325)
- \[AUTO-GENERATED] Add JSDoc examples to classes. by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3327](https://togithub.com/langchain-ai/langchainjs/pull/3327)
- \[AUTO-GENERATED] Add JSDoc examples to classes. by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3329](https://togithub.com/langchain-ai/langchainjs/pull/3329)
- \[AUTO-GENERATED] Add JSDoc examples to classes. by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3330](https://togithub.com/langchain-ai/langchainjs/pull/3330)
- Update Ollama functions by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3336](https://togithub.com/langchain-ai/langchainjs/pull/3336)
- Remove console.log from googlevertexai-connection.ts by
[@&#8203;raioalbano](https://togithub.com/raioalbano) in
[https://github.com/langchain-ai/langchainjs/pull/3322](https://togithub.com/langchain-ai/langchainjs/pull/3322)
- Add batch size arg by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchainjs/pull/3310](https://togithub.com/langchain-ai/langchainjs/pull/3310)
- feat: Added support of terms filter in OpenSearch vector store by
[@&#8203;faileon](https://togithub.com/faileon) in
[https://github.com/langchain-ai/langchainjs/pull/3312](https://togithub.com/langchain-ai/langchainjs/pull/3312)
- Improve MessageContent type by
[@&#8203;netzhuffle](https://togithub.com/netzhuffle) in
[https://github.com/langchain-ai/langchainjs/pull/3318](https://togithub.com/langchain-ai/langchainjs/pull/3318)
- Add missing PrismaVectorStore filter operators by
[@&#8203;Njuelle](https://togithub.com/Njuelle) in
[https://github.com/langchain-ai/langchainjs/pull/3321](https://togithub.com/langchain-ai/langchainjs/pull/3321)

#### New Contributors

- [@&#8203;raioalbano](https://togithub.com/raioalbano) made their first
contribution in
[https://github.com/langchain-ai/langchainjs/pull/3322](https://togithub.com/langchain-ai/langchainjs/pull/3322)
- [@&#8203;faileon](https://togithub.com/faileon) made their first
contribution in
[https://github.com/langchain-ai/langchainjs/pull/3312](https://togithub.com/langchain-ai/langchainjs/pull/3312)
- [@&#8203;netzhuffle](https://togithub.com/netzhuffle) made their first
contribution in
[https://github.com/langchain-ai/langchainjs/pull/3318](https://togithub.com/langchain-ai/langchainjs/pull/3318)

**Full Changelog**:
https://github.com/langchain-ai/langchainjs/compare/0.0.192...0.0.193

###
[`v0.0.192`](https://togithub.com/langchain-ai/langchainjs/releases/tag/0.0.192)

[Compare
Source](https://togithub.com/langchain-ai/langchainjs/compare/0.0.191...0.0.192)

#### What's Changed

- Release 0.0.191 by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3300](https://togithub.com/langchain-ai/langchainjs/pull/3300)
- Delete artifacts by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3305](https://togithub.com/langchain-ai/langchainjs/pull/3305)
- Add missing docs by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3290](https://togithub.com/langchain-ai/langchainjs/pull/3290)
- Brace/new api refs build by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3303](https://togithub.com/langchain-ai/langchainjs/pull/3303)
- Fix broken fetch usage for CFW by
[@&#8203;dqbd](https://togithub.com/dqbd) in
[https://github.com/langchain-ai/langchainjs/pull/3302](https://togithub.com/langchain-ai/langchainjs/pull/3302)
- Bump Anthropic + OpenAI versions by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3308](https://togithub.com/langchain-ai/langchainjs/pull/3308)
- Hotfix pdf by [@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3306](https://togithub.com/langchain-ai/langchainjs/pull/3306)
- Add PrismaVectorStore filter IN operator by
[@&#8203;Njuelle](https://togithub.com/Njuelle) in
[https://github.com/langchain-ai/langchainjs/pull/3304](https://togithub.com/langchain-ai/langchainjs/pull/3304)
- feat(apify): support Document\[] return type for mapping function by
[@&#8203;omikader](https://togithub.com/omikader) in
[https://github.com/langchain-ai/langchainjs/pull/3262](https://togithub.com/langchain-ai/langchainjs/pull/3262)
- Integrate Rockset as a vector store by
[@&#8203;kwadhwa18](https://togithub.com/kwadhwa18) in
[https://github.com/langchain-ai/langchainjs/pull/3231](https://togithub.com/langchain-ai/langchainjs/pull/3231)
- feat: add file-system based cache by
[@&#8203;vdeturckheim](https://togithub.com/vdeturckheim) in
[https://github.com/langchain-ai/langchainjs/pull/3089](https://togithub.com/langchain-ai/langchainjs/pull/3089)

#### New Contributors

- [@&#8203;Njuelle](https://togithub.com/Njuelle) made their first
contribution in
[https://github.com/langchain-ai/langchainjs/pull/3304](https://togithub.com/langchain-ai/langchainjs/pull/3304)
- [@&#8203;kwadhwa18](https://togithub.com/kwadhwa18) made their first
contribution in
[https://github.com/langchain-ai/langchainjs/pull/3231](https://togithub.com/langchain-ai/langchainjs/pull/3231)
- [@&#8203;vdeturckheim](https://togithub.com/vdeturckheim) made their
first contribution in
[https://github.com/langchain-ai/langchainjs/pull/3089](https://togithub.com/langchain-ai/langchainjs/pull/3089)

**Full Changelog**:
https://github.com/langchain-ai/langchainjs/compare/0.0.191...0.0.192

###
[`v0.0.191`](https://togithub.com/langchain-ai/langchainjs/releases/tag/0.0.191)

[Compare
Source](https://togithub.com/langchain-ai/langchainjs/compare/0.0.190...0.0.191)

#### What's Changed

- Release 0.0.190 by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3298](https://togithub.com/langchain-ai/langchainjs/pull/3298)

**Full Changelog**:
https://github.com/langchain-ai/langchainjs/compare/0.0.190...0.0.191

###
[`v0.0.190`](https://togithub.com/langchain-ai/langchainjs/releases/tag/0.0.190)

[Compare
Source](https://togithub.com/langchain-ai/langchainjs/compare/0.0.189...0.0.190)

#### What's Changed

- Release 0.0.189 by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3278](https://togithub.com/langchain-ai/langchainjs/pull/3278)
- Brace/move syntaxtypes up by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3281](https://togithub.com/langchain-ai/langchainjs/pull/3281)
- Brace/api refs css by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3282](https://togithub.com/langchain-ai/langchainjs/pull/3282)
- Added runnable to xml agent, moved legacy to hidden page by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3287](https://togithub.com/langchain-ai/langchainjs/pull/3287)
- redo intro docs page by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3288](https://togithub.com/langchain-ai/langchainjs/pull/3288)
- Add better docstrings for runnables by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3291](https://togithub.com/langchain-ai/langchainjs/pull/3291)
- Update HTTP response output parser logic by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3295](https://togithub.com/langchain-ai/langchainjs/pull/3295)

**Full Changelog**:
https://github.com/langchain-ai/langchainjs/compare/0.0.189...0.0.190

###
[`v0.0.189`](https://togithub.com/langchain-ai/langchainjs/releases/tag/0.0.189)

[Compare
Source](https://togithub.com/langchain-ai/langchainjs/compare/0.0.188...0.0.189)

#### What's Changed

- Release 0.0.188 by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3276](https://togithub.com/langchain-ai/langchainjs/pull/3276)
- Revert Cohere update by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3277](https://togithub.com/langchain-ai/langchainjs/pull/3277)

**Full Changelog**:
https://github.com/langchain-ai/langchainjs/compare/0.0.188...0.0.189

###
[`v0.0.188`](https://togithub.com/langchain-ai/langchainjs/releases/tag/0.0.188)

[Compare
Source](https://togithub.com/langchain-ai/langchainjs/compare/0.0.187...0.0.188)

#### What's Changed

- Release 0.0.187 by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3255](https://togithub.com/langchain-ai/langchainjs/pull/3255)
- Break words on api refs sidebar instead of scrolling by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3265](https://togithub.com/langchain-ai/langchainjs/pull/3265)
- Use replaceAll instead of replace when generating operationid. by
[@&#8203;Manouchehri](https://togithub.com/Manouchehri) in
[https://github.com/langchain-ai/langchainjs/pull/3267](https://togithub.com/langchain-ai/langchainjs/pull/3267)
- Brace/bump cohere by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3263](https://togithub.com/langchain-ai/langchainjs/pull/3263)
- Added documentation for few shot prompting by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3122](https://togithub.com/langchain-ai/langchainjs/pull/3122)
- Allow custom system prompt for Ollama functions by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchainjs/pull/3264](https://togithub.com/langchain-ai/langchainjs/pull/3264)
- Brace/add ignore with tsmorph by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3271](https://togithub.com/langchain-ai/langchainjs/pull/3271)
- Added rag over code example by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchainjs/pull/3109](https://togithub.com/langchain-ai/langchainjs/pull/3109)
- Meta Llama2 support for BedrockChat by
[@&#8203;shafkevi](https://togithub.com/shafkevi) in
[https://github.com/langchain-ai/langchainjs/pull/3260](https://togithub.com/langchain-ai/langchainjs/pull/3260)
- Adds HTTP output parser to parse chunks into different content types
by [@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/lang

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

👻 **Immortal**: This PR will be recreated if closed unmerged. Get
[config help](https://togithub.com/renovatebot/renovate/discussions) if
that's undesired.

---


---

This PR has been generated by [Mend
Renovate](https://www.mend.io/free-developer-tools/renovate/). View
repository job log
[here](https://developer.mend.io/github/autoblocksai/autoblocks-examples).


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
pprados pushed a commit to pprados/langchain that referenced this pull request Nov 20, 2023
… configurable chat prompt format (langchain-ai#8295)

## Update 2023-09-08

This PR now supports further models in addition to Llama-2 chat models.
See [this comment](#issuecomment-1668988543) for further details. The
title of this PR has been updated accordingly.

## Original PR description

This PR adds a generic `Llama2Chat` model, a wrapper for LLMs able to
serve Llama-2 chat models (like `LlamaCPP`,
`HuggingFaceTextGenInference`, ...). It implements `BaseChatModel`,
converts a list of chat messages into the [required Llama-2 chat prompt
format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and
forwards the formatted prompt as `str` to the wrapped `LLM`. Usage
example:

```python
# uses a locally hosted Llama2 chat model
llm = HuggingFaceTextGenInference(
    inference_server_url="http://127.0.0.1:8080/",
    max_new_tokens=512,
    top_k=50,
    temperature=0.1,
    repetition_penalty=1.03,
)

# Wrap llm to support Llama2 chat prompt format.
# Resulting model is a chat model
model = Llama2Chat(llm=llm)

messages = [
    SystemMessage(content="You are a helpful assistant."),
    MessagesPlaceholder(variable_name="chat_history"),
    HumanMessagePromptTemplate.from_template("{text}"),
]

prompt = ChatPromptTemplate.from_messages(messages)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = LLMChain(llm=model, prompt=prompt, memory=memory)

# use chat model in a conversation
# ...
```
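
To make the last two comment lines concrete, here is a minimal sketch of how the chain built above might be driven over multiple turns. The questions are invented for illustration; everything else reuses the objects defined in the snippet, and the rendered prompt shown in the comments is only an approximation of the Llama-2 chat format linked above.

```python
# First turn: memory is still empty, so Llama2Chat formats only the system
# message and the user question, roughly as
# "<s>[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\n\nTell me a joke about llamas. [/INST]"
print(chain.run(text="Tell me a joke about llamas."))

# Second turn: ConversationBufferMemory injects the previous human/AI turns
# through the "chat_history" placeholder, so the wrapped LLM sees the whole
# conversation rendered in Llama-2 chat format.
print(chain.run(text="Can you explain why that is funny?"))
```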

Also part of this PR are tests and a demo notebook.

- Tag maintainer: @hwchase17
- Twitter handle: `@mrt1nz`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
amiaxys pushed a commit to Haoming-jpg/team-skill-issue-langchain that referenced this pull request Nov 23, 2023
… configurable chat prompt format (langchain-ai#8295)

Labels

- 🤖:enhancement: A large net-new component, integration, or chain. Use sparingly. The largest features.
- lgtm: PR looks good. Use to confirm that a PR is ready for merging.