
ollama[patch]: don't try to parse json in case of errored response #18317

Merged
merged 2 commits into langchain-ai:master on Mar 1, 2024
Conversation

StrikerRUS
Contributor

Related issue: #13896.

If Ollama sits behind a proxy, proxy error responses cannot be viewed; you cannot even check the response code.

For example, if your Ollama instance is protected by basic access authentication and no credentials are supplied, a JSONDecodeError masks the real response error.

Log before the fix:
{
	"name": "JSONDecodeError",
	"message": "Expecting value: line 1 column 1 (char 0)",
	"stack": "---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/requests/models.py:971, in Response.json(self, **kwargs)
    970 try:
--> 971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError

File /opt/miniforge3/envs/.gpt/lib/python3.10/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    343 if (cls is None and object_hook is None and
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346     return _default_decoder.decode(s)
    347 if cls is None:

File /opt/miniforge3/envs/.gpt/lib/python3.10/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
    333 \"\"\"Return the Python representation of ``s`` (a ``str`` instance
    334 containing a JSON document).
    335 
    336 \"\"\"
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338 end = _w(s, end).end()

File /opt/miniforge3/envs/.gpt/lib/python3.10/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
    354 except StopIteration as err:
--> 355     raise JSONDecodeError(\"Expecting value\", s, err.value) from None
    356 return obj, end

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

JSONDecodeError                           Traceback (most recent call last)
Cell In[3], line 1
----> 1 print(translate_func().invoke('text'))

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/runnables/base.py:2053, in RunnableSequence.invoke(self, input, config)
   2051 try:
   2052     for i, step in enumerate(self.steps):
-> 2053         input = step.invoke(
   2054             input,
   2055             # mark each step as a child run
   2056             patch_config(
   2057                 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
   2058             ),
   2059         )
   2060 # finish the root run
   2061 except BaseException as e:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:165, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    154 def invoke(
    155     self,
    156     input: LanguageModelInput,
   (...)
    160     **kwargs: Any,
    161 ) -> BaseMessage:
    162     config = ensure_config(config)
    163     return cast(
    164         ChatGeneration,
--> 165         self.generate_prompt(
    166             [self._convert_input(input)],
    167             stop=stop,
    168             callbacks=config.get(\"callbacks\"),
    169             tags=config.get(\"tags\"),
    170             metadata=config.get(\"metadata\"),
    171             run_name=config.get(\"run_name\"),
    172             **kwargs,
    173         ).generations[0][0],
    174     ).message

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:543, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    535 def generate_prompt(
    536     self,
    537     prompts: List[PromptValue],
   (...)
    540     **kwargs: Any,
    541 ) -> LLMResult:
    542     prompt_messages = [p.to_messages() for p in prompts]
--> 543     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:407, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    405         if run_managers:
    406             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 407         raise e
    408 flattened_outputs = [
    409     LLMResult(generations=[res.generations], llm_output=res.llm_output)
    410     for res in results
    411 ]
    412 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:397, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    394 for i, m in enumerate(messages):
    395     try:
    396         results.append(
--> 397             self._generate_with_cache(
    398                 m,
    399                 stop=stop,
    400                 run_manager=run_managers[i] if run_managers else None,
    401                 **kwargs,
    402             )
    403         )
    404     except BaseException as e:
    405         if run_managers:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:576, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    572     raise ValueError(
    573         \"Asked to cache, but no cache found at `langchain.cache`.\"
    574     )
    575 if new_arg_supported:
--> 576     return self._generate(
    577         messages, stop=stop, run_manager=run_manager, **kwargs
    578     )
    579 else:
    580     return self._generate(messages, stop=stop, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:250, in ChatOllama._generate(self, messages, stop, run_manager, **kwargs)
    226 def _generate(
    227     self,
    228     messages: List[BaseMessage],
   (...)
    231     **kwargs: Any,
    232 ) -> ChatResult:
    233     \"\"\"Call out to Ollama's generate endpoint.
    234 
    235     Args:
   (...)
    247             ])
    248     \"\"\"
--> 250     final_chunk = self._chat_stream_with_aggregation(
    251         messages,
    252         stop=stop,
    253         run_manager=run_manager,
    254         verbose=self.verbose,
    255         **kwargs,
    256     )
    257     chat_generation = ChatGeneration(
    258         message=AIMessage(content=final_chunk.text),
    259         generation_info=final_chunk.generation_info,
    260     )
    261     return ChatResult(generations=[chat_generation])

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:183, in ChatOllama._chat_stream_with_aggregation(self, messages, stop, run_manager, verbose, **kwargs)
    174 def _chat_stream_with_aggregation(
    175     self,
    176     messages: List[BaseMessage],
   (...)
    180     **kwargs: Any,
    181 ) -> ChatGenerationChunk:
    182     final_chunk: Optional[ChatGenerationChunk] = None
--> 183     for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
    184         if stream_resp:
    185             chunk = _chat_stream_response_to_chat_generation_chunk(stream_resp)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:156, in ChatOllama._create_chat_stream(self, messages, stop, **kwargs)
    147 def _create_chat_stream(
    148     self,
    149     messages: List[BaseMessage],
    150     stop: Optional[List[str]] = None,
    151     **kwargs: Any,
    152 ) -> Iterator[str]:
    153     payload = {
    154         \"messages\": self._convert_messages_to_ollama_messages(messages),
    155     }
--> 156     yield from self._create_stream(
    157         payload=payload, stop=stop, api_url=f\"{self.base_url}/api/chat/\", **kwargs
    158     )

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/llms/ollama.py:234, in _OllamaCommon._create_stream(self, api_url, payload, stop, **kwargs)
    228         raise OllamaEndpointNotFoundError(
    229             \"Ollama call failed with status code 404. \"
    230             \"Maybe your model is not found \"
    231             f\"and you should pull the model with `ollama pull {self.model}`.\"
    232         )
    233     else:
--> 234         optional_detail = response.json().get(\"error\")
    235         raise ValueError(
    236             f\"Ollama call failed with status code {response.status_code}.\"
    237             f\" Details: {optional_detail}\"
    238         )
    239 return response.iter_lines(decode_unicode=True)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/requests/models.py:975, in Response.json(self, **kwargs)
    971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975     raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)

JSONDecodeError: Expecting value: line 1 column 1 (char 0)"
}
Log after the fix:
{
	"name": "ValueError",
	"message": "Ollama call failed with status code 401. Details: <html>\r
<head><title>401 Authorization Required</title></head>\r
<body>\r
<center><h1>401 Authorization Required</h1></center>\r
<hr><center>nginx/1.18.0 (Ubuntu)</center>\r
</body>\r
</html>\r
",
	"stack": "---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[2], line 1
----> 1 print(translate_func().invoke('text'))

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/runnables/base.py:2053, in RunnableSequence.invoke(self, input, config)
   2051 try:
   2052     for i, step in enumerate(self.steps):
-> 2053         input = step.invoke(
   2054             input,
   2055             # mark each step as a child run
   2056             patch_config(
   2057                 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
   2058             ),
   2059         )
   2060 # finish the root run
   2061 except BaseException as e:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:165, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    154 def invoke(
    155     self,
    156     input: LanguageModelInput,
   (...)
    160     **kwargs: Any,
    161 ) -> BaseMessage:
    162     config = ensure_config(config)
    163     return cast(
    164         ChatGeneration,
--> 165         self.generate_prompt(
    166             [self._convert_input(input)],
    167             stop=stop,
    168             callbacks=config.get(\"callbacks\"),
    169             tags=config.get(\"tags\"),
    170             metadata=config.get(\"metadata\"),
    171             run_name=config.get(\"run_name\"),
    172             **kwargs,
    173         ).generations[0][0],
    174     ).message

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:543, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    535 def generate_prompt(
    536     self,
    537     prompts: List[PromptValue],
   (...)
    540     **kwargs: Any,
    541 ) -> LLMResult:
    542     prompt_messages = [p.to_messages() for p in prompts]
--> 543     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:407, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    405         if run_managers:
    406             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 407         raise e
    408 flattened_outputs = [
    409     LLMResult(generations=[res.generations], llm_output=res.llm_output)
    410     for res in results
    411 ]
    412 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:397, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    394 for i, m in enumerate(messages):
    395     try:
    396         results.append(
--> 397             self._generate_with_cache(
    398                 m,
    399                 stop=stop,
    400                 run_manager=run_managers[i] if run_managers else None,
    401                 **kwargs,
    402             )
    403         )
    404     except BaseException as e:
    405         if run_managers:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:576, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    572     raise ValueError(
    573         \"Asked to cache, but no cache found at `langchain.cache`.\"
    574     )
    575 if new_arg_supported:
--> 576     return self._generate(
    577         messages, stop=stop, run_manager=run_manager, **kwargs
    578     )
    579 else:
    580     return self._generate(messages, stop=stop, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:250, in ChatOllama._generate(self, messages, stop, run_manager, **kwargs)
    226 def _generate(
    227     self,
    228     messages: List[BaseMessage],
   (...)
    231     **kwargs: Any,
    232 ) -> ChatResult:
    233     \"\"\"Call out to Ollama's generate endpoint.
    234 
    235     Args:
   (...)
    247             ])
    248     \"\"\"
--> 250     final_chunk = self._chat_stream_with_aggregation(
    251         messages,
    252         stop=stop,
    253         run_manager=run_manager,
    254         verbose=self.verbose,
    255         **kwargs,
    256     )
    257     chat_generation = ChatGeneration(
    258         message=AIMessage(content=final_chunk.text),
    259         generation_info=final_chunk.generation_info,
    260     )
    261     return ChatResult(generations=[chat_generation])

File /storage/gpt-project/Repos/repo_nikita/gpt_lib/langchain/ollama.py:328, in ChatOllamaCustom._chat_stream_with_aggregation(self, messages, stop, run_manager, verbose, **kwargs)
    319 def _chat_stream_with_aggregation(
    320     self,
    321     messages: List[BaseMessage],
   (...)
    325     **kwargs: Any,
    326 ) -> ChatGenerationChunk:
    327     final_chunk: Optional[ChatGenerationChunk] = None
--> 328     for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
    329         if stream_resp:
    330             chunk = _chat_stream_response_to_chat_generation_chunk(stream_resp)

File /storage/gpt-project/Repos/repo_nikita/gpt_lib/langchain/ollama.py:301, in ChatOllamaCustom._create_chat_stream(self, messages, stop, **kwargs)
    292 def _create_chat_stream(
    293     self,
    294     messages: List[BaseMessage],
    295     stop: Optional[List[str]] = None,
    296     **kwargs: Any,
    297 ) -> Iterator[str]:
    298     payload = {
    299         \"messages\": self._convert_messages_to_ollama_messages(messages),
    300     }
--> 301     yield from self._create_stream(
    302         payload=payload, stop=stop, api_url=f\"{self.base_url}/api/chat\", **kwargs
    303     )

File /storage/gpt-project/Repos/repo_nikita/gpt_lib/langchain/ollama.py:134, in _OllamaCommonCustom._create_stream(self, api_url, payload, stop, **kwargs)
    132     else:
    133         optional_detail = response.text
--> 134         raise ValueError(
    135             f\"Ollama call failed with status code {response.status_code}.\"
    136             f\" Details: {optional_detail}\"
    137         )
    138 return response.iter_lines(decode_unicode=True)

ValueError: Ollama call failed with status code 401. Details: <html>\r
<head><title>401 Authorization Required</title></head>\r
<body>\r
<center><h1>401 Authorization Required</h1></center>\r
<hr><center>nginx/1.18.0 (Ubuntu)</center>\r
</body>\r
</html>\r
"
}

The same applies to timeout errors, or to the case where you simply mistype the base_url argument and get a response from some other service.

Real Ollama errors are still clearly readable:

ValueError: Ollama call failed with status code 400. Details: {"error":"invalid options: unknown_option"}
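The change visible in the "after" traceback is that `_create_stream` builds the error detail from `response.text` instead of calling `response.json().get("error")`, so any body, JSON or HTML, survives into the ValueError. A minimal sketch of that idea (the helper name `format_ollama_error` is illustrative, not from the codebase):

```python
def format_ollama_error(status_code: int, body: str) -> str:
    # Mirrors the merged change: report the raw response body rather than
    # parsing it as JSON, which raises JSONDecodeError on HTML proxy pages.
    return (
        f"Ollama call failed with status code {status_code}."
        f" Details: {body}"
    )


# A real Ollama error body stays readable as-is...
print(format_ollama_error(400, '{"error":"invalid options: unknown_option"}'))
# ...and so does an nginx authentication page that is not JSON at all.
print(format_ollama_error(401, "<html>401 Authorization Required</html>"))
```

The trade-off is that genuine Ollama errors are now shown as the raw JSON string rather than the extracted `error` field, which the example above shows is still perfectly legible.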

@baskaryan baskaryan merged commit 9f2ab37 into langchain-ai:master Mar 1, 2024
59 checks passed
renovate-bot added a commit to renovate-bot/GoogleCloudPlatform-_-database-query-extension that referenced this pull request Mar 7, 2024
…latform#254)

[![Mend
Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Change | Age | Adoption | Passing | Confidence |
|---|---|---|---|---|---|
| [langchain](https://togithub.com/langchain-ai/langchain) | `==0.1.5`
-> `==0.1.11` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/langchain/0.1.11?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![adoption](https://developer.mend.io/api/mc/badges/adoption/pypi/langchain/0.1.11?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![passing](https://developer.mend.io/api/mc/badges/compatibility/pypi/langchain/0.1.5/0.1.11?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/langchain/0.1.5/0.1.11?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

### GitHub Vulnerability Alerts

#### [CVE-2024-28088](https://nvd.nist.gov/vuln/detail/CVE-2024-28088)

LangChain through 0.1.10 allows ../ directory traversal by an actor who
is able to control the final part of the path parameter in a load_chain
call. This bypasses the intended behavior of loading configurations only
from the hwchase17/langchain-hub GitHub repository. The outcome can be
disclosure of an API key for a large language model online service, or
remote code execution.

---

### Release Notes

<details>
<summary>langchain-ai/langchain (langchain)</summary>

###
[`v0.1.11`](https://togithub.com/langchain-ai/langchain/releases/tag/v0.1.11)

[Compare
Source](https://togithub.com/langchain-ai/langchain/compare/v0.1.10...v0.1.11)

##### What's Changed

- more query analysis docs by
[@&#8203;hwchase17](https://togithub.com/hwchase17) in
[https://github.com/langchain-ai/langchain/pull/18358](https://togithub.com/langchain-ai/langchain/pull/18358)
- docs: anthropic qa quickstart by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18459](https://togithub.com/langchain-ai/langchain/pull/18459)
- docs: anthropic quickstart by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18440](https://togithub.com/langchain-ai/langchain/pull/18440)
- Adding Azure Cosmos Mongo vCore Vector DB Cache by
[@&#8203;aayush3011](https://togithub.com/aayush3011) in
[https://github.com/langchain-ai/langchain/pull/16856](https://togithub.com/langchain-ai/langchain/pull/16856)
- community\[patch]: Invoke callback prior to yielding token by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18454](https://togithub.com/langchain-ai/langchain/pull/18454)
- community\[patch]: Invoke callback prior to yielding token by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18452](https://togithub.com/langchain-ai/langchain/pull/18452)
- community\[patch]: Invoke callback prior to yielding token by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18449](https://togithub.com/langchain-ai/langchain/pull/18449)
- community\[patch]: Invoke callback prior to yielding token by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18448](https://togithub.com/langchain-ai/langchain/pull/18448)
- community\[patch]: Invoke callback prior to yielding token by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18447](https://togithub.com/langchain-ai/langchain/pull/18447)
- nvidia-trt\[patch]: Invoke callback prior to yielding token by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18446](https://togithub.com/langchain-ai/langchain/pull/18446)
- improve query analysis docs by
[@&#8203;hwchase17](https://togithub.com/hwchase17) in
[https://github.com/langchain-ai/langchain/pull/18426](https://togithub.com/langchain-ai/langchain/pull/18426)
- langchain\[patch]: add tools renderer for various non-openai agents by
[@&#8203;mackong](https://togithub.com/mackong) in
[https://github.com/langchain-ai/langchain/pull/18307](https://togithub.com/langchain-ai/langchain/pull/18307)
- community: Add you.com tool, add async to retriever, add async
testing, add You tool doc by
[@&#8203;scottnath](https://togithub.com/scottnath) in
[https://github.com/langchain-ai/langchain/pull/18032](https://togithub.com/langchain-ai/langchain/pull/18032)
- \[Evals] Session-level feedback by
[@&#8203;hwchase17](https://togithub.com/hwchase17) in
[https://github.com/langchain-ai/langchain/pull/18463](https://togithub.com/langchain-ai/langchain/pull/18463)
- Update Notebook Image by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchain/pull/18470](https://togithub.com/langchain-ai/langchain/pull/18470)
- Evaluate on Version by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchain/pull/18471](https://togithub.com/langchain-ai/langchain/pull/18471)
- Improve notebook wording by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchain/pull/18472](https://togithub.com/langchain-ai/langchain/pull/18472)
- 👥 Update LangChain people data by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchain/pull/18473](https://togithub.com/langchain-ai/langchain/pull/18473)
- Docs: Updated callbacks/index.mdx adding example on invoke method by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18403](https://togithub.com/langchain-ai/langchain/pull/18403)
- anthropic\[minor]: claude 3 by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18508](https://togithub.com/langchain-ai/langchain/pull/18508)
- docs: add groq to list of providers by
[@&#8203;sepiatone](https://togithub.com/sepiatone) in
[https://github.com/langchain-ai/langchain/pull/18503](https://togithub.com/langchain-ai/langchain/pull/18503)
- docs: quickstart models by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18511](https://togithub.com/langchain-ai/langchain/pull/18511)
- docs:Update function "run" to "invoke" in llm_math.ipynb by
[@&#8203;standby24x7](https://togithub.com/standby24x7) in
[https://github.com/langchain-ai/langchain/pull/18505](https://togithub.com/langchain-ai/langchain/pull/18505)
- docs: Update function "run" to "invoke" by
[@&#8203;standby24x7](https://togithub.com/standby24x7) in
[https://github.com/langchain-ai/langchain/pull/18499](https://togithub.com/langchain-ai/langchain/pull/18499)
- community: Improved notebook for vector store "HANA Cloud" by
[@&#8203;MartinKolbAtWork](https://togithub.com/MartinKolbAtWork) in
[https://github.com/langchain-ai/langchain/pull/18496](https://togithub.com/langchain-ai/langchain/pull/18496)
- partners\[anthropic]: update to docstrings of ChatAnthropic class by
[@&#8203;sepiatone](https://togithub.com/sepiatone) in
[https://github.com/langchain-ai/langchain/pull/18493](https://togithub.com/langchain-ai/langchain/pull/18493)
- docs: update documentation of stackexchange component by
[@&#8203;sepiatone](https://togithub.com/sepiatone) in
[https://github.com/langchain-ai/langchain/pull/18486](https://togithub.com/langchain-ai/langchain/pull/18486)
- RAPTOR by [@&#8203;rlancemartin](https://togithub.com/rlancemartin) in
[https://github.com/langchain-ai/langchain/pull/18467](https://togithub.com/langchain-ai/langchain/pull/18467)
- \[Evals] Support list examples by dataset version tag by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchain/pull/18534](https://togithub.com/langchain-ai/langchain/pull/18534)
- core: Release 0.1.29 by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18530](https://togithub.com/langchain-ai/langchain/pull/18530)
- docs: update stack graphic by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18532](https://togithub.com/langchain-ai/langchain/pull/18532)
- docs\[minor]: Add thumbs up/down to all docs pages by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchain/pull/18526](https://togithub.com/langchain-ai/langchain/pull/18526)
- Evals wording by [@&#8203;hinthornw](https://togithub.com/hinthornw)
in
[https://github.com/langchain-ai/langchain/pull/18542](https://togithub.com/langchain-ai/langchain/pull/18542)
- community\[patch]: deprecate community fireworks by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18544](https://togithub.com/langchain-ai/langchain/pull/18544)
- anthropic\[patch]: multimodal by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18517](https://togithub.com/langchain-ai/langchain/pull/18517)
- Update chain.py fixewd typo in chain by
[@&#8203;akashAD98](https://togithub.com/akashAD98) in
[https://github.com/langchain-ai/langchain/pull/18551](https://togithub.com/langchain-ai/langchain/pull/18551)
- anthropic\[patch]: model type string by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18510](https://togithub.com/langchain-ai/langchain/pull/18510)
- langchain\[patch]: Release 0.1.11 by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18558](https://togithub.com/langchain-ai/langchain/pull/18558)

##### New Contributors

- [@&#8203;aayush3011](https://togithub.com/aayush3011) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/16856](https://togithub.com/langchain-ai/langchain/pull/16856)

**Full Changelog**:
https://github.com/langchain-ai/langchain/compare/v0.1.10...v0.1.11

###
[`v0.1.10`](https://togithub.com/langchain-ai/langchain/releases/tag/v0.1.10)

[Compare
Source](https://togithub.com/langchain-ai/langchain/compare/v0.1.9...v0.1.10)

#### What's Changed

- community: Fix SparkLLM error by
[@&#8203;liugddx](https://togithub.com/liugddx) in
[https://github.com/langchain-ai/langchain/pull/18015](https://togithub.com/langchain-ai/langchain/pull/18015)
- community\[patch]: Release 0.0.23 by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18027](https://togithub.com/langchain-ai/langchain/pull/18027)
- community: fix openai streaming throws 'AIMessageChunk' object has no
attribute 'text' by
[@&#8203;nicoloboschi](https://togithub.com/nicoloboschi) in
[https://github.com/langchain-ai/langchain/pull/18006](https://togithub.com/langchain-ai/langchain/pull/18006)
- community\[patch]: BaseLLM typing in init by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18029](https://togithub.com/langchain-ai/langchain/pull/18029)
- community\[patch]: Release 0.0.24 by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18038](https://togithub.com/langchain-ai/langchain/pull/18038)
- infra: CI success for partner packages by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18037](https://togithub.com/langchain-ai/langchain/pull/18037)
- infra: CI success for partner packages 2 by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18043](https://togithub.com/langchain-ai/langchain/pull/18043)
- docs: recommend lambdas over runnablebranch by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18033](https://togithub.com/langchain-ai/langchain/pull/18033)
- partners: Add Fireworks partner packages by
[@&#8203;benjibc](https://togithub.com/benjibc) in
[https://github.com/langchain-ai/langchain/pull/17694](https://togithub.com/langchain-ai/langchain/pull/17694)
- Updates to partners/exa README by
[@&#8203;DannyMac180](https://togithub.com/DannyMac180) in
[https://github.com/langchain-ai/langchain/pull/18047](https://togithub.com/langchain-ai/langchain/pull/18047)
- openai\[patch]: remove numpy dep by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18034](https://togithub.com/langchain-ai/langchain/pull/18034)
- docs: fireworks fixes by [@&#8203;efriis](https://togithub.com/efriis)
in
[https://github.com/langchain-ai/langchain/pull/18056](https://togithub.com/langchain-ai/langchain/pull/18056)
- infra: simplify and fix CI for docs-only changes by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18058](https://togithub.com/langchain-ai/langchain/pull/18058)
- docs: fireworks tool calling docs by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18057](https://togithub.com/langchain-ai/langchain/pull/18057)
- openai\[patch]: refactor with_structured_output by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18052](https://togithub.com/langchain-ai/langchain/pull/18052)
- core\[patch]: Runnable with message history to use add_messages by
[@&#8203;eyurtsev](https://togithub.com/eyurtsev) in
[https://github.com/langchain-ai/langchain/pull/17958](https://togithub.com/langchain-ai/langchain/pull/17958)
- community: Add async_client for Anyscale Chat model by
[@&#8203;kylehh](https://togithub.com/kylehh) in
[https://github.com/langchain-ai/langchain/pull/18050](https://togithub.com/langchain-ai/langchain/pull/18050)
- experimental: docstrings update by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/18048](https://togithub.com/langchain-ai/langchain/pull/18048)
- community: Add document manager and mongo document manager by
[@&#8203;2jimoo](https://togithub.com/2jimoo) in
[https://github.com/langchain-ai/langchain/pull/17320](https://togithub.com/langchain-ai/langchain/pull/17320)
- docs\[patch]: Remove redundant Pinecone import by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchain/pull/18079](https://togithub.com/langchain-ai/langchain/pull/18079)
- docs: Add Cohere examples in documentation by
[@&#8203;BeatrixCohere](https://togithub.com/BeatrixCohere) in
[https://github.com/langchain-ai/langchain/pull/17794](https://togithub.com/langchain-ai/langchain/pull/17794)
- langchain_community: fix llama index imports and fields access by
[@&#8203;maximeperrindev](https://togithub.com/maximeperrindev) in
[https://github.com/langchain-ai/langchain/pull/17870](https://togithub.com/langchain-ai/langchain/pull/17870)
- partners/astradb: Add AstraDBChatMessageHistory to langchain-astradb
package by [@&#8203;cbornet](https://togithub.com/cbornet) in
[https://github.com/langchain-ai/langchain/pull/17732](https://togithub.com/langchain-ai/langchain/pull/17732)
- community \[enh] : adds callback handler for Fiddler AI by
[@&#8203;bhalder](https://togithub.com/bhalder) in
[https://github.com/langchain-ai/langchain/pull/17708](https://togithub.com/langchain-ai/langchain/pull/17708)
- community: Remove model limitation on Anyscale LLM by
[@&#8203;kylehh](https://togithub.com/kylehh) in
[https://github.com/langchain-ai/langchain/pull/17662](https://togithub.com/langchain-ai/langchain/pull/17662)
- docs: update azure search langchain notebook by
[@&#8203;mattgotteiner](https://togithub.com/mattgotteiner) in
[https://github.com/langchain-ai/langchain/pull/18053](https://togithub.com/langchain-ai/langchain/pull/18053)
- langchain: Make BooleanOutputParser more robust to non-binary
responses by [@&#8203;dokato](https://togithub.com/dokato) in
[https://github.com/langchain-ai/langchain/pull/17810](https://togithub.com/langchain-ai/langchain/pull/17810)
- Add additional examples for other modules to partners/exa README by
[@&#8203;DannyMac180](https://togithub.com/DannyMac180) in
[https://github.com/langchain-ai/langchain/pull/18081](https://togithub.com/langchain-ai/langchain/pull/18081)
- Use astrapy's upsert_one method in AstraDBStore by
[@&#8203;cbornet](https://togithub.com/cbornet) in
[https://github.com/langchain-ai/langchain/pull/18063](https://togithub.com/langchain-ai/langchain/pull/18063)
- community: Fix GraphSparqlQAChain so that it works with Ontotext
GraphDB by [@&#8203;nelly-hateva](https://togithub.com/nelly-hateva) in
[https://github.com/langchain-ai/langchain/pull/15009](https://togithub.com/langchain-ai/langchain/pull/15009)
- community: Fix GenericRequestsWrapper \_aget_resp_content must be
async by [@&#8203;cbornet](https://togithub.com/cbornet) in
[https://github.com/langchain-ai/langchain/pull/18065](https://togithub.com/langchain-ai/langchain/pull/18065)
- anthropic\[minor]: package move by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/17974](https://togithub.com/langchain-ai/langchain/pull/17974)
- google-genai, google-vertexai: move to langchain-google by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/17899](https://togithub.com/langchain-ai/langchain/pull/17899)
- docs: api docs for external repos by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/17904](https://togithub.com/langchain-ai/langchain/pull/17904)
- docs: Fix the bug in MongoDBChatMessageHistory notebook by
[@&#8203;rongchenlin](https://togithub.com/rongchenlin) in
[https://github.com/langchain-ai/langchain/pull/18128](https://togithub.com/langchain-ai/langchain/pull/18128)
- langchain: Import from langchain_core in langchain.smith to avoid
deprecation warning by
[@&#8203;simonschmidt](https://togithub.com/simonschmidt) in
[https://github.com/langchain-ai/langchain/pull/18129](https://togithub.com/langchain-ai/langchain/pull/18129)
- langchain\[patch]: Update doc-string for a method in
ConversationBufferWindowMemory by
[@&#8203;keenborder786](https://togithub.com/keenborder786) in
[https://github.com/langchain-ai/langchain/pull/18090](https://togithub.com/langchain-ai/langchain/pull/18090)
- add run name for query constructor by
[@&#8203;hwchase17](https://togithub.com/hwchase17) in
[https://github.com/langchain-ai/langchain/pull/18101](https://togithub.com/langchain-ai/langchain/pull/18101)
- Add BaseMessage.id by
[@&#8203;nfcampos](https://togithub.com/nfcampos) in
[https://github.com/langchain-ai/langchain/pull/17835](https://togithub.com/langchain-ai/langchain/pull/17835)
- docs: anthropic partner package docs by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18109](https://togithub.com/langchain-ai/langchain/pull/18109)
- Fix bug with using configurable_fields after configurable_alternatives
by [@&#8203;nfcampos](https://togithub.com/nfcampos) in
[https://github.com/langchain-ai/langchain/pull/18139](https://togithub.com/langchain-ai/langchain/pull/18139)
- Improve runnable generator error messages by
[@&#8203;nfcampos](https://togithub.com/nfcampos) in
[https://github.com/langchain-ai/langchain/pull/18142](https://togithub.com/langchain-ai/langchain/pull/18142)
- infra: create api rst for specific pkg by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18144](https://togithub.com/langchain-ai/langchain/pull/18144)
- core\[patch], langchain\[patch], templates: move openai functions
parser to core by [@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18060](https://togithub.com/langchain-ai/langchain/pull/18060)
- docs \[patch] : fix import to use community path for handler in
fiddler notebook by [@&#8203;bhalder](https://togithub.com/bhalder) in
[https://github.com/langchain-ai/langchain/pull/18140](https://togithub.com/langchain-ai/langchain/pull/18140)
- \[docs] Update doc-string for buffer_as_messages method in
ConversationBufferWindowMemory by
[@&#8203;lgabs](https://togithub.com/lgabs) in
[https://github.com/langchain-ai/langchain/pull/18136](https://togithub.com/langchain-ai/langchain/pull/18136)
- Docs: azuresearch.ipynb (in docs/docs/integrations/vectorstores) --
fixed headings and comments by
[@&#8203;HeidiSteen](https://togithub.com/HeidiSteen) in
[https://github.com/langchain-ai/langchain/pull/18135](https://togithub.com/langchain-ai/langchain/pull/18135)
- infra: api docs build commit dir by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18147](https://togithub.com/langchain-ai/langchain/pull/18147)
- infra: api docs setup action location by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18148](https://togithub.com/langchain-ai/langchain/pull/18148)
- community: Add Laser Embedding Integration by
[@&#8203;dstambler17](https://togithub.com/dstambler17) in
[https://github.com/langchain-ai/langchain/pull/18111](https://togithub.com/langchain-ai/langchain/pull/18111)
- community: make `SET allow_experimental_[engine]_index` configurable in
vectorstores.clickhouse by [@&#8203;bgdsh](https://togithub.com/bgdsh)
in
[https://github.com/langchain-ai/langchain/pull/18107](https://togithub.com/langchain-ai/langchain/pull/18107)
- langchain\[patch], core\[patch], openai\[patch], fireworks\[minor]:
ChatFireworks.with_structured_output by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18078](https://togithub.com/langchain-ai/langchain/pull/18078)
- Langchain vectorstore integration with Kinetica by
[@&#8203;am-kinetica](https://togithub.com/am-kinetica) in
[https://github.com/langchain-ai/langchain/pull/18102](https://togithub.com/langchain-ai/langchain/pull/18102)
- community: vectorstores.kdbai - Added support for when no docs are
present by [@&#8203;jaskirat8](https://togithub.com/jaskirat8) in
[https://github.com/langchain-ai/langchain/pull/18103](https://togithub.com/langchain-ai/langchain/pull/18103)
- Experimental: Add other threshold types to SemanticChunker by
[@&#8203;matthaigh27](https://togithub.com/matthaigh27) in
[https://github.com/langchain-ai/langchain/pull/16807](https://togithub.com/langchain-ai/langchain/pull/16807)
- partners: add Elasticsearch package by
[@&#8203;maxjakob](https://togithub.com/maxjakob) in
[https://github.com/langchain-ai/langchain/pull/17467](https://togithub.com/langchain-ai/langchain/pull/17467)
- add optimization notebook by
[@&#8203;hwchase17](https://togithub.com/hwchase17) in
[https://github.com/langchain-ai/langchain/pull/18155](https://togithub.com/langchain-ai/langchain/pull/18155)
- core\[patch]: support JS message serial namespaces by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18151](https://togithub.com/langchain-ai/langchain/pull/18151)
- mistral\[minor]: Function calling and with_structured_output by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18150](https://togithub.com/langchain-ai/langchain/pull/18150)
- move document compressor base by
[@&#8203;hwchase17](https://togithub.com/hwchase17) in
[https://github.com/langchain-ai/langchain/pull/17910](https://togithub.com/langchain-ai/langchain/pull/17910)
- core\[patch]: Release 0.1.27 by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18159](https://togithub.com/langchain-ai/langchain/pull/18159)
- openai\[patch], mistral\[patch], fireworks\[patch]: releases 0.0.8,
0.0.5, 0.0.2 by [@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18186](https://togithub.com/langchain-ai/langchain/pull/18186)
- Harrison/add structured output by
[@&#8203;hwchase17](https://togithub.com/hwchase17) in
[https://github.com/langchain-ai/langchain/pull/18165](https://togithub.com/langchain-ai/langchain/pull/18165)
- Adding documentation for deprecation of OpenAI functions by
[@&#8203;isahers1](https://togithub.com/isahers1) in
[https://github.com/langchain-ai/langchain/pull/18164](https://togithub.com/langchain-ai/langchain/pull/18164)
- Assign message id in ChatOpenAI by
[@&#8203;nfcampos](https://togithub.com/nfcampos) in
[https://github.com/langchain-ai/langchain/pull/17837](https://togithub.com/langchain-ai/langchain/pull/17837)
- community\[feat]: Adds LLMLingua as a document compressor by
[@&#8203;thehapyone](https://togithub.com/thehapyone) in
[https://github.com/langchain-ai/langchain/pull/17711](https://togithub.com/langchain-ai/langchain/pull/17711)
- airbyte\[patch]: init pkg by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18236](https://togithub.com/langchain-ai/langchain/pull/18236)
- airbyte\[patch]: core version 0.1.5 by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18244](https://togithub.com/langchain-ai/langchain/pull/18244)
- docs: add documentation for Google Cloud database integrations by
[@&#8203;averikitsch](https://togithub.com/averikitsch) in
[https://github.com/langchain-ai/langchain/pull/18225](https://togithub.com/langchain-ai/langchain/pull/18225)
- IBM\[patch]: release 0.1.0 Add possibility to pass ModelInference or
Model object to WatsonxLLM class by
[@&#8203;MateuszOssGit](https://togithub.com/MateuszOssGit) in
[https://github.com/langchain-ai/langchain/pull/18189](https://togithub.com/langchain-ai/langchain/pull/18189)
- infra: api docs folder move by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18223](https://togithub.com/langchain-ai/langchain/pull/18223)
- docs: update documentation for Google Cloud database integrations by
[@&#8203;jackwotherspoon](https://togithub.com/jackwotherspoon) in
[https://github.com/langchain-ai/langchain/pull/18265](https://togithub.com/langchain-ai/langchain/pull/18265)
- fix: BigQueryVectorSearch JSON type unsupported for metadatas by
[@&#8203;ashleyxuu](https://togithub.com/ashleyxuu) in
[https://github.com/langchain-ai/langchain/pull/18234](https://togithub.com/langchain-ai/langchain/pull/18234)
- docs: airbyte github cookbook by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18275](https://togithub.com/langchain-ai/langchain/pull/18275)
- langchain_nvidia_ai_endpoints\[patch]: Invoke callback prior to
yielding token by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18271](https://togithub.com/langchain-ai/langchain/pull/18271)
- Remove check preventing passing non-declared config keys by
[@&#8203;nfcampos](https://togithub.com/nfcampos) in
[https://github.com/langchain-ai/langchain/pull/18276](https://togithub.com/langchain-ai/langchain/pull/18276)
- Add PNG drawer for Runnable.get_graph() by
[@&#8203;nfcampos](https://togithub.com/nfcampos) in
[https://github.com/langchain-ai/langchain/pull/18239](https://togithub.com/langchain-ai/langchain/pull/18239)
- community\[patch]: added latin-1 decoder to gmail search tool by
[@&#8203;Sanjaypranav](https://togithub.com/Sanjaypranav) in
[https://github.com/langchain-ai/langchain/pull/18116](https://togithub.com/langchain-ai/langchain/pull/18116)
- community\[minor]: add hugging_face_model document loader by
[@&#8203;ruanwz](https://togithub.com/ruanwz) in
[https://github.com/langchain-ai/langchain/pull/17323](https://togithub.com/langchain-ai/langchain/pull/17323)
- langchain_anthropic\[patch]: Invoke callback prior to yielding token
by [@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18274](https://togithub.com/langchain-ai/langchain/pull/18274)
- community\[minor]: Add `SQLDatabaseLoader` document loader by
[@&#8203;eyurtsev](https://togithub.com/eyurtsev) in
[https://github.com/langchain-ai/langchain/pull/18281](https://togithub.com/langchain-ai/langchain/pull/18281)
- langchain\[patch]: Invoke callback prior to yielding token by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18282](https://togithub.com/langchain-ai/langchain/pull/18282)
- ci: Update issue template required checks by
[@&#8203;bracesproul](https://togithub.com/bracesproul) in
[https://github.com/langchain-ai/langchain/pull/18283](https://togithub.com/langchain-ai/langchain/pull/18283)
- community\[patch]: Invoke callback prior to yielding token for
volcengine_maas by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18288](https://togithub.com/langchain-ai/langchain/pull/18288)
- ollama\[patch]: delete suffix slash to avoid redirect by
[@&#8203;mackong](https://togithub.com/mackong) in
[https://github.com/langchain-ai/langchain/pull/18260](https://togithub.com/langchain-ai/langchain/pull/18260)
- docs: remove duplicate word in lcel/streaming by
[@&#8203;zhangkai803](https://togithub.com/zhangkai803) in
[https://github.com/langchain-ai/langchain/pull/18249](https://togithub.com/langchain-ai/langchain/pull/18249)
- partner: Astra DB clients identify themselves as coming through
LangChain package by
[@&#8203;hemidactylus](https://togithub.com/hemidactylus) in
[https://github.com/langchain-ai/langchain/pull/18131](https://togithub.com/langchain-ai/langchain/pull/18131)
- community: Fix deprecation version of AstraDB VectorStore by
[@&#8203;cbornet](https://togithub.com/cbornet) in
[https://github.com/langchain-ai/langchain/pull/17991](https://togithub.com/langchain-ai/langchain/pull/17991)
- update extraction use-case docs by
[@&#8203;ccurme](https://togithub.com/ccurme) in
[https://github.com/langchain-ai/langchain/pull/17979](https://togithub.com/langchain-ai/langchain/pull/17979)
- docs: update to the list of partner packages in the list of providers
by [@&#8203;sepiatone](https://togithub.com/sepiatone) in
[https://github.com/langchain-ai/langchain/pull/18252](https://togithub.com/langchain-ai/langchain/pull/18252)
- langchain_groq\[patch]: Invoke callback prior to yielding token by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18272](https://togithub.com/langchain-ai/langchain/pull/18272)
- langchain_openai\[patch]: Invoke callback prior to yielding token by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18269](https://togithub.com/langchain-ai/langchain/pull/18269)
- docs: `google` provider page fixes by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/18290](https://togithub.com/langchain-ai/langchain/pull/18290)
- docs: update Google documentation by
[@&#8203;averikitsch](https://togithub.com/averikitsch) in
[https://github.com/langchain-ai/langchain/pull/18297](https://togithub.com/langchain-ai/langchain/pull/18297)
- \[Evaluation] Config Fix by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchain/pull/18231](https://togithub.com/langchain-ai/langchain/pull/18231)
- experimental\[patch]: Release 0.0.53 by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18330](https://togithub.com/langchain-ai/langchain/pull/18330)
- docs: update func calling doc by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18300](https://togithub.com/langchain-ai/langchain/pull/18300)
- skip airbyte api docs by [@&#8203;efriis](https://togithub.com/efriis)
in
[https://github.com/langchain-ai/langchain/pull/18334](https://togithub.com/langchain-ai/langchain/pull/18334)
- infra: skip ibm api docs by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18335](https://togithub.com/langchain-ai/langchain/pull/18335)
- docs `providers` update by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/18336](https://togithub.com/langchain-ai/langchain/pull/18336)
- community: Add PolygonFinancials Tool by
[@&#8203;virattt](https://togithub.com/virattt) in
[https://github.com/langchain-ai/langchain/pull/18324](https://togithub.com/langchain-ai/langchain/pull/18324)
- Add links to relevant DataCamp code alongs by
[@&#8203;filipsch](https://togithub.com/filipsch) in
[https://github.com/langchain-ai/langchain/pull/18332](https://togithub.com/langchain-ai/langchain/pull/18332)
- docs: remove duplicate quote in AzureOpenAIEmbeddings doc by
[@&#8203;zhangkai803](https://togithub.com/zhangkai803) in
[https://github.com/langchain-ai/langchain/pull/18315](https://togithub.com/langchain-ai/langchain/pull/18315)
- docs: query analysis use case by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/17766](https://togithub.com/langchain-ai/langchain/pull/17766)
- Add optional output_parser param in create_react_agent by
[@&#8203;hasansustcse13](https://togithub.com/hasansustcse13) in
[https://github.com/langchain-ai/langchain/pull/18320](https://togithub.com/langchain-ai/langchain/pull/18320)
- Add support for parameters in neo4j retrieval query by
[@&#8203;tomasonjo](https://togithub.com/tomasonjo) in
[https://github.com/langchain-ai/langchain/pull/18310](https://togithub.com/langchain-ai/langchain/pull/18310)
- core\[patch]: Release 0.1.28 by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18341](https://togithub.com/langchain-ai/langchain/pull/18341)
- ci dirs in wrong order by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18340](https://togithub.com/langchain-ai/langchain/pull/18340)
- Updated partners/ibm README by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18268](https://togithub.com/langchain-ai/langchain/pull/18268)
- community\[patch]: remove llmlingua extended tests by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18344](https://togithub.com/langchain-ai/langchain/pull/18344)
- community\[patch]: Fixing embedchain document mapping by
[@&#8203;kellerkind84](https://togithub.com/kellerkind84) in
[https://github.com/langchain-ai/langchain/pull/18255](https://togithub.com/langchain-ai/langchain/pull/18255)
- Updated partners/fireworks README by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18267](https://togithub.com/langchain-ai/langchain/pull/18267)
- partners: MongoDB Partner Package -- Porting MongoDBAtlasVectorSearch
by [@&#8203;Jibola](https://togithub.com/Jibola) in
[https://github.com/langchain-ai/langchain/pull/17652](https://togithub.com/langchain-ai/langchain/pull/17652)
- infra: mongodb env vars by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18347](https://togithub.com/langchain-ai/langchain/pull/18347)
- mongodb\[patch]: core 0.1.5 dep by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18348](https://togithub.com/langchain-ai/langchain/pull/18348)
- docs: airbyte deps note by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18243](https://togithub.com/langchain-ai/langchain/pull/18243)
- deprecation docstring with lib by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18350](https://togithub.com/langchain-ai/langchain/pull/18350)
- multiple\[patch]: fix deprecation versions by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18349](https://togithub.com/langchain-ai/langchain/pull/18349)
- Fix fireworks bind tools by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchain/pull/18352](https://togithub.com/langchain-ai/langchain/pull/18352)
- infra: tolerate partner package move in ci by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18355](https://togithub.com/langchain-ai/langchain/pull/18355)
- \[Core] Patch: rm dumpd of outputs from runnables/base by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchain/pull/18295](https://togithub.com/langchain-ai/langchain/pull/18295)
- Fix missing labels by
[@&#8203;nfcampos](https://togithub.com/nfcampos) in
[https://github.com/langchain-ai/langchain/pull/18356](https://togithub.com/langchain-ai/langchain/pull/18356)
- text-splitters\[minor], langchain\[minor], community\[patch],
templates, docs: langchain-text-splitters 0.0.1 by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18346](https://togithub.com/langchain-ai/langchain/pull/18346)
- community\[patch]: BaseLoader load method should just delegate to
lazy_load by [@&#8203;eyurtsev](https://togithub.com/eyurtsev) in
[https://github.com/langchain-ai/langchain/pull/18289](https://togithub.com/langchain-ai/langchain/pull/18289)
- langchain\[patch]: langchain-text-splitters dep by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18357](https://togithub.com/langchain-ai/langchain/pull/18357)
- docs: text splitters readme by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18359](https://togithub.com/langchain-ai/langchain/pull/18359)
- templates: use langchain-text-splitters by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18360](https://togithub.com/langchain-ai/langchain/pull/18360)
- infra: update create_api_rst by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18361](https://togithub.com/langchain-ai/langchain/pull/18361)
- docs: update api ref nav by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18362](https://togithub.com/langchain-ai/langchain/pull/18362)
- fireworks\[patch]: remove custom async and stream implementations by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18363](https://togithub.com/langchain-ai/langchain/pull/18363)
- docs\[patch]: Add Neo4j GraphAcademy to tutorials section by
[@&#8203;jacoblee93](https://togithub.com/jacoblee93) in
[https://github.com/langchain-ai/langchain/pull/18353](https://togithub.com/langchain-ai/langchain/pull/18353)
- chore(deps): FastEmbed to latest by
[@&#8203;Anush008](https://togithub.com/Anush008) in
[https://github.com/langchain-ai/langchain/pull/18040](https://togithub.com/langchain-ai/langchain/pull/18040)
- Add dataset version info by
[@&#8203;hinthornw](https://togithub.com/hinthornw) in
[https://github.com/langchain-ai/langchain/pull/18299](https://togithub.com/langchain-ai/langchain/pull/18299)
- partner: Fix fireworks async stream by
[@&#8203;benjibc](https://togithub.com/benjibc) in
[https://github.com/langchain-ai/langchain/pull/18372](https://togithub.com/langchain-ai/langchain/pull/18372)
- community: Change github endpoint in GithubLoader by
[@&#8203;RadhikaBansal97](https://togithub.com/RadhikaBansal97) in
[https://github.com/langchain-ai/langchain/pull/17622](https://togithub.com/langchain-ai/langchain/pull/17622)
- docs: nvidia: provider page update by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/18054](https://togithub.com/langchain-ai/langchain/pull/18054)
- `runnable` module description by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/17966](https://togithub.com/langchain-ai/langchain/pull/17966)
- Add openvino backend support by
[@&#8203;OpenVINO-dev-contest](https://togithub.com/OpenVINO-dev-contest)
in
[https://github.com/langchain-ai/langchain/pull/11591](https://togithub.com/langchain-ai/langchain/pull/11591)
- community: add BigDL-LLM integrations by
[@&#8203;shane-huang](https://togithub.com/shane-huang) in
[https://github.com/langchain-ai/langchain/pull/17953](https://togithub.com/langchain-ai/langchain/pull/17953)
- community: Voyage AI updates default model and batch size by
[@&#8203;thomas0809](https://togithub.com/thomas0809) in
[https://github.com/langchain-ai/langchain/pull/17655](https://togithub.com/langchain-ai/langchain/pull/17655)
- Community: Fix ChatModel for sparkllm Bug. by
[@&#8203;liugddx](https://togithub.com/liugddx) in
[https://github.com/langchain-ai/langchain/pull/18375](https://togithub.com/langchain-ai/langchain/pull/18375)
- templates: Lanceb RAG template by
[@&#8203;akashAD98](https://togithub.com/akashAD98) in
[https://github.com/langchain-ai/langchain/pull/17809](https://togithub.com/langchain-ai/langchain/pull/17809)
- astradb: move to langchain-datastax repo by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18354](https://togithub.com/langchain-ai/langchain/pull/18354)
- docs: `Tutorials` update by
[@&#8203;leo-gan](https://togithub.com/leo-gan) in
[https://github.com/langchain-ai/langchain/pull/18230](https://togithub.com/langchain-ai/langchain/pull/18230)
- fireworks\[patch]: support "any" tool_choice by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/18343](https://togithub.com/langchain-ai/langchain/pull/18343)
- community: Implement lazy_load() for CSVLoader by
[@&#8203;cbornet](https://togithub.com/cbornet) in
[https://github.com/langchain-ai/langchain/pull/18391](https://togithub.com/langchain-ai/langchain/pull/18391)
- Refactor/type ignore fixes by
[@&#8203;DanielChico](https://togithub.com/DanielChico) in
[https://github.com/langchain-ai/langchain/pull/18395](https://togithub.com/langchain-ai/langchain/pull/18395)
- community: fix, better error message in deeplake vectoriser by
[@&#8203;mmajewsk](https://togithub.com/mmajewsk) in
[https://github.com/langchain-ai/langchain/pull/18397](https://togithub.com/langchain-ai/langchain/pull/18397)
- docs: Update Google El Carro for Oracle Workload Documentation. by
[@&#8203;tabbyl21](https://togithub.com/tabbyl21) in
[https://github.com/langchain-ai/langchain/pull/18394](https://togithub.com/langchain-ai/langchain/pull/18394)
- docs: fix typo in milvus.ipynb by
[@&#8203;eltociear](https://togithub.com/eltociear) in
[https://github.com/langchain-ai/langchain/pull/18373](https://togithub.com/langchain-ai/langchain/pull/18373)
- core\[patch]: Invoke callback prior to yielding token by
[@&#8203;williamdevena](https://togithub.com/williamdevena) in
[https://github.com/langchain-ai/langchain/pull/18286](https://togithub.com/langchain-ai/langchain/pull/18286)
- community: Use default load() implementation in doc loaders by
[@&#8203;cbornet](https://togithub.com/cbornet) in
[https://github.com/langchain-ai/langchain/pull/18385](https://togithub.com/langchain-ai/langchain/pull/18385)
- cookbook on gemma integrations by
[@&#8203;lkuligin](https://togithub.com/lkuligin) in
[https://github.com/langchain-ai/langchain/pull/18213](https://togithub.com/langchain-ai/langchain/pull/18213)
- community: removing "response_mode" parameter in llama_index retriever
by [@&#8203;maximeperrindev](https://togithub.com/maximeperrindev) in
[https://github.com/langchain-ai/langchain/pull/18180](https://togithub.com/langchain-ai/langchain/pull/18180)
- community: fix RecursiveUrlLoader metadata_extractor return type by
[@&#8203;hemslo](https://togithub.com/hemslo) in
[https://github.com/langchain-ai/langchain/pull/18193](https://togithub.com/langchain-ai/langchain/pull/18193)
- docs: Fix typo in baidu_qianfan_endpoint.ipynb &
baidu_qianfan_endpoint.ipynb by
[@&#8203;laoazhang](https://togithub.com/laoazhang) in
[https://github.com/langchain-ai/langchain/pull/18176](https://togithub.com/langchain-ai/langchain/pull/18176)
- docs: update pinecone README to use PineconeVectorStore by
[@&#8203;galtay](https://togithub.com/galtay) in
[https://github.com/langchain-ai/langchain/pull/18170](https://togithub.com/langchain-ai/langchain/pull/18170)
- community\[patch]: chat message history mypy fix by
[@&#8203;Lord-Haji](https://togithub.com/Lord-Haji) in
[https://github.com/langchain-ai/langchain/pull/18250](https://togithub.com/langchain-ai/langchain/pull/18250)
- ollama\[patch]: don't try to parse json in case of errored response by
[@&#8203;StrikerRUS](https://togithub.com/StrikerRUS) in
[https://github.com/langchain-ai/langchain/pull/18317](https://togithub.com/langchain-ai/langchain/pull/18317)
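The entry above is the headline change of this PR: when Ollama sits behind a proxy (e.g. with basic auth), a non-2xx error page is not JSON, and parsing it blindly masked the real failure behind a `JSONDecodeError`. The pattern of the fix is to surface the HTTP error before attempting to decode the body. A minimal stdlib sketch of that ordering (the helper name and error type are illustrative, not the actual langchain-community code):

```python
import json


def parse_api_response(status_code: int, body: str) -> dict:
    """Surface HTTP errors before attempting to parse the body as JSON."""
    if status_code >= 400:
        # e.g. a proxy's 401 page: report status and body instead of
        # letting json.loads raise an unrelated JSONDecodeError
        raise RuntimeError(f"HTTP {status_code}: {body.strip()[:200]}")
    return json.loads(body)


# Success path: body is valid JSON, parsed normally
ok = parse_api_response(200, '{"response": "hi"}')

# Error path: the HTML error page is reported as an HTTP error,
# not swallowed by a JSON parsing failure
try:
    parse_api_response(401, "<html>401 Authorization Required</html>")
except RuntimeError as err:
    print(err)
```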
- community: Fix MongoDBAtlasVectorSearch max_marginal_relevance_search
by [@&#8203;certified-dodo](https://togithub.com/certified-dodo) in
[https://github.com/langchain-ai/langchain/pull/17971](https://togithub.com/langchain-ai/langchain/pull/17971)
- Fix: the syntax error for Redis generated query by
[@&#8203;sarahberenji](https://togithub.com/sarahberenji) in
[https://github.com/langchain-ai/langchain/pull/17717](https://togithub.com/langchain-ai/langchain/pull/17717)
- community: add maritalk chat by
[@&#8203;rodrigo-f-nogueira](https://togithub.com/rodrigo-f-nogueira) in
[https://github.com/langchain-ai/langchain/pull/17675](https://togithub.com/langchain-ai/langchain/pull/17675)
- Community/Partners: Add support for Perplexity AI by
[@&#8203;atherfawaz](https://togithub.com/atherfawaz) in
[https://github.com/langchain-ai/langchain/pull/17024](https://togithub.com/langchain-ai/langchain/pull/17024)
- community/langchain/docs: Gremlin Graph Store and QA Chain by
[@&#8203;piizei](https://togithub.com/piizei) in
[https://github.com/langchain-ai/langchain/pull/17683](https://togithub.com/langchain-ai/langchain/pull/17683)
- Correct WebBaseLoader URL: docs:
python.langchain.com/docs/get_started/quickstartQuickstart by
[@&#8203;rmeinzer-copado](https://togithub.com/rmeinzer-copado) in
[https://github.com/langchain-ai/langchain/pull/17981](https://togithub.com/langchain-ai/langchain/pull/17981)
- community\[patch]: Make cohere_api_key a SecretStr by
[@&#8203;arunsathiya](https://togithub.com/arunsathiya) in
[https://github.com/langchain-ai/langchain/pull/12188](https://togithub.com/langchain-ai/langchain/pull/12188)
- docs\[chatopenai]: update module import path and calling method by
[@&#8203;sepiatone](https://togithub.com/sepiatone) in
[https://github.com/langchain-ai/langchain/pull/18169](https://togithub.com/langchain-ai/langchain/pull/18169)
- Add an option for indexed generic label when import neo4j graph
documents by [@&#8203;tomasonjo](https://togithub.com/tomasonjo) in
[https://github.com/langchain-ai/langchain/pull/18122](https://togithub.com/langchain-ai/langchain/pull/18122)
- docs: add llamafile info to 'Local LLMs' guides by
[@&#8203;k8si](https://togithub.com/k8si) in
[https://github.com/langchain-ai/langchain/pull/18049](https://togithub.com/langchain-ai/langchain/pull/18049)
- langchain-mongodb: Set delete_many only if count_documents is not 0 by
[@&#8203;Jibola](https://togithub.com/Jibola) in
[https://github.com/langchain-ai/langchain/pull/18402](https://togithub.com/langchain-ai/langchain/pull/18402)
- langchain_ibm\[patch] update docstring, dependencies, tests by
[@&#8203;MateuszOssGit](https://togithub.com/MateuszOssGit) in
[https://github.com/langchain-ai/langchain/pull/18386](https://togithub.com/langchain-ai/langchain/pull/18386)
- docs: update Azure OpenAI to v1 and langchain API to 0.1 by
[@&#8203;mspronesti](https://togithub.com/mspronesti) in
[https://github.com/langchain-ai/langchain/pull/18005](https://togithub.com/langchain-ai/langchain/pull/18005)
- community: llamafile embeddings support by
[@&#8203;k8si](https://togithub.com/k8si) in
[https://github.com/langchain-ai/langchain/pull/17976](https://togithub.com/langchain-ai/langchain/pull/17976)
- templates: remove gemini_function_agent unused file by
[@&#8203;Sanjaypranav](https://togithub.com/Sanjaypranav) in
[https://github.com/langchain-ai/langchain/pull/18112](https://togithub.com/langchain-ai/langchain/pull/18112)
- package: community/langchain_community/vectorstores/chroma.py by
[@&#8203;WhitePegasis](https://togithub.com/WhitePegasis) in
[https://github.com/langchain-ai/langchain/pull/17964](https://togithub.com/langchain-ai/langchain/pull/17964)
- docs: stop copying source by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18404](https://togithub.com/langchain-ai/langchain/pull/18404)
- docs: Fix spelling typos in apache_kafka notebook by
[@&#8203;standby24x7](https://togithub.com/standby24x7) in
[https://github.com/langchain-ai/langchain/pull/17998](https://togithub.com/langchain-ai/langchain/pull/17998)
- infra: update to pathspec for 'git grep' in lint check by
[@&#8203;sepiatone](https://togithub.com/sepiatone) in
[https://github.com/langchain-ai/langchain/pull/18178](https://togithub.com/langchain-ai/langchain/pull/18178)
- community\[patch]: release 0.0.25 by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18408](https://togithub.com/langchain-ai/langchain/pull/18408)
- langchain\[patch]: release 0.1.10 by
[@&#8203;efriis](https://togithub.com/efriis) in
[https://github.com/langchain-ai/langchain/pull/18410](https://togithub.com/langchain-ai/langchain/pull/18410)

#### New Contributors

- [@&#8203;benjibc](https://togithub.com/benjibc) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/17694](https://togithub.com/langchain-ai/langchain/pull/17694)
- [@&#8203;DannyMac180](https://togithub.com/DannyMac180) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/18047](https://togithub.com/langchain-ai/langchain/pull/18047)
- [@&#8203;2jimoo](https://togithub.com/2jimoo) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/17320](https://togithub.com/langchain-ai/langchain/pull/17320)
- [@&#8203;maximeperrindev](https://togithub.com/maximeperrindev) made
their first contribution in
[https://github.com/langchain-ai/langchain/pull/17870](https://togithub.com/langchain-ai/langchain/pull/17870)
- [@&#8203;bhalder](https://togithub.com/bhalder) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/17708](https://togithub.com/langchain-ai/langchain/pull/17708)
- [@&#8203;dokato](https://togithub.com/dokato) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/17810](https://togithub.com/langchain-ai/langchain/pull/17810)
- [@&#8203;rongchenlin](https://togithub.com/rongchenlin) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/18128](https://togithub.com/langchain-ai/langchain/pull/18128)
- [@&#8203;simonschmidt](https://togithub.com/simonschmidt) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/18129](https://togithub.com/langchain-ai/langchain/pull/18129)
- [@&#8203;lgabs](https://togithub.com/lgabs) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/18136](https://togithub.com/langchain-ai/langchain/pull/18136)
- [@&#8203;HeidiSteen](https://togithub.com/HeidiSteen) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/18135](https://togithub.com/langchain-ai/langchain/pull/18135)
- [@&#8203;dstambler17](https://togithub.com/dstambler17) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/18111](https://togithub.com/langchain-ai/langchain/pull/18111)
- [@&#8203;bgdsh](https://togithub.com/bgdsh) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/18107](https://togithub.com/langchain-ai/langchain/pull/18107)
- [@&#8203;am-kinetica](https://togithub.com/am-kinetica) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/18102](https://togithub.com/langchain-ai/langchain/pull/18102)
- [@&#8203;jaskirat8](https://togithub.com/jaskirat8) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/18103](https://togithub.com/langchain-ai/langchain/pull/18103)
- [@&#8203;matthaigh27](https://togithub.com/matthaigh27) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/16807](https://togithub.com/langchain-ai/langchain/pull/16807)
- [@&#8203;isahers1](https://togithub.com/isahers1) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/18164](https://togithub.com/langchain-ai/langchain/pull/18164)
- [@&#8203;thehapyone](https://togithub.com/thehapyone) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/17711](https://togithub.com/langchain-ai/langchain/pull/17711)
- [@&#8203;jackwotherspoon](https://togithub.com/jackwotherspoon) made
their first contribution in
[https://github.com/langchain-ai/langchain/pull/18265](https://togithub.com/langchain-ai/langchain/pull/18265)
- [@&#8203;williamdevena](https://togithub.com/williamdevena) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/18271](https://togithub.com/langchain-ai/langchain/pull/18271)
- [@&#8203;Sanjaypranav](https://togithub.com/Sanjaypranav) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/18116](https://togithub.com/langchain-ai/langchain/pull/18116)
- [@&#8203;zhangkai803](https://togithub.com/zhangkai803) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/18249](https://togithub.com/langchain-ai/langchain/pull/18249)
- [@&#8203;filipsch](https://togithub.com/filipsch) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/18332](https://togithub.com/langchain-ai/langchain/pull/18332)
- [@&#8203;kellerkind84](https://togithub.com/kellerkind84) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/18255](https://togithub.com/langchain-ai/langchain/pull/18255)
- [@&#8203;Jibola](https://togithub.com/Jibola) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/17652](https://togithub.com/langchain-ai/langchain/pull/17652)
- [@&#8203;RadhikaBansal97](https://togithub.com/RadhikaBansal97) made
their first contribution in
[https://github.com/langchain-ai/langchain/pull/17622](https://togithub.com/langchain-ai/langchain/pull/17622)
-
[@&#8203;OpenVINO-dev-contest](https://togithub.com/OpenVINO-dev-contest)
made their first contribution in
[https://github.com/langchain-ai/langchain/pull/11591](https://togithub.com/langchain-ai/langchain/pull/11591)
- [@&#8203;shane-huang](https://togithub.com/shane-huang) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/17953](https://togithub.com/langchain-ai/langchain/pull/17953)
- [@&#8203;akashAD98](https://togithub.com/akashAD98) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/17809](https://togithub.com/langchain-ai/langchain/pull/17809)
- [@&#8203;DanielChico](https://togithub.com/DanielChico) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/18395](https://togithub.com/langchain-ai/langchain/pull/18395)
- [@&#8203;mmajewsk](https://togithub.com/mmajewsk) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/18397](https://togithub.com/langchain-ai/langchain/pull/18397)
- [@&#8203;tabbyl21](https://togithub.com/tabbyl21) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/18394](https://togithub.com/langchain-ai/langchain/pull/18394)
- [@&#8203;hemslo](https://togithub.com/hemslo) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/18193](https://togithub.com/langchain-ai/langchain/pull/18193)
- [@&#8203;StrikerRUS](https://togithub.com/StrikerRUS) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/18317](https://togithub.com/langchain-ai/langchain/pull/18317)
- [@&#8203;certified-dodo](https://togithub.com/certified-dodo) made
their first contribution in
[https://github.com/langchain-ai/langchain/pull/17971](https://togithub.com/langchain-ai/langchain/pull/17971)
- [@&#8203;sarahberenji](https://togithub.com/sarahberenji) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/17717](https://togithub.com/langchain-ai/langchain/pull/17717)
- [@&#8203;rodrigo-f-nogueira](https://togithub.com/rodrigo-f-nogueira)
made their first contribution in
[https://github.com/langchain-ai/langchain/pull/17675](https://togithub.com/langchain-ai/langchain/pull/17675)
- [@&#8203;atherfawaz](https://togithub.com/atherfawaz) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/17024](https://togithub.com/langchain-ai/langchain/pull/17024)
- [@&#8203;piizei](https://togithub.com/piizei) made their first
contribution in
[https://github.com/langchain-ai/langchain/pull/17683](https://togithub.com/langchain-ai/langchain/pull/17683)
- [@&#8203;rmeinzer-copado](https://togithub.com/rmeinzer-copado) made
their first contribution in
[https://github.com/langchain-ai/langchain/pull/17981](https://togithub.com/langchain-ai/langchain/pull/17981)
- [@&#8203;arunsathiya](https://togithub.com/arunsathiya) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/12188](https://togithub.com/langchain-ai/langchain/pull/12188)
- [@&#8203;WhitePegasis](https://togithub.com/WhitePegasis) made their
first contribution in
[https://github.com/langchain-ai/langchain/pull/17964](https://togithub.com/langchain-ai/langchain/pull/17964)

**Full Changelog**:
https://github.com/langchain-ai/langchain/compare/v0.1.9...v0.1.10

###
[`v0.1.9`](https://togithub.com/langchain-ai/langchain/releases/tag/v0.1.9)

[Compare
Source](https://togithub.com/langchain-ai/langchain/compare/v0.1.8...v0.1.9)

#### What's Changed

- experimental\[patch]: Release 0.0.52 by
[@&#8203;baskaryan](https://togithub.com/baskaryan) in
[https://github.com/langchain-ai/langchain/pull/17763](https://togithub.c

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "" (UTC), Automerge - At any time (no
schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Never, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Mend
Renovate](https://www.mend.io/free-developer-tools/renovate/). View
repository job log
[here](https://developer.mend.io/github/GoogleCloudPlatform/genai-databases-retrieval-app).


Co-authored-by: Yuan <45984206+Yuan325@users.noreply.github.com>
gkorland pushed a commit to FalkorDB/langchain that referenced this pull request Mar 30, 2024
…langchain-ai#18317)

Related issue: langchain-ai#13896.

When Ollama is behind a proxy, proxy error responses cannot be
viewed. You can't even check the response code.

For example, if your Ollama instance has basic access authentication and
the credentials aren't passed, a `JSONDecodeError` will mask the true error response.
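The failure mode is easy to reproduce in isolation: a proxy such as nginx answers a 401 with an HTML body, and feeding that body to a JSON parser (which is effectively what `response.json()` does) raises before the status code is ever reported. A minimal illustration (standalone sketch, not the library code):

```python
import json

# A proxy (e.g. nginx) answering 401 returns an HTML error page, not JSON.
html_body = "<html><head><title>401 Authorization Required</title></head></html>"

# The old error path effectively did json.loads(response.text) via
# response.json(); on an HTML body that raises JSONDecodeError,
# and the real 401 never reaches the user.
try:
    json.loads(html_body)
    masked = "parsed ok"
except json.JSONDecodeError as e:
    masked = f"JSONDecodeError: {e}"

print(masked)  # the 401 status is nowhere in this message
```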

<details>
<summary><b>Log now:</b></summary>

```
{
	"name": "JSONDecodeError",
	"message": "Expecting value: line 1 column 1 (char 0)",
	"stack": "---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/requests/models.py:971, in Response.json(self, **kwargs)
    970 try:
--> 971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError

File /opt/miniforge3/envs/.gpt/lib/python3.10/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    343 if (cls is None and object_hook is None and
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346     return _default_decoder.decode(s)
    347 if cls is None:

File /opt/miniforge3/envs/.gpt/lib/python3.10/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
    333 \"\"\"Return the Python representation of ``s`` (a ``str`` instance
    334 containing a JSON document).
    335 
    336 \"\"\"
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338 end = _w(s, end).end()

File /opt/miniforge3/envs/.gpt/lib/python3.10/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
    354 except StopIteration as err:
--> 355     raise JSONDecodeError(\"Expecting value\", s, err.value) from None
    356 return obj, end

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

JSONDecodeError                           Traceback (most recent call last)
Cell In[3], line 1
----> 1 print(translate_func().invoke('text'))

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/runnables/base.py:2053, in RunnableSequence.invoke(self, input, config)
   2051 try:
   2052     for i, step in enumerate(self.steps):
-> 2053         input = step.invoke(
   2054             input,
   2055             # mark each step as a child run
   2056             patch_config(
   2057                 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
   2058             ),
   2059         )
   2060 # finish the root run
   2061 except BaseException as e:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:165, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    154 def invoke(
    155     self,
    156     input: LanguageModelInput,
   (...)
    160     **kwargs: Any,
    161 ) -> BaseMessage:
    162     config = ensure_config(config)
    163     return cast(
    164         ChatGeneration,
--> 165         self.generate_prompt(
    166             [self._convert_input(input)],
    167             stop=stop,
    168             callbacks=config.get(\"callbacks\"),
    169             tags=config.get(\"tags\"),
    170             metadata=config.get(\"metadata\"),
    171             run_name=config.get(\"run_name\"),
    172             **kwargs,
    173         ).generations[0][0],
    174     ).message

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:543, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    535 def generate_prompt(
    536     self,
    537     prompts: List[PromptValue],
   (...)
    540     **kwargs: Any,
    541 ) -> LLMResult:
    542     prompt_messages = [p.to_messages() for p in prompts]
--> 543     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:407, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    405         if run_managers:
    406             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 407         raise e
    408 flattened_outputs = [
    409     LLMResult(generations=[res.generations], llm_output=res.llm_output)
    410     for res in results
    411 ]
    412 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:397, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    394 for i, m in enumerate(messages):
    395     try:
    396         results.append(
--> 397             self._generate_with_cache(
    398                 m,
    399                 stop=stop,
    400                 run_manager=run_managers[i] if run_managers else None,
    401                 **kwargs,
    402             )
    403         )
    404     except BaseException as e:
    405         if run_managers:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:576, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    572     raise ValueError(
    573         \"Asked to cache, but no cache found at `langchain.cache`.\"
    574     )
    575 if new_arg_supported:
--> 576     return self._generate(
    577         messages, stop=stop, run_manager=run_manager, **kwargs
    578     )
    579 else:
    580     return self._generate(messages, stop=stop, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:250, in ChatOllama._generate(self, messages, stop, run_manager, **kwargs)
    226 def _generate(
    227     self,
    228     messages: List[BaseMessage],
   (...)
    231     **kwargs: Any,
    232 ) -> ChatResult:
    233     \"\"\"Call out to Ollama's generate endpoint.
    234 
    235     Args:
   (...)
    247             ])
    248     \"\"\"
--> 250     final_chunk = self._chat_stream_with_aggregation(
    251         messages,
    252         stop=stop,
    253         run_manager=run_manager,
    254         verbose=self.verbose,
    255         **kwargs,
    256     )
    257     chat_generation = ChatGeneration(
    258         message=AIMessage(content=final_chunk.text),
    259         generation_info=final_chunk.generation_info,
    260     )
    261     return ChatResult(generations=[chat_generation])

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:183, in ChatOllama._chat_stream_with_aggregation(self, messages, stop, run_manager, verbose, **kwargs)
    174 def _chat_stream_with_aggregation(
    175     self,
    176     messages: List[BaseMessage],
   (...)
    180     **kwargs: Any,
    181 ) -> ChatGenerationChunk:
    182     final_chunk: Optional[ChatGenerationChunk] = None
--> 183     for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
    184         if stream_resp:
    185             chunk = _chat_stream_response_to_chat_generation_chunk(stream_resp)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:156, in ChatOllama._create_chat_stream(self, messages, stop, **kwargs)
    147 def _create_chat_stream(
    148     self,
    149     messages: List[BaseMessage],
    150     stop: Optional[List[str]] = None,
    151     **kwargs: Any,
    152 ) -> Iterator[str]:
    153     payload = {
    154         \"messages\": self._convert_messages_to_ollama_messages(messages),
    155     }
--> 156     yield from self._create_stream(
    157         payload=payload, stop=stop, api_url=f\"{self.base_url}/api/chat/\", **kwargs
    158     )

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/llms/ollama.py:234, in _OllamaCommon._create_stream(self, api_url, payload, stop, **kwargs)
    228         raise OllamaEndpointNotFoundError(
    229             \"Ollama call failed with status code 404. \"
    230             \"Maybe your model is not found \"
    231             f\"and you should pull the model with `ollama pull {self.model}`.\"
    232         )
    233     else:
--> 234         optional_detail = response.json().get(\"error\")
    235         raise ValueError(
    236             f\"Ollama call failed with status code {response.status_code}.\"
    237             f\" Details: {optional_detail}\"
    238         )
    239 return response.iter_lines(decode_unicode=True)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/requests/models.py:975, in Response.json(self, **kwargs)
    971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975     raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)

JSONDecodeError: Expecting value: line 1 column 1 (char 0)"
}
```

</details>


<details>

<summary><b>Log after a fix:</b></summary>

```
{
	"name": "ValueError",
	"message": "Ollama call failed with status code 401. Details: <html>\r
<head><title>401 Authorization Required</title></head>\r
<body>\r
<center><h1>401 Authorization Required</h1></center>\r
<hr><center>nginx/1.18.0 (Ubuntu)</center>\r
</body>\r
</html>\r
",
	"stack": "---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[2], line 1
----> 1 print(translate_func().invoke('text'))

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/runnables/base.py:2053, in RunnableSequence.invoke(self, input, config)
   2051 try:
   2052     for i, step in enumerate(self.steps):
-> 2053         input = step.invoke(
   2054             input,
   2055             # mark each step as a child run
   2056             patch_config(
   2057                 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
   2058             ),
   2059         )
   2060 # finish the root run
   2061 except BaseException as e:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:165, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    154 def invoke(
    155     self,
    156     input: LanguageModelInput,
   (...)
    160     **kwargs: Any,
    161 ) -> BaseMessage:
    162     config = ensure_config(config)
    163     return cast(
    164         ChatGeneration,
--> 165         self.generate_prompt(
    166             [self._convert_input(input)],
    167             stop=stop,
    168             callbacks=config.get(\"callbacks\"),
    169             tags=config.get(\"tags\"),
    170             metadata=config.get(\"metadata\"),
    171             run_name=config.get(\"run_name\"),
    172             **kwargs,
    173         ).generations[0][0],
    174     ).message

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:543, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    535 def generate_prompt(
    536     self,
    537     prompts: List[PromptValue],
   (...)
    540     **kwargs: Any,
    541 ) -> LLMResult:
    542     prompt_messages = [p.to_messages() for p in prompts]
--> 543     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:407, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    405         if run_managers:
    406             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 407         raise e
    408 flattened_outputs = [
    409     LLMResult(generations=[res.generations], llm_output=res.llm_output)
    410     for res in results
    411 ]
    412 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:397, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    394 for i, m in enumerate(messages):
    395     try:
    396         results.append(
--> 397             self._generate_with_cache(
    398                 m,
    399                 stop=stop,
    400                 run_manager=run_managers[i] if run_managers else None,
    401                 **kwargs,
    402             )
    403         )
    404     except BaseException as e:
    405         if run_managers:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:576, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    572     raise ValueError(
    573         \"Asked to cache, but no cache found at `langchain.cache`.\"
    574     )
    575 if new_arg_supported:
--> 576     return self._generate(
    577         messages, stop=stop, run_manager=run_manager, **kwargs
    578     )
    579 else:
    580     return self._generate(messages, stop=stop, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:250, in ChatOllama._generate(self, messages, stop, run_manager, **kwargs)
    226 def _generate(
    227     self,
    228     messages: List[BaseMessage],
   (...)
    231     **kwargs: Any,
    232 ) -> ChatResult:
    233     \"\"\"Call out to Ollama's generate endpoint.
    234 
    235     Args:
   (...)
    247             ])
    248     \"\"\"
--> 250     final_chunk = self._chat_stream_with_aggregation(
    251         messages,
    252         stop=stop,
    253         run_manager=run_manager,
    254         verbose=self.verbose,
    255         **kwargs,
    256     )
    257     chat_generation = ChatGeneration(
    258         message=AIMessage(content=final_chunk.text),
    259         generation_info=final_chunk.generation_info,
    260     )
    261     return ChatResult(generations=[chat_generation])

File /storage/gpt-project/Repos/repo_nikita/gpt_lib/langchain/ollama.py:328, in ChatOllamaCustom._chat_stream_with_aggregation(self, messages, stop, run_manager, verbose, **kwargs)
    319 def _chat_stream_with_aggregation(
    320     self,
    321     messages: List[BaseMessage],
   (...)
    325     **kwargs: Any,
    326 ) -> ChatGenerationChunk:
    327     final_chunk: Optional[ChatGenerationChunk] = None
--> 328     for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
    329         if stream_resp:
    330             chunk = _chat_stream_response_to_chat_generation_chunk(stream_resp)

File /storage/gpt-project/Repos/repo_nikita/gpt_lib/langchain/ollama.py:301, in ChatOllamaCustom._create_chat_stream(self, messages, stop, **kwargs)
    292 def _create_chat_stream(
    293     self,
    294     messages: List[BaseMessage],
    295     stop: Optional[List[str]] = None,
    296     **kwargs: Any,
    297 ) -> Iterator[str]:
    298     payload = {
    299         \"messages\": self._convert_messages_to_ollama_messages(messages),
    300     }
--> 301     yield from self._create_stream(
    302         payload=payload, stop=stop, api_url=f\"{self.base_url}/api/chat\", **kwargs
    303     )

File /storage/gpt-project/Repos/repo_nikita/gpt_lib/langchain/ollama.py:134, in _OllamaCommonCustom._create_stream(self, api_url, payload, stop, **kwargs)
    132     else:
    133         optional_detail = response.text
--> 134         raise ValueError(
    135             f\"Ollama call failed with status code {response.status_code}.\"
    136             f\" Details: {optional_detail}\"
    137         )
    138 return response.iter_lines(decode_unicode=True)

ValueError: Ollama call failed with status code 401. Details: <html>\r
<head><title>401 Authorization Required</title></head>\r
<body>\r
<center><h1>401 Authorization Required</h1></center>\r
<hr><center>nginx/1.18.0 (Ubuntu)</center>\r
</body>\r
</html>\r
"
}
```

</details>

The same is true for timeout errors, or when you simply mistype the
`base_url` arg and get a response from some other service, for instance.

Real Ollama errors are still clearly readable:

```
ValueError: Ollama call failed with status code 400. Details: {"error":"invalid options: unknown_option"}
```
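The shape of the fix can be sketched as follows (a minimal sketch with assumed names — `FakeResponse` and `raise_for_ollama_error` are stand-ins for illustration, not the actual library code): report the response body verbatim instead of parsing it as JSON, so non-JSON proxy errors stay readable while JSON error bodies from Ollama itself are still shown as-is.

```python
class FakeResponse:
    """Stand-in for requests.Response, just for this sketch."""
    def __init__(self, status_code: int, text: str):
        self.status_code = status_code
        self.text = text

def raise_for_ollama_error(response: FakeResponse) -> None:
    if response.status_code != 200:
        # was: optional_detail = response.json().get("error")
        # now the raw body is included, so HTML proxy errors survive
        optional_detail = response.text
        raise ValueError(
            f"Ollama call failed with status code {response.status_code}."
            f" Details: {optional_detail}"
        )

try:
    raise_for_ollama_error(
        FakeResponse(401, "<html>401 Authorization Required</html>")
    )
except ValueError as e:
    print(e)
```

With this shape, the 401 from the proxy (and its HTML body) appears directly in the `ValueError` message, while a real Ollama JSON error body like `{"error":"invalid options: unknown_option"}` is printed unchanged.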

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>