
Issue Search Results · repo:run-llama/llama_index language:Python

5k results (77 ms)


Question Validation - [x] I have searched both the documentation and discord for an answer. Question I have deployed my own local model API; how do I use it in LlamaIndex? 1. Is there any convenient ...
question
  • oldunclez
  • 1
  • Opened 1 hour ago
  • #18115
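For questions like the one above, one common route (an assumption here, not necessarily the reporter's setup) is to expose the local model behind an OpenAI-compatible endpoint, which LlamaIndex can then consume through its OpenAI-compatible LLM wrappers. As a dependency-free illustration of that protocol, this sketch only builds the request an OpenAI-style `/v1/completions` endpoint would accept; the base URL and model name are placeholders.

```python
import json

def build_completion_request(base_url: str, model: str, prompt: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for an OpenAI-compatible /v1/completions call.

    `base_url` and `model` are placeholders for a hypothetical local deployment.
    """
    url = base_url.rstrip("/") + "/v1/completions"
    body = json.dumps({"model": model, "prompt": prompt, "max_tokens": 64}).encode()
    return url, body

# Example: point at a locally served model (placeholder address and name).
url, body = build_completion_request("http://localhost:8000", "my-local-model", "Hello")
```

With llama-index installed, such an endpoint is typically wired up through a wrapper like `OpenAILike(model=..., api_base=...)` from `llama-index-llms-openai-like`; check the current docs for the exact import path and parameters.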

Bug Description While using AzureOpenAIMultiModal with VectorStoreIndex as index.as_chat_engine(llm=mm_llm) or index.as_query_engine(llm=mm_llm), it raises an AssertionError from /llama_index/core/indices/multi_modal/base.py ...
bug
triage
  • amanchaudhary-95
  • 1
  • Opened 14 hours ago
  • #18111

Bug Description I use a LlamaIndex workflow to build an agent. I set LLM objects, vector store objects, and retrieved nodes in the context. We initialize the workflow globally and run it with different contexts for ...
bug
triage
  • dhanaabhirajk
  • 5
  • Opened 22 hours ago
  • #18107

Bug Description When using QdrantVectorStore with an IngestionPipeline I got this error: 'NoneType' object has no attribute 'collection_exists'. Code Snippet vector_store = QdrantVectorStore(client=qdrant_service.client, ...
bug
triage
  • Sanbornzhang
  • 3
  • Opened yesterday
  • #18105
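The "'NoneType' object has no attribute 'collection_exists'" error above suggests the client handed to QdrantVectorStore was None at construction time. A dependency-free sketch of a fail-fast guard follows; the service class and helper are hypothetical illustrations, not llama-index or qdrant-client API.

```python
from typing import Optional

class QdrantServiceStub:
    """Stand-in for a service object whose `.client` may not be initialized yet."""
    def __init__(self, client: Optional[object] = None):
        self.client = client

def require_client(service: QdrantServiceStub) -> object:
    """Raise a clear error up front instead of a later AttributeError such as
    "'NoneType' object has no attribute 'collection_exists'"."""
    if service.client is None:
        raise RuntimeError(
            "Qdrant client is not initialized; connect before building the vector store"
        )
    return service.client

# A client that was never connected triggers the guard immediately.
try:
    require_client(QdrantServiceStub(client=None))
except RuntimeError as e:
    msg = str(e)
```

Validating the client where it is created makes the failure point obvious, rather than letting None flow into the vector store and surface deep inside the ingestion pipeline.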

Question Validation - [x] I have searched both the documentation and discord for an answer. Question Hi, to use a custom model, I am using the stream_complete function written as follows. https://docs.llamaindex.ai/en/stable/api_reference/llms/custom_llm/ ...
question
  • jiihye
  • 3
  • Opened yesterday
  • #18104
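For context on the question above: a streaming custom LLM in LlamaIndex yields responses carrying both the cumulative text so far and the newest delta. This dependency-free sketch mimics that shape; the dataclass and stub generator are illustrative stand-ins, not the library's actual CompletionResponse or CustomLLM classes.

```python
from dataclasses import dataclass
from typing import Iterator, List

@dataclass
class CompletionChunk:
    """Mimics the streaming response shape: `text` is the cumulative output
    so far, `delta` is the piece generated in this step."""
    text: str
    delta: str

def stream_complete_stub(pieces: List[str]) -> Iterator[CompletionChunk]:
    # Yield one chunk per generated piece, accumulating the full text as we go.
    text = ""
    for piece in pieces:
        text += piece
        yield CompletionChunk(text=text, delta=piece)

chunks = list(stream_complete_stub(["Hel", "lo"]))
```

Callers can then consume either field: print `delta` for token-by-token display, or take the final chunk's `text` for the complete answer.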

Bug Description Loading data using ElasticsearchReader will result in redundancy. Version 0.12.5 Steps to Reproduce Loading data using ElasticsearchReader will result in redundancy, because the ElasticsearchReader ...
bug
triage
  • minmie
  • 1
  • Opened yesterday
  • #18102

Feature Description Because llama-index import paths follow the same format as the library name, maybe we can change llama-index-integrations/llms/llama-index-llms-huggingface-api to llama-index-integrations/llms/llama-index-llms-huggingface-inference-api. ...
enhancement
triage
  • julurisaichandu
  • Opened yesterday
  • #18101

Question Validation - [x] I have searched both the documentation and discord for an answer. Question When using Ollama as the LLM with function calling, the astream_chat function will not ...
question
  • H4lo
  • 1
  • Opened yesterday
  • #18099

Bug Description The code in bold will fix the issue: _client: httpx.AsyncClient async with httpx.AsyncClient( headers=_headers, base_url=self._base_url, params=params, **follow_redirects=True** ...
bug
triage
  • maddogwithrabies
  • 2
  • Opened yesterday
  • #18098
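The fix described above passes follow_redirects=True when constructing httpx.AsyncClient, so redirected responses are followed automatically. To keep this sketch dependency-free, the constructor arguments are assembled as a plain dict using the same keyword names httpx accepts; the values are placeholders.

```python
def async_client_kwargs(headers: dict, base_url: str, params: dict) -> dict:
    """Keyword arguments for httpx.AsyncClient, including the reported fix."""
    return {
        "headers": headers,
        "base_url": base_url,
        "params": params,
        "follow_redirects": True,  # the fix proposed in the issue
    }

# With httpx installed this would be: httpx.AsyncClient(**kwargs)
kwargs = async_client_kwargs({}, "https://api.example.com", {})
```

Without follow_redirects=True, httpx returns 3xx responses as-is rather than following them, which is a common source of surprising failures against APIs that redirect.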

Bug Description There's an inconsistency in how the stream_complete method returns data between different LLM provider classes. Specifically, there's a difference in how streaming chunks ...
bug
triage
  • aman-gupta-doc
  • 2
  • Opened yesterday
  • #18096
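The inconsistency described above is typically between providers that stream incremental deltas and providers that stream the cumulative text so far. A small normalizing helper (hypothetical, not library code) can convert a cumulative stream into deltas so downstream code sees one chunk shape regardless of provider:

```python
from typing import Iterable, Iterator

def cumulative_to_deltas(chunks: Iterable[str]) -> Iterator[str]:
    """Normalize a cumulative stream ('He', 'Hel', 'Hello') into per-chunk
    deltas ('He', 'l', 'lo'), assuming each chunk extends the previous one."""
    seen = ""
    for chunk in chunks:
        yield chunk[len(seen):]  # emit only the newly appended suffix
        seen = chunk

deltas = list(cumulative_to_deltas(["He", "Hel", "Hello"]))
```

Joining the deltas reconstructs the full text, so either representation can be recovered from the other.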