Releases: run-llama/llama_index
2024-06-02 (v0.10.43)
llama-index-core
[0.10.43]
- use default UUIDs when possible for property graph index vector stores (#13886)
- avoid empty or duplicate inserts in property graph index (#13891)
- Fix cur depth for `get_rel_map` in simple property graph store (#13888)
- (bandaid) disable instrumentation from logging generators (#13901)
- Add backwards compatibility to Dispatcher.get_dispatch_event() method (#13895)
- Fix: Incorrect naming of acreate_plan in StructuredPlannerAgent (#13879)
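The two ID-handling fixes above (#13886, #13891) follow a common pattern: fall back to a generated UUID when no ID is supplied, and skip empty or duplicate payloads before inserting. A minimal stdlib sketch with hypothetical helper names (not the actual llama-index code):

```python
import uuid
from typing import Iterable, List, Optional


def ensure_node_id(node_id: Optional[str]) -> str:
    # Fall back to a random UUID when the caller did not supply an ID.
    return node_id or str(uuid.uuid4())


def dedupe_nodes(nodes: Iterable[dict]) -> List[dict]:
    # Drop empty payloads and repeated IDs before inserting into the store.
    seen = set()
    out = []
    for node in nodes:
        if not node:
            continue
        nid = ensure_node_id(node.get("id"))
        if nid in seen:
            continue
        seen.add(nid)
        out.append({**node, "id": nid})
    return out
```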
llama-index-graph-stores-neo4j
[0.2.2]
- Handle cases where type is missing (neo4j property graph) (#13875)
- Rename `Neo4jPGStore` to `Neo4jPropertyGraphStore` (with backward compat) (#13891)
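The rename keeps the old name importable. A typical backward-compat alias looks like the sketch below; the class bodies here are placeholders, not the real store implementation:

```python
import warnings


class Neo4jPropertyGraphStore:
    """Stand-in for the renamed store class (real one lives in the package)."""

    def __init__(self, url: str = "bolt://localhost:7687"):
        self.url = url


class Neo4jPGStore(Neo4jPropertyGraphStore):
    """Deprecated alias kept so existing imports keep working."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            "Neo4jPGStore is deprecated; use Neo4jPropertyGraphStore",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)
```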
llama-index-llms-openai
[0.1.22]
- Improve the retry mechanism of OpenAI (#13878)
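The retry improvement is internal to the OpenAI integration; as a general illustration, a retry loop with exponential backoff (all names here hypothetical, not the library's API) can be sketched as:

```python
import time


def with_retries(fn, max_retries=3, base_delay=0.01,
                 retryable=(ConnectionError, TimeoutError)):
    # Retry fn() on transient errors, doubling the delay between attempts;
    # re-raise once the retry budget is exhausted.
    for attempt in range(max_retries):
        try:
            return fn()
        except retryable:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```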
llama-index-readers-web
[0.1.18]
- AsyncWebPageReader: made it actually async; it was exhibiting blocking behavior (#13897)
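Making a reader "actually async" means the page fetches run concurrently on the event loop instead of blocking it one request at a time. A stdlib-only sketch, with a simulated fetch standing in for a real non-blocking HTTP call:

```python
import asyncio


async def fetch(url: str) -> str:
    # Placeholder for a non-blocking HTTP request (e.g. via aiohttp);
    # sleeping simulates network latency without blocking the loop.
    await asyncio.sleep(0.01)
    return f"<html>{url}</html>"


async def fetch_all(urls):
    # gather() schedules all fetches concurrently instead of awaiting
    # each one sequentially.
    return await asyncio.gather(*(fetch(u) for u in urls))
```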
llama-index-vector-stores-opensearch
[0.1.10]
- Fix/OpenSearch filter logic (#13804)
2024-05-31 (v0.10.42)
llama-index-core
[0.10.42]
- Allow proper setting of the vector store in property graph index (#13816)
- fix imports in langchain bridge (#13871)
llama-index-graph-stores-nebula
[0.2.0]
- NebulaGraph support for PropertyGraphStore (#13816)
llama-index-llms-langchain
[0.1.5]
- fix fireworks imports in langchain llm (#13871)
llama-index-llms-openllm
[0.1.5]
- feat(openllm): 0.5 sdk integrations update (#13848)
llama-index-llms-premai
[0.1.5]
- Update SDK compatibility (#13836)
llama-index-readers-google
[0.2.6]
- Fixed a bug with tokens causing an infinite loop in GoogleDriveReader (#13863)
2024-05-30 (v0.10.41)
llama-index-core
[0.10.41]
- pass embeddings from index to property graph retriever (#13843)
- protect instrumentation event/span handlers from each other (#13823)
- add missing events for completion streaming (#13824)
- fix missing `callback_manager.on_event_end` when an exception is raised (#13825)
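The `on_event_end` fix is the classic try/finally pattern: the end callback must fire even when the wrapped call raises, so spans are always closed. A minimal sketch with a stand-in `CallbackManager` (not the real class):

```python
class CallbackManager:
    # Stand-in that just records event starts and ends.
    def __init__(self):
        self.events = []

    def on_event_start(self, name):
        self.events.append(("start", name))

    def on_event_end(self, name):
        self.events.append(("end", name))


def run_with_callbacks(manager, name, fn):
    manager.on_event_start(name)
    try:
        return fn()
    finally:
        # Fires even when fn() raises, so the span is always closed.
        manager.on_event_end(name)
```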
llama-index-llms-gemini
[0.1.10]
- use `model` kwarg for model name for gemini (#13791)
llama-index-llms-mistralai
[0.1.15]
llama-index-llms-openllm
[0.1.5]
- 0.5 integrations update (#13848)
llama-index-llms-vertex
[0.1.8]
- Add safety settings to avoid a Pydantic error in the Vertex integration (#13817)
llama-index-readers-smart-pdf-loader
[0.1.5]
- handle path objects in smart pdf reader (#13847)
2024-05-28 (v0.10.40)
llama-index-core
[0.10.40]
- Added `PropertyGraphIndex` and other supporting abstractions. See the full guide for more details (#13747)
- Updated `AutoPrevNextNodePostprocessor` to allow passing in response mode and LLM (#13771)
- fix type handling with return direct (#13776)
- Correct the method name to `_aget_retrieved_ids_and_texts` in retrieval evaluator (#13765)
- fix: QueryTransformComponent incorrectly calling `self._query_transform` (#13756)
- implement more filters for `SimpleVectorStoreIndex` (#13365)
llama-index-embeddings-bedrock
[0.2.0]
- Added support for Bedrock Titan Embeddings v2 (#13580)
llama-index-embeddings-oci-genai
[0.1.0]
- add Oracle Cloud Infrastructure (OCI) Generative AI (#13631)
llama-index-embeddings-huggingface
[0.2.1]
- Expose "safe_serialization" parameter from AutoModel (#11939)
llama-index-graph-stores-neo4j
[0.2.0]
- Added `Neo4jPGStore` for property graph support (#13747)
llama-index-indices-managed-dashscope
[0.1.1]
- Added dashscope managed index (#13378)
llama-index-llms-oci-genai
[0.1.0]
- add Oracle Cloud Infrastructure (OCI) Generative AI (#13631)
llama-index-readers-feishu-wiki
[0.1.1]
- fix undefined variable (#13768)
llama-index-packs-secgpt
[0.1.0]
- SecGPT LlamaIndex integration (#13127)
llama-index-vector-stores-hologres
[0.1.0]
- Add Hologres vector db (#13619)
llama-index-vector-stores-milvus
[0.1.16]
- Remove FlagEmbedding as Milvus's dependency (#13767)
- Unify the collection construction regardless of the value of `enable_sparse` (#13773)
llama-index-vector-stores-opensearch
[0.1.9]
- refactor to put helper methods inside class definition (#13749)
v0.10.39
v0.10.38
v0.10.37
v0.10.36
2024-05-07 (v0.10.35)
llama-index-agent-introspective
[0.1.0]
- Add CRITIC and reflection agent integrations (#13108)
llama-index-core
[0.10.35]
- fix `from_defaults()` erasing summary memory buffer history (#13325)
- use existing async event loop instead of `asyncio.run()` in core (#13309)
- fix async streaming from query engine in condense question chat engine (#13306)
- Handle ValueError in extract_table_summaries in element node parsers (#13318)
- Handle llm properly for QASummaryQueryEngineBuilder and RouterQueryEngine (#13281)
- expand instrumentation payloads (#13302)
- Fix Bug in sql join statement missing schema (#13277)
llama-index-embeddings-jinaai
[0.1.5]
- add encoding_type parameters in JinaEmbedding class (#13172)
- fix encoding type access in JinaEmbeddings (#13315)
llama-index-embeddings-nvidia
[0.1.0]
- add nvidia nim embeddings support (#13177)
llama-index-llms-mistralai
[0.1.12]
- Fix async issue when streaming with Mistral AI (#13292)
llama-index-llms-nvidia
[0.1.0]
- add nvidia nim llm support (#13176)
llama-index-postprocessor-nvidia-rerank
[0.1.0]
- add nvidia nim rerank support (#13178)
llama-index-readers-file
[0.1.21]
- Update MarkdownReader to parse text before first header (#13327)
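Parsing text before the first header means the preamble becomes its own section rather than being silently dropped. One possible sketch of that behavior (not the actual MarkdownReader code):

```python
def split_markdown_sections(text: str):
    """Split markdown into (header, body) pairs, keeping any text
    that appears before the first header as a header-less section."""
    sections = []
    header, body = None, []
    for line in text.splitlines():
        if line.startswith("#"):
            # Flush the previous section, including a header-less preamble.
            if body or header is not None:
                sections.append((header, "\n".join(body)))
            header, body = line.lstrip("#").strip(), []
        else:
            body.append(line)
    sections.append((header, "\n".join(body)))
    return sections
```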
llama-index-readers-web
[0.1.13]
- feat: Spider Web Loader (#13200)
llama-index-vector-stores-vespa
[0.1.0]
- Add VectorStore integration for Vespa (#13213)
llama-index-vector-stores-vertexaivectorsearch
[0.1.0]
- Add support for Vertex AI Vector Search as Vector Store (#13186)
2024-05-02 (v0.10.34)
llama-index-core
[0.10.34]
- remove error ignoring during chat engine streaming (#13160)
- add structured planning agent (#13149)
- update base class for planner agent (#13228)
- Fix: error when parsing a file with SimpleFileNodeParser whose extension is not in FILE_NODE_PARSERS (#13156)
- add matching `source_node.node_id` verification to node parsers (#13109)
- Retrieval metrics: update HitRate and MRR for evaluation@K documents retrieved; also add RR as a separate metric (#12997)
- Add chat summary memory buffer (#13155)
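The HitRate and reciprocal-rank metrics from the retrieval change (#12997) are simple to state. A self-contained sketch of both, assuming set-based relevance rather than the library's evaluator classes:

```python
def hit_rate(retrieved, expected):
    # 1.0 if any expected doc appears in the retrieved@K list, else 0.0.
    return 1.0 if any(doc in expected for doc in retrieved) else 0.0


def reciprocal_rank(retrieved, expected):
    # 1/rank of the first relevant document; 0.0 if none was retrieved.
    for rank, doc in enumerate(retrieved, start=1):
        if doc in expected:
            return 1.0 / rank
    return 0.0
```

MRR is then just the mean of `reciprocal_rank` over a set of queries.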
llama-index-indices-managed-zilliz
[0.1.3]
llama-index-llms-huggingface
[0.1.7]
- Add tool usage support with text-generation-inference integration from Hugging Face (#12471)
llama-index-llms-maritalk
[0.2.0]
- Add streaming for maritalk (#13207)
llama-index-llms-mistral-rs
[0.1.0]
- Integrate mistral.rs LLM (#13105)
llama-index-llms-mymagic
[0.1.7]
- mymagicai api update (#13148)
llama-index-llms-nvidia-triton
[0.1.5]
- Streaming Support for Nvidia's Triton Integration (#13135)
llama-index-llms-ollama
[0.1.3]
- added async support to ollama llms (#13150)
llama-index-readers-microsoft-sharepoint
[0.2.2]
- Exclude access control metadata keys from LLMs and embeddings - SharePoint Reader (#13184)
llama-index-readers-web
[0.1.11]
- feat: Browserbase Web Reader (#12877)
llama-index-readers-youtube-metadata
[0.1.0]
- Added YouTube Metadata Reader (#12975)
llama-index-storage-kvstore-redis
[0.1.4]
- fix redis kvstore key that was in bytes (#13201)
llama-index-vector-stores-azureaisearch
[0.1.5]
- Respect filter condition for Azure AI Search (#13215)
llama-index-vector-stores-chroma
[0.1.7]
- small bump for new chroma client version (#13158)
llama-index-vector-stores-firestore
[0.1.0]
- Adding Firestore Vector Store (#12048)
llama-index-vector-stores-kdbai
[0.1.5]
- small fix to returned IDs after `add()` (#12515)
llama-index-vector-stores-milvus
[0.1.11]
- Add hybrid retrieval mode to MilvusVectorStore (#13122)
llama-index-vector-stores-postgres
[0.1.7]
- parameterize queries in pgvector store (#13199)
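Parameterizing queries (#13199) means binding values through driver placeholders rather than formatting them into the SQL string, which closes off injection and quoting bugs. Illustrated here with stdlib sqlite3 rather than the pgvector store itself (table and function names are made up for the example):

```python
import sqlite3


def search_titles(conn, pattern):
    # The ? placeholder lets the driver bind the value safely,
    # instead of interpolating user input into the SQL string.
    cur = conn.execute("SELECT title FROM docs WHERE title LIKE ?", (pattern,))
    return [row[0] for row in cur.fetchall()]


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (title TEXT)")
conn.executemany("INSERT INTO docs VALUES (?)", [("alpha",), ("beta",)])
```

(psycopg, which pgvector builds on, uses `%s` placeholders instead of `?`, but the binding principle is the same.)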