Fix/confluence reader sort auth parameters priority #14905
Merged
logan-markewich merged 5 commits into run-llama:main from r13i:fix/confluence-reader-sort-auth-parameters-priority on Jul 25, 2024
Conversation
…d environment variables
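Based on the PR title and the truncated commit message above, the change appears to reorder how the Confluence reader resolves its authentication parameters, giving explicitly passed arguments priority over environment variables. A minimal sketch of that precedence rule, with hypothetical names (this is not the reader's actual code):

```python
import os


def resolve_credential(explicit_value, env_var):
    """Return the explicitly passed credential when given,
    otherwise fall back to the named environment variable.
    Hypothetical helper illustrating argument-over-environment priority."""
    if explicit_value is not None:
        return explicit_value
    return os.environ.get(env_var)


# An explicit token wins over whatever is set in the environment.
os.environ["CONFLUENCE_API_TOKEN"] = "token-from-env"
print(resolve_credential("token-from-arg", "CONFLUENCE_API_TOKEN"))  # token-from-arg
print(resolve_credential(None, "CONFLUENCE_API_TOKEN"))  # token-from-env
```

The point of such an ordering is that a credential a caller passes deliberately should never be silently shadowed by stale environment configuration.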
dosubot bot added the size:L label (This PR changes 100-499 lines, ignoring generated files) on Jul 23, 2024
logan-markewich approved these changes on Jul 24, 2024
barduinor added a commit to box-community/llama_index that referenced this pull request on Jul 31, 2024
* [version] bump version to 0.10.54 (run-llama#14681) * Add user configurations for Cleanlab LLM integration (run-llama#14676) * fix docs (run-llama#14687) * fix: race between concurrent pptx readers over a single temp filename. (run-llama#14686) * fix: race between concurrent pptx readers over a single temp filename. * vbump * rebase * vbump --------- Co-authored-by: Andrei Fajardo <andrei@nerdai.io> * Fix: Update html2text dependency to ^2024.2.26 to fix image src error (run-llama#14683) * Update pyproject.toml * bump llama-index-readers-web to 0.1.23 * [version] bump version to v0.10.54post1 (fixes 0.10.54 release with mismatch llama-index-core version) (run-llama#14699) * toml * lock * bump to post1 * Documentation: update huggingface.ipynb (run-llama#14697) utilties -> utilities * Upgrade llama-cloud client to 0.0.9 and support `retrieval_mode` and `files_top_k` (run-llama#14696) * update observability docs (run-llama#14692) * These docs have been supplanted by the full llamaparse docs site (run-llama#14688) * changes to Exa search tool getting started and example notebook (run-llama#14690) * Add a sample notebook to show llamaindex agents used for managed vertex ai index (run-llama#14704) * feat(ci): cache `poetry` in CI (run-llama#14485) * v0.10.55 (run-llama#14709) * fix fastembed python version (run-llama#14710) * fix fastembed python version * vbump --------- Co-authored-by: Andrei Fajardo <andrei@nerdai.io> * Box reader (run-llama#14685) * remove flakey and unhelpful tests (run-llama#14737) * fix: tools are required for attachments in openai api (run-llama#14609) * Update simple_summarize.py (run-llama#14714) * feat: improve azureai search deleting (run-llama#14693) * Adds Quantization option to QdrantVectorStore (run-llama#14740) * Add GraphRAG Implementation (run-llama#14752) * update docs for OpenAI/AzureOpenAI additional_kwargs (run-llama#14749) * update docs for OpenAI/AzureOpenAI additional_kwargs * code lint * follow odata.nextLink 
(run-llama#14708) * follow odata.nextLink jsonify response once * removed print statement * bumped version --------- Co-authored-by: Chris Knowles <chris@hockeytape.ai> * 📃 docs(Learn): Loading Data (run-llama#14762) * 📃 docs(Learn): Loading Data 1. add understanding\using_llms\using_llms.md missing full stop 2. fix understanding\loading\loading.md DatabaseReader link 3. add module_guides\loading\node_parsers\modules.md node_parsers modules SemanticSplitterNodeParser video link 4. fix docs\docs\examples\ingestion\document_management_pipeline.ipynb redis link * fix one link --------- Co-authored-by: Andrei Fajardo <andrei@nerdai.io> * [FIX] Issues with `llama-index-readers-box` pyproject.toml (run-llama#14770) fix maintainers issue * Bump setuptools from 69.5.1 to 70.0.0 in /llama-index-integrations/embeddings/llama-index-embeddings-upstage (run-llama#14771) Bump setuptools Bumps [setuptools](https://github.com/pypa/setuptools) from 69.5.1 to 70.0.0. - [Release notes](https://github.com/pypa/setuptools/releases) - [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst) - [Commits](pypa/setuptools@v69.5.1...v70.0.0) --- updated-dependencies: - dependency-name: setuptools dependency-type: indirect ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * 📃 docs(examples): fix KnowledgeGraphDemo link error (run-llama#14764) 📃 docs(examples): KnowledgeGraphDemo 1. 
fix docs\docs\examples\index_structs\knowledge_graph\KnowledgeGraphDemo.ipynb example.html link error * Chore: restrict the scipy verstion to 1.12.0 to fix the error cannot import name triu from scipy.linalg (run-llama#14761) chore: restrict the scipy verstion to 1.12.0 to fix the error cannot import name triu from scipy.linalg * Remove double curly replacing from output parser utils (run-llama#14735) fix(output-parser): remove double curly replacing * Fix OpenWeatherMapToolSpec.forecast_tommorrow_at_location (run-llama#14745) * fix: weather agent tool api call for weather forecast * chore: fix style issue * bump version * docs: Fix wrong Qdrant_metadata_filter doc (run-llama#14773) #docs Fix wrong Qdrant_metadata_filter doc * feat: azureaisearch support collection string (run-llama#14712) * feat: azureaisearch support collections * feat: add odata filters * wip * wip * feat: add azureaisearch supported conditions (run-llama#14787) * chore: add conditions * wip * feat: Add NOT IN filter for Qdrant vector store (run-llama#14791) * feat: add nested filters for azureaisearch (run-llama#14795) * feat: nested filters * bump version * Add filter to get_triples in neo4j (run-llama#14811) Add filter to triples in neo4j * Improve output format system prompt in ReAct agent (run-llama#14814) fix(react agent): output format system prompt * Add support for mistralai nemo model (run-llama#14819) * Add support for gpt-4o-mini (run-llama#14820) * Fix bug when sanitize is used in neo4j property graph (run-llama#14812) * Fix bug when sanitize is used in neo4j property graph * bump version * Fix AgentRunner AgentRunStepStartEvent dispatch (run-llama#14828) * Fix AgentRunner AgentRunStepStartEvent dispatch * Fix index variable name * Fix PineconeRetriever: remove usage of global variables, check embeddings - Init Pinecone properly (run-llama#14799) * Fix PineconeRetriever class usage of global variables, add check to determine if query already has embeddings, and proper initialization 
of Pinecone and its index. * Apply pre-commit fixes * Fix Azure OpenAI LLM and Embedding async client bug (run-llama#14833) * Fix Azure OpenAI LLM and Embedding async client bug * Bump version of llama-index-llms-azure-openai and llama-index-embeddings-azure-openai * Add a notebook to show llamaindex agent works with graphRAG and Vertex AI (run-llama#14774) * update notebook * update notebook * fix managed indices bug * update sample notebook * add sample notebook for llamaindex agent with managed vertexai index * add agentic graphRAG notebook * add agentic graphRAG notebook * fix typo and move to new folder * Fix OpenAI Embedding async client bug (run-llama#14835) * Integration notebook for RAGChecker (run-llama#14838) * Notebook for RAGChecker integration with LlamaIndex * ragchecker integration * lint --------- Co-authored-by: Andrei Fajardo <andrei@nerdai.io> * add support for nvidia/nv-rerankqa-mistral-4b-v3 (run-llama#14844) * Update docstring for gmailtoolspec's search_messages tool (run-llama#14840) * Update docstring for gmailtoolspec's search_messages tool * vbump --------- Co-authored-by: Andrei Fajardo <andrei@nerdai.io> * Add Context-Only Response Synthesizer (run-llama#14439) * Add new integration for YandexGPT Embedding Model (run-llama#14313) * feat: ✨ Implement async functionality in `BedrockConverse` (run-llama#14326) * Azure AI Inference integration (run-llama#14672) * Fixing the issue where the _apply_node_postprocessors function was ca… (run-llama#14839) Fixing the issue where the _apply_node_postprocessors function was called without passing in a correctly typed object, leading to an inability to convert the object passed into subsequent deeper functions into a QueryBundle and consequently throwing an exception. 
* chore: read AZURE_POOL_MANAGEMENT_ENDPOINT from env vars (run-llama#14732) * Add an optional parameter similarity_score to VectorContextRetrieve… (run-llama#14831) * align deps (run-llama#14850) * Bugfix: ollama streaming response would not return last object that i… (run-llama#14830) * [version] bump version to 0.10.56 (run-llama#14849) * pyproject tomls * changelog * prepare docs * snuck one in * snuck one in * lock * lock * CHANGELOG - unreleased section (run-llama#14852) cr * [version] bump version to 0.1.7 for MongoDB Vector Store (run-llama#14851) Version bump to fix package issue * Implements `delete_nodes()` and `clear()` for Weviate, Opensearch, Milvus, Postgres, and Pinecone Vector Stores (run-llama#14800) * rename init file (run-llama#14853) * MongoDB Atlas Vector Search: Enhanced Metadata Filtering (run-llama#14856) * optimize ingestion pipeline deduping (run-llama#14858) * Empty array being send to vector store (run-llama#14859) * update notion reader (run-llama#14861) * fix unpicklable attributes (run-llama#14860) * Removed a dead link in Document Management Docs (run-llama#14863) * bump langchain version in integration (run-llama#14879) * 📃 docs(unserstanding): typo link error (run-llama#14867) 1. `getting_started`: change `https://github.com/jerryjliu/llama_index.git` to `https://github.com/run-llama/llama_index.git` 2. `putting_it_all_together`: change link `apps.md` to `apps/index.md` * Bugfix: AzureOpenAI may fail with custom azure_ad_token_provider (run-llama#14869) * Callbacks to Observability in the examples section (run-llama#14888) * WIP: update structured outputs syntax (run-llama#14747) * add property extraction for KGs (run-llama#14707) * v0.10.57 (run-llama#14893) * fireworks ai llama3.1 (run-llama#14914) * Update-mappings (run-llama#14917) update-mappings * Fix TaskStepOutput sources bug (run-llama#14885) * fix the initialization of Pinecone in the low-level ingestion notebook. 
(run-llama#14919) * Improved deeplake.get_nodes() performance (run-llama#14920) Co-authored-by: Nathan Voxland <nathan@voxland.net> * Bugfix: Don't pass empty list of embeddings to elasticsearch store when using sparse strategy (run-llama#14918) * Bugfix: Don't pass empty list of embeddings to elasticsearch store when using sparse strategy. * bump version and add comment --------- Co-authored-by: Jimmy Longley <jimmy@booknooklearning.com> * feat: allow to limit how many elements retrieve (qdrant) (run-llama#14904) * Add claude 3.5 sonnet to multi modal llms (run-llama#14932) * Add vector store integration of lindorm, including knn search, … (run-llama#14623) * Llamaindex retriever for Vertex AI Search (run-llama#14913) * Fix: Token counter expecting response.raw as dict, got ChatCompletionChunk (run-llama#14937) * 🐞 fix(integrations): BM25Retriever persist missing arg similarity_top_k (run-llama#14933) * structured extraction docs + bug fixes (run-llama#14925) * cr * cr * cr * patch entity extractor * fix cicd --------- Co-authored-by: Logan Markewich <logan.markewich@live.com> * v0.10.58 (run-llama#14944) * add organization_id param to LlamaCloudIndex.from_documents (run-llama#14947) * add organization_id param to from_documents * update version * add org id param to LlamaCloudIndex ctor * add back org id var * Add function calling for Ollama (run-llama#14948) * Fixed Import Error in PandasAIReader (run-llama#14915) * fix: organization id (run-llama#14961) * Fix None type error when using Neo4jPropertyGraphStore (run-llama#14957) * add back kwargs to Ollama (run-llama#14963) * use proper stemmer in bm25 tokenize (run-llama#14965) * fireworks 3.12 support (run-llama#14964) * Feature/azure docstore hotfix (run-llama#14950) * breaking: update to OpenLLM 0.6 (run-llama#14935) * UnstructuredReader fixes V2. 
(run-llama#14946) * restrict python version to enable publish of pandasai reader (run-llama#14966) * Fix/confluence reader sort auth parameters priority (run-llama#14905) * Feature/azure ai search hotfix (run-llama#14949) * Docs updates (run-llama#14941) * docs: Removed unused import from example * Updated link * Update hierarchical.py * typo: bm25 notebook * linting --------- Co-authored-by: Logan Markewich <logan.markewich@live.com> * toggle for ollama function calling (run-llama#14972) * make re-raising error skip constructor (run-llama#14970) * undo incompatible kwarg in elasticsearch vector store (run-llama#14973) undo incompatible kwarg * honor exclusion keys when creating the index nodes (run-llama#14911) * honor exclusion keys when creating the index nodes * cleaner code based on feedback. * missed a reference. * remove unnecessary embedding key removal * integration[embedding]: support textembed embedding (run-llama#14968) * update: support textembed embedding * Fix: lint and format error * Add: example notebook --------- Co-authored-by: Keval Dekivadiya <keval.dekivadiya@smartsensesolutions.com> * docs: update TiDB Cloud links to public beta! 
(run-llama#14976) * Feat: expand span coverage for query pipeline (run-llama#14997) Expand span coverage for query pipeline * Adds a LlamaPack that implements LongRAG (run-llama#14916) * Dashscope updates (run-llama#15028) * bump fastembed dep (run-llama#15029) * Update default sparse encoder for Hybrid search (run-llama#15019) * initial implementation FalkorDBPropertyGraphStore (run-llama#14936) * Add extra_info to metadata field on document in RTFReader (run-llama#15025) * Adds option to construct PGVectorStore with a HNSW index (run-llama#15024) * Jimmy/disable embeddings for sparse strategy (run-llama#15032) * MLflow Integration doc Update (run-llama#14977) * Initial commit * Update docs/docs/module_guides/observability/index.md Co-authored-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com> --------- Co-authored-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com> * Added feature to stream_chat allowing previous chunks to be inserted into the current context window (run-llama#14889) * docs(literalai): add Literal AI integration to documentation (run-llama#15023) * docs(literalai): add Literal AI integration to documentation * fix: white space * fixed import error regarding OpenAIAgent in the code_interpreter example notebook (run-llama#14999) fixed imports * docs(vector_store_index): fix typo (run-llama#15040) * Add py.typed file to vector store packages (run-llama#15031) * GitLab reader integration (run-llama#15030) * Fix: Azure AI inference integration support for tools (run-llama#15044) * [Fireworks] Updates to Default model and support for function calling (run-llama#15046) * Adding support for Llama 3 and Mixtral 22B * Changes to default model, adding support for Firefunction v2 * Fixing merge conflict * Update base.py (run-llama#15049) * Fix typo in docs (run-llama#15059) * Enhance MilvusVectorStore with flexible index management (run-llama#15058) --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Andrei 
Fajardo <92402603+nerdai@users.noreply.github.com> Co-authored-by: Ashish Sardana <ashishsardana21@gmail.com> Co-authored-by: Logan <logan.markewich@live.com> Co-authored-by: Jeff Inman <jti-lanl@users.noreply.github.com> Co-authored-by: Andrei Fajardo <andrei@nerdai.io> Co-authored-by: Vitalii Gerasimov <github.vo6lb@simplelogin.com> Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com> Co-authored-by: Simon Suo <simonsdsuo@gmail.com> Co-authored-by: Jerry Liu <jerryjliu98@gmail.com> Co-authored-by: Laurie Voss <github@seldo.com> Co-authored-by: Vkzem <38928153+Vkzem@users.noreply.github.com> Co-authored-by: Dave Wang <wangdave@google.com> Co-authored-by: Saurav Maheshkar <sauravvmaheshkar@gmail.com> Co-authored-by: Huu Le <39040748+leehuwuj@users.noreply.github.com> Co-authored-by: Joey Fallone <9968657+jmfallone@users.noreply.github.com> Co-authored-by: Yang YiHe <108562510+nmhjklnm@users.noreply.github.com> Co-authored-by: Emanuel Ferreira <contatoferreirads@gmail.com> Co-authored-by: Jonathan Liu <81734282+jonathanhliu21@users.noreply.github.com> Co-authored-by: Ravi Theja <ravi03071991@gmail.com> Co-authored-by: Botong Zhu <zbtsebuaa@outlook.com> Co-authored-by: Chris Knowles <christopher.j.knowles@gmail.com> Co-authored-by: Chris Knowles <chris@hockeytape.ai> Co-authored-by: Houtaroy <82852852+houtaroy@users.noreply.github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: richzw <1590890+richzw@users.noreply.github.com> Co-authored-by: Fernando Silva <fernandonsilva16@gmail.com> Co-authored-by: Alexander Fischer <alexfi@pm.me> Co-authored-by: Appletree24 <91041770+Appletree24@users.noreply.github.com> Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com> Co-authored-by: Garrit Franke <32395585+garritfra@users.noreply.github.com> Co-authored-by: Harsha <harsha.ms7@gmail.com> Co-authored-by: Joel Rorseth <joelrorseth@gmail.com> Co-authored-by: Matin Khajavi 
<58955268+MatinKhajavi@users.noreply.github.com> Co-authored-by: Jia Le <5955220+jials@users.noreply.github.com> Co-authored-by: Xiangkun Hu <huxk_hit@qq.com> Co-authored-by: Matthew Farrellee <matt@cs.wisc.edu> Co-authored-by: Titus Lim <tituslhy@gmail.com> Co-authored-by: Robin Richtsfeld <robin.richtsfeld@gmail.com> Co-authored-by: Kirill <58888049+KirillKukharev@users.noreply.github.com> Co-authored-by: André Cristóvão Neves Ferreira <andrecnf@gmail.com> Co-authored-by: Facundo Santiago <fasantia@microsoft.com> Co-authored-by: weiweizwc98 <43433941+weiweizwc98@users.noreply.github.com> Co-authored-by: Wassim Chegham <github@wassim.dev> Co-authored-by: Shashank Gowda V <shashankgowda517@gmail.com> Co-authored-by: Koufax <38049545+nsgodshall@users.noreply.github.com> Co-authored-by: Chetan Choudhary <chetanchoudhary975@gmail.com> Co-authored-by: Brandon Max <bmax@users.noreply.github.com> Co-authored-by: Kanav <66438237+kanjurer@users.noreply.github.com> Co-authored-by: Nathan Voxland (Activeloop) <151186252+nvoxland-al@users.noreply.github.com> Co-authored-by: Nathan Voxland <nathan@voxland.net> Co-authored-by: Jimmy Longley <jimmylongley@gmail.com> Co-authored-by: Jimmy Longley <jimmy@booknooklearning.com> Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com> Co-authored-by: Diicell <44242534+diicellman@users.noreply.github.com> Co-authored-by: Rainy Guo <1060859719@qq.com> Co-authored-by: Joel Barmettler <24369532+joelbarmettlerUZH@users.noreply.github.com> Co-authored-by: Sourabh Desai <sourabhdesai@gmail.com> Co-authored-by: Aaryan Kaushik <105153447+aaryan200@users.noreply.github.com> Co-authored-by: yaqiang.sun <sunyaking@163.com> Co-authored-by: Francisco Aguilera <fraguile@microsoft.com> Co-authored-by: Aaron Pham <contact@aarnphm.xyz> Co-authored-by: Redouane Achouri <redouane.a.achouri@gmail.com> Co-authored-by: Rohit Amarnath <88762+ramarnat@users.noreply.github.com> Co-authored-by: keval dekivadiya 
<68591522+kevaldekivadiya2415@users.noreply.github.com> Co-authored-by: Keval Dekivadiya <keval.dekivadiya@smartsensesolutions.com> Co-authored-by: sykp241095 <sykp241095@gmail.com> Co-authored-by: Tibor Reiss <75096465+tibor-reiss@users.noreply.github.com> Co-authored-by: saipjkai <84132316+saipjkai@users.noreply.github.com> Co-authored-by: Avi Avni <avi.avni@gmail.com> Co-authored-by: Henry LeCompte <lecompteh18@gmail.com> Co-authored-by: Michael Berk <michaelberk99@gmail.com> Co-authored-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com> Co-authored-by: rohans30 <contactrohans@gmail.com> Co-authored-by: Damien BUTY <dam.buty@gmail.com> Co-authored-by: sahan ruwantha <sahanr.silva@proton.me> Co-authored-by: Julien Bouquillon <julien.bouquillon@beta.gouv.fr> Co-authored-by: Christophe Bornet <cbornet@hotmail.com> Co-authored-by: Jiacheng Zhang <29214704+jiachengzhang1@users.noreply.github.com> Co-authored-by: Aravind Putrevu <aravind.putrevu@gmail.com> Co-authored-by: Di Wang <jiankong3@gmail.com> Co-authored-by: Shubham <dudeperf3ct@users.noreply.github.com>
<58955268+MatinKhajavi@users.noreply.github.com> Co-authored-by: Jia Le <5955220+jials@users.noreply.github.com> Co-authored-by: Xiangkun Hu <huxk_hit@qq.com> Co-authored-by: Matthew Farrellee <matt@cs.wisc.edu> Co-authored-by: Titus Lim <tituslhy@gmail.com> Co-authored-by: Robin Richtsfeld <robin.richtsfeld@gmail.com> Co-authored-by: Kirill <58888049+KirillKukharev@users.noreply.github.com> Co-authored-by: André Cristóvão Neves Ferreira <andrecnf@gmail.com> Co-authored-by: Facundo Santiago <fasantia@microsoft.com> Co-authored-by: weiweizwc98 <43433941+weiweizwc98@users.noreply.github.com> Co-authored-by: Wassim Chegham <github@wassim.dev> Co-authored-by: Shashank Gowda V <shashankgowda517@gmail.com> Co-authored-by: Koufax <38049545+nsgodshall@users.noreply.github.com> Co-authored-by: Chetan Choudhary <chetanchoudhary975@gmail.com> Co-authored-by: Brandon Max <bmax@users.noreply.github.com> Co-authored-by: Kanav <66438237+kanjurer@users.noreply.github.com> Co-authored-by: Nathan Voxland (Activeloop) <151186252+nvoxland-al@users.noreply.github.com> Co-authored-by: Nathan Voxland <nathan@voxland.net> Co-authored-by: Jimmy Longley <jimmylongley@gmail.com> Co-authored-by: Jimmy Longley <jimmy@booknooklearning.com> Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com> Co-authored-by: Diicell <44242534+diicellman@users.noreply.github.com> Co-authored-by: Rainy Guo <1060859719@qq.com> Co-authored-by: Joel Barmettler <24369532+joelbarmettlerUZH@users.noreply.github.com> Co-authored-by: Sourabh Desai <sourabhdesai@gmail.com> Co-authored-by: Aaryan Kaushik <105153447+aaryan200@users.noreply.github.com> Co-authored-by: yaqiang.sun <sunyaking@163.com> Co-authored-by: Francisco Aguilera <fraguile@microsoft.com> Co-authored-by: Aaron Pham <contact@aarnphm.xyz> Co-authored-by: Redouane Achouri <redouane.a.achouri@gmail.com> Co-authored-by: Rohit Amarnath <88762+ramarnat@users.noreply.github.com> Co-authored-by: keval dekivadiya 
<68591522+kevaldekivadiya2415@users.noreply.github.com> Co-authored-by: Keval Dekivadiya <keval.dekivadiya@smartsensesolutions.com> Co-authored-by: sykp241095 <sykp241095@gmail.com> Co-authored-by: Tibor Reiss <75096465+tibor-reiss@users.noreply.github.com> Co-authored-by: saipjkai <84132316+saipjkai@users.noreply.github.com> Co-authored-by: Avi Avni <avi.avni@gmail.com> Co-authored-by: Henry LeCompte <lecompteh18@gmail.com> Co-authored-by: Michael Berk <michaelberk99@gmail.com> Co-authored-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com> Co-authored-by: rohans30 <contactrohans@gmail.com> Co-authored-by: Damien BUTY <dam.buty@gmail.com> Co-authored-by: sahan ruwantha <sahanr.silva@proton.me> Co-authored-by: Julien Bouquillon <julien.bouquillon@beta.gouv.fr> Co-authored-by: Christophe Bornet <cbornet@hotmail.com> Co-authored-by: Jiacheng Zhang <29214704+jiachengzhang1@users.noreply.github.com> Co-authored-by: Aravind Putrevu <aravind.putrevu@gmail.com> Co-authored-by: Di Wang <jiankong3@gmail.com> Co-authored-by: Shubham <dudeperf3ct@users.noreply.github.com>
Description
This change sorts the priority of authentication parameters and environment variables for the Confluence reader.
Fixes #14836
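The fix gives explicitly passed authentication parameters priority over environment variables. A minimal sketch of that resolution order is shown below; the function name, parameter names, and environment-variable names here are illustrative assumptions, not the ConfluenceReader's actual API.

```python
import os
from typing import Optional


def resolve_confluence_auth(
    api_token: Optional[str] = None,
    user_name: Optional[str] = None,
    password: Optional[str] = None,
) -> dict:
    """Sketch of credential resolution: explicit arguments win over
    environment variables (hypothetical names for illustration)."""
    # 1. An explicitly passed API token has the highest priority.
    if api_token is not None:
        return {"token": api_token}
    # 2. Explicitly passed basic-auth credentials come next.
    if user_name is not None and password is not None:
        return {"username": user_name, "password": password}
    # 3. Only then fall back to environment variables, token first.
    env_token = os.getenv("CONFLUENCE_API_TOKEN")
    if env_token:
        return {"token": env_token}
    env_user = os.getenv("CONFLUENCE_USERNAME")
    env_pass = os.getenv("CONFLUENCE_PASSWORD")
    if env_user and env_pass:
        return {"username": env_user, "password": env_pass}
    raise ValueError("No Confluence credentials supplied.")
```

With this ordering, a token set in the environment can no longer silently override credentials the caller passed in directly.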
New Package?
Did I fill in the tool.llamahub section in the pyproject.toml and provide a detailed README.md for my new integration or package?
Version Bump?
Did I bump the version in the pyproject.toml file of the package I am updating? (Except for the llama-index-core package.)
Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
Suggested Checklist:
I ran make format; make lint to appease the lint gods.