# Changelog

## 0.5.0 (2024-04-02)

### Features

- **code:** improve concat of strings in ui (#1785) (bac818a)
- **docker:** set default Docker to use Ollama (#1812) (f83abff)
- **docs:** Add guide Llama-CPP Linux AMD GPU support (#1782) (8a836e4)
- **docs:** Feature/upgrade docs (#1741) (5725181)
- **docs:** upgrade fern (#1596) (84ad16a)
- **ingest:** Created a faster ingestion mode - pipeline (#1750) (134fc54)
- **llm - embed:** Add support for Azure OpenAI (#1698) (1efac6a)
- **llm:** adds several settings for llamacpp and ollama (#1703) (02dc83e)
- **llm:** Ollama LLM-Embeddings decouple + longer keep_alive settings (#1800) (b3b0140)
- **llm:** Ollama timeout setting (#1773) (6f6c785)
- **local:** tiktoken cache within repo for offline (#1467) (821bca3)
- **nodestore:** add Postgres for the doc and index store (#1706) (68b3a34)
- **rag:** expose similarity_top_k and similarity_score to settings (#1771) (087cb0b)
- **RAG:** Introduce SentenceTransformer Reranker (#1810) (83adc12)
- **scripts:** Wipe qdrant and obtain db Stats command (#1783) (ea153fb)
- **ui:** Add Model Information to ChatInterface label (f0b174c)
- **ui:** add sources check to not repeat identical sources (#1705) (290b9fb)
- **UI:** Faster startup and document listing (#1763) (348df78)
- **ui:** maintain score order when curating sources (#1643) (410bf7a)
- unify settings for vector and nodestore connections to PostgreSQL (#1730) (63de7e4)
- wipe per storage type (#1772) (c2d6948)

### Bug Fixes

## 0.4.0 (2024-03-06)

### Features

## 0.3.0 (2024-02-16)

### Features

- add mistral + chatml prompts (#1426) (e326126)
- Add stream information to generate SDKs (#1569) (24fae66)
- **API:** Ingest plain text (#1417) (6eeb95e)
- **bulk-ingest:** Add --ignored Flag to Exclude Specific Files and Directories During Ingestion (#1432) (b178b51)
- **llm:** Add openailike llm mode (#1447) (2d27a9f), closes #1424
- **llm:** Add support for Ollama LLM (#1526) (6bbec79)
- **settings:** Configurable context_window and tokenizer (#1437) (4780540)
- **settings:** Update default model to TheBloke/Mistral-7B-Instruct-v0.2-GGUF (#1415) (8ec7cf4)
- **ui:** make chat area stretch to fill the screen (#1397) (c71ae7c)
- **UI:** Select file to Query or Delete + Delete ALL (#1612) (aa13afd)

### Bug Fixes

- Adding an LLM param to fix broken generator from llamacpp (#1519) (869233f)
- **deploy:** fix local and external dockerfiles (fde2b94)
- **docker:** docker broken copy (#1419) (059f358)
- **docs:** Update quickstart doc and set version in pyproject.toml to 0.2.0 (0a89d76)
- minor bug in chat stream output - python error being serialized (#1449) (6191bcd)
- **settings:** correct yaml multiline string (#1403) (2564f8d)
- **tests:** load the test settings only when running tests (d3acd85)
- **UI:** Updated ui.py to free up the CPU and avoid a bottleneck (24fb80c)

## 0.2.0 (2023-12-10)

### Features

- **llm:** drop default_system_prompt (#1385) (a3ed14c)
- **ui:** Allows User to Set System Prompt via "Additional Options" in Chat Interface (#1353) (145f3ec)

## 0.1.0 (2023-11-30)

### Features

### Bug Fixes

## 0.0.2 (2023-10-20)

### Bug Fixes

## 0.0.1 (2023-10-20)

### Miscellaneous Chores