Releases: zylon-ai/private-gpt

v0.5.0 (2024-04-02 15:45, commit 94ef38c)

Features

  • code: improve concat of strings in ui (#1785) (bac818a)
  • docker: set default Docker to use Ollama (#1812) (f83abff)
  • docs: Add guide Llama-CPP Linux AMD GPU support (#1782) (8a836e4)
  • docs: Feature/upgrade docs (#1741) (5725181)
  • docs: upgrade fern (#1596) (84ad16a)
  • ingest: Created a faster ingestion mode - pipeline (#1750) (134fc54)
  • llm - embed: Add support for Azure OpenAI (#1698) (1efac6a)
  • llm: adds several settings for llamacpp and ollama (#1703) (02dc83e)
  • llm: Ollama LLM-Embeddings decouple + longer keep_alive settings (#1800) (b3b0140)
  • llm: Ollama timeout setting (#1773) (6f6c785)
  • local: tiktoken cache within repo for offline (#1467) (821bca3)
  • nodestore: add Postgres for the doc and index store (#1706) (68b3a34)
  • rag: expose similarity_top_k and similarity_score to settings (#1771) (087cb0b)
  • RAG: Introduce SentenceTransformer Reranker (#1810) (83adc12)
  • scripts: Wipe qdrant and obtain db Stats command (#1783) (ea153fb)
  • ui: Add Model Information to ChatInterface label (f0b174c)
  • ui: add sources check to not repeat identical sources (#1705) (290b9fb)
  • UI: Faster startup and document listing (#1763) (348df78)
  • ui: maintain score order when curating sources (#1643) (410bf7a)
  • unify settings for vector and nodestore connections to PostgreSQL (#1730) (63de7e4)
  • wipe per storage type (#1772) (c2d6948)
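Several of these features (the Ollama keep_alive and timeout settings, and the new RAG retrieval knobs) surface as entries in settings.yaml. A minimal sketch, with key names inferred from the feature titles above rather than verified against the release's actual schema:

```yaml
# Hypothetical settings.yaml fragment; key names are taken from the
# feature titles above, not from the release's documented schema.
ollama:
  keep_alive: 5m          # keep the model loaded between requests (#1800)
  request_timeout: 120.0  # abort slow generations instead of hanging (#1773)

rag:
  similarity_top_k: 2     # number of chunks retrieved per query (#1771)
  similarity_score: 0.45  # minimum similarity score to keep a chunk (#1771)
```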

Bug Fixes

v0.4.0 (2024-03-06 16:53, commit 1b03b36)

Features

v0.3.0 (2024-02-16 16:42, commit 066ea5b)

Features

  • add mistral + chatml prompts (#1426) (e326126)
  • Add stream information to generate SDKs (#1569) (24fae66)
  • API: Ingest plain text (#1417) (6eeb95e)
  • bulk-ingest: Add --ignored Flag to Exclude Specific Files and Directories During Ingestion (#1432) (b178b51)
  • llm: Add openailike llm mode (#1447) (2d27a9f), closes #1424
  • llm: Add support for Ollama LLM (#1526) (6bbec79)
  • settings: Configurable context_window and tokenizer (#1437) (4780540)
  • settings: Update default model to TheBloke/Mistral-7B-Instruct-v0.2-GGUF (#1415) (8ec7cf4)
  • ui: make chat area stretch to fill the screen (#1397) (c71ae7c)
  • UI: Select file to Query or Delete + Delete ALL (#1612) (aa13afd)
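The Ollama LLM mode and the configurable context window both land in the same settings file. A sketch under the assumption that the keys follow the feature titles (names not verified against this release's schema):

```yaml
# Hypothetical settings.yaml fragment; key names inferred from the
# feature titles above.
llm:
  mode: ollama          # route completions to a local Ollama server (#1526)
  context_window: 3900  # now configurable per model (#1437)
  tokenizer: mistralai/Mistral-7B-Instruct-v0.2  # matches the new default model (#1437)
```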

Bug Fixes

  • Adding an LLM param to fix broken generator from llamacpp (#1519) (869233f)
  • deploy: fix local and external dockerfiles (fde2b94)
  • docker: docker broken copy (#1419) (059f358)
  • docs: Update quickstart doc and set version in pyproject.toml to 0.2.0 (0a89d76)
  • minor bug in chat stream output - python error being serialized (#1449) (6191bcd)
  • settings: correct yaml multiline string (#1403) (2564f8d)
  • tests: load the test settings only when running tests (d3acd85)
  • UI: Update ui.py so the CPU is no longer a bottleneck. (24fb80c)

v0.2.0 (2023-12-10 19:08, commit e8ac51b)

Features

  • llm: drop default_system_prompt (#1385) (a3ed14c)
  • ui: Allows User to Set System Prompt via "Additional Options" in Chat Interface (#1353) (145f3ec)
  • settings: Allow setting OpenAI model in settings (#1386)

Bug Fixes

  • docs: delete old documentation (#1384)

v0.1.0 (2023-11-30; tagged 2023-12-01 13:46, commit 3d301d0)

Features

  • Improved documentation using Fern
  • Faster ingestion through different ingestion modes (#1309)
  • Add sources to completions APIs and UI
  • Add simple Basic auth
  • Add basic CORS
  • Add "search in docs" to UI
  • LLM and Embeddings model separate configuration
  • Allow using a system prompt in the API to modify the LLM behaviour
  • Expose configuration of the model execution such as max_new_tokens
  • Multiple prompt styles support for different models
  • Update to Gradio 4
  • Document deletion API
  • Sagemaker support
  • Disable Gradio Analytics (#1165) (6583dc8)
  • Drop loguru and use builtin logging (#1133) (64c5ae2)
  • enable resume download for hf_hub_download (#1249) (4197ada)
  • move torch and transformers to local group (#1172) (0d677e1)
  • Qdrant support (#1228) (03d1ae6)
  • Added wipe command to ease vector database reset

Bug Fixes

v0.0.2 (2023-10-20 16:29, commit b8383e0)

Bug Fixes

v0.0.1 (2023-10-20 11:09, commit 97d860a)

Features

Bug Fixes

Miscellaneous Chores