
Document Stores

You can think of the Document Store as a "database" that:

  • stores your texts and metadata
  • provides them to the Retriever at query time

By far the most common way to use a Document Store in Haystack is to fetch documents using a Retriever. A Document Store needs to be provided as an argument to the initialization of a Retriever. Note that the Retriever functions as a Node while a Document Store does not.
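
For example, a minimal sketch of this pattern, assuming the InMemoryDocumentStore and Haystack's TfidfRetriever node (other store and Retriever combinations follow the same pattern):

from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import TfidfRetriever

document_store = InMemoryDocumentStore()
# The Document Store is passed to the Retriever at initialization time
retriever = TfidfRetriever(document_store=document_store)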

Initialisation

Initialising a new DocumentStore within Haystack is straightforward.

<Disclosures options={[ { title: "Elasticsearch", content: (

Install Elasticsearch and then start an instance.



If you have Docker set up, we recommend pulling the Docker image and running it.

docker pull docker.elastic.co/elasticsearch/elasticsearch:7.9.2
docker run -d -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.9.2
Next you can initialize the Haystack object that will connect to this instance.

from haystack.document_stores import ElasticsearchDocumentStore
document_store = ElasticsearchDocumentStore()
) }, { title: "Open Distro for Elasticsearch", content: (
Learn how to get started here. If you have Docker set up, we recommend pulling the Docker image and running it.
docker pull amazon/opendistro-for-elasticsearch:1.13.2
docker run -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" amazon/opendistro-for-elasticsearch:1.13.2
Next you can initialize the Haystack object that will connect to this instance.
from haystack.document_stores import OpenDistroElasticsearchDocumentStore
document_store = OpenDistroElasticsearchDocumentStore()
) }, { title: "OpenSearch", content: (
Learn how to get started here. If you have Docker set up, we recommend pulling the Docker image and running it.
docker pull opensearchproject/opensearch:1.0.1
docker run -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" opensearchproject/opensearch:1.0.1
Next you can initialize the Haystack object that will connect to this instance.
from haystack.document_stores import OpenSearchDocumentStore
document_store = OpenSearchDocumentStore()
) }, { title: "Milvus", content: (
Follow the official documentation to start a Milvus instance via Docker. Note that we also have a utility function haystack.utils.launch_milvus that can start up a Milvus instance.
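
If you want to use that helper instead, a sketch (it assumes Docker is available locally):

from haystack.utils import launch_milvus
launch_milvus()  # starts a Milvus instance in a Docker container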



You can initialize the Haystack object that will connect to this instance as follows:

from haystack.document_stores import MilvusDocumentStore
document_store = MilvusDocumentStore()
) }, { title: "FAISS", content: (
The FAISSDocumentStore requires no external setup. Start it by simply running the following lines.

from haystack.document_stores import FAISSDocumentStore
document_store = FAISSDocumentStore(faiss_index_factory_str="Flat")

Save & Load

FAISS document stores can be saved to disk and reloaded:
from haystack.document_stores import FAISSDocumentStore
document_store = FAISSDocumentStore(faiss_index_factory_str="Flat")
# Generates two files: my_faiss_index.faiss and my_faiss_index.json
document_store.save("my_faiss_index.faiss")
# Looks for the two files generated above
new_document_store = FAISSDocumentStore.load("my_faiss_index.faiss")
assert new_document_store.faiss_index_factory_str == "Flat"
While my_faiss_index.faiss contains the index, my_faiss_index.json contains the parameters used to initialize it (like faiss_index_factory_str). This configuration file is necessary for load() to work. It simply contains the initial parameters in JSON format.

For example, a hand-written configuration file for the above FAISS index could look like:

{
  "faiss_index_factory_str": "Flat"
}
) }, { title: "In Memory", content: (
The InMemoryDocumentStore requires no external setup. Start it by simply running the following lines.
from haystack.document_stores import InMemoryDocumentStore
document_store = InMemoryDocumentStore()
) }, { title: "SQL", content: (
The SQLDocumentStore requires SQLite, PostgreSQL or MySQL to be installed and started. Note that SQLite already comes packaged with most operating systems.
from haystack.document_stores import SQLDocumentStore
document_store = SQLDocumentStore()
) }, { title: "Weaviate", content: (
The WeaviateDocumentStore requires a running Weaviate Server version 1.8 or later. You can start a basic instance like this (see the Weaviate docs for details):
docker run -d -p 8080:8080 --env AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED='true' --env PERSISTENCE_DATA_PATH='/var/lib/weaviate' semitechnologies/weaviate:1.12.0
Afterwards, you can use it in Haystack:
from haystack.document_stores import WeaviateDocumentStore
document_store = WeaviateDocumentStore()
Each DocumentStore constructor allows for arguments specifying how to connect to existing databases and the names of indexes. See API documentation for more info.
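For example, a sketch of connecting to an existing Weaviate instance (the host, port and index values below are only illustrative defaults):
document_store = WeaviateDocumentStore(host="http://localhost", port=8080, index="Document")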
) }, { title: "Pinecone", content: (
To initialize the Pinecone document store, provide an API key and a Pinecone cloud environment. To obtain the API key, create a free Pinecone account. The environment defaults to us-west1-gcp.

import os
from haystack.document_stores import PineconeDocumentStore
document_store = PineconeDocumentStore(api_key=os.environ["PINECONE_API_KEY"])
) } ]} />

Input Format

DocumentStores expect Documents in dictionary form, like the example below. They are loaded using the DocumentStore.write_documents() method. See PreProcessor for more information on the cleaning and splitting steps that will help you maximize Haystack's performance.

from haystack.document_stores import ElasticsearchDocumentStore

document_store = ElasticsearchDocumentStore()
dicts = [
    {
        'content': DOCUMENT_TEXT_HERE,
        'meta': {'name': DOCUMENT_NAME, ...}
    }, ...
]
document_store.write_documents(dicts)

Writing Documents (Sparse Retrievers)

Haystack allows you to write documents into the Document Store in an optimised fashion so that query times can be kept low. For sparse, keyword-based retrievers such as BM25 and TF-IDF, you simply have to call DocumentStore.write_documents(). The creation of the inverted index, which optimises querying speed, is handled automatically.

document_store.write_documents(dicts)
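
As a sketch of what querying then looks like, assuming an Elasticsearch-backed store and Haystack's BM25Retriever node (the query string is only an example):

from haystack.nodes import BM25Retriever

retriever = BM25Retriever(document_store=document_store)
candidates = retriever.retrieve(query="What is a Document Store?", top_k=10)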

Writing Documents (Dense Retrievers)

For dense, neural-network-based retrievers like Dense Passage Retrieval or Embedding Retrieval, indexing involves computing the Document embeddings which will be compared against the Query embedding.

The storing of the text is handled by DocumentStore.write_documents() and the computation of the embeddings is started by DocumentStore.update_embeddings().

document_store.write_documents(dicts)
document_store.update_embeddings(retriever)

This step is computationally intensive since it will engage the transformer-based encoders. Having GPU acceleration will significantly speed this up.
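
Putting both steps together, a minimal sketch using an EmbeddingRetriever (the embedding model name is only an example):

from haystack.nodes import EmbeddingRetriever

retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/multi-qa-mpnet-base-dot-v1",
)
document_store.write_documents(dicts)
document_store.update_embeddings(retriever)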

Choosing the Right Document Store

The Document Stores have different characteristics. You should choose one depending on the maturity of your project, the use case and technical environment:

<Disclosures options={[ { title: "Elasticsearch", content: (

Pros:
  • Fast & accurate sparse retrieval with many tuning options
  • Basic support for dense retrieval
  • Production-ready
  • Also supports Open Distro
Cons:
  • Slow for dense retrieval with more than ~1 million documents
) }, { title: "Open Distro for Elasticsearch", content: (
Pros:
  • Fully open source (Apache 2.0 license)
  • Essentially the same features as Elasticsearch
Cons:
  • Slow for dense retrieval with more than ~1 million documents
) }, { title: "OpenSearch", content: (
Pros:
  • Fully open source (Apache 2.0 license)
  • Essentially the same features as Elasticsearch
  • Has more support for vector similarity comparisons and approximate nearest neighbour algorithms
Cons:
  • Not as optimized as dedicated vector similarity options like Milvus and FAISS
) }, { title: "Milvus", content: (
Pros:
  • Scalable DocumentStore that excels at handling vectors (hence suited to dense retrieval methods like DPR)
  • Encapsulates multiple ANN libraries (e.g. FAISS and ANNOY) and provides added reliability
  • Runs as a separate service (e.g. a Docker container)
  • Allows dynamic data management
Cons:
  • No efficient sparse retrieval
  • Does not support filters for queries
) }, { title: "FAISS", content: (
Pros:
  • Fast & accurate dense retrieval
  • Highly scalable due to approximate nearest neighbour algorithms (ANN)
  • Many options to tune dense retrieval via different index types (more info here)
Cons:
  • No efficient sparse retrieval
  • Does not support filters for queries
) }, { title: "In Memory", content: (
Pros:
  • Simple
  • No extra services or dependencies
Cons:
  • Slow retrieval on larger datasets
  • No Approximate Nearest Neighbours (ANN)
  • Not recommended for production
) }, { title: "SQL", content: (
Pros:
  • Simple & fast to test
  • No database requirements
  • Supports MySQL, PostgreSQL and SQLite
Cons:
  • Not scalable
  • Not persisting your data on disk
) }, { title: "Weaviate", content: (
Pros:
  • Simple vector search
  • Stores everything in one place: documents, metadata and vectors - so less network overhead when scaling this up
  • Allows combination of vector search and scalar filtering, i.e. you can filter for a certain tag and do dense retrieval on that subset
Cons:
  • Less options for ANN algorithms than FAISS or Milvus
  • No BM25 / TF-IDF retrieval
  • Does not support dot product similarity
) }, { title: "Pinecone", content: (
Pros:
  • A fully managed service for large-scale dense retrieval
  • Low query latency at any scale
  • Live index updates
  • Stores embeddings and metadata separately from the document content, which makes infrastructure setup and maintenance easier
Cons:
  • You still need to run a SQL database locally
  • You need to be on a paid plan to fully benefit from it
) } ]} />

Our Recommendations

Restricted environment: Use the InMemoryDocumentStore if you are just giving Haystack a quick try on a small sample and are working in a restricted environment that complicates running Elasticsearch or other databases.

All-rounder: Use the ElasticsearchDocumentStore if you want to evaluate the performance of different retrieval options (dense vs. sparse) and are aiming for a smooth transition from PoC to production.

Vector Specialist: Use the MilvusDocumentStore if you want to focus on dense retrieval and possibly deal with larger datasets.

Working with Existing Databases

If you have an existing Elasticsearch or OpenSearch database with indexed documents, you can very quickly make a Haystack-compliant version using our elasticsearch_index_to_document_store or open_search_index_to_document_store functions.

from haystack.document_stores.utils import elasticsearch_index_to_document_store

new_ds = elasticsearch_index_to_document_store(
    document_store=empty_document_store,
    original_content_field="content",
    original_index_name="document",
    original_name_field="title",
    preprocessor=preprocessor,
    port=9201,
    verify_certs=False,
    scheme="https",
    username="admin",
    password="admin"
)
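
The open_search_index_to_document_store function works in the same way for an existing OpenSearch index:
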
from haystack.document_stores.utils import open_search_index_to_document_store

new_ds = open_search_index_to_document_store(
    document_store=empty_document_store,
    original_content_field="content",
    original_index_name="document",
    original_name_field="title",
    preprocessor=preprocessor,
    port=9201,
    verify_certs=False,
    scheme="https",
    username="admin",
    password="admin"
)