Adding a guard that prevents the tutorial from being executed in every subprocess on Windows #729

Merged: 1 commit, Jan 13, 2021
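The change below moves each tutorial's module-level code into a function that is only called behind an `if __name__ == "__main__":` guard. A minimal sketch of why this matters, assuming standard multiprocessing semantics (the names here are illustrative, not taken from the PR): on Windows there is no fork(), so multiprocessing uses the "spawn" start method, which re-imports the main module in every worker process. Any module-level code (launching Docker, indexing documents, loading models) would therefore run once per subprocess; FARMReader presumably triggers this via the multiprocessing it uses for preprocessing.

import multiprocessing


def work(x):
    return x * x


def main():
    # Expensive setup lives here, so spawned workers that re-import
    # this module do not re-run it.
    with multiprocessing.Pool(2) as pool:
        print(pool.map(work, [1, 2, 3]))


if __name__ == "__main__":
    main()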
264 changes: 135 additions & 129 deletions tutorials/Tutorial1_Basic_QA_Pipeline.py
@@ -22,133 +22,139 @@
from haystack.utils import print_answers
from haystack.retriever.sparse import ElasticsearchRetriever

def tutorial1_basic_qa_pipeline():
    logger = logging.getLogger(__name__)

    LAUNCH_ELASTICSEARCH = True

    # ## Document Store
    #
    # Haystack finds answers to queries within the documents stored in a `DocumentStore`. The current implementations
    # of `DocumentStore` include `ElasticsearchDocumentStore`, `FAISSDocumentStore`, `SQLDocumentStore`, and
    # `InMemoryDocumentStore`.
    #
    # **Here:** We recommend Elasticsearch, as it comes preloaded with features like full-text queries, BM25 retrieval,
    # and vector storage for text embeddings.
    # **Alternatives:** If you are unable to set up an Elasticsearch instance, then follow Tutorial 3
    # for using SQL/InMemory document stores.
    # **Hint:**
    # This tutorial creates a new document store instance with Wikipedia articles on Game of Thrones. However, you can
    # configure Haystack to work with your existing document stores.
    #
    # Start an Elasticsearch server:
    # You can start Elasticsearch on your local machine using Docker. If Docker is not readily available in
    # your environment (e.g., in Colab notebooks), then you can manually download and run Elasticsearch from source.
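    # For the Tutorial 3 route without Elasticsearch, the in-memory store is a one-liner.
    # A sketch; the import path is an assumption based on the Haystack version this PR targets,
    # so check your installed release:
    #
    # from haystack.document_store.memory import InMemoryDocumentStore
    # document_store = InMemoryDocumentStore()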

    if LAUNCH_ELASTICSEARCH:
        logging.info("Starting Elasticsearch ...")
        status = subprocess.run(
            ['docker run -d -p 9200:9200 -e "discovery.type=single-node" elasticsearch:7.6.2'], shell=True
        )
        if status.returncode:
            raise Exception("Failed to launch Elasticsearch. If you want to connect to an existing Elasticsearch "
                            "instance, then set LAUNCH_ELASTICSEARCH in the script to False.")
        time.sleep(15)
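    # Instead of the fixed sleep above, you could poll until Elasticsearch answers.
    # A sketch using only the standard library; not part of the original tutorial:
    #
    # import urllib.request, urllib.error
    # for _ in range(30):
    #     try:
    #         urllib.request.urlopen("http://localhost:9200")
    #         break
    #     except urllib.error.URLError:
    #         time.sleep(1)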

    # Connect to Elasticsearch
    document_store = ElasticsearchDocumentStore(host="localhost", username="", password="", index="document")

    # ## Preprocessing of documents
    #
    # Haystack provides a customizable pipeline for:
    # - converting files into texts
    # - cleaning texts
    # - splitting texts
    # - writing them to a Document Store

    # In this tutorial, we download Wikipedia articles about Game of Thrones, apply a basic cleaning function, and
    # index them in Elasticsearch.

    # Let's first fetch some documents that we want to query.
    # Here: 517 Wikipedia articles for Game of Thrones
    doc_dir = "data/article_txt_got"
    s3_url = "https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/documents/wiki_gameofthrones_txt.zip"
    fetch_archive_from_http(url=s3_url, output_dir=doc_dir)

    # Convert files to dicts containing documents that can be indexed into our datastore.
    dicts = convert_files_to_dicts(dir_path=doc_dir, clean_func=clean_wiki_text, split_paragraphs=True)
    # You can optionally supply a cleaning function that is applied to each doc (e.g. to remove footers);
    # it must take a str as input and return a str.

    # Now, let's write the docs to our DB.
    if LAUNCH_ELASTICSEARCH:
        document_store.write_documents(dicts)
    else:
        logger.warning("Since we already have a running ES instance, we should not index the same documents again.\n"
                       "If you still want to do this, call document_store.write_documents(dicts) manually.")

    # ## Initialize Retriever, Reader & Finder
    #
    # ### Retriever
    #
    # Retrievers help narrow down the scope for the Reader to smaller units of text where a given question
    # could be answered.
    #
    # They use simple but fast algorithms.
    # **Here:** We use Elasticsearch's default BM25 algorithm.
    # **Alternatives:**
    # - Customize the `ElasticsearchRetriever` with custom queries (e.g. boosting) and filters
    # - Use `EmbeddingRetriever` to find candidate documents based on the similarity of
    #   embeddings (e.g. created via Sentence-BERT)
    # - Use `TfidfRetriever` in combination with a SQL or InMemory document store for simple prototyping and debugging

    retriever = ElasticsearchRetriever(document_store=document_store)

    # Alternative: An in-memory TfidfRetriever based on Pandas dataframes for building quick prototypes
    # with a SQLite document store.
    #
    # from haystack.retriever.tfidf import TfidfRetriever
    # retriever = TfidfRetriever(document_store=document_store)
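    # Another alternative, sketched here for orientation: the dense `EmbeddingRetriever`
    # mentioned above. The module path and embedding model name are assumptions based on
    # Haystack releases from around this time; documents need embeddings in the store,
    # hence the update_embeddings() call.
    #
    # from haystack.retriever.dense import EmbeddingRetriever
    # retriever = EmbeddingRetriever(document_store=document_store,
    #                                embedding_model="deepset/sentence_bert")
    # document_store.update_embeddings(retriever)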

    # ### Reader
    #
    # A Reader scans the texts returned by retrievers in detail and extracts the k best answers. Readers are based
    # on powerful but slower deep learning models.
    #
    # Haystack currently supports Readers based on the frameworks FARM and Transformers.
    # With both you can either load a local model or one from Hugging Face's model hub (https://huggingface.co/models).
    # **Here:** a medium-sized RoBERTa QA model using a Reader based on
    # FARM (https://huggingface.co/deepset/roberta-base-squad2)
    # **Alternatives (Reader):** TransformersReader (leveraging the `pipeline` of the Transformers package)
    # **Alternatives (Models):** e.g. "distilbert-base-uncased-distilled-squad" (fast) or
    # "deepset/bert-large-uncased-whole-word-masking-squad2" (good accuracy)
    # **Hint:** You can adjust the model to return "no answer possible" with the no_ans_boost parameter. Higher values
    # mean the model prefers "no answer possible".
    #
    # #### FARMReader

    # Load a local model or any of the QA models on
    # Hugging Face's model hub (https://huggingface.co/models)
    reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2", use_gpu=True)
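    # A sketch of the no_ans_boost hint above (the parameter name comes from the tutorial's
    # own comments; the value 0.5 is an illustrative assumption, not a recommendation):
    # reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2",
    #                     use_gpu=True, no_ans_boost=0.5)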

    # #### TransformersReader

    # Alternative:
    # reader = TransformersReader(
    #     model_name_or_path="distilbert-base-uncased-distilled-squad", tokenizer="distilbert-base-uncased", use_gpu=-1)

    # ### Finder
    #
    # The Finder sticks together reader and retriever in a pipeline to answer our actual questions.

    finder = Finder(reader, retriever)

    # ## Voilà! Ask a question!
    # You can configure how many candidates the reader and retriever shall return.
    # The higher top_k_retriever, the better (but also the slower) your answers.
    prediction = finder.get_answers(question="Who is the father of Sansa Stark?", top_k_retriever=10, top_k_reader=5)

    # prediction = finder.get_answers(question="Who created the Dothraki vocabulary?", top_k_reader=5)
    # prediction = finder.get_answers(question="Who is the sister of Sansa?", top_k_reader=5)

    print_answers(prediction, details="minimal")


if __name__ == "__main__":
    tutorial1_basic_qa_pipeline()
67 changes: 36 additions & 31 deletions tutorials/Tutorial2_Finetune_a_model_on_your_data.py
@@ -10,38 +10,43 @@
from haystack.reader.farm import FARMReader


def tutorial2_finetune_a_model_on_your_data():
    # ## Create Training Data
    #
    # There are two ways to generate training data:
    #
    # 1. **Annotation**: You can use the annotation tool (https://github.com/deepset-ai/haystack#labeling-tool) to label
    #    your data, i.e. highlight answers to your questions in a document. The tool supports structuring
    #    your workflow with organizations, projects, and users. The labels can be exported in SQuAD format,
    #    which is compatible with Haystack for training.
    #
    # 2. **Feedback**: For production systems, you can collect training data from direct user feedback via Haystack's
    #    REST API interface. This includes a customizable user feedback API for providing feedback on the
    #    answer returned by the API. The API provides a feedback export endpoint to obtain the feedback data
    #    for fine-tuning your model further.
    #
    #
    # ## Fine-tune your model
    #
    # Once you have collected training data, you can fine-tune your base model.
    # We initialize a reader as a base model and fine-tune it on our own custom dataset (which should be in
    # SQuAD-like format). We recommend using a base model that was already trained on SQuAD or a similar QA
    # dataset to benefit from transfer learning effects.
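    # A minimal sketch of that SQuAD-like format (field names follow the public SQuAD v2
    # schema; the toy record itself is ours, not from the tutorial data):
    #
    # {
    #   "data": [{
    #     "title": "Game of Thrones",
    #     "paragraphs": [{
    #       "context": "Eddard Stark is the father of Sansa Stark.",
    #       "qas": [{
    #         "id": "1",
    #         "question": "Who is the father of Sansa Stark?",
    #         "answers": [{"text": "Eddard Stark", "answer_start": 0}],
    #         "is_impossible": false
    #       }]
    #     }]
    #   }]
    # }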

    # **Recommendation:** Run training on a GPU. To do so, change the `use_gpu` arguments below to `True`.

    reader = FARMReader(model_name_or_path="distilbert-base-uncased-distilled-squad", use_gpu=True)
    train_data = "data/squad20"
    # train_data = "PATH/TO_YOUR/TRAIN_DATA"
    reader.train(data_dir=train_data, train_filename="dev-v2.0.json", use_gpu=True, n_epochs=1, save_dir="my_model")

    # Saving the model happens automatically at the end of training, into the `save_dir` you specified.
    # However, you could also save a reader manually again via:
    reader.save(directory="my_model")

    # If you want to load it at a later point, just do:
    new_reader = FARMReader(model_name_or_path="my_model")


if __name__ == "__main__":
    tutorial2_finetune_a_model_on_your_data()