AskYourPDF is a powerful Python application built with Streamlit and LangChain, designed to make PDF documents interactive and easily queryable. This project leverages LangChain's capabilities, including text splitting, embeddings, and vector stores, to enhance the user experience when working with PDFs. Whether you want to perform a similarity search, retrieve top-k chunks, or submit questions to language models like OpenAI or Falcon-7B, AskYourPDF streamlines the process with an intuitive and user-friendly interface.
Store your API keys in a .env file.
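As a rough sketch of what loading a .env file involves (the python-dotenv package's `load_dotenv()` does this for you; the file name and key below are illustrative):

```python
import os
import pathlib

def load_env(path=".env"):
    """Minimal stand-in for python-dotenv's load_dotenv():
    read KEY=VALUE lines and export them as environment variables."""
    for line in pathlib.Path(path).read_text().splitlines():
        line = line.strip()
        # Skip blank lines, comments, and malformed lines
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Do not overwrite variables already set in the environment
        os.environ.setdefault(key.strip(), value.strip().strip('"'))

# A .env file containing a line like
#   OPENAI_API_KEY=sk-...
# makes os.getenv("OPENAI_API_KEY") return that key after load_env().
```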
Utilizing the FAISS vector store, the application processes PDFs, creating vector representations of text chunks using OpenAI embeddings. These vectors are stored in FAISS, enabling quick retrieval of semantically similar chunks in response to user queries. The retrieved chunks are then passed to a Large Language Model (LLM) to generate contextually relevant responses. The application uses Streamlit for the GUI and LangChain to interact with the LLM.
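The pipeline above can be sketched without LangChain or FAISS: a toy bag-of-words embedding stands in for OpenAI embeddings, and a plain list stands in for the FAISS index. All names here are illustrative, not the app's actual code:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: lowercase word counts. The real app uses
    OpenAI embeddings, which produce dense float vectors instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def split_text(text, chunk_size=40):
    """Crude fixed-size splitter, standing in for LangChain's text splitters."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def top_k_chunks(query, chunks, k=3):
    """Rank chunks by similarity to the query; FAISS does this at scale."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

In the real application, only these top-k chunks (rather than the whole PDF) are placed in the LLM prompt, which keeps the prompt short and the answer grounded in the document.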
To use the application, run the respective .py files with the Streamlit CLI (after having installed Streamlit):

```shell
streamlit run app.py
```
The Falcon API currently has extended question-answering response times, averaging 10-15 minutes per query. This delay may be due to server overload on the Falcon API side.
This lets you choose between the OpenAI and Falcon LLMs for PDF question answering. Choose one of the LLMs, then upload the file you want to ask questions about. Here is the output picture:
This code provides the following output:
- Chunks with Similar Context/Meaning as the Question: Provides chunks of text identified with context or meaning similar to the user's question.
- Top 3 Chunks Similar to the Question: Displays the three most relevant text chunks related to the user's question.
- Answer from the LLM (Language Model): Outputs the question's answer generated by the Language Model.
- Determining the 'k' Value for Chunk Retrieval: Presents the 'k' value, i.e., the number of text chunks retrieved for each query.

Output pictures:
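The "Top 3 Chunks" and 'k' outputs above boil down to a top-k selection over similarity scores. A minimal sketch, where the threshold, cap, and function names are assumptions for illustration rather than the app's actual logic:

```python
import heapq

def determine_k(scores, threshold=0.3, max_k=5):
    """One plausible way to 'determine k': count the chunks whose
    similarity clears a threshold, capped at max_k."""
    return min(sum(1 for s in scores if s >= threshold), max_k)

def select_top_k(scored_chunks, k=3):
    """Return the k most similar chunks, best first.
    scored_chunks: list of (similarity, chunk_text) pairs."""
    return [text for _, text in
            heapq.nlargest(k, scored_chunks, key=lambda p: p[0])]
```

With k=3, `select_top_k` reproduces the "Top 3 Chunks Similar to the Question" output described above.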
This code uses the OpenAI LLM for question answering. It does not display chunks with similar meaning or determine the 'k' value for each chunk. Here is the output picture:
This code uses the falcon-7b-instruct LLM for question answering. It does not display chunks with similar meaning or determine the 'k' value for each chunk. Here is the output picture: