These are some techniques to assess; deciding whether each one is worth trying is up to the assignee, who should experiment with the techniques that work best on the selected dataset.
mrm1001 changed the title from "Find a technique to improve performance on table QA" to "Research and test different techniques to improve performance on table QA" on Jun 14, 2024.
I think in addition to the proposed points (which mostly focus on retrieval), it would also be good to add tasks that focus on answer generation with LLMs. For example,
What table format works best within LLM prompts? Markdown, XML, CSV, etc.
Are there LLMs that are capable of doing basic math? E.g. questions that require comparing two numbers in a table, such as "Did the revenue for company X increase in 2023 compared to 2022?"
Are visual QA models better than text-only LLMs?
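To compare formats empirically, one could serialize the same table several ways and run the QA prompt over each variant. A minimal sketch (the toy table and helper names are illustrative, not part of any existing pipeline):

```python
# A toy table as a list of dicts, one dict per row.
table = [
    {"year": "2022", "revenue": "120.5"},
    {"year": "2023", "revenue": "134.2"},
]

def to_markdown(rows):
    """Render the table as a GitHub-style Markdown table."""
    cols = list(rows[0])
    lines = ["| " + " | ".join(cols) + " |",
             "| " + " | ".join("---" for _ in cols) + " |"]
    lines += ["| " + " | ".join(r[c] for c in cols) + " |" for r in rows]
    return "\n".join(lines)

def to_csv(rows):
    """Render the table as plain CSV (no quoting, toy data only)."""
    cols = list(rows[0])
    return "\n".join([",".join(cols)] + [",".join(r[c] for c in cols) for r in rows])

def to_xml(rows):
    """Render the table as simple row/cell XML."""
    cells = "".join(
        "<row>" + "".join(f"<{c}>{r[c]}</{c}>" for c in r) + "</row>" for r in rows
    )
    return f"<table>{cells}</table>"

# Each rendering would be embedded into the same QA prompt and the
# answer accuracy compared per format on the evaluation dataset.
for fmt in (to_markdown, to_csv, to_xml):
    prompt = f"Answer using the table below:\n{fmt(table)}\n\nQuestion: ..."
```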
Also for the retrieval side of things:
I believe @ju-gu and @bglearning have found BM25 to work as a decent baseline for retrieving tables.
A big improvement to table retrieval could come from finding the text in the PDF (or other file) that discusses a table and attaching that text to the table as metadata. This relevant context is often not physically near the table in the file, which makes this type of extraction challenging.
Lastly, a big question on my mind is how to effectively combine text and table retrieval into a single RAG pipeline. If we end up using different retrieval techniques for text and tables respectively, how do we decide how many text and table documents to send to the LLM? For example, do we just always send the top-5 text chunks and top-5 tables every time?
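One simple baseline for that last question is a fixed per-modality quota, with scores min-max normalized so the two retrievers become roughly comparable. A sketch, assuming each retriever returns `(doc_id, score)` pairs (the function and parameter names here are hypothetical):

```python
def merge_results(text_hits, table_hits, top_k=10, min_tables=2):
    """Merge two ranked lists of (doc_id, score) into one context list.

    Scores from different retrievers live on different scales, so each
    list is min-max normalized before merging; a small quota guarantees
    at least `min_tables` tables survive into the final context.
    """
    def normalize(hits):
        if not hits:
            return []
        scores = [s for _, s in hits]
        lo, hi = min(scores), max(scores)
        span = (hi - lo) or 1.0
        return [(doc, (s - lo) / span) for doc, s in hits]

    tables = sorted(normalize(table_hits), key=lambda x: -x[1])
    texts = sorted(normalize(text_hits), key=lambda x: -x[1])

    # Reserve slots for the top tables, then fill the remaining slots
    # from the combined pool by normalized score.
    reserved = tables[:min_tables]
    pool = texts + tables[min_tables:]
    pool.sort(key=lambda x: -x[1])
    merged = reserved + pool[: top_k - len(reserved)]
    merged.sort(key=lambda x: -x[1])
    return [doc for doc, _ in merged]

# Example: one table slot is reserved even when text scores dominate.
docs = merge_results([("t1", 0.9), ("t2", 0.5)],
                     [("tab1", 12.0), ("tab2", 3.0)],
                     top_k=3, min_tables=1)
```

Whether a fixed quota beats a single jointly-calibrated ranker is exactly the kind of thing the evaluation dataset should decide.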
@sjrl In my experience, basic arithmetic has been a challenge for all LLMs I've worked with. Even though some LLMs, like code models, might perform slightly better at these tasks, their accuracy remains inconsistent, making them unreliable for production use.
For indexing PDFs, I suggest we could develop an agent that processes the document chunk by chunk in an interactive manner, extracting specific facts such as a company's net profit. This agent could add references to tables, images, and other relevant elements, which we could then use to enhance the metadata of those elements. By indexing all chunks in a document store and maintaining separate storage for tables and images, we could preserve context during retrieval. When a table needs to be accessed, the retrieval process would pull up a passage that references it. Additionally, filtering rows and columns of tables before passing them to the LLM might help avoid confusion.
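The row/column filtering idea at the end could be prototyped with a plain keyword-overlap heuristic before reaching for anything learned. A sketch, assuming the table arrives as a list of dicts (all names here are hypothetical; embedding similarity would likely be more robust than token overlap):

```python
import re

def prune_table(rows, question, max_rows=20):
    """Keep only the columns and rows that share tokens with the question.

    `rows` is the table as a list of dicts (one dict per row). Columns
    whose headers share no token with the question are dropped (unless
    that would drop everything); rows are ranked by cell-token overlap.
    """
    tokens = lambda s: set(re.findall(r"\w+", s.lower()))
    q_tokens = tokens(question)

    cols = list(rows[0])
    keep_cols = [c for c in cols if tokens(c) & q_tokens] or cols

    def row_score(row):
        return len(tokens(" ".join(str(v) for v in row.values())) & q_tokens)

    kept = sorted(rows, key=row_score, reverse=True)[:max_rows]
    return [{c: r[c] for c in keep_cols} for r in kept]

# Example: only the 'revenue' column and year-matching rows survive.
pruned = prune_table(
    [{"year": "2022", "revenue": "120"}, {"year": "2023", "revenue": "134"}],
    "Did the revenue increase in 2023?",
)
```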
I understand this needs testing, but I am confident in its potential. I'll start working on this after my exams.