Vsearch


Vsearch: Disentangling Data on LM Vocabulary Space for Search.

An extensible, transparent, and trainable toolbox for retrieval-augmented frameworks. It is designed to be user-friendly, efficient, and accessible, so that anyone can customize and deploy their own retrieval-based applications.

This repository includes:

What's New 🔥

🗺 Overview

  1. Preparation

    • Setup Environment
    • Download Data
    • Testing
  2. Quick Start

    • Text-to-text Retrieval
    • Cross-modal Retrieval
    • Disentanglement and Reasoning
    • Semi-parametric Search
    • Visualization
  3. Training (in development 🔧, expected to be released soon)

  4. Inference

    • Build index
    • Search
    • Scoring

💻 Preparation

Setup Environment

Setup Environment via poetry (suggested)

# install poetry first
# curl -sSL https://install.python-poetry.org | python3 -
poetry install
poetry shell

Setup Environment via pip

conda create -n vdr python=3.9
conda activate vdr
pip install -r requirements.txt
Download Data

Download data using identifiers in the YAML configuration files at conf/data_stores/*.yaml.

# Download a single dataset file
python download.py nq_train
# Download multiple dataset files:
python download.py nq_train trivia_train
# Download all dataset files:
python download.py all
Testing
python -m examples.demo.quick_start
# Expected Output:
# tensor([[91.1257, 17.6930, 13.0358, 12.4576]], device='cuda:0')
# tensor([[0.3209, 0.0984]])

🚀 Quick Start

Text-to-text Retrieval
>>> import torch
>>> from src.vdr import Retriever

# Initialize the retriever
>>> vdr_text2text = Retriever.from_pretrained("vsearch/vdr-nq")

# Set up the device
>>> device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
>>> vdr_text2text = vdr_text2text.to(device)

# Define a query and a list of passages
>>> query = "What are the benefits of drinking green tea?"
>>> passages = [
...     "Green tea is known for its antioxidant properties, which can help protect cells from damage caused by free radicals. It also contains catechins, which have been shown to have anti-inflammatory and anti-cancer effects. Drinking green tea regularly may help improve overall health and well-being.",
...     "The history of coffee dates back to ancient times, with its origins in Ethiopia. Coffee is one of the most popular beverages in the world and is enjoyed by millions of people every day.",
...     "Yoga is a mind-body practice that combines physical postures, breathing exercises, and meditation. It has been practiced for thousands of years and is known for its many health benefits, including stress reduction and improved flexibility.",
...     "Eating a balanced diet that includes a variety of fruits, vegetables, whole grains, and lean proteins is essential for maintaining good health. It provides the body with the nutrients it needs to function properly and can help prevent chronic diseases."
... ]

# Embed the query and passages
>>> q_emb = vdr_text2text.encoder_q.embed(query)  # Shape: [1, V]
>>> p_emb = vdr_text2text.encoder_p.embed(passages)  # Shape: [4, V]

# Query-passage Relevance
>>> scores = q_emb @ p_emb.t()
>>> print(scores)

# Output: 
# tensor([[91.1257, 17.6930, 13.0358, 12.4576]], device='cuda:0')
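
The scores can be turned into a ranking directly; below is a minimal follow-up sketch in plain PyTorch (no additional Vsearch API assumed):

# Rank passages by relevance to the query (highest score first)
>>> ranking = torch.argsort(scores, dim=-1, descending=True)
>>> print(ranking)

# Output:
# tensor([[0, 1, 2, 3]], device='cuda:0')
# i.e. the green-tea passage (index 0) is the most relevant to this query
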
Cross-modal Retrieval
# Note: we use `encoder_q` for text and `encoder_p` for image
>>> vdr_cross_modal = Retriever.from_pretrained("vsearch/vdr-cross-modal") 

>>> image_file = './examples/images/mars.png'
>>> texts = [
...     "Four thousand Martian days after setting its wheels in Gale Crater on Aug. 5, 2012, NASAโ€™s Curiosity rover remains busy conducting exciting science. The rover recently drilled its 39th sample then dropped the pulverized rock into its belly for detailed analysis.",
...     "ChatGPT is a chatbot developed by OpenAI and launched on November 30, 2022. Based on a large language model, it enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language."
... ]
>>> image_emb = vdr_cross_modal.encoder_p.embed(image_file) # Shape: [1, V]
>>> text_emb = vdr_cross_modal.encoder_q.embed(texts)  # Shape: [2, V]

# Image-text Relevance
>>> scores = image_emb @ text_emb.t()
>>> print(scores)

# Output: 
# tensor([[0.3209, 0.0984]])
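
Likewise, the best-matching caption for the image can be read off the scores; a minimal sketch using only PyTorch:

# Pick the text that best matches the image
>>> best_idx = scores.argmax(dim=-1).item()
>>> print(best_idx)

# Output:
# 0  (the Curiosity-rover caption matches the Mars image better than the ChatGPT text)
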
Disentanglement and Reasoning

Data Disentanglement

# Disentangling query embedding
>>> disentanglement = vdr_text2text.encoder_q.dst(query, topk=768, visual=True) # Generate a word cloud if `visual`=True
>>> print(disentanglement)

# Output: 
# {
#     'tea': 6.9349799156188965,
#     'green': 5.861555576324463,
#     'bitter': 4.233378887176514,
#     ...
# }

Retrieval Reasoning

# Retrieval reasoning on query-passage match
>>> reasons = vdr_text2text.explain(q=query, p=passages[0], topk=768, visual=True)
>>> print(reasons)

# Output: 
# {
#     'tea': 41.2425175410242,
#     'green': 38.784010452150596,
#     'effects': 1.1575102038585783,
#     ...
# }
Semi-Parametric Search

Alpha search

# non-parametric query -> parametric passage
>>> q_bin = vdr_text2text.encoder_q.embed(query, bow=True)
>>> p_emb = vdr_text2text.encoder_p.embed(passages)
>>> scores = q_bin @ p_emb.t()

Beta search

# parametric query -> non-parametric passage (binary token index)
>>> q_emb = vdr_text2text.encoder_q.embed(query)
>>> p_bin = vdr_text2text.encoder_p.embed(passages, bow=True)
>>> scores = q_emb @ p_bin.t()
Visualization
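
The word-cloud figures originally shown in this section are not reproduced here. If you want to render one yourself from the dictionary returned by `dst`, a minimal sketch using the third-party wordcloud package (an assumption, not part of this repository) looks like this:

# Assumes `disentanglement` is the {token: weight} dict returned by encoder_q.dst(query, topk=768)
>>> from wordcloud import WordCloud
>>> weights = {t: w for t, w in disentanglement.items() if w > 0}  # frequencies must be positive
>>> wc = WordCloud(width=800, height=400, background_color="white").generate_from_frequencies(weights)
>>> wc.to_file("query_wordcloud.png")
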

👾 Training

We test on Python 3.9 and torch 2.2.1. Configuration is handled through hydra==1.3.2.

EXPERIMENT_NAME=test
python -m torch.distributed.launch --nnodes=1 --nproc_per_node=4 train_vdr.py \
hydra.run.dir=./experiments/${EXPERIMENT_NAME}/train \
train=vdr_nq \
data_stores=wiki21m \
train_datasets=[nq_train]
  • hydra.run.dir: Directory where training logs and outputs will be saved.
  • train: Identifier of the training config, in conf/train/*.yaml.
  • data_stores: Identifier of the datastore, in conf/data_stores/*.yaml.
  • train_datasets: List of identifiers of the training datasets to use, defined in the datastore config.

During training, we display an InfoCard to monitor training progress.

Tip

What is InfoCard?

The InfoCard is an organized log generated during training that helps visually track progress.

An InfoCard looks like this:

InfoCard Layout

  1. Global Variables (v_q_global, v_p_global, etc.):

    • Shape: Displays the dimensions of the variable matrix.
    • Gate: Sparsity of the variable, reported as the ratio of non-zero activations.
    • Mean, Max, Min: Statistical measures of the data distribution within the variable (see the sketch after this list).
  2. EXAMPLE Section:

    • Contains one sample from the training batch, including the query text (Q_TEXT), a positive passage (P_TEXT1), a negative passage (P_TEXT2), and the correct answer (ANSWER).
  3. Token Triple Sections (v_q, v_p, v_p_neg, v_q * v_p), which show token-level impact:

    • Token (t): The specific vocabulary token.
    • Query Rank (qrank): Rank of the token in the query representation.
    • Passage Rank (prank): Rank of the token in the passage representation.
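
For reference, the per-variable statistics from item 1 (shape, gate, mean/max/min) can be reproduced from any embedding matrix; here is a minimal sketch in plain PyTorch on a toy tensor (the variable name and shape are only examples):

>>> import torch
>>> v_q = torch.randn(4, 1000).relu()        # toy batch of sparse, non-negative embeddings
>>> gate = (v_q != 0).float().mean().item()  # ratio of non-zero activations
>>> print(f"shape={tuple(v_q.shape)} gate={gate:.3f} "
...       f"mean={v_q.mean().item():.3f} max={v_q.max().item():.3f} min={v_q.min().item():.3f}")
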

🎮 Inference

1. Build a Binary Token Index

To construct a binary token index for a text corpus:

python -m inference.build_index.build_binary_index \
        --text_file="path/to/your/corpus_file.jsonl" \
        --save_file="path/to/your/output_index.npz" \
        --batch_size=32 \
        --num_shift=999 \
        --max_len=256

Parameters:

  • --text_file: Path to the corpus file to be indexed (.jsonl format).
  • --save_file: Path where the index file will be saved (.npz format).
  • --batch_size: Batch size for processing.
  • --num_shift: Allows for shifting the vocabulary token IDs by a specified amount.
  • --max_len: Maximum length for tokenization of the documents.
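
Before indexing, it can help to confirm that the corpus file really is JSON Lines. Below is a minimal, repository-independent sketch (the path is a placeholder and no particular field names are assumed):

import json

corpus_path = "path/to/your/corpus_file.jsonl"  # placeholder
with open(corpus_path, encoding="utf-8") as f:
    for i, line in enumerate(f):
        record = json.loads(line)  # raises ValueError if a line is not valid JSON
        if i < 3:
            print(list(record))    # peek at the fields of the first few records
        else:
            break
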

2. Beta Search on Binary Token Index

python -m inference.search.beta_search \
        --checkpoint=vsearch/vdr-nq \
        --query_file="path/to/your/query_file.jsonl" \
        --text_file="path/to/your/corpus_file.jsonl" \
        --index_file="path/to/your/index_file.npz" \
        --save_file="path/to/your/search_result.json"  \
        --device=cuda

Parameters:

  • --query_file: Path to a file containing questions, one question per line (.jsonl format).
  • --qa_file: Path to the DPR-provided QA file (.csv format). Required if --query_file is not provided.
  • --text_file: Path to the corpus file (.jsonl format).
  • --index_file: Path to pre-computed index file (.npz format).
  • --save_file: Path where the search results will be stored (.json format).
  • --batch_size: Number of queries per batch.
  • --num_rerank: Number of passages to re-rank.
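
Once the search finishes, the results file can be inspected with standard JSON tooling before scoring. A minimal sketch that only assumes the output is valid JSON:

import json

with open("path/to/your/search_result.json", encoding="utf-8") as f:  # placeholder path
    results = json.load(f)

# Peek at the top-level structure before writing any downstream evaluation code
print(type(results).__name__)
print(list(results)[:3] if isinstance(results, (dict, list)) else results)
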

3. Scoring on Wiki21m benchmark

python -m inference.score.eval_wiki21m \
        --text_file="path/to/your/corpus_file.jsonl" \
        --result_file="path/to/your/search_result.json" \
        --qa_file="path/to/your/dpr_qa_file.csv"

Parameters:

  • --text_file: Path to the corpus file (.jsonl format).
  • --result_file: Path to search results (.json format).
  • --qa_file: Path to the DPR-provided QA file (.csv format).

๐Ÿ‰ Citation

If you find this repository useful, please consider giving it a ⭐ and citing our papers:

@inproceedings{zhou2023retrieval,
  title={Retrieval-based Disentangled Representation Learning with Natural Language Supervision},
  author={Zhou, Jiawei and Li, Xiaoguang and Shang, Lifeng and Jiang, Xin and Liu, Qun and Chen, Lei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024}
}
@article{zhou2024semi,
  title={Semi-Parametric Retrieval via Binary Token Index},
  author={Zhou, Jiawei and Dong, Li and Wei, Furu and Chen, Lei},
  journal={arXiv preprint arXiv:2405.01924},
  year={2024}
}

License

VDR is licensed under the terms of the MIT license. See LICENSE for more details.
