1. Quick start

In this section, you'll find step-by-step instructions for setting up and running a small benchmark. If you're looking to expand FedShop by adding more engines or incorporating your own use case, we recommend checking out the Quick tutorial section for further guidance.

If you would like to reproduce the results obtained in FedShop, you can refer to this section.

1.1 Setup FedShop

For easy and efficient deployment of FedShop, we offer a containerized version of the platform, available on Docker Hub.

  • Run a container from the image:

# Run the FedShop container. The results will be in `/tmp/experiments`.
docker run --detach --privileged --network host --name fedshop \
    --volume /tmp/experiments:/FedShop/experiments \
    minhhoangdang/fedshop:amd64

# If you have an Apple chip, use `minhhoangdang/fedshop:arm64`.
# If you are behind a proxy, add "--env NO_PROXY=auth,localhost,127.0.0.1,192.168.0.1/24".
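
Before opening a shell, you can check that the container actually started. This is an optional sanity check we add here, using standard Docker commands:

# Optional: the `fedshop` container should be listed with status "Up".
docker ps --filter name=fedshop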
  • Open an interactive shell in the container:

docker exec -it fedshop /bin/bash

  • IMPORTANT: Update the FedShop repository within the container:

git fetch && git reset --hard origin/main --recurse-submodules
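
To confirm the update worked, you can print the commit the checkout now points at. This is an optional verification step, not part of the original instructions:

# Optional: show the current commit of the container's FedShop checkout.
git log -1 --oneline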

1.2 Launch a miniature version of FedShop

For this tutorial, we provide a scaled-down version of FedShop in `experiments/bsbm/config_small.yaml`. This setup creates:

  • 2 configurations:
    • 20 endpoints (batch0): 10 vendors + 10 rating-sites
    • 40 endpoints (batch1): 20 vendors + 20 rating-sites
  • 24 queries (2 random instantiations of each of the 12 query templates)
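
These numbers are driven by the configuration file itself; if you are curious, you can inspect it from inside the container. We assume here that your shell is in the /FedShop directory, as the relative paths in the commands below imply:

# Optional: view the miniature configuration and its scale parameters.
less experiments/bsbm/config_small.yaml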

The benchmark execution comprises three simple steps:

  1. Generate data (~10 min):

python rsfb/benchmark.py generate data experiments/bsbm/config_small.yaml
## See your datasets (from the host machine):
# ls /tmp/experiments/bsbm/model/dataset/*.nq
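
If you want a quick look at the generated data, the snippet below (our addition, assuming the default host-side output directory from the volume mount above) counts the sources and prints a few quads:

# Optional, from the host: count generated sources and peek at one file.
ls /tmp/experiments/bsbm/model/dataset/*.nq | wc -l
head -n 3 "$(ls /tmp/experiments/bsbm/model/dataset/*.nq | head -n 1)"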
  2. Generate queries (~9 min):

python rsfb/benchmark.py generate queries experiments/bsbm/config_small.yaml
## See your queries (from the host machine):
# ls /tmp/experiments/bsbm/benchmark/generation/q*/instance_*/injected.sparql
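Each injected.sparql file is a plain SPARQL query, so printing one is an easy sanity check. This is our addition, again assuming the default host-side output directory:

# Optional, from the host: display one generated query instance.
cat "$(ls /tmp/experiments/bsbm/benchmark/generation/q*/instance_*/injected.sparql | head -n 1)"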

More options for calling the FedShop generator: `python rsfb/benchmark.py generate --help`

  3. Evaluate FedShop on two engines, FedX and our Reference Source Assignment (RSA) (~12 min):

# If you are behind a proxy, make sure that you copy `~/.m2/settings.xml` to the container
docker cp ~/.m2/settings.xml fedshop:/root/.m2/

python rsfb/benchmark.py evaluate experiments/bsbm/config_small.yaml
# Results are in /tmp/experiments/bsbm/benchmark/evaluation
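
Once the run finishes, you can browse the collected results. This is an optional check; the exact layout under the evaluation directory may differ from what this listing assumes:

# Optional, from the host: list the evaluation outputs.
ls -R /tmp/experiments/bsbm/benchmark/evaluation | head -n 20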

More options for calling the FedShop runner: `python rsfb/benchmark.py evaluate --help`

1.3 Interactive Visualization of Miniature FedShop Results with Jupyter Notebook

  1. Save the models:

python rsfb/benchmark.py save-model experiments/bsbm/
# From the host machine, the models are located at:
# /tmp/experiments/bsbm/eval-model.zip
# /tmp/experiments/bsbm/gen-model.zip

  2. Upload the models to Google Colab, inside the /content folder.
  3. In Google Colab, select Runtime -> Run all.
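
Before uploading, you can confirm that both archives were written. This is an optional check with standard tools, not part of the original steps:

# Optional, from the host: both model archives should exist and be non-empty.
ls -lh /tmp/experiments/bsbm/eval-model.zip /tmp/experiments/bsbm/gen-model.zip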