
can't reproduce DeepCT results with evaluate_deepct.py #176

@jeka-e

Description


Hi!

We are trying to reproduce the BEIR results for the DeepCT framework on NFCorpus for our university project, but we get very different scores: the paper reports an nDCG@10 of 0.283, while we get 0.2146. What we did was:

  • cloned this repository
  • cloned the DeepCT repo
  • pulled the Docker image beir/pyserini-fastapi
  • ran the container: docker run -p 8000:8000 -it --rm beir/pyserini-fastapi
  • downgraded TensorFlow to 2.15 (tf.estimator is no longer available in 2.16)
  • changed the dataset to nfcorpus
  • ran evaluate_deepct.py
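For reference, the dataset change in the steps above is a one-line edit; a hedged sketch, assuming evaluate_deepct.py follows the standard BEIR download-and-load pattern (the URL template and the commented-out helper names come from the BEIR library and are not verified against this exact script):

```python
# Hypothetical sketch of the dataset switch; variable names are illustrative.
dataset = "nfcorpus"  # the only value we changed
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{}.zip".format(dataset)

# The script then (we assume) downloads and loads the corpus, e.g.:
# data_path = util.download_and_unzip(url, "datasets")
# corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```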

We didn't change anything inside the scripts, so we assume the model is exactly the same.

Were there any major changes to the repository code after the paper was released that could cause such a big mismatch?
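For context on the metric being compared, here is a minimal pure-Python sketch of per-query nDCG@k. This is only an illustration of the formula: BEIR's actual evaluation (we believe via pytrec_eval) computes the ideal ranking from the full relevance judgments in qrels, not just from the retrieved list as done here.

```python
import math

def ndcg_at_k(ranked_rels, k=10):
    """nDCG@k for one query.

    ranked_rels: graded relevance judgments of the retrieved
    documents, in the order the system ranked them.
    """
    # DCG over the top-k results: rank i (0-based) is discounted by log2(i + 2)
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_rels[:k]))
    # Ideal DCG: the same judgments rearranged into the best possible order
    ideal = sorted(ranked_rels, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0
```

A gap like 0.283 vs. 0.2146 is far larger than the rounding or tie-breaking noise this metric allows, so the difference almost certainly comes from the model or pipeline, not the evaluation.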
