
Code associated with the article "Who knows best? Intelligent Crowdworker Selection via Deep Learning"


ies-research/intelligent-crowdworker-selection


Who knows best? Intelligent Crowdworker Selection via Deep Learning

Authors: Marek Herde, Denis Huseljic, Bernhard Sick, Ulrich Bretschneider, and Sarah Oeste-Reiß

Project Structure

How to execute experiments?

In the following, we describe step by step how to execute all experiments presented in the accompanying article. As prerequisites, we assume a Linux distribution as the operating system and conda installed on your machine.

  1. Set up the Python environment:
projectpath$ conda create --name crowd python=3.9
projectpath$ conda activate crowd

First, we need to install torch with build 1.13.1. For installation instructions, we refer to PyTorch. An example command for a Linux operating system is:

projectpath$ pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116

Subsequently, we install the remaining requirements:

projectpath$ pip install -r requirements.txt
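After installation, a quick sanity check can confirm the expected torch build is available. This is a minimal sketch; the helper name `torch_build_ok` is ours for illustration and not part of the repository:

```python
def torch_build_ok(required: str = "1.13.1") -> bool:
    """Return True if the installed torch version starts with the required build."""
    try:
        import torch
    except ImportError:  # torch not installed yet
        return False
    return torch.__version__.startswith(required)

print("torch 1.13.1 installed:", torch_build_ok())
```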
  2. Create and download data sets: Start jupyter-notebook and follow the instructions in the notebook notebooks/data_set_creation_download.ipynb.
projectpath$ conda activate crowd
projectpath$ jupyter-notebook
  3. Simulate annotators: Start jupyter-notebook and follow the instructions in the notebook notebooks/annotator_simulation.ipynb.
projectpath$ conda activate crowd
projectpath$ jupyter-notebook
  4. Execute experiment scripts: The files evaluation/letter.sh and evaluation/cifar10.sh correspond to evaluating MaDL on LETTER and CIFAR10, respectively. Each file consists of multiple commands executing evaluation/run_experiment.py with different configurations. For an overview of the possible configurations, we refer to the explanations in evaluation/run_experiment.py. Furthermore, you need to specify certain paths, e.g., for logging, before execution. You can then execute such a bash script via:
projectpath$ conda activate crowd
projectpath$ ./evaluation/crowd_letter.sh
projectpath$ ./evaluation/crowd_cifar10.sh

Alternatively, you can use the sbatch command:

projectpath$ conda activate crowd
projectpath$ sbatch ./evaluation/letter.sh
projectpath$ sbatch ./evaluation/crowd_cifar10.sh

How to investigate the experimental results?

Once an experiment is completed, its associated results are saved as a .csv file in the directory specified by evaluation.run_experiment.RESULT_PATH. For a summarized presentation of these results, start jupyter-notebook and follow the instructions in the notebook notebooks/evaluation.ipynb.

projectpath$ conda activate crowd
projectpath$ jupyter-notebook
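For a quick look outside the notebook, such per-experiment .csv files can also be summarized with the standard library. The sketch below operates on a made-up results file; its column names (model, seed, accuracy) are illustrative assumptions, not the repository's actual schema, which is defined by evaluation/run_experiment.py:

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Illustrative results in the shape of a hypothetical .csv file; the actual
# columns are written by evaluation/run_experiment.py.
fake_csv = io.StringIO(
    "model,seed,accuracy\n"
    "madl,0,0.91\n"
    "madl,1,0.89\n"
    "baseline,0,0.84\n"
    "baseline,1,0.86\n"
)

# Collect accuracies per model and report the mean across seeds.
scores = defaultdict(list)
for row in csv.DictReader(fake_csv):
    scores[row["model"]].append(float(row["accuracy"]))

summary = {model: round(mean(vals), 3) for model, vals in scores.items()}
print(summary)  # {'madl': 0.9, 'baseline': 0.85}
```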

References

The code is largely based on and adapted from Multi-annotator Deep Learning (MaDL).

Citing

If you use this software in one of your research projects or would like to reference the accompanying article, please use the following:

@inproceedings{herde2023who,
    title={Who knows best? Intelligent Crowdworker Selection via Deep Learning},
    author={Marek Herde and Denis Huseljic and Bernhard Sick and Ulrich Bretschneider and Sarah Oeste-Rei{\ss}},
    booktitle={Interactive Adaptive Learning Workshop @ ECML/PKDD},
    pages={14--18},
    year={2023},
    url={https://ceur-ws.org/Vol-3470/paper3.pdf},
}
