A package for comparative judgement (CJ).
Dependencies
------------
comparative-judgement requires:
- Python (>= |PythonMinVersion|)
- NumPy (>= |NumPyMinVersion|)
- SciPy (>= |SciPyMinVersion|)
- Ray
User installation
-----------------
If you already have a working installation of NumPy and SciPy,
the easiest way to install ``comparative-judgement`` is using ``pip``::

    pip install comparative-judgement

or ``conda``::

    conda install -c conda-forge comparative-judgement
Importing the BCJ model and creating an instance of the model with 4 samples::

    from cj.models import BayesianCJ

    BCJ = BayesianCJ(4)

Creating the data::

    import numpy as np
    data = np.asarray([
        [0, 1, 0],
        [0, 1, 0],
        [0, 3, 0],
        [1, 0, 1],
        [1, 0, 1],
        [1, 0, 1],
        [1, 2, 1],
        [1, 2, 1],
        [1, 2, 1],
        [1, 2, 1],
        [1, 2, 1],
        [2, 1, 2],
        [2, 1, 2],
        [2, 1, 2],
        [2, 3, 2],
        [3, 0, 3],
        [3, 0, 3],
        [3, 0, 3],
        [3, 0, 3],
        [3, 2, 3],
        [3, 2, 3],
        [3, 2, 3],
    ])
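Reading the values above, each row appears to encode one pairwise comparison as ``[item A, item B, winner]`` — an interpretation of the data, not something the library documents here. A quick standalone check of that structure:

```python
import numpy as np

# The comparison data from above: each row is assumed to be
# [item_a, item_b, winner], with the winner one of the two items shown.
data = np.asarray([
    [0, 1, 0], [0, 1, 0], [0, 3, 0],
    [1, 0, 1], [1, 0, 1], [1, 0, 1],
    [1, 2, 1], [1, 2, 1], [1, 2, 1], [1, 2, 1], [1, 2, 1],
    [2, 1, 2], [2, 1, 2], [2, 1, 2], [2, 3, 2],
    [3, 0, 3], [3, 0, 3], [3, 0, 3], [3, 0, 3],
    [3, 2, 3], [3, 2, 3], [3, 2, 3],
])

# Every winner is one of the two items in its row.
assert all(row[2] in (row[0], row[1]) for row in data)

# Wins per item 0..3: items 1 and 3 win most often.
wins = np.bincount(data[:, 2], minlength=4)
print(wins)  # [3 8 4 7]
```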
Running the model::

    BCJ.run(data)

Finding the expected rank (``Er``) scores::

    BCJ.Er_scores
    >>> [3.046875, 2.09765625, 3.05859375, 1.796875]
Finding the BCJ rank::

    BCJ.rank
    >>> array([3, 1, 0, 2])
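The rank above is consistent with sorting the items by ascending expected rank score — a plausible reading of how ``BCJ.rank`` relates to ``BCJ.Er_scores``, sketched here with NumPy alone rather than the library itself:

```python
import numpy as np

# Expected rank (Er) scores reported above for the four items.
er_scores = np.array([3.046875, 2.09765625, 3.05859375, 1.796875])

# Sorting items by ascending Er score reproduces the reported rank:
# item 3 (lowest Er) first, then items 1, 0, 2.
rank = np.argsort(er_scores)
print(rank)  # [3 1 0 2]
```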
Importing the multi-criteria MBayesianCJ model and creating an instance with 3 samples and per-criterion weights::

    from cj.models import MBayesianCJ

    criteria_weights = [0.2, 0.2, 0.6]
    MBCJ = MBayesianCJ(3, criteria_weights)

Creating the data::

    data = [
        # A, B, C1, C2, C3
        [0, 1, 1, 1, 1],
        [1, 2, 1, 1, 1],
        [0, 2, 0, 0, 2],
    ]

Running the model::
    MBCJ.run(data)

Finding the overall MBCJ rank::

    MBCJ.combined_rank
    >>> array([1, 2, 0])

Finding the individual criteria BCJ ranks::
    MBCJ.lo_rank_scores
    >>> {0: [np.float64(2.0), np.float64(1.5), np.float64(2.5)],
         1: [np.float64(2.0), np.float64(1.5), np.float64(2.5)],
         2: [np.float64(2.5), np.float64(1.5), np.float64(2.0)]}
Importing the BTM model and creating an instance of the model with 4 samples::

    from cj.models import BTMCJ

    BTM = BTMCJ(4)

Running the model::

    BTM.run(data)

Finding the optimised p scores::
    BTM.optimal_params
    >>> array([-0.44654627,  0.04240265, -0.41580243,  0.81994508])
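The optimised parameters estimate each item's strength, and the BTM rank reported below matches a descending argsort of ``optimal_params`` — an observed relationship, sketched here with NumPy only:

```python
import numpy as np

# Optimised BTM parameters reported above (one strength per item).
optimal_params = np.array([-0.44654627, 0.04240265, -0.41580243, 0.81994508])

# Sorting items from strongest to weakest reproduces the reported rank.
rank = np.argsort(-optimal_params)
print(rank)  # [3 1 2 0]
```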
Finding the BTM rank::

    BTM.rank
    >>> array([3, 1, 2, 0])

Importing the entropy-based pair selector, creating an instance with 5 samples, and defining the true scores for a simulation::

    from cj.pair_selector import EntropyPairSelector

    entropy_pairs = EntropyPairSelector(5)

    scores = [55, 65, 72, 45, 80]
    standard_dev = 5

Running the simulation and inspecting the selected pairs::

    entropy_pairs.run_entropy_pairs_simulation(scores, standard_dev)

    entropy_pairs.results
    >>> [[1, 4, 4],
         [1, 3, 1],
         [2, 3, 2],
         [1, 2, 2],
         [0, 2, 2],
         [0, 4, 4],
         [0, 1, 1],
         [3, 4, 4],
         [0, 3, 0],
         [2, 4, 4],
         [1, 2, 2],
         [2, 3, 2],
         [1, 4, 4],
         [3, 4, 4],
         [2, 4, 4],
         [0, 4, 4],
         [0, 2, 2],
         [0, 1, 1],
         [0, 3, 0],
         [1, 3, 1],
         [1, 4, 4],
         [0, 1, 1],
         [0, 4, 4],
         [3, 4, 4],
         [2, 3, 2],
         ...
         [2, 4, 4],
         [0, 1, 1],
         [0, 4, 4],
         [2, 3, 2],
         [1, 2, 2]]
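Each result row appears to follow the same ``[item A, item B, winner]`` convention as the comparison data earlier — an interpretation, not documented behaviour. As a standalone illustration using only the first ten rows shown above (hardcoded here), the win counts already favour item 4, which has the highest true score (80):

```python
from collections import Counter

# First ten result rows shown above, read as [item_a, item_b, winner]
# (a hypothetical reading of the results format).
results = [
    [1, 4, 4], [1, 3, 1], [2, 3, 2], [1, 2, 2], [0, 2, 2],
    [0, 4, 4], [0, 1, 1], [3, 4, 4], [0, 3, 0], [2, 4, 4],
]

# Tally wins per item: item 4 wins most often in this sample.
wins = Counter(winner for _, _, winner in results)
print(wins.most_common())  # [(4, 4), (2, 3), (1, 2), (0, 1)]
```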
Citing this Library
-------------------

::

    @misc{comparative_judgement,
      author       = {Andy Gray},
      title        = {Comparative Judgement},
      year         = {2024},
      publisher    = {Python Package Index (PyPI)},
      howpublished = {\url{https://pypi.org/project/comparative-judgement/}}
    }