
Scoring and evaluation for continuous outcome #6

Open
shaddyab opened this issue Nov 15, 2019 · 0 comments


shaddyab (Contributor) commented Nov 15, 2019

Q1)
Given that, for a continuous outcome, the theoretical max (i.e., q1_) and practical max (i.e., q2_) curves are not well defined and will not be correct, only the following six metrics can be used to evaluate the model (see the sketch after the list). Is this correct?

  1. Q_cgains
  2. Q_aqini
  3. Q_qini
  4. max_cgains
  5. max_aqini
  6. max_qini
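
For instance, a minimal sketch of evaluating a model using only these six metrics. The call pattern follows the _score code quoted below; the import path and the synthetic arrays are assumptions on my part:

    # Sketch only: the import path is an assumption; get_scores is called
    # with the same positional arguments used in _score (quoted below).
    import numpy as np
    from pylift.eval import get_scores

    rng = np.random.default_rng(0)
    treatment = rng.integers(0, 2, 1000)      # binary treatment indicator
    outcome = rng.normal(size=1000)           # continuous outcome
    predictions = rng.normal(size=1000)       # model's predicted uplift
    p = np.full(1000, treatment.mean())       # treatment probability

    scores = get_scores(treatment, outcome, predictions, p)
    for name in ['Q_cgains', 'Q_aqini', 'Q_qini',
                 'max_cgains', 'max_aqini', 'max_qini']:
        print(name, scores[name])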

Q2)
Based on line 205,

    score_name = 'q1_'+method

and the _score function in base.py:

    def _score(self, y_true, y_pred, method, plot_type, score_name):
        """ scoring function to be passed to make_scorer.
        """
        treatment_true, outcome_true, p = self.untransform(y_true)
        scores = get_scores(treatment_true, outcome_true, y_pred, p,
                            scoring_range=(0, self.scoring_cutoff[method]),
                            plot_type=plot_type)
        return scores[score_name]

it appears that three of the scoring methods that can be used for grid search, 'q1_qini', 'q1_cgains', and 'q1_aqini', should not be used with continuous outcomes. If this is indeed the case, then I would suggest fixing the issue using the continuous_outcome argument that is already available, e.g., by substituting the corresponding 'Q_' scores when the outcome is continuous (see the sketch below).
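
A minimal sketch of what I have in mind around line 205; this is a hypothetical illustration of the suggested substitution, not the package's actual code:

    # Hypothetical fix: when the outcome is continuous, fall back to the
    # corresponding 'Q_' score, since the q1_/q2_ curves are not well
    # defined in that case.
    if self.continuous_outcome:
        score_name = 'Q_' + method
    else:
        score_name = 'q1_' + method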
