
[query] lower poisson regression #12793

Merged: 8 commits into hail-is:main, Mar 21, 2023
Conversation

danking (Contributor) commented Mar 16, 2023

cc @tpoterba


My apologies. I made several changes to lowered logistic regression as well.

All the generalized linear model methods share the same fit result. I abstracted this into one
datatype at the top of `statgen.py`: `numerical_regression_fit_dtype`.

---

You'll notice I moved the cases such that we check for convergence *before* checking if we are at
the maximum iteration. It seemed to me that:
- `max_iter == 0` means do not even attempt to fit.
- `max_iter == 1` means take one gradient step; if you've converged, return successfully,
  otherwise fail.
- etc.

The `main` branch currently always fails if you set `max_iter == 1`, even if the first step lands on
the true maximum likelihood fit.
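
Here's a minimal, self-contained numpy sketch of that iteration order (not Hail's actual implementation, just an illustration assuming a plain Newton fit for Poisson regression):

```python3
import numpy as np

# Convergence is checked *before* the max-iteration cutoff, so
# max_iter == 1 can still succeed when the first Newton step lands on
# the maximum likelihood fit; max_iter == 0 never steps at all.
def fit_poisson(X, y, max_iter, tol=1e-6):
    beta = np.zeros(X.shape[1])
    for n_iter in range(1, max_iter + 1):
        mu = np.exp(X @ beta)
        score = X.T @ (y - mu)                 # gradient of the log-likelihood
        fisher = X.T @ (mu[:, None] * X)       # information matrix
        delta = np.linalg.solve(fisher, score)
        beta = beta + delta                    # one Newton step
        if np.linalg.norm(delta) < tol:        # converged? return success first...
            return beta, n_iter, True
    return beta, max_iter, False               # ...check the iteration cutoff only after
```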

I substantially refactored logistic regression. There were dead code paths (e.g. the covariates
array is known to be non-empty). I also found all the function currying and commingling of fitting
and testing really confusing. To be fair, the Scala code does this (and it's really confusing). I
think the current structure is easier to follow (step 2 is sketched in code after the list):

1. Fit the null model.
2. If score, assume the beta for the genotypes is zero and use the rest of the parameters from the
   null model fit to compute the score (i.e. the gradient of the likelihood). Recall calculus:
   gradient near zero => value near the maximum. Return: this is the test.
3. Otherwise, fit the full model starting at the null fit parameters.
4. Test the "goodness" of this new & full fit.

---

Poisson regression is similar but with a different likelihood function and gradient thereof. Notice
that I `key_cols_by()` to indicate to Hail that the order of the cols is irrelevant (the result is a
locus-keyed table, after all). This is necessary at least until hail-is#12753 merges, but I think
it's generally a good idea: it's potentially useful information for the optimizer!
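
For reference, dropping the column key takes one call (this is the real `MatrixTable` API):

```python3
# key_cols_by() with no arguments removes the column key, telling Hail
# that column ordering carries no meaning downstream.
mt = mt.key_cols_by()
```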

---

Both logistic and Poisson regression can benefit from BLAS3 by running at least the score
test for multiple variants at once.
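
An illustrative numpy sketch (not Hail code) of why batching helps: testing a block of variants turns many matrix-vector products (BLAS2) into one matrix-matrix product (BLAS3).

```python3
import numpy as np

n, b, k = 10_000, 64, 4         # samples, variants per block, covariates
G = np.random.rand(n, b)        # genotype block: one column per variant
C = np.random.rand(n, k)        # covariate matrix from the null fit

# One variant at a time: b separate matrix-vector products (BLAS2).
per_variant = [C.T @ G[:, j] for j in range(b)]

# The whole block at once: a single matrix-matrix product (BLAS3)
# computing the same (k, b) cross-products with much better throughput.
batched = C.T @ G
assert np.allclose(np.stack(per_variant, axis=1), batched)
```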

---

I'll attach an image in the comments, but I spend ~6 seconds compiling this trivial model and ~140ms
testing it.

```python3
import hail as hl
mt = hl.utils.range_matrix_table(1, 3)
mt = mt.annotate_entries(x=hl.literal([1, 3, 10, 5]))
ht = hl.poisson_regression_rows(
    'wald', y=hl.literal([0, 1, 1, 0])[mt.col_idx], x=mt.x[mt.col_idx], covariates=[1], max_iterations=2)
ht.collect()
```

I grabbed some [sample code from
scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.PoissonRegressor.html)
for Poisson regression (doing a score test rather than a wald test) and timed it. It takes
~8ms. So we're ~3 orders of magnitude off including the compiler, and ~1.2 orders of magnitude off
without the compiler. Digging in a bit:

- ~65ms for class loading.
- ~15ms for region allocation.
- ~20ms in various little spots.

That leaves about 40ms strictly executing generated code. That's about 5x, which is starting to
feel reasonable.
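
For reference, a minimal sketch of the scikit-learn side of the comparison (the exact benchmark isn't included in the PR, so this just fits a `PoissonRegressor` on the same toy data and times it):

```python3
import time
import numpy as np
from sklearn.linear_model import PoissonRegressor

X = np.array([[1.0], [3.0], [10.0], [5.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])

start = time.perf_counter()
PoissonRegressor().fit(X, y)
print(f"fit took {(time.perf_counter() - start) * 1e3:.1f} ms")
```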
danking (Contributor, author) commented Mar 16, 2023

[Screenshot attached: "Screen Shot 2023-03-16 at 13 23 30", the image referred to in the PR description.]

patrick-schultz (Collaborator) left a comment

Thank you for disentangling the regression code. It is so much clearer now. Just a couple of small things.

Comment on lines 934 to 935:

```python3
def nd_exp(hl_nd):
    return hl_nd.map(lambda x: hl.exp(x))
```

patrick-schultz (Collaborator):

`hl.exp(hl_nd)` works.

danking (Contributor, author):

Ah nice. I also simplified sigmoid to `sigmoid = hl.expit`.

danking (Contributor, author):

Also, inspired by this, I fixed `hl.log` to do the right thing for ndarrays.
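
For example (a minimal usage sketch, assuming the elementwise ndarray behavior this comment describes):

```python3
import hail as hl

# After the fix, hl.log should map elementwise over an ndarray.
hl.eval(hl.log(hl.nd.array([1.0, 2.0, 3.0])))
```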


```python3
def test(yvec, null_fit):
    if test == 'score':
        return logistic_score_test(covs_and_x, y, null_fit)
```

patrick-schultz (Collaborator):

Should this be `yvec`?

danking (Contributor, author):

Eagle eyes; yes, it absolutely should be. Luckily this was caught by the tests as a mismatched source objects error.

danking (Contributor, author) commented Mar 20, 2023

With the binding changes I can't even find the regression code in the profile; we're totally swamped by the compiler. After the JIT is warm, the compiler takes ~3s. The regression code is probably as fast as scikit-learn on this tiny example.

danking merged commit 2eab792 into hail-is:main on Mar 21, 2023.