
BUG: GLM NegativeBinomial: llf ignores offset and exposure #1684

Closed
josef-pkt opened this issue May 22, 2014 · 2 comments

@josef-pkt (Member) commented May 22, 2014

I assume this is a bug; see #1486 for general problems with offset and exposure.

```python
def llf(self):
    _modelfamily = self.family
    if isinstance(_modelfamily, families.NegativeBinomial):
        XB = np.dot(self.model.exog, self.params)
        val = _modelfamily.loglike(self.model.endog, fittedvalues=XB)
```

This should use model.predict, so that we don't have to care about how the linear prediction is calculated.
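The effect can be sketched with plain NumPy (a minimal illustration, not the statsmodels code path; the NB2 log-likelihood form and the offset/exposure handling here are the usual GLM conventions, assumed for the sake of the example):

```python
import numpy as np
from scipy.special import gammaln

def nb_loglike(y, mu, alpha=1.0):
    # Negative binomial (NB2) log-likelihood in the standard parameterization.
    size = 1.0 / alpha
    prob = size / (size + mu)
    return np.sum(gammaln(size + y) - gammaln(y + 1) - gammaln(size)
                  + size * np.log(prob) + y * np.log(1 - prob))

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([0.5, 0.2])
offset = rng.normal(scale=0.1, size=n)
exposure = rng.uniform(1.0, 3.0, size=n)
y = rng.poisson(np.exp(X @ beta + offset + np.log(exposure)))

# Buggy path: linear prediction computed as X dot params only,
# ignoring offset and exposure.
mu_wrong = np.exp(X @ beta)
# Correct path: eta = X.beta + offset + log(exposure), which is what
# delegating to model.predict would take care of automatically.
mu_right = np.exp(X @ beta + offset + np.log(exposure))

llf_wrong = nb_loglike(y, mu_wrong)
llf_right = nb_loglike(y, mu_right)
# With nonzero offset or non-unit exposure, the two log-likelihoods differ.
```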

@josef-pkt (Member Author) commented May 22, 2014

Terminology, naming: in GLM, fittedvalues = mu, which is predict(linear=False); but here the fittedvalues keyword is used for the linear prediction (linear=True), i.e. X dot params.
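A toy illustration of the naming clash (the keyword semantics follow the comment above; this is not the exact statsmodels API):

```python
import numpy as np

X = np.array([[1.0, 0.0], [1.0, 1.0]])
params = np.array([0.5, 0.2])

eta = X @ params   # linear prediction: what predict(linear=True) returns
mu = np.exp(eta)   # mean under a log link: what GLM elsewhere calls fittedvalues
# The buggy llf passed eta (X dot params) as the `fittedvalues` argument,
# even though "fittedvalues" elsewhere in GLM means mu.
```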

@josef-pkt (Member Author) commented Aug 24, 2014

Fixed by @kshedden as part of #1734; rebase merged in #1821.

@josef-pkt josef-pkt closed this Aug 24, 2014

@josef-pkt josef-pkt added this to the 0.6 milestone Aug 24, 2014
