
Add some default implementations of AbstractModel #81

Closed
torfjelde opened this issue Jul 15, 2021 · 33 comments

@torfjelde
Member

torfjelde commented Jul 15, 2021

In AdvancedHMC.jl we now have a DifferentiableDensityModel, which we needed in order to implement the AbstractMCMC.jl interface.

IMO it would be super-nice if AbstractMCMC.jl provided a simple barebones implementation of such models, e.g. LogDensityModel, DifferentiableLogDensityModel, maybe even a TransformedModel which has a field corresponding to the transformation used. Transformation packages such as Bijectors.jl and TransformVariables.jl could then hook into it by overloading an AbstractMCMC.forward(model, transformation) (probably find a better name) which returns the constrained input and the logabsdetjac factor, i.e. something similar to LogDensityProblems.jl but more lightweight (no direct dependency on TransformVariables.jl, etc.).
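
Concretely, something like the following (purely a hypothetical sketch; none of these names exist in AbstractMCMC.jl, and the signature of forward is up for debate):

struct LogDensityModel{F} <: AbstractMCMC.AbstractModel
    logdensity::F
end

logdensity(model::LogDensityModel, x) = model.logdensity(x)

struct TransformedModel{M,T} <: AbstractMCMC.AbstractModel
    model::M
    transformation::T
end

# Hook for transformation packages (Bijectors.jl, TransformVariables.jl, ...):
# overload to return the constrained input and the logabsdetjac factor.
function forward end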

Alternatively we could make an AbstractMCMCModels.jl package which contains implementations of AbstractModel, but IMO the above should be so lightweight that it shouldn't be much of a problem to include.

EDIT: The main motivation is to avoid a large number of re-implementations of at least LogDensityModel and DifferentiableLogDensityModel spread around in different packages.

@cpfiffer
Member

This kind of thing has come up a lot I think. My thought was that this stuff should end up in AbstractPPL.jl, but I guess not? I'm not sure what APPL is up to now.

In general the idea is that the inference code should handle both the sampler and the model, which means that it's not really valuable to include here.

I am sympathetic though to the litany of reproductions of super basic model code that shows up everywhere -- each inference package that uses AbstractMCMC generally builds its own model, which feels wrong to me. I think the breakdown you had (LogDensityModel, DifferentiableLogDensityModel, and TransformedModel) is pretty good, and I wouldn't mind including abstract types for this kind of thing in AbstractMCMC just to clarify the interface a little bit.

@cpfiffer
Member

So I'm about 55% leaning towards including some supertypes for models, but IMO defaults (like a struct with a function field or something) are a little sketchier.

@yebai
Member

yebai commented Jul 15, 2021

I support keeping these types in AbstractMCMC - adding them to AbstractPPL might create the awkward situation that MCMC implementations have to depend on AbstractPPL, which we want to avoid.

@torfjelde
Member Author

My thought was that this stuff should end up in AbstractPPL.jl, but I guess not? I'm not sure what APPL is up to now.

I agree with @yebai here; this isn't anything PPL-specific and thus shouldn't go into AbstractPPL.jl.

So I'm about 55% leaning towards including some supertypes for models, but IMO defaults (like a struct with a function field or something) are a little sketchier.

Even such simple defaults? How come?

@cpfiffer
Member

Mostly to keep ideological purity for the fact that step doesn't have any default -- it would be nonsensical to do so in that particular case. Though honestly defaults for model types sound really nice to me, and I'm having some difficulty keeping up my curmudgeonly attitude.

@devmotion
Member

I just skimmed through the discussion (I'm on vacation until beginning/mid of August, so I'm not keeping up with everything until then), but I guess for interoperability it would be (more?) helpful to add traits such as those discussed in #75 (comment). Similar to e.g. working with the Tables interface, implementations of algorithms could then just work with an arbitrary AbstractModel and call the relevant functions. Even with default model structs, multiple implementations would otherwise be needed if one wants to support different AbstractModel types. Additionally, it seems less likely to run into ambiguity errors if one does not specialize on the model type.

Another comment: LogDensityModel and DifferentiableLogDensityModel seem a bit redundant - one could instead just dispatch on the type of the log density function by eg querying the differentiation capabilities of the function in a similar way as done in LogDensityProblems.
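
For reference, here is roughly what that trait-query pattern looks like in (current versions of) LogDensityProblems.jl; MyDensity and has_gradient are made-up names for illustration:

using LogDensityProblems

struct MyDensity end
LogDensityProblems.logdensity(::MyDensity, x) = -sum(abs2, x) / 2
LogDensityProblems.dimension(::MyDensity) = 2
# Declaring order 1 promises that `logdensity_and_gradient(::MyDensity, x)` is also available:
LogDensityProblems.capabilities(::Type{MyDensity}) = LogDensityProblems.LogDensityOrder{1}()

# A sampler can then branch on the declared order instead of on a wrapper type:
order(::LogDensityProblems.LogDensityOrder{K}) where {K} = K
has_gradient(p) = order(LogDensityProblems.capabilities(typeof(p))) >= 1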

@torfjelde
Member Author

torfjelde commented Jul 15, 2021

Mostly to keep ideological purity for the fact that step doesn't have any default -- it would be nonsensical to do so in that particular case.

Agreed.

I'm having some difficulty keeping up my curmudgeonly attitude.

Please don't, I like curmudgeonly Cameron.

I'm on vacation until beginning/mid of August

Nice! Enjoy man:)

helpful to add traits

I'm down with this 👍

Another comment: LogDensityModel and DifferentiableLogDensityModel seem a bit redundant - one could instead just dispatch on the type of the log density function by eg querying the differentiation capabilities of the function in a similar way as done in LogDensityProblems.

Ah yes, 100% agree. I was trying to remember our discussion of this before but couldn't, sooo I'll admit I partially made this issue hoping that you'd chime in and suggest the superior approach 👼 Hey, it worked!

@phipsgabler
Member

Now that DensityInterface will eventually find its way into Turing, I'm throwing in @oschulz's suggestions from here:

I guess AdvancedMH could easily be extended to support anything that implements DensityInterface, in addition to AdvancedMH.DensityModel?

AdvancedMH.DensityModel is actually pretty much exactly what DensityInterface.LogFuncDensity now provides, so AdvancedMH.DensityModel could just be deprecated/removed in future versions of AdvancedMH.

Maybe AbstractMCMC.AbstractModel could even go?

Not that I endorse all of this, but DensityInterface.LogFuncDensity would be a good candidate for the default implementation you guys were talking about, wouldn't it? Transformation capabilities would then come in from the interfaces in Bijectors/ChangesOfVariables.
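
For concreteness, this is all LogFuncDensity is (a minimal example using only the DensityInterface API):

using DensityInterface

# `logfuncdensity` wraps a log-density function into an `IsDensity` object,
# which is essentially what AdvancedMH.DensityModel does by hand:
d = logfuncdensity(x -> -sum(abs2, x) / 2)

logdensityof(d, [1.0, 2.0])  # -2.5
DensityKind(d)               # IsDensity()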

@devmotion
Member

LogFuncDensity eg doesn't provide a way to query if the function is differentiable or not and it doesn't bundle derivatives such as eg LogDensityProblems, so it seems a bit too barebone? I imagine it would be useful to support and integrate it in some way though.

@phipsgabler
Member

Yeah, that's missing compared to LogDensityProblems. The ideal solution IMHO would be for LogDensityProblems (if that fits our purposes here) to depend on DensityInterface, which the maintainer is only ready to do when the latter becomes mature enough (tpapp/LogDensityProblems.jl#78 (comment)).

I can't judge whether LogDensityProblems is exactly what is needed here, though (and it uses TransformVariables instead of Bijectors, right?). The alternative would kind of be to replicate the same kind of interface package: DensityInterface + ChangesOfVariables + some differentiation interface. Is AbstractDifferentiation ready and fitting?

@devmotion
Member

I think the traits for models would be implemented in AbstractMCMC anyways and would probably not be limited to the density function. So I assume the easiest would be to just add the required functionality in AbstractMCMC initially.

Is AbstractDifferentiation ready and fitting?

It exists but it only supports ForwardDiff (via Requires 😢). Not a single package depends on it and uses it currently: https://juliahub.com/ui/Packages/AbstractDifferentiation/Y4WMq/0.2.1

@torfjelde
Member Author

Btw, this comment is relevant: #85 (comment)

I think the traits for models would be implemented in AbstractMCMC anyways and would probably not be limited to the density function. So I assume the easiest would be to just add the required functionality in AbstractMCMC initially.

Am of the same opinion 👍

It exists but it only supports ForwardDiff (via Requires 😢). Not a single package depends on it and uses it currently:

I was also looking at AbstractDifferentiation.jl because of this, and was somewhat disappointed to see that a) it uses Requires.jl, and b) it only supports ForwardDiff.jl atm.

I originally thought it was going to be more like ChainRulesCore.jl where each respective package would implement the interface, and then we'd be golden. Seems like this isn't the case though?

@devmotion
Member

devmotion commented Dec 7, 2021

Seems like this isn't the case though?

AFAIK it's still the plan in the long term but it was decided to add glue code until it is more stable: JuliaDiff/ForwardDiff.jl#550 (comment)

@oschulz

oschulz commented Dec 7, 2021

LogFuncDensity eg doesn't provide a way to query if the function is differentiable or not

How does AdvancedMH.DensityModel handle this at the moment?

@torfjelde
Member Author

How does AdvancedMH.DensityModel handle this at the moment?

It doesn't:) In that case it's "fine" because Metropolis-Hastings doesn't require differentiability.

I'm going to cross-post #85 (comment) because I feel like that comment should be here rather than in the other thread:

Awesome!

Generally, I think it would be helpful to be more honest about the supported model types (possibly reusing model types such as the discussed DensityModel) in the implementations or, even better and more scalable if possible, only use functions of a generic yet to be added interface for models such as eg. loglikelihood etc.

I 100% agree with this. We had some issues reaching a consensus the last time we discussed this, but maybe we can now 👍

How about this direction (I'm trying to do something similar to what I think you proposed before @devmotion ):

struct DensityModel{F} <: AbstractModel
    logdensity::F
end

logdensity(model::DensityModel, args...) = model.logdensity(args...)

"""
    Differentiable{N}

Represents N-th order differentiability.
"""
struct Differentiable{N} end
const NonDifferentiable = Differentiable{0}
const FirstOrderDifferentiable = Differentiable{1}
const SecondOrderDifferentiable = Differentiable{2}

function Base.:+(::Differentiable{N1}, ::Differentiable{N2}) where {N1,N2}
    return Differentiable{min(N1,N2)}()
end


"""
    differentiable(model)

Return an instance of `Differentiable{N}`, where `N` represents the order.
"""
differentiable(model::DensityModel) = differentiable(model.logdensity)

"""
    PosteriorModel

Represents a model which can be decomposed into a prior and a likelihood.
"""
struct PosteriorModel{P1,P2} <: AbstractModel
    logprior::P1
    loglikelihood::P2
end

logprior(model::PosteriorModel, args...) = model.logprior(args...)
loglikelihood(model::PosteriorModel, args...) = model.loglikelihood(args...)
logdensity(model::PosteriorModel, args...) = logprior(model, args...) + loglikelihood(model, args...)

function differentiable(model::PosteriorModel)
    return differentiable(model.logprior) + differentiable(model.loglikelihood)
end

?

Then we can also add (but in a different package; maybe Bijectors.jl itself or Turing.jl):

struct TransformedModel{M,B} <: AbstractMCMC.AbstractModel
    model::M
    transform::B
end

function AbstractMCMC.logdensity(tmodel::TransformedModel, y)
    x, logjac = forward(tmodel.transform, y)
    return AbstractMCMC.logdensity(tmodel.model, x) + logjac
end

function AbstractMCMC.differentiable(tmodel::TransformedModel)
    return AbstractMCMC.differentiable(tmodel.model)
end

And then things would "just work".
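
For illustration, hypothetical usage of the sketches above (with a trivial stand-in transformation so the example is self-contained; in practice the transformation would come from e.g. Bijectors.jl):

# Hypothetical usage; every name here comes from the sketches above.
model = DensityModel(x -> -sum(abs2, x) / 2)
logdensity(model, [1.0, 2.0])  # -2.5

# Trivial stand-in transformation with zero logabsdetjac:
struct IdentityTransform end
forward(::IdentityTransform, y) = (y, zero(eltype(y)))

tmodel = TransformedModel(model, IdentityTransform())
AbstractMCMC.logdensity(tmodel, [1.0, 2.0])  # -2.5, since the logjac term is zero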

We might also want some of the following methods (though implementations should go somewhere else):

  • domain(model): return some notion of the domain on which the model expects inputs.

    • Only issue is that we can't really say much about what to expect for a return-value here.
    • Should also go together with a hasdomain to indicate whether it has this method implemented.
  • length/size/etc.: returns the properties of the variables used in the model (this kind of goes under domain if it could be handled nicely).

    • This doesn't make sense for every model, and so we might need something similar to the iterator traits, e.g. HasLength, HasSize, etc. But I also don't like this because we'll end up with a lot of "maybe"-existing methods 😕 (see the sketch below)
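
A minimal sketch of what such Base.IteratorSize-style opt-in traits could look like (all names hypothetical):

# Hypothetical opt-in trait, analogous to Base.IteratorSize / Base.HasLength.
abstract type LengthKind end
struct HasLength <: LengthKind end
struct NoLength <: LengthKind end

# Default: a model makes no promise about `length`; implementations opt in.
lengthkind(::AbstractMCMC.AbstractModel) = NoLength()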

@oschulz

oschulz commented Dec 7, 2021

it uses TransformVariables instead of Bijectors, right

TransformVariables supports InverseFunctions and ChangesOfVariables now, btw (tpapp/TransformVariables.jl#85). I promised @torfjelde that I'd do a PR to add initial support to Bijectors as well soon (TuringLang/Bijectors.jl#199; later on, Bijectors may then undergo some refactoring, as far as I understand). I hope this will enable us to not depend on specific transformation packages so much in the future. I hope to have VariateTransformations (exact transformation of variates between distributions, pulled out from BAT.jl) ready for registration soon as well; it will support InverseFunctions and ChangesOfVariables natively.
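
For what it's worth, the ChangesOfVariables.jl entry point is a single function, so a transformed-density evaluation can be written directly against it (a sketch, not a committed design; logdensity_transformed is a hypothetical helper):

using ChangesOfVariables: with_logabsdet_jacobian

# `with_logabsdet_jacobian(f, x)` returns the transformed value together with
# the log-abs-det-Jacobian of `f` at `x`; implementations ship for `exp` & friends:
y, ladj = with_logabsdet_jacobian(exp, 0.0)  # (1.0, 0.0)

# Evaluate a log density through a transform `f` mapping unconstrained -> constrained:
function logdensity_transformed(logd, f, y)
    x, ladj = with_logabsdet_jacobian(f, y)
    return logd(x) + ladj
end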

@oschulz

oschulz commented Dec 7, 2021

LogDensityModel and DifferentiableLogDensityModel seem a bit redundant - one could instead just dispatch on the type of the log density function by eg querying the differentiation capabilities of the function in a similar way as done in LogDensityProblems

I think a trait-based API to query a density or function about differentiation-related information would work well with DensityInterface and LogFuncDensity.

@oschulz

oschulz commented Dec 7, 2021

But wouldn't

struct DensityModel{F} <: AbstractModel
    logdensity::F
end

be exactly what we now have as DensityInterface.LogFuncDensity?

struct PosteriorModel{P1,P2} <: AbstractModel
    logprior::P1
    loglikelihood::P2
end

Shouldn't this rather be

struct Posterior{M,D}
    prior::M
    likelihood::D
end

with DensityKind(prior) === HasDensity() && DensityKind(likelihood) === IsDensity() && DensityKind(posterior) === HasDensity()

I have the same thing in BAT, only everything is IsDensity() there right now - a sin from the past that I plan to correct.

I would welcome a common lightweight package for Bayesian primitives like that, if you guys are interested. We recently decided to go for a breaking BAT 3.0 release next, so I can smash some porcelain and get rid of some past sins in the design.

It would be nice if we had things like a common Posterior defined in a central place. I would probably still convert it to an internal, more specific posterior type in BAT, for example, to be able to dispatch plotting recipes and so on, but that way users could construct a posterior without depending on specific frameworks. I do the same with LogFuncDensity right now: It's now the recommended way to give a likelihood to BAT (master branch), which will then convert it to a more specific internal type. Maybe that would be a good approach for AdvancedMH & friends as well?

@torfjelde
Member Author

torfjelde commented Dec 16, 2021

be exactly what we now have as DensityInterface.LogFuncDensity?

It indeed seems like that is the case, yeah:)

Shouldn't this rather be

I don't think we should assume the likelihood has density kind IsDensity, e.g. one could imagine providing a Distribution as the likelihood.

I think a better idea would be to provide some notion of promotion between the different types, e.g.:

using DensityInterface: DensityKind, IsDensity, HasDensity, NoDensity

# Could just dispatch on `D<:DensityKind`, but that could
# mess up if someone decides to implement a new one.
for D in (:NoDensity, :HasDensity, :IsDensity)
    @eval begin
        Base.:+(::$D, ::$D) = $D()
    end
end
Base.:+(::IsDensity, ::HasDensity) = HasDensity()
Base.:+(::HasDensity, ::IsDensity) = HasDensity()
Base.:+(::NoDensity, ::DensityKind) = NoDensity()
Base.:+(::DensityKind, ::NoDensity) = NoDensity()

though I'm happy to not overload + here.

Then we just do

struct JointDensity{P,L}
    prior::P
    likelihood::L
end

function DensityInterface.DensityKind(joint::JointDensity)
    return DensityInterface.DensityKind(joint.prior) + DensityInterface.DensityKind(joint.likelihood)
end
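
As a quick sanity check of the promotion (assuming Distributions.jl's DensityInterface support, under which a Distribution has kind HasDensity()):

using DensityInterface, Distributions

prior = Normal()                           # DensityKind(prior) === HasDensity()
lik   = logfuncdensity(x -> -abs2(x) / 2)  # DensityKind(lik)   === IsDensity()

joint = JointDensity(prior, lik)
DensityKind(joint)  # HasDensity(), via the `HasDensity() + IsDensity()` method above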

I would welcome a common lightweight package for Bayesian primitives like that, if you guys are interested.

Very interested!

@torfjelde
Member Author

And for conversion from package-specific implementations of models/densities/etc., do we just implement convert? E.g. in Turing/DynamicPPL we would just do

function Base.convert(::Type{Joint}, model::DynamicPPL.Model)
    return Joint(
        DensityInterface.logfuncdensity(Base.Fix1(DynamicPPL.logprior, model)),
        DensityInterface.logfuncdensity(Base.Fix1(DynamicPPL.loglikelihood, model))
    )
end

Or maybe it's better to just overload the constructor Joint since the implicit conversions that convert provides aren't really that useful here.

@oschulz

oschulz commented Dec 16, 2021

@torfjelde : one could imagine providing a Distribution as the likelihood.

A while ago I would have said it could be a useful convenience, but @cscherrer and @mschauer cured me of that notion. :-) If that's intended, e.g. for testing scenarios, one can do logfuncdensity(logdensityof(dist)) to get something that is a density.

@torfjelde
Member Author

@torfjelde : one could imagine providing a Distribution as the likelihood.

A while ago I would have said it could be a useful convenience, but @cscherrer and @mschauer cured me of that notion. :-) If that's intended, e.g. for testing scenarios, one can do logfuncdensity(logdensityof(dist)) to get something that is a density.

What cure did they give you? I'm asking for a friend 😳

But on a serious note, I agree that it's preferable to not use a Distribution, but why disallow it when it, IIUC, costs us nothing? It will def be super-confusing for a lot of users who aren't aware of this.

@oschulz

oschulz commented Dec 16, 2021

function priorof end

function likelihoodof end


struct Posterior{L,P}
    likelihood::L
    prior::P
end

function Posterior(likelihood::L, prior::P) where {L,P}
    DensityKind(likelihood) === IsDensity() || throw(ArgumentError("DensityKind(likelihood) must be IsDensity()"))
    DensityKind(prior) === HasDensity() || throw(ArgumentError("DensityKind(prior) must be HasDensity()"))

    return Posterior{L,P}(likelihood, prior)
end

likelihoodof(p::Posterior) = p.likelihood
priorof(p::Posterior) = p.prior

function DensityInterface.logdensityof(p::Posterior, x)
    logdensityof(likelihoodof(p), x) + logdensityof(priorof(p), x)
end

@inline DensityInterface.DensityKind(::Posterior) = HasDensity()

Question is, should it be type-based (there would be an AbstractPosterior) or trait-based (like above)? Trait-based would probably be easier to integrate with our existing packages.

@oschulz

oschulz commented Dec 16, 2021

What cure did they give you? I'm asking for a friend 😳

They explained very patiently that it's a bad idea to mix up measures and densities; they play very different roles in a Radon-Nikodym integral/derivative. While we often informally treat a distribution and its density as the same, they should not be treated as the same thing. DensityInterface builds a bridge here by allowing logdensityof for both but giving them different DensityKinds.

But on a serious note, I agree that it's preferable to not use a Distribution, but why disallow it when it, IIUC, costs us nothing?

It's a slippery slope, especially after we were very careful in defining the semantics of IsDensity and HasDensity; they are mutually exclusive. On the more practical side, only allowing IsDensity for the likelihood and HasDensity for the prior prevents users from doing Posterior(prior, likelihood) by mistake (swapping the order).

It will def be super-confusing for a lot of users who aren't aware of this.

Are there actually user use cases for using a distribution as a likelihood? I mainly use it to test samplers.

@torfjelde
Member Author

Question is, should it be type-based (there would be an AbstractPosterior) or trait-based (like above)? Trait-based would probably be easier to integrate with our existing packages.

I'm fine with either I think, but if we're doing trait-based we'll need a way to dispatch on having this trait.

@oschulz

oschulz commented Dec 16, 2021

but if we're doing trait-based we'll need a way to dispatch on having this trait.

You mean beyond HasDensity(). Maybe... well, you could add an is_posterior or so.

@torfjelde
Member Author

They explained very patiently that it's a bad idea to mix up measures and densities, since they play very different roles in a Radon-Nikodym integral/derivative. While we often treat a distribution and its density as the same

So I'm very familiar with the difference between measures and densities, but most people won't be 😕
People will expect to be able to specify the prior as a simple function (or at least logfuncdensity(f)) which actually represents the density wrt. the Lebesgue measure, rather than having to make it into something that HasDensity.

Whether or not this is inexact is not something a user should have to deal with IMO.

It's a slippery slope, especially after we were very careful in defining the semantics of IsDensity and HasDensity, they are mutually exclusive.

But shouldn't this just be handled downstream? AFAIK the general consensus in Julia is to not restrict arguments by types unless we're a) disambiguating, or b) to ensure that the arguments have certain functionalities. In this case we're doing the latter, but the methods we end up using, e.g. logdensityof, have the same behavior for both IsDensity and HasDensity, no?
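
Concretely, both kinds already behave identically under logdensityof (assuming Distributions.jl's DensityInterface support):

using DensityInterface, Distributions

d = Normal()                              # HasDensity(): a measure-like object
f = logfuncdensity(Base.Fix1(logpdf, d))  # IsDensity(): a plain density wrapper

DensityKind(d), DensityKind(f)                # (HasDensity(), IsDensity())
logdensityof(d, 0.5) == logdensityof(f, 0.5)  # true: same behavior, different kinds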

On the more practical side, only allowing IsDensity for the likelihood and HasDensity for the prior prevents users from doing Posterior(prior, likelihood) by mistake (swapping the order).

Sure, this is a fair point 👍 But I think the issue of people trying to pass in a logfuncdensity(f) as a prior is going to come up way more than using the wrong ordering 😕

@torfjelde
Member Author

You mean beyond HasDensity(). Maybe... well, you could add an is_posterior or so.

Yeah. My personal motivation behind a Joint isn't being able to construct a Joint from prior and likelihood, but rather providing a unified interface to the decomposition of a Joint into a prior and likelihood. This can be exploited nicely by some samplers, e.g. EllipticalSliceSampling.jl and NestedSamplers.jl. We also often don't want to do prior + likelihood in logdensityof since it might be repeating a bunch of computation, i.e. I want the capability of overriding logdensityof(::Joint, ...) in the case where the Joint represents something like a Turing.Model.

@oschulz

oschulz commented Dec 16, 2021

People will expect to be able to specify the prior as a simple function

The likelihood, sure, but the prior? If the prior doesn't have at least a few distribution-like features (the ability to draw random numbers as start values, to estimate (co)variance, to find a transformation to normal/unconstrained space, and so on), it's kinda hard to run inference algorithms with it.

to ensure that the arguments have certain functionalities. In this case we're doing the latter, but the methods we end up using, e.g. logdensityof, have the same behavior for both IsDensity and HasDensity

At least in BAT.jl, I need more from the prior than just its density function (see above).

We also often don't want to do prior + likelihood in logdensityof since it might be repeating a bunch of computation

Indeed - BAT tries to transform the prior away, by default. And if it's not transformed away, it'll evaluate the prior first and then evaluate the likelihood only if logdensityof(prior, x) > -Inf.
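
i.e. roughly (a sketch of the short-circuit, not BAT's actual code; priorof/likelihoodof as in the draft above):

using DensityInterface: logdensityof

# Evaluate the prior first; skip the (typically expensive) likelihood
# entirely for points outside the prior support.
function eval_posterior_logdensity(posterior, x)
    lp = logdensityof(priorof(posterior), x)
    lp == -Inf && return -Inf
    return lp + logdensityof(likelihoodof(posterior), x)
end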

To me, prior and likelihood are very much not interchangeable. That's why I wouldn't call it Joint - a joint distribution doesn't care so much about what's left and right. But Posterior is a semantic concept, and it matters very much which of its components is the likelihood and which is the prior.

@torfjelde
Member Author

The likelihood, sure, but the prior? If the prior doesn't have at least a few distribution-like features (the ability to draw random numbers as start values, to estimate (co)variance, to find a transformation to normal/unconstrained space, and so on), it's kinda hard to run inference algorithms with it.

But this is what I mean: it should be handled downstream, i.e. if an inference algorithm requires more than just evaluation then it should complain if it's not given such:) Isn't this also in line with what you said earlier about having a very general common construction, and then the respective packages can convert it into whatever they want?

To give a concrete example: given an initial point (which could be any point on the real line), MH and HMC do not need anything but evaluation of the joint to be able to sample from it.

To me, prior and likelihood are very much not interchangeable. That's why I wouldn't call it Joint - a joint distribution doesn't care so much about what's left and right. But Posterior is a semantic concept, and it matters very much which of its components is the likelihood and which is the prior.

But for many natural use-cases, e.g. many inference algorithms, they are indeed interchangeable. We could of course just not use the Joint for these, but that puts more burden on the user, since they would have to switch between Joint and other types depending on what they're doing. I'd like the user to be able to just give us Joint(prior, likelihood) (or alternatively Posterior(prior, likelihood)) and we would deal with the rest, i.e. Joint or Posterior should provide additional information to downstream packages without being more restrictive by default, to avoid increasing the burden on the user.

@oschulz

oschulz commented Dec 16, 2021

But this is what I mean: it should be handled downstream, i.e. if an inference algorithm requires more than just evaluation then it should complain if it's not given such

Oh yes, definitely! Sorry, I didn't mean to imply that such an abstract package should be opinionated about what kind of operations (besides DensityKind and logdensityof) a prior must support.

But I still think that we should require DensityKind(likelihood) === IsDensity() and DensityKind(prior) === HasDensity(), also to force users to pass them to Posterior in the right order. As you say, some algorithms might not "notice" if they have been swapped. But then some other code downstream may ask for the prior and get the likelihood instead.

Edit: see different approach below.

I don't think that would place an undue burden on the user, and while I agree that we shouldn't be too formal, I don't think we should allow two IsDensity() or two HasDensity() objects to be combined into a posterior measure. If users want to pass the prior as a simple function, there could be a wrapper for that. Some additional minimal information (like dimensionality and a start value or so) will have to be "attached" to the prior density function anyway.

Joint or Posterior should provide additional information to downstream without being more restrictive by default to avoid increasing burden of user.

Yes, I fully agree. I really think it should be Posterior (or similar), not Joint though, since Joint doesn't have Bayesian semantics. Also, it would be weird to have a joint of a density and a measure.

@oschulz

oschulz commented Dec 17, 2021

@torfjelde : My personal motivation behind a Joint isn't being able to construct a Joint from prior and likelihood, but rather providing a unified interface to the decomposition of a Joint into a prior and likelihood

I've been thinking about this a bit more - you're right, it's probably best to focus on decomposing posterior-like objects into likelihood and prior in this kind of interface package, instead of constructing posteriors.

How about this approach - we define (just a draft) something like

abstract type PosteriorKind end
struct IsPosterior <: PosteriorKind end
struct NoPosterior <: PosteriorKind end

function getprior end
function getlikelihood end

function test_posterior(posterior, x = rand(getprior(posterior)))
    @test PosteriorKind(posterior) === IsPosterior()
    likelihood = getlikelihood(posterior)
    prior = getprior(posterior)
    @test DensityKind(likelihood) === IsDensity()
    @test DensityKind(prior) === HasDensity()
    test_density_interface(likelihood, x)
    test_density_interface(prior, x)
    test_density_interface(posterior, x)
    @test logdensityof(posterior, x) ≈ logdensityof(likelihood, x) + logdensityof(prior, x)
end

So no default posterior implementation in BayesianConcepts.jl.

This should be enough for inference packages to use a posterior - users will use more specific packages to define the posterior. These can, if they want to, support accepting anything as a likelihood (even a distribution) or prior (e.g. just some density function). That takes care of the user convenience you mentioned. It'll be the job of the posterior constructors (in Turing packages, in BATBase, etc.) to convert/wrap these so that DensityKind(getlikelihood(posterior)) === IsDensity() and DensityKind(getprior(posterior)) === HasDensity(). This way, the necessary wrapping/conversion happens in one place and inference codes can rely on clean semantics.

While we'd require logdensityof(posterior, x) == logdensityof(likelihood, x) + logdensityof(prior, x), inference engines would of course be free to evaluate the components of the posterior separately instead, apply transformations internally, etc.

Would that work for you, @torfjelde ?

@yebai
Member

yebai commented Dec 29, 2022

Probably fixed by #110

@yebai yebai closed this as completed Dec 29, 2022