From 1e3f53ff16d19fbdea4a70da49e287a92c5cfb42 Mon Sep 17 00:00:00 2001 From: Penelope Yong Date: Tue, 28 Oct 2025 17:08:27 +0000 Subject: [PATCH 1/6] Add page on models and varinfos --- _quarto.yml | 6 + developers/models/varinfo-overview/index.qmd | 387 +++++++++++++++++++ 2 files changed, 393 insertions(+) create mode 100644 developers/models/varinfo-overview/index.qmd diff --git a/_quarto.yml b/_quarto.yml index b44b5f713..10d84b1dd 100644 --- a/_quarto.yml +++ b/_quarto.yml @@ -117,6 +117,11 @@ website: - developers/compiler/minituring-contexts/index.qmd - developers/compiler/design-overview/index.qmd + - section: "DynamicPPL Models" + collapse-level: 1 + contents: + - developers/models/varinfo-overview/index.qmd + - section: "DynamicPPL Contexts" collapse-level: 1 contents: @@ -267,3 +272,4 @@ dev-transforms-distributions: developers/transforms/distributions dev-transforms-bijectors: developers/transforms/bijectors dev-transforms-dynamicppl: developers/transforms/dynamicppl dev-contexts-submodel-condition: developers/contexts/submodel-condition +dev-models-varinfo-overview: developers/models/varinfo-overview diff --git a/developers/models/varinfo-overview/index.qmd b/developers/models/varinfo-overview/index.qmd new file mode 100644 index 000000000..b33210828 --- /dev/null +++ b/developers/models/varinfo-overview/index.qmd @@ -0,0 +1,387 @@ +--- +title: "Evaluation of DynamicPPL Models with VarInfo" +engine: julia +--- + +Once you have defined a model using the `@model` macro, Turing.jl provides high-level interfaces for applying MCMC sampling, variational inference, optimisation, and other inference algorithms. +Suppose, however, that you want to work more directly with the model. +A common use case for this is if you are developing your own inference algorithm. + +This page describes how you can evaluate DynamicPPL models and obtain information about variable values, log densities, and other quantities of interest. 
In particular, this provides a high-level overview of what we call `VarInfo`: this is a data structure that holds information about the execution state while traversing a model.

To begin, let's define a simple model.

```{julia}
using DynamicPPL, Distributions

@model function simple()
    @info " --- Executing model --- "
    x ~ Normal()               # Prior
    2.0 ~ Normal(x)            # Likelihood
    return (; xplus1 = x + 1)  # Return value
end

model = simple()
```

## The outputs of a model

A DynamicPPL model has similar characteristics to Julia functions (which should not come as a surprise, since the `@model` macro is applied to a Julia function).
However, an ordinary function only has a return value, whereas DynamicPPL models have both _return values_ and _latent variables_ (i.e., the random variables in the model).

In general, both of these are of interest.
We can obtain the return value by calling the model as if it were a function:

```{julia}
retval = model()
```

and the latent variables using `rand()`:

```{julia}
latents = rand(Dict, model)
```

::: {.callout-note}
## Why `Dict`?

Simply calling `rand(model)`, by default, returns a NamedTuple.
This is fine for simple models where all variables on the left-hand side of tilde statements are standalone variables like `x`.
However, if you have indices or fields such as `x[1]` or `x.a` on the left-hand side, then the NamedTuple will not be able to represent these variables properly.
Feeding such a NamedTuple back into the model, as shown in the next section, will lead to errors.

In general, `Dict{VarName}` will always avoid such correctness issues.
:::

Before proceeding, it is worth mentioning that both of these calls generate values for random variables by sampling from their prior distributions.
We will see how to use different sampling strategies later.
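To make the `Dict` point concrete, here is a small sketch; the `indexed` model and its variable names are our own illustration, not part of the running example above:

```julia
using DynamicPPL, Distributions

# A model whose tilde-statements have indexed left-hand sides.
@model function indexed()
    x = Vector{Float64}(undef, 2)
    x[1] ~ Normal()
    x[2] ~ Normal(x[1])
end

m = indexed()

# The Dict's keys are full VarNames such as x[1] and x[2], so these
# values can later be fed back into the model unambiguously.
latents = rand(Dict, m)

# By contrast, a NamedTuple cannot represent the individual
# components x[1] and x[2] as separate keys.
```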
+ +## Passing latent values into a model + +Having considered what one can obtain from a model, we now turn to how we can use it. + +Suppose you now want to obtain the log probability (prior, likelihood, or joint) of a model, *given* certain parameters. +For this purpose, DynamicPPL provides the `logprior`, `loglikelihood`, and `logjoint` functions: + +```{julia} +logprior(model, latents) +``` + +One can check this against the expected log prior: + +```{julia} +logpdf(Normal(), latents[@varname(x)]) +``` + +Likewise, you can evaluate the return value of the model given the latent variables: + +```{julia} +# returned(model, latents) +@warn "Doesn't actually work, https://github.com/TuringLang/DynamicPPL.jl/issues/1095" +``` + +## VarInfo + +The above functions are convenient, but for many 'serious' applications they might not be flexible enough. +For example, if you wanted to obtain the return value _and_ the log joint, you would have to execute the model twice: once with `returned` and once with `logjoint`. +If you want to avoid this duplicate work, you need to use a lower-level interface, which is `DynamicPPL.evaluate!!`. + +At its core, `evaluate!!` takes a model and a VarInfo object, and returns a tuple of the return value and the new VarInfo. +So before we get to `evaluate!!`, we need to understand what a VarInfo is. + +A VarInfo is a container that tracks the state of model execution, as well as any outputs related to its latent variables, such as log probabilities. +DynamicPPL's source code contains many different kinds of VarInfos, each with different trade-offs. +The details of these are somewhat arcane and unfortunately cannot be fully abstracted away, mainly due to performance considerations. + +For the vast majority of users, it suffices to know that you can generate one of them for a model with the constructor `VarInfo([rng, ]model)`. +Note that this construction executes the model once (in the process, sampling new parameter values from the prior). 
+ +```{julia} +v = VarInfo(model) +``` + +(Don't worry about the printout of the VarInfo object: we won't need to understand its internal structure.) +We can index into a VarInfo: + +```{julia} +v[@varname(x)] +``` + +To access the values of log-probabilities, DynamicPPL provides the `getlogprior`, `getloglikelihood`, and `getlogjoint` functions: + +```{julia} +DynamicPPL.getlogprior(v) +``` + +What about the return value? +Well, the VarInfo does not store this directly: recall that `evaluate!!` gives us back the return value separately from the updated VarInfo. +So, let's call it. +The default behaviour of `evaluate!!` is to use the parameter values stored in the VarInfo during model execution. +That is, when it sees `x ~ Normal()`, it will use the value of `x` stored in `v`. +We will see later how to change this behaviour. + +```{julia} +retval, vout = DynamicPPL.evaluate!!(model, v) +``` + +So here in a single call we have obtained both the return value and an updated VarInfo `vout`, from which we can again extract log probabilities and variable values. +We can see from this that the value of `vout[@varname(x)]` is the same as `v[@varname(x)]`: + +```{julia} +vout[@varname(x)] == v[@varname(x)] +``` + +which is in line with the statement above that by default `evaluate!!` uses the values stored in the VarInfo. + +At this point, the sharp reader will notice that we have not really solved the problem here. +Although the call to `DynamicPPL.evaluate!!` does indeed only execute the model once, we also had to do this once at the beginning when constructing the VarInfo. + +Besides, we don't know how to control the parameter values used during model execution: they were simply whatever we got in the original VarInfo. + +## Specifying parameter values + +We will first tackle the problem of specifying our own parameter values. +To do this, we need to use `DynamicPPL.init!!` instead of `DynamicPPL.evaluate!!`. 
+ +The difference is that instead of using the values stored in the VarInfo (which `evaluate!!` does by default), `init!!` uses a _strategy_ for generating new values, and overwrites the values in the VarInfo accordingly. +For example, `InitFromPrior()` says that any time a tilde-statement `x ~ dist` is encountered, a new value for `x` should be sampled from `dist`: + +```{julia} +retval, v_new = DynamicPPL.init!!(model, v, InitFromPrior()) +``` + +This updates `v_new` with the new values that were sampled, and also means that log probabilities are computed using these new values. + +::: {.callout-note} +## Random number generator +You can also provide an `AbstractRNG` as the first argument to `init!!` to control the reproducibility of the sampling: here we have omitted it. +::: + +Alternatively, to provide specific sets of values, we can use `InitFromParams(...)` to specify them. +`InitFromParams` can wrap either a `NamedTuple` or an `AbstractDict{<:VarName}`, but for reasons explained above, `Dict` is generally much preferred. + +```{julia} +retval, v_new = DynamicPPL.init!!( + model, v, InitFromParams(Dict(@varname(x) => 3.0)) +) +``` + +We now find that if we look into `v_new`, the value of `x` is indeed `3.0`: + +```{julia} +v_new[@varname(x)] +``` + +and we can extract the return value and log probabilities exactly as before. + +Note that `init!!` always ignores any values that are already present in the VarInfo, and overwrites them with new values according to the specified strategy. + +If you have a loop in which you want to repeatedly evaluate a model with different parameter values, then the workflow shown here is recommended: + + - First generate a VarInfo using `VarInfo(model)`; + - Then call `DynamicPPL.init!!(model, v, InitFromParams(...))` to evaluate the model using those parameters. + +This costs an extra model evaluation at the very beginning to generate the VarInfo, but subsequent evaluations will be efficient. 
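As a rough sketch of this recommended workflow (the loop below is our own illustration, reusing the `simple` model defined earlier):

```julia
# Pay the cost of one model execution up front...
v = VarInfo(model)

# ...then re-evaluate with different parameter values, reusing `v`.
logjoints = map([-1.0, 0.0, 1.0]) do xval
    _, v_new = DynamicPPL.init!!(
        model, v, InitFromParams(Dict(@varname(x) => xval))
    )
    DynamicPPL.getlogjoint(v_new)
end
```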
For example, this is how functions like `predict(model, chain)` are implemented.

## `unflatten`

In general, one problem with `init!!` is that it is often slower than `evaluate!!`.
This is primarily because it does more work: it has to not only read from the provided parameters, but also overwrite existing values in the VarInfo.

```{julia}
using Chairmarks, Logging
# We need to silence the 'executing model' message, or else it will
# fill up the entire screen!
with_logger(ConsoleLogger(stderr, Logging.Warn)) do
    median(@be DynamicPPL.evaluate!!(model, v_new))
end
```

```{julia}
with_logger(ConsoleLogger(stderr, Logging.Warn)) do
    median(@be DynamicPPL.init!!(model, v_new, InitFromParams(Dict(@varname(x) => 3.0))))
end
```

When evaluating models in tight loops, as is often the case in inference algorithms, this overhead can be quite undesirable.
DynamicPPL provides a rather dangerous, but powerful, way to get around this: the `DynamicPPL.unflatten` function.
`unflatten` allows you to directly modify the internal storage of a VarInfo, without having to go through `init!!` and model evaluation.
Its input is a vector of parameters.

```{julia}
xs = [7.0]
v_unflattened = DynamicPPL.unflatten(v_new, xs)
v_unflattened[@varname(x)]
```

We can then use `v_unflattened` directly in `evaluate!!`, which will use the value `7.0` for `x`:

```{julia}
retval, vout = DynamicPPL.evaluate!!(model, v_unflattened)
```

**There are several reasons why this function is dangerous.
If you use it, you must pay close attention to correctness:**

1. For models with multiple variables, the order in which these variables occur in the vector is not obvious. The short answer is that it depends on the order in which the variables are added to the VarInfo during its initialisation. If you have models where the order of variables can vary from one execution to another, then `unflatten` can easily lead to incorrect results.

2. The meaning of the values passed in will generally depend on whether the VarInfo is linked or not (see the [Variable Transformations page]({{< meta dev-transforms-dynamicppl >}}) for more information about linked VarInfos). You must make sure that the values passed in are consistent with the link status of the VarInfo. In contrast, `InitFromParams` always uses unlinked values.

3. While `unflatten` modifies the parameter values stored in the VarInfo, it does not modify any other information, such as log probabilities. Thus, after calling `unflatten`, your VarInfo will be in an inconsistent state, and you should not attempt to read any other information from it until you have called `evaluate!!` again (which recomputes e.g. log probabilities).

The inverse operation of `unflatten` is `DynamicPPL.getindex_internal(v, :)`:

```{julia}
DynamicPPL.getindex_internal(v_unflattened, :)
```

## LogDensityFunction

There is one place where `unflatten` is (unfortunately) quite indispensable, namely, the implementation of the LogDensityProblems.jl interface for Turing models.

The LogDensityProblems interface defines functions such as

```julia
LogDensityProblems.logdensity(f, x::AbstractVector)
```

which evaluates the log density of a model `f` given a vector of parameters `x`.

Given what we have seen above, this can be done by wrapping a model and a VarInfo together inside a struct.
+Here is a rough sketch of how this can be implemented: + +```{julia} +using LogDensityProblems + +struct MyModelLogDensity{M<:DynamicPPL.Model,V<:DynamicPPL.VarInfo} + model::M + varinfo::V +end + +function LogDensityProblems.logdensity(f::MyModelLogDensity, x::AbstractVector) + v_new = DynamicPPL.unflatten(f.varinfo, x) + _, vout = DynamicPPL.evaluate!!(f.model, v_new) + return DynamicPPL.getlogjoint(vout) +end + +# Usage +my_ldf = MyModelLogDensity(model, VarInfo(model)) +LogDensityProblems.logdensity(my_ldf, [2.5]) +``` + +DynamicPPL contains a `LogDensityFunction` type that, at its core, is essentially the same as the above. + +```{julia} +# the varinfo object defaults to VarInfo(model) +ldf = DynamicPPL.LogDensityFunction(model) +LogDensityProblems.logdensity(ldf, [2.5]) +``` + +The real implementation is a bit more complicated as it provides more options, as well as support for gradients with automatic differentiation. + +In this way, any Turing model can be converted into an object that you can use with LogDensityProblems-compatible optimisers, samplers, and other algorithms. +This is very powerful as it allows the algorithms to completely ignore the internal structure of the model, and simply treat it as an opaque log-density function. +For example, Turing's external sampler interface makes heavy use of this. + +However, it should be noted that because this uses `unflatten` under the hood, it suffers from exactly the same limitations as described above. +For example, models that do not have a fixed number or order of latent variables can lead to incorrect results or errors. + +## Advanced: Typed and untyped VarInfo + +The discussion above suffices for many applications of DynamicPPL, but one question remains: how to avoid the initial overhead of constructing a VarInfo object before we can do anything useful with it. +This is important when implementing a function such as `logjoint(model, params)`: in principle, only a single evaluation should be needed. 
+ +To tackle this, we need to understand a little bit more about two kinds of VarInfo. +Conceptually, DynamicPPL has both _typed_ and _untyped_ VarInfos. +This distinction is also described in section 4.2.4 of [our recent Turing.jl paper](https://dl.acm.org/doi/10.1145/3711897). + +Evaluating a model with an existing typed VarInfo is generally much faster, and once you have a typed VarInfo it is a good idea to stick with it. +However, when instantiating a new VarInfo, it is often better to start with an untyped VarInfo, fill in the values, and then convert it to a typed VarInfo. + +::: {.callout-note} +## Why is untyped initialisation better? +Initialising a fresh VarInfo requires adding variables to it as they are encountered during model execution. +There are two main reasons for preferring untyped VarInfo: firstly, compilation time with typed VarInfo scales poorly with the number of variables; and secondly, typed VarInfos can error with certain kinds of models. +See [this issue](https://github.com/TuringLang/DynamicPPL.jl/issues/1062) for more information. +::: + +To see this in action, let's begin by constructing an empty _untyped_ VarInfo. +This does not execute the model, and so the resulting object has no stored variable values. +If we try to index into it, we will get an error: + +```{julia} +#| error: true +v_empty_untyped = VarInfo() +v_empty_untyped[@varname(x)] +``` + +::: {.callout-note} +## `VarInfo(model)` returns a typed VarInfo +Although `VarInfo()` with no arguments returns an untyped VarInfo, note that calling `VarInfo(model)` returns a typed VarInfo. This is a slightly awkward aspect of DynamicPPL's current API. +::: + +To generate new values for it, we will use `DynamicPPL.init!!` as before. 
+ +```{julia} +_, v_filled_untyped = DynamicPPL.init!!(model, v_empty_untyped, InitFromParams(Dict(@varname(x) => 5.0))) +``` + +Now that we have filled in the untyped VarInfo, we can access parameter values, log probabilities, and so on: + +```{julia} +DynamicPPL.getlogprior(v_filled_untyped) +``` + +So, putting this all together, this is how an implementation of `logprior(model, params)` could look: + +```{julia} +function mylogprior(model, params) + # Create empty untyped VarInfo + v_empty_untyped = VarInfo() + # Fill in values from given params + _, v_filled_untyped = DynamicPPL.init!!(model, v_empty_untyped, InitFromParams(params)) + # Extract log prior + return DynamicPPL.getlogprior(v_filled_untyped) +end + +mylogprior(model, Dict(@varname(x) => 5.0)) +``` + +Notice that the above only required a single model evaluation. + +If we later want to convert the untyped VarInfo into a typed VarInfo (for example, for later reuse), we can do so using `DynamicPPL.typed_varinfo`: + +```{julia} +v_filled_typed = DynamicPPL.typed_varinfo(v_filled_untyped) +``` + +This allows us to demonstrate how `VarInfo(model)` is implemented: + +```{julia} +function myvarinfo(model) + # Create empty untyped VarInfo + v_empty_untyped = VarInfo() + # Sample values from prior + _, v_filled_untyped = DynamicPPL.init!!(model, v_empty_untyped, InitFromPrior()) + # Convert to typed VarInfo + return DynamicPPL.typed_varinfo(v_filled_untyped) +end +``` + +Notice here that `evaluate!!` runs much faster with a typed VarInfo than with untyped: this is why generally for repeated evaluation you should use a typed VarInfo. +The same is true of `init!!`. 
+ +```{julia} +with_logger(ConsoleLogger(stderr, Logging.Warn)) do + median(@be DynamicPPL.evaluate!!(model, v_filled_untyped)) +end +``` + +```{julia} +with_logger(ConsoleLogger(stderr, Logging.Warn)) do + median(@be DynamicPPL.evaluate!!(model, v_filled_typed)) +end +``` From 6b29cdf47a9870be8dc16eb850e16941e6f0c285 Mon Sep 17 00:00:00 2001 From: Penelope Yong Date: Tue, 28 Oct 2025 17:22:13 +0000 Subject: [PATCH 2/6] fix deps --- Manifest.toml | 2 +- Project.toml | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/Manifest.toml b/Manifest.toml index 366ab594e..9ee5c0409 100644 --- a/Manifest.toml +++ b/Manifest.toml @@ -2,7 +2,7 @@ julia_version = "1.11.7" manifest_format = "2.0" -project_hash = "2b3993e6e60ba9c1c456523b8f2caaae65279013" +project_hash = "13af05c5c830ccbd16d00856d8cce62a9368ce6a" [[deps.ADTypes]] git-tree-sha1 = "27cecae79e5cc9935255f90c53bb831cc3c870d7" diff --git a/Project.toml b/Project.toml index 137e2fa1b..6ea13f35e 100644 --- a/Project.toml +++ b/Project.toml @@ -7,6 +7,7 @@ AdvancedMH = "5b7e9947-ddc0-4b3f-9b55-0d8042f74170" AdvancedVI = "b5ca4192-6429-45e5-a2d9-87aec30a685c" Bijectors = "76274a88-744f-5084-9051-94815aaf08c4" CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b" +Chairmarks = "0ca39b1e-fe0b-4e98-acfc-b1656634c4de" DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0" DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa" Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b" @@ -23,6 +24,7 @@ LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e" LogDensityProblems = "6fdf6af0-433a-55f7-b3ed-c6c6e0b8df7c" LogDensityProblemsAD = "996a588d-648d-4e1f-a8f0-a84b347e47b1" LogExpFunctions = "2ab3a3ac-af41-5b50-aa03-7779005ae688" +Logging = "56ddb016-857b-54e1-b83d-db4d58db5568" Lux = "b2108857-7c20-44ae-9111-449ecde12c47" MCMCChains = "c7f686f2-ff18-58e9-bc7b-31028e88f75d" MLDataUtils = "cc2ba9b6-d476-5e6d-8eaf-a92d5412d41d" From 7f5017d99f16876d2769d9fdcae1612bbdd06741 Mon Sep 17 00:00:00 2001 From: Penelope Yong 
Date: Tue, 28 Oct 2025 23:37:42 +0000 Subject: [PATCH 3/6] Use new DPPL release --- Manifest.toml | 150 +++++++++++-------- developers/models/varinfo-overview/index.qmd | 3 +- 2 files changed, 88 insertions(+), 65 deletions(-) diff --git a/Manifest.toml b/Manifest.toml index 9ee5c0409..bb7e48eeb 100644 --- a/Manifest.toml +++ b/Manifest.toml @@ -34,9 +34,9 @@ version = "0.5.24" [[deps.AbstractMCMC]] deps = ["BangBang", "ConsoleProgressMonitor", "Distributed", "FillArrays", "LogDensityProblems", "Logging", "LoggingExtras", "ProgressLogging", "Random", "StatsBase", "TerminalLoggers", "Transducers", "UUIDs"] -git-tree-sha1 = "349b876950079105eafe19c5424eea3480ad7a84" +git-tree-sha1 = "9c7f7697af1eca5c2680e8935fb6e8585fcb0109" uuid = "80f14c24-f653-4e6a-9b94-39d6b0f70001" -version = "5.8.2" +version = "5.9.0" [[deps.AbstractPPL]] deps = ["AbstractMCMC", "Accessors", "DensityInterface", "JSON", "LinearAlgebra", "Random", "StatsBase"] @@ -304,9 +304,9 @@ version = "0.1.1" [[deps.Bijectors]] deps = ["ArgCheck", "ChainRulesCore", "ChangesOfVariables", "Distributions", "DocStringExtensions", "Functors", "InverseFunctions", "IrrationalConstants", "LinearAlgebra", "LogExpFunctions", "MappedArrays", "Random", "Reexport", "Roots", "SparseArrays", "Statistics"] -git-tree-sha1 = "e717d07fec10a086054ca960857bac637db4dd1f" +git-tree-sha1 = "642af9e4f33cfe6930088418da4c2b0d2b9de169" uuid = "76274a88-744f-5084-9051-94815aaf08c4" -version = "0.15.11" +version = "0.15.12" weakdeps = ["ChainRules", "DistributionsAD", "EnzymeCore", "ForwardDiff", "LazyArrays", "Mooncake", "ReverseDiff"] [deps.Bijectors.extensions] @@ -739,9 +739,9 @@ version = "1.15.1" [[deps.DifferentialEquations]] deps = ["BoundaryValueDiffEq", "DelayDiffEq", "DiffEqBase", "DiffEqCallbacks", "DiffEqNoiseProcess", "JumpProcesses", "LinearAlgebra", "LinearSolve", "NonlinearSolve", "OrdinaryDiffEq", "Random", "RecursiveArrayTools", "Reexport", "SciMLBase", "SteadyStateDiffEq", "StochasticDiffEq", "Sundials"] 
-git-tree-sha1 = "afdc7dfee475828b4f0286d63ffe66b97d7a3fa7" +git-tree-sha1 = "1df783c534cd0c4a865a397b1c4801771b5cbb07" uuid = "0c46a032-eb83-5123-abaf-570d42b7fbaa" -version = "7.16.1" +version = "7.17.0" [[deps.DifferentiationInterface]] deps = ["ADTypes", "LinearAlgebra"] @@ -863,9 +863,9 @@ version = "3.5.1" [[deps.DynamicPPL]] deps = ["ADTypes", "AbstractMCMC", "AbstractPPL", "Accessors", "BangBang", "Bijectors", "Chairmarks", "Compat", "ConstructionBase", "DifferentiationInterface", "Distributions", "DocStringExtensions", "InteractiveUtils", "LinearAlgebra", "LogDensityProblems", "MacroTools", "OrderedCollections", "Printf", "Random", "Statistics", "Test"] -git-tree-sha1 = "8caa9710b7a1b7c89a063ce8e578a25725d5c8eb" +git-tree-sha1 = "eee36e836165199cf5e53dc225a4a71e78adfb78" uuid = "366bfd00-2699-11ea-058f-f148b4cae6d8" -version = "0.38.1" +version = "0.38.3" [deps.DynamicPPL.extensions] DynamicPPLChainRulesCoreExt = ["ChainRulesCore"] @@ -899,9 +899,9 @@ version = "1.0.5" [[deps.Enzyme]] deps = ["CEnum", "EnzymeCore", "Enzyme_jll", "GPUCompiler", "InteractiveUtils", "LLVM", "Libdl", "LinearAlgebra", "ObjectFile", "PrecompileTools", "Preferences", "Printf", "Random", "SparseArrays"] -git-tree-sha1 = "bd778fdcba83fdf6c97e1bc2b55600557fa519a7" +git-tree-sha1 = "0df8f57601c65d2a58a0fc726d6ca8bdcc671fb5" uuid = "7da242da-08ed-463a-9acd-ee780be4f1d9" -version = "0.13.87" +version = "0.13.94" [deps.Enzyme.extensions] EnzymeBFloat16sExt = "BFloat16s" @@ -923,9 +923,9 @@ version = "0.13.87" StaticArrays = "90137ffa-7385-5640-81b9-e52037218182" [[deps.EnzymeCore]] -git-tree-sha1 = "e059db5d02720ae826445f5ce2fdfb3d53236b87" +git-tree-sha1 = "f91e7cb4c17dae77c490b75328f22a226708557c" uuid = "f151be2c-9106-41f4-ab19-57ee4f262869" -version = "0.8.14" +version = "0.8.15" weakdeps = ["Adapt"] [deps.EnzymeCore.extensions] @@ -933,9 +933,9 @@ weakdeps = ["Adapt"] [[deps.Enzyme_jll]] deps = ["Artifacts", "JLLWrappers", "LazyArtifacts", "Libdl", "TOML"] -git-tree-sha1 = 
"edcca3037addd6402706e435b0551bddd9d14840" +git-tree-sha1 = "acef0a1caa894290aaad245b9aa9f9daaf4eced2" uuid = "7cc45869-7501-5eee-bdea-0790c847d4ef" -version = "0.0.203+1" +version = "0.0.205+0" [[deps.EpollShim_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl"] @@ -977,15 +977,15 @@ version = "0.10.14" [[deps.FFMPEG]] deps = ["FFMPEG_jll"] -git-tree-sha1 = "83dc665d0312b41367b7263e8a4d172eac1897f4" +git-tree-sha1 = "95ecf07c2eea562b5adbd0696af6db62c0f52560" uuid = "c87230d0-a227-11e9-1b43-d7ebe4e7570a" -version = "0.4.4" +version = "0.4.5" [[deps.FFMPEG_jll]] deps = ["Artifacts", "Bzip2_jll", "FreeType2_jll", "FriBidi_jll", "JLLWrappers", "LAME_jll", "Libdl", "Ogg_jll", "OpenSSL_jll", "Opus_jll", "PCRE2_jll", "Zlib_jll", "libaom_jll", "libass_jll", "libfdk_aac_jll", "libvorbis_jll", "x264_jll", "x265_jll"] -git-tree-sha1 = "3a948313e7a41eb1db7a1e733e6335f17b4ab3c4" +git-tree-sha1 = "ccc81ba5e42497f4e76553a5545665eed577a663" uuid = "b22a6f82-2f65-5046-a5b2-351ab43fb4e5" -version = "7.1.1+0" +version = "8.0.0+0" [[deps.FFTW]] deps = ["AbstractFFTs", "FFTW_jll", "Libdl", "LinearAlgebra", "MKL_jll", "Preferences", "Reexport"] @@ -1035,9 +1035,9 @@ uuid = "442a2c76-b920-505d-bb47-c5924d526838" version = "1.1.0" [[deps.FastPower]] -git-tree-sha1 = "5f7afd4b1a3969dc34d692da2ed856047325b06e" +git-tree-sha1 = "e47c70bf430175e077d1955d7f04923504acc74c" uuid = "a4df4552-cc26-4903-aec0-212e50a0e84b" -version = "1.1.3" +version = "1.2.0" [deps.FastPower.extensions] FastPowerEnzymeExt = "Enzyme" @@ -1650,9 +1650,9 @@ version = "2.41.2+0" [[deps.Libtask]] deps = ["MistyClosures", "Test"] -git-tree-sha1 = "6c4f536cdba06a5280d308168cca990d95b50b83" +git-tree-sha1 = "d87c9a93e94b7e3d4585007ff8dd7ae08fecc66b" uuid = "6f1fad26-d15e-5dc8-ae53-837a1d7b8c9f" -version = "0.9.5" +version = "0.9.6" [[deps.Libtiff_jll]] deps = ["Artifacts", "JLLWrappers", "JpegTurbo_jll", "LERC_jll", "Libdl", "XZ_jll", "Zlib_jll", "Zstd_jll"] @@ -1688,10 +1688,10 @@ uuid = 
"37e2e46d-f89d-539d-b4ee-838fcccc9c8e" version = "1.11.0" [[deps.LinearSolve]] -deps = ["ArrayInterface", "ChainRulesCore", "ConcreteStructs", "DocStringExtensions", "EnumX", "GPUArraysCore", "InteractiveUtils", "Krylov", "LazyArrays", "Libdl", "LinearAlgebra", "MKL_jll", "Markdown", "PrecompileTools", "Preferences", "RecursiveArrayTools", "Reexport", "SciMLBase", "SciMLOperators", "Setfield", "StaticArraysCore", "UnPack"] -git-tree-sha1 = "1c597eceb5ead73dae17ffa4dcc1160e3e4da1f3" +deps = ["ArrayInterface", "ChainRulesCore", "ConcreteStructs", "DocStringExtensions", "EnumX", "GPUArraysCore", "InteractiveUtils", "Krylov", "LazyArrays", "Libdl", "LinearAlgebra", "MKL_jll", "Markdown", "OpenBLAS_jll", "PrecompileTools", "Preferences", "RecursiveArrayTools", "Reexport", "SciMLBase", "SciMLOperators", "Setfield", "StaticArraysCore", "UnPack"] +git-tree-sha1 = "9112d1a01dad29464dbec00e4ac10b499dfe5bcb" uuid = "7ed4a6bd-45f5-4d41-b270-4a48e9bafcae" -version = "3.28.0" +version = "3.45.0" [deps.LinearSolve.extensions] LinearSolveAMDGPUExt = "AMDGPU" @@ -1701,6 +1701,7 @@ version = "3.28.0" LinearSolveCUDAExt = "CUDA" LinearSolveCUDSSExt = "CUDSS" LinearSolveCUSOLVERRFExt = ["CUSOLVERRF", "SparseArrays"] + LinearSolveCliqueTreesExt = ["CliqueTrees", "SparseArrays"] LinearSolveEnzymeExt = "EnzymeCore" LinearSolveFastAlmostBandedMatricesExt = "FastAlmostBandedMatrices" LinearSolveFastLapackInterfaceExt = "FastLapackInterface" @@ -1710,6 +1711,7 @@ version = "3.28.0" LinearSolveKernelAbstractionsExt = "KernelAbstractions" LinearSolveKrylovKitExt = "KrylovKit" LinearSolveMetalExt = "Metal" + LinearSolveMooncakeExt = "Mooncake" LinearSolvePardisoExt = ["Pardiso", "SparseArrays"] LinearSolveRecursiveFactorizationExt = "RecursiveFactorization" LinearSolveSparseArraysExt = "SparseArrays" @@ -1722,6 +1724,7 @@ version = "3.28.0" CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba" CUDSS = "45b445bb-4962-46a0-9369-b4df9d0f772e" CUSOLVERRF = "a8cc9031-bad2-4722-94f5-40deabb4245c" + 
CliqueTrees = "60701a23-6482-424a-84db-faee86b9b1f8" EnzymeCore = "f151be2c-9106-41f4-ab19-57ee4f262869" FastAlmostBandedMatrices = "9d29842c-ecb8-4973-b1e9-a27b1157504e" FastLapackInterface = "29a986be-02c6-4525-aec4-84b980013641" @@ -1732,6 +1735,7 @@ version = "3.28.0" KrylovKit = "0b1a1467-8014-51b9-945f-bf0ae24f4b77" LAPACK_jll = "51474c39-65e3-53ba-86ba-03b1b862ec14" Metal = "dde4c033-4e86-420c-a63e-0dd931031962" + Mooncake = "da2b9cff-9c12-43a0-ae48-6db2b0edb7d6" Pardiso = "46dd5b70-b6fb-5a00-ae2d-e8fea33afaf2" RecursiveFactorization = "f2c3362d-daeb-58d1-803e-2bc74f2840b4" SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf" @@ -1902,9 +1906,9 @@ version = "1.12.1" [[deps.MCMCChains]] deps = ["AbstractMCMC", "AxisArrays", "DataAPI", "Dates", "Distributions", "IteratorInterfaceExtensions", "KernelDensity", "LinearAlgebra", "MCMCDiagnosticTools", "MLJModelInterface", "NaturalSort", "OrderedCollections", "PrettyTables", "Random", "RecipesBase", "Statistics", "StatsBase", "StatsFuns", "TableTraits", "Tables"] -git-tree-sha1 = "e31382401a6fb7f01b9ad4025c04fefb4ad0bcb8" +git-tree-sha1 = "4f5b84761bbfd1e99c2568815b1108858b760f4c" uuid = "c7f686f2-ff18-58e9-bc7b-31028e88f75d" -version = "7.5.0" +version = "7.6.0" [[deps.MCMCDiagnosticTools]] deps = ["AbstractFFTs", "DataAPI", "DataStructures", "Distributions", "LinearAlgebra", "MLJModelInterface", "Random", "SpecialFunctions", "Statistics", "StatsBase", "StatsFuns", "Tables"] @@ -2334,6 +2338,12 @@ git-tree-sha1 = "b6aa4566bb7ae78498a5e68943863fa8b5231b59" uuid = "e7412a2a-1a6e-54c0-be00-318e2571c051" version = "1.3.6+0" +[[deps.OpenBLAS32_jll]] +deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl"] +git-tree-sha1 = "ece4587683695fe4c5f20e990da0ed7e83c351e7" +uuid = "656ef2d0-ae68-5445-9ca0-591084a874a2" +version = "0.3.29+0" + [[deps.OpenBLAS_jll]] deps = ["Artifacts", "CompilerSupportLibraries_jll", "Libdl"] uuid = "4536629a-c528-5b80-bd46-f80d51c5b363" @@ -2391,16 +2401,16 @@ version = 
"0.4.6" Reactant = "3c362404-f566-11ee-1572-e11a4b42c853" [[deps.Optimization]] -deps = ["ADTypes", "ArrayInterface", "ConsoleProgressMonitor", "DocStringExtensions", "LinearAlgebra", "Logging", "LoggingExtras", "OptimizationBase", "Printf", "ProgressLogging", "Reexport", "SciMLBase", "SparseArrays", "TerminalLoggers"] -git-tree-sha1 = "6b74af28fe70d6c828746376549c357cf88d3c3a" +deps = ["ADTypes", "ArrayInterface", "ConsoleProgressMonitor", "DocStringExtensions", "LinearAlgebra", "Logging", "LoggingExtras", "OptimizationBase", "Printf", "Reexport", "SciMLBase", "SparseArrays", "TerminalLoggers"] +git-tree-sha1 = "b0afd00640ed7a122dfdd6f7c3e676079ce75dc0" uuid = "7f7a1694-90dd-40f0-9382-eb1efda571ba" -version = "5.0.0" +version = "5.1.0" [[deps.OptimizationBase]] deps = ["ADTypes", "ArrayInterface", "DifferentiationInterface", "DocStringExtensions", "FastClosures", "LinearAlgebra", "PDMats", "Reexport", "SciMLBase", "SparseArrays", "SparseConnectivityTracer", "SparseMatrixColorings"] -git-tree-sha1 = "9656e816095035cb993863fea4209f0cd8b1bc45" +git-tree-sha1 = "348a21d115538f8d66a2d66662a591c067d08894" uuid = "bca83a33-5cc9-4baa-983d-23429ab6bcbb" -version = "3.3.1" +version = "4.0.1" [deps.OptimizationBase.extensions] OptimizationEnzymeExt = "Enzyme" @@ -2426,15 +2436,15 @@ version = "3.3.1" [[deps.OptimizationNLopt]] deps = ["NLopt", "OptimizationBase", "Random", "Reexport", "SciMLBase"] -git-tree-sha1 = "067cbeb17f03494d40affa13c750411f737417ca" +git-tree-sha1 = "2327e6bed0a59ff201f0b42b518bbc9ce0cf859a" uuid = "4e6fcdb7-1186-4e1f-a706-475e75c168bb" -version = "0.3.6" +version = "0.3.7" [[deps.OptimizationOptimJL]] deps = ["Optim", "OptimizationBase", "PrecompileTools", "Reexport", "SciMLBase", "SparseArrays"] -git-tree-sha1 = "f46a4ac6619a232ef35f9f8728d76fd7baeb962e" +git-tree-sha1 = "f845be2386a220dcc0a3664249535b7f2bf610dd" uuid = "36348300-93cb-4f02-beb5-3c3902f8871e" -version = "0.4.6" +version = "0.4.7" [[deps.Opus_jll]] deps = ["Artifacts", "JLLWrappers", 
"Libdl"] @@ -2449,9 +2459,9 @@ version = "1.8.1" [[deps.OrdinaryDiffEq]] deps = ["ADTypes", "Adapt", "ArrayInterface", "CommonSolve", "DataStructures", "DiffEqBase", "DocStringExtensions", "EnumX", "ExponentialUtilities", "FastBroadcast", "FastClosures", "FillArrays", "FiniteDiff", "ForwardDiff", "FunctionWrappersWrappers", "InteractiveUtils", "LineSearches", "LinearAlgebra", "LinearSolve", "Logging", "MacroTools", "MuladdMacro", "NonlinearSolve", "OrdinaryDiffEqAdamsBashforthMoulton", "OrdinaryDiffEqBDF", "OrdinaryDiffEqCore", "OrdinaryDiffEqDefault", "OrdinaryDiffEqDifferentiation", "OrdinaryDiffEqExplicitRK", "OrdinaryDiffEqExponentialRK", "OrdinaryDiffEqExtrapolation", "OrdinaryDiffEqFIRK", "OrdinaryDiffEqFeagin", "OrdinaryDiffEqFunctionMap", "OrdinaryDiffEqHighOrderRK", "OrdinaryDiffEqIMEXMultistep", "OrdinaryDiffEqLinear", "OrdinaryDiffEqLowOrderRK", "OrdinaryDiffEqLowStorageRK", "OrdinaryDiffEqNonlinearSolve", "OrdinaryDiffEqNordsieck", "OrdinaryDiffEqPDIRK", "OrdinaryDiffEqPRK", "OrdinaryDiffEqQPRK", "OrdinaryDiffEqRKN", "OrdinaryDiffEqRosenbrock", "OrdinaryDiffEqSDIRK", "OrdinaryDiffEqSSPRK", "OrdinaryDiffEqStabilizedIRK", "OrdinaryDiffEqStabilizedRK", "OrdinaryDiffEqSymplecticRK", "OrdinaryDiffEqTsit5", "OrdinaryDiffEqVerner", "Polyester", "PreallocationTools", "PrecompileTools", "Preferences", "RecursiveArrayTools", "Reexport", "SciMLBase", "SciMLOperators", "SciMLStructures", "SimpleNonlinearSolve", "SimpleUnPack", "SparseArrays", "Static", "StaticArrayInterface", "StaticArrays", "TruncatedStacktraces"] -git-tree-sha1 = "89cd4e81d7a668f8858fba6779212f41a0360260" +git-tree-sha1 = "89172157d16139165d470602f1e552484b357771" uuid = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed" -version = "6.102.1" +version = "6.103.0" [[deps.OrdinaryDiffEqAdamsBashforthMoulton]] deps = ["DiffEqBase", "FastBroadcast", "MuladdMacro", "OrdinaryDiffEqCore", "OrdinaryDiffEqLowOrderRK", "Polyester", "RecursiveArrayTools", "Reexport", "SciMLBase", "Static"] @@ -2784,10 +2794,10 @@ uuid = 
"8162dcfd-2161-5ef2-ae6c-7681170c5f98" version = "0.2.0" [[deps.PrettyTables]] -deps = ["Crayons", "LaTeXStrings", "Markdown", "PrecompileTools", "Printf", "Reexport", "StringManipulation", "Tables"] -git-tree-sha1 = "1101cd475833706e4d0e7b122218257178f48f34" +deps = ["Crayons", "LaTeXStrings", "Markdown", "PrecompileTools", "Printf", "REPL", "Reexport", "StringManipulation", "Tables"] +git-tree-sha1 = "6b8e2f0bae3f678811678065c09571c1619da219" uuid = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d" -version = "2.4.0" +version = "3.1.0" [[deps.Printf]] deps = ["Unicode"] @@ -2975,9 +2985,9 @@ version = "1.16.1" [[deps.Rmath]] deps = ["Random", "Rmath_jll"] -git-tree-sha1 = "52b99504e2c174d9a8592a89647f5187063d1eb1" +git-tree-sha1 = "5b3d50eb374cea306873b371d3f8d3915a018f0b" uuid = "79098fc4-a85e-5d69-aa6a-4863f24498fa" -version = "0.8.2" +version = "0.9.0" [[deps.Rmath_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl"] @@ -3029,13 +3039,14 @@ uuid = "26aad666-b158-4e64-9d35-0e672562fa48" version = "0.5.2" [[deps.SciMLBase]] -deps = ["ADTypes", "Accessors", "Adapt", "ArrayInterface", "CommonSolve", "ConstructionBase", "Distributed", "DocStringExtensions", "EnumX", "FunctionWrappersWrappers", "IteratorInterfaceExtensions", "LinearAlgebra", "Logging", "Markdown", "Moshi", "PreallocationTools", "PrecompileTools", "Preferences", "Printf", "RecipesBase", "RecursiveArrayTools", "Reexport", "RuntimeGeneratedFunctions", "SciMLOperators", "SciMLPublic", "SciMLStructures", "StaticArraysCore", "Statistics", "SymbolicIndexingInterface"] -git-tree-sha1 = "7680fbbc8a4fdf9837b4cae5e3fbebe53ec8e4ff" +deps = ["ADTypes", "Accessors", "Adapt", "ArrayInterface", "CommonSolve", "ConstructionBase", "Distributed", "DocStringExtensions", "EnumX", "FunctionWrappersWrappers", "IteratorInterfaceExtensions", "LinearAlgebra", "Logging", "Markdown", "Moshi", "PreallocationTools", "PrecompileTools", "Preferences", "Printf", "RecipesBase", "RecursiveArrayTools", "Reexport", "RuntimeGeneratedFunctions", 
"SciMLLogging", "SciMLOperators", "SciMLPublic", "SciMLStructures", "StaticArraysCore", "Statistics", "SymbolicIndexingInterface"] +git-tree-sha1 = "7614a1b881317b6800a8c66eb1180c6ea5b986f3" uuid = "0bca4576-84f4-4d90-8ffe-ffa030f20462" -version = "2.122.0" +version = "2.124.0" [deps.SciMLBase.extensions] SciMLBaseChainRulesCoreExt = "ChainRulesCore" + SciMLBaseDifferentiationInterfaceExt = "DifferentiationInterface" SciMLBaseDistributionsExt = "Distributions" SciMLBaseEnzymeExt = "Enzyme" SciMLBaseForwardDiffExt = "ForwardDiff" @@ -3055,6 +3066,7 @@ version = "2.122.0" [deps.SciMLBase.weakdeps] ChainRules = "082447d4-558c-5d27-93f4-14fc19e9eca2" ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4" + DifferentiationInterface = "a0c0ee7d-e4b9-4e03-894e-1c5f64a51d63" Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f" Enzyme = "7da242da-08ed-463a-9acd-ee780be4f1d9" ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210" @@ -3077,6 +3089,12 @@ git-tree-sha1 = "a273b291c90909ba6fe08402dd68e09aae423008" uuid = "19f34311-ddf3-4b8b-af20-060888a46c0e" version = "0.1.11" +[[deps.SciMLLogging]] +deps = ["Logging", "LoggingExtras", "Preferences"] +git-tree-sha1 = "5a026f5549ad167cda34c67b62f8d3dc55754da3" +uuid = "a6db7da4-7206-11f0-1eab-35f2a5dbe1d1" +version = "1.3.1" + [[deps.SciMLOperators]] deps = ["Accessors", "ArrayInterface", "DocStringExtensions", "LinearAlgebra", "MacroTools"] git-tree-sha1 = "c1053ba68ede9e4005fc925dd4e8723fcd96eef8" @@ -3208,9 +3226,9 @@ version = "1.11.0" [[deps.SparseConnectivityTracer]] deps = ["ADTypes", "DocStringExtensions", "FillArrays", "LinearAlgebra", "Random", "SparseArrays"] -git-tree-sha1 = "62f3dbfa8e0bb01ce41076ee31686f0514a9e339" +git-tree-sha1 = "ba6dc9b87304964647bd1c750b903cb360003a36" uuid = "9f842d2f-2579-4b1d-911e-f412cf18a3f5" -version = "1.1.1" +version = "1.1.2" weakdeps = ["ChainRulesCore", "LogExpFunctions", "NNlib", "NaNMath", "SpecialFunctions"] [deps.SparseConnectivityTracer.extensions] @@ -3266,9 +3284,9 @@ 
version = "1.0.3" [[deps.Static]] deps = ["CommonWorldInvalidations", "IfElse", "PrecompileTools", "SciMLPublic"] -git-tree-sha1 = "1e44e7b1dbb5249876d84c32466f8988a6b41bbb" +git-tree-sha1 = "49440414711eddc7227724ae6e570c7d5559a086" uuid = "aedffcd0-7271-4cad-89d0-dc628f76c6d3" -version = "1.3.0" +version = "1.3.1" [[deps.StaticArrayInterface]] deps = ["ArrayInterface", "Compat", "IfElse", "LinearAlgebra", "PrecompileTools", "Static"] @@ -3327,9 +3345,9 @@ version = "0.33.21" [[deps.StatsFuns]] deps = ["HypergeometricFunctions", "IrrationalConstants", "LogExpFunctions", "Reexport", "Rmath", "SpecialFunctions"] -git-tree-sha1 = "1ec049c79e13fb2638ddbf8793ab2cbbeb266f45" +git-tree-sha1 = "91f091a8716a6bb38417a6e6f274602a19aaa685" uuid = "4c63d2b9-4356-54db-8cca-17b64c39e42c" -version = "1.5.1" +version = "1.5.2" weakdeps = ["ChainRulesCore", "InverseFunctions"] [deps.StatsFuns.extensions] @@ -3355,10 +3373,10 @@ uuid = "9672c7b4-1e72-59bd-8a11-6ac3964bc41f" version = "2.7.0" [[deps.StochasticDiffEq]] -deps = ["ADTypes", "Adapt", "ArrayInterface", "DataStructures", "DiffEqBase", "DiffEqNoiseProcess", "DocStringExtensions", "FastPower", "FiniteDiff", "ForwardDiff", "JumpProcesses", "LevyArea", "LinearAlgebra", "Logging", "MuladdMacro", "NLsolve", "OrdinaryDiffEqCore", "OrdinaryDiffEqDifferentiation", "OrdinaryDiffEqNonlinearSolve", "Random", "RandomNumbers", "RecursiveArrayTools", "Reexport", "SciMLBase", "SciMLOperators", "SparseArrays", "StaticArrays", "UnPack"] -git-tree-sha1 = "63c85dd929eaf9910fd30ba1b5aa2892d3bb0368" +deps = ["ADTypes", "Adapt", "ArrayInterface", "DataStructures", "DiffEqBase", "DiffEqNoiseProcess", "DocStringExtensions", "FastPower", "FiniteDiff", "ForwardDiff", "JumpProcesses", "LevyArea", "LinearAlgebra", "Logging", "MuladdMacro", "NLsolve", "OrdinaryDiffEqCore", "OrdinaryDiffEqDifferentiation", "OrdinaryDiffEqNonlinearSolve", "Random", "RecursiveArrayTools", "Reexport", "SciMLBase", "SciMLOperators", "SparseArrays", "StaticArrays", "UnPack"] 
+git-tree-sha1 = "a7d5d87185450b61a95000547c85401ffd8e6e42" uuid = "789caeaf-c7a9-5a7d-9973-96adeb23e2a0" -version = "6.83.0" +version = "6.84.0" [[deps.StrideArraysCore]] deps = ["ArrayInterface", "CloseOpenIntervals", "IfElse", "LayoutPointers", "LinearAlgebra", "ManualMemory", "SIMDTypes", "Static", "StaticArrayInterface", "ThreadingUtilities"] @@ -3433,28 +3451,34 @@ version = "1.11.0" deps = ["Libdl", "LinearAlgebra", "Serialization", "SparseArrays"] uuid = "4607b0f0-06f3-5cda-b6b1-a6196a1729e9" +[[deps.SuiteSparse32_jll]] +deps = ["Artifacts", "JLLWrappers", "Libdl", "libblastrampoline_jll"] +git-tree-sha1 = "dc199915b7d2d1a25c8b66968e905f9cc671c1be" +uuid = "ca45d3f4-326b-53b0-9957-23b75aacb3f2" +version = "7.11.0+0" + [[deps.SuiteSparse_jll]] deps = ["Artifacts", "Libdl", "libblastrampoline_jll"] uuid = "bea87d4a-7f5b-5778-9afe-8cc45184846c" version = "7.7.0+0" [[deps.Sundials]] -deps = ["CEnum", "DataStructures", "DiffEqBase", "Libdl", "LinearAlgebra", "Logging", "PrecompileTools", "Reexport", "SciMLBase", "SparseArrays", "Sundials_jll"] -git-tree-sha1 = "7c7a7ee705724b3c80d5451ac49779db36c6f758" +deps = ["Accessors", "ArrayInterface", "CEnum", "DataStructures", "DiffEqBase", "Libdl", "LinearAlgebra", "LinearSolve", "Logging", "NonlinearSolveBase", "PrecompileTools", "Reexport", "SciMLBase", "SparseArrays", "Sundials_jll", "SymbolicIndexingInterface"] +git-tree-sha1 = "2d27edb89b7c555a57b8f22bfde92d6828d11cee" uuid = "c3572dad-4567-51f8-b174-8c6c989267f4" -version = "4.28.0" +version = "5.1.0" [[deps.Sundials_jll]] -deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "SuiteSparse_jll", "libblastrampoline_jll"] -git-tree-sha1 = "91db7ed92c66f81435fe880947171f1212936b14" +deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "OpenBLAS32_jll", "SuiteSparse32_jll"] +git-tree-sha1 = "a872f379c836e9cb5734485ca0681b192a59b98b" uuid = "fb77eaff-e24c-56d4-86b1-d163f2edb164" -version = "5.2.3+0" +version = "7.5.0+0" 
[[deps.SymbolicIndexingInterface]] deps = ["Accessors", "ArrayInterface", "RuntimeGeneratedFunctions", "StaticArraysCore"] -git-tree-sha1 = "b19cf024a2b11d72bef7c74ac3d1cbe86ec9e4ed" +git-tree-sha1 = "94c58884e013efff548002e8dc2fdd1cb74dfce5" uuid = "2efcf032-c050-4f8e-a9bb-153293bab1f5" -version = "0.3.44" +version = "0.3.46" weakdeps = ["PrettyTables"] [deps.SymbolicIndexingInterface.extensions] diff --git a/developers/models/varinfo-overview/index.qmd b/developers/models/varinfo-overview/index.qmd index b33210828..8f0c5ab20 100644 --- a/developers/models/varinfo-overview/index.qmd +++ b/developers/models/varinfo-overview/index.qmd @@ -77,8 +77,7 @@ logpdf(Normal(), latents[@varname(x)]) Likewise, you can evaluate the return value of the model given the latent variables: ```{julia} -# returned(model, latents) -@warn "Doesn't actually work, https://github.com/TuringLang/DynamicPPL.jl/issues/1095" +returned(model, latents) ``` ## VarInfo From 45c8e45574e9036f4b0a19a2c24972863479d376 Mon Sep 17 00:00:00 2001 From: Penelope Yong Date: Wed, 29 Oct 2025 01:48:27 +0000 Subject: [PATCH 4/6] Minor text fixes --- developers/models/varinfo-overview/index.qmd | 26 +++++++++++--------- 1 file changed, 15 insertions(+), 11 deletions(-) diff --git a/developers/models/varinfo-overview/index.qmd b/developers/models/varinfo-overview/index.qmd index 8f0c5ab20..4655c79fa 100644 --- a/developers/models/varinfo-overview/index.qmd +++ b/developers/models/varinfo-overview/index.qmd @@ -49,7 +49,7 @@ latents = rand(Dict, model) Simply calling `rand(model)`, by default, returns a NamedTuple. This is fine for simple models where all variables on the left-hand side of tilde statements are standalone variables like `x`. However, if you have indices or fields such as `x[1]` or `x.a` on the left-hand side, then the NamedTuple will not be able to represent these variables properly. -Feeding such a NamedTuple back into the model, as shown in the next section, will lead to errors. 
+Feeding such a NamedTuple back into the model will lead to errors.

 In general, `Dict{VarName}` will always avoid such correctness issues.
 :::
@@ -84,10 +84,10 @@ returned(model, latents)
 The above functions are convenient, but for many 'serious' applications they might not be flexible enough.
 For example, if you wanted to obtain the return value _and_ the log joint, you would have to execute the model twice: once with `returned` and once with `logjoint`.
-If you want to avoid this duplicate work, you need to use a lower-level interface, which is `DynamicPPL.evaluate!!`. 
+If you want to avoid this duplicate work, you need to use a lower-level interface, which is `DynamicPPL.evaluate!!`.
 At its core, `evaluate!!` takes a model and a VarInfo object, and returns a tuple of the return value and the new VarInfo.
-So before we get to `evaluate!!`, we need to understand what a VarInfo is.
+So, before we even get to `evaluate!!`, we need to understand what a VarInfo is.
 A VarInfo is a container that tracks the state of model execution, as well as any outputs related to its latent variables, such as log probabilities.
 DynamicPPL's source code contains many different kinds of VarInfos, each with different trade-offs.
@@ -115,7 +115,7 @@ DynamicPPL.getlogprior(v)
 What about the return value?
 Well, the VarInfo does not store this directly: recall that `evaluate!!` gives us back the return value separately from the updated VarInfo.
-So, let's call it.
+So, let's try calling it to see what happens.
 The default behaviour of `evaluate!!` is to use the parameter values stored in the VarInfo during model execution.
 That is, when it sees `x ~ Normal()`, it will use the value of `x` stored in `v`.
 We will see later how to change this behaviour.
@@ -133,8 +133,8 @@ vout[@varname(x)] == v[@varname(x)]
 which is in line with the statement above that by default `evaluate!!` uses the values stored in the VarInfo.
-At this point, the sharp reader will notice that we have not really solved the problem here.
-Although the call to `DynamicPPL.evaluate!!` does indeed only execute the model once, we also had to do this once at the beginning when constructing the VarInfo.
+At this point, the keen reader will notice that we have not really solved the problem here.
+Although the call to `DynamicPPL.evaluate!!` does indeed only execute the model once, we also had to do this once more at the beginning when constructing the VarInfo.
 Besides, we don't know how to control the parameter values used during model execution: they were simply whatever we got in the original VarInfo.
@@ -158,7 +158,7 @@ You can also provide an `AbstractRNG` as the first argument to `init!!` to contr
 :::

 Alternatively, to provide specific sets of values, we can use `InitFromParams(...)` to specify them.
-`InitFromParams` can wrap either a `NamedTuple` or an `AbstractDict{<:VarName}`, but for reasons explained above, `Dict` is generally much preferred.
+`InitFromParams` can wrap either a `NamedTuple` or an `AbstractDict{<:VarName}`, but `Dict` is generally much preferred as this guarantees correct behaviour even for complex variable names.

 ```{julia}
 retval, v_new = DynamicPPL.init!!(
@@ -181,11 +181,15 @@ If you have a loop in which you want to repeatedly evaluate a model with differe
 - First generate a VarInfo using `VarInfo(model)`;
 - Then call `DynamicPPL.init!!(model, v, InitFromParams(...))` to evaluate the model using those parameters.

-This costs an extra model evaluation at the very beginning to generate the VarInfo, but subsequent evaluations will be efficient.
+This requires you to pay a one-time cost at the very beginning to generate the VarInfo, but subsequent evaluations will be efficient.
+DynamicPPL uses this approach when implementing functions such as `predict(model, chain)`.
-For example, this is how functions like `predict(model, chain)` are implemented.
+::: {.callout-tip}
+If you want to avoid even the first model evaluation, you will need to read on to the 'Advanced' section below.
+However, for most applications this should not be necessary.
+:::

-## `unflatten`
+## Parameters in the form of Vectors

 In general, one problem with `init!!` is that it is often slower than `evaluate!!`.
 This is primarily because it does more work: it has to not only read from the provided parameters, but also overwrite existing values in the VarInfo.
@@ -237,7 +241,7 @@ The inverse operation of `unflatten` is `DynamicPPL.getindex_internal(v, :)`:
 DynamicPPL.getindex_internal(v_unflattened, :)
 ```

-## LogDensityFunction
+## `LogDensityFunction`

 There is one place where `unflatten` is (unfortunately) quite indispensable, namely, the implementation of the LogDensityProblems.jl interface for Turing models.

From c3df6aa737a8eeaba3b411f2d1274c96739c64c3 Mon Sep 17 00:00:00 2001
From: Penelope Yong
Date: Wed, 29 Oct 2025 16:21:25 +0000
Subject: [PATCH 5/6] Update developers/models/varinfo-overview/index.qmd

Co-authored-by: Markus Hauru
---
 developers/models/varinfo-overview/index.qmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/developers/models/varinfo-overview/index.qmd b/developers/models/varinfo-overview/index.qmd
index 4655c79fa..90068f7c2 100644
--- a/developers/models/varinfo-overview/index.qmd
+++ b/developers/models/varinfo-overview/index.qmd
@@ -94,7 +94,7 @@ DynamicPPL's source code contains many different kinds of VarInfos, each with di
 The details of these are somewhat arcane and unfortunately cannot be fully abstracted away, mainly due to performance considerations.
 For the vast majority of users, it suffices to know that you can generate one of them for a model with the constructor `VarInfo([rng, ]model)`.
-Note that this construction executes the model once (in the process, sampling new parameter values from the prior).
+Note that this construction executes the model once (sampling new parameter values from the prior in the process).

 ```{julia}
 v = VarInfo(model)

From 0de31974269b6324a30cefd2cc37a954a1fdb072 Mon Sep 17 00:00:00 2001
From: Penelope Yong
Date: Mon, 3 Nov 2025 09:34:32 +0000
Subject: [PATCH 6/6] Mention that `unflatten` + `evaluate!!` is still faster

---
 developers/models/varinfo-overview/index.qmd | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/developers/models/varinfo-overview/index.qmd b/developers/models/varinfo-overview/index.qmd
index 90068f7c2..98aa7a6f7 100644
--- a/developers/models/varinfo-overview/index.qmd
+++ b/developers/models/varinfo-overview/index.qmd
@@ -226,7 +226,9 @@ We can then directly use `v_new` in `evaluate!!`, which will use the value `7.0`
 retval, vout = DynamicPPL.evaluate!!(model, v_unflattened)
 ```

-**There are several reasons why this function is dangerous.
+Even the combination of `unflatten` and `evaluate!!` tends to be faster than a single call to `init!!`, especially for larger models.
+
+**However, there are several reasons why this function is dangerous.
 If you use it, you must pay close attention to correctness:**

 1. For models with multiple variables, the order in which these variables occur in the vector is not obvious. The short answer is that it depends on the order in which the variables are added to the VarInfo during its initialisation. If you have models where the order of variables can vary from one execution to another, then `unflatten` can easily lead to incorrect results.
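The ordering hazard that this last hunk warns about can be made concrete with a short sketch. This is not part of the patch series; the two-variable model and all names below (`two_vars`, `m`) are illustrative, using only the `VarInfo`, `unflatten`, and `evaluate!!` calls shown in the page itself:

```{julia}
using DynamicPPL, Distributions

# Two latent variables; their positions in the flattened vector depend on
# the order in which they were inserted into the VarInfo.
@model function two_vars()
    x ~ Normal()
    y ~ Normal()
end

m = two_vars()
v = VarInfo(m)  # one model execution to set up the VarInfo

# Overwrite both values through a flat vector, then re-evaluate.
v_new = DynamicPPL.unflatten(v, [1.0, 2.0])
retval, vout = DynamicPPL.evaluate!!(m, v_new)

# Check which value landed where, rather than assuming the order:
vout[@varname(x)], vout[@varname(y)]
```

Inspecting the result, instead of trusting a fixed ordering, is exactly the kind of defensive check caveat 1 calls for.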