
Releases: TuringLang/Turing.jl

v0.41.4

24 Nov 22:05
c4a98a5


Turing v0.41.4

Diff since v0.41.3

Fixed a bug where the check_model=false keyword argument would not be respected when sampling with multiple threads or cores.
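For example (a minimal sketch; the model here is a placeholder), the keyword is now honoured when sampling several chains in parallel:

using Turing

@model function demo()
    x ~ Normal()
end

# check_model=false is now respected with MCMCThreads() (and MCMCDistributed()),
# just as it is for single-chain sampling.
sample(demo(), NUTS(), MCMCThreads(), 1000, 4; check_model=false)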


v0.41.3

21 Nov 23:08
9684d39


Turing v0.41.3

Diff since v0.41.2

Fixed NUTS not correctly specifying the number of adaptation steps when calling AdvancedHMC.initialize! (this bug led to mass matrix adaptation not actually happening).


Closed issues:

  • Turing and AdvancedHMC give different adaptors for NUTS (#2717)

v0.41.2

21 Nov 18:32
2cda98d


Turing v0.41.2

Diff since v0.41.1

Add GibbsConditional, a "sampler" that can be used to provide analytically known conditional posteriors in a Gibbs sampler.

In Gibbs sampling, each variable is sampled by a component sampler while the other variables are held fixed at their current values. Typically one alternates between, for example, sampling one variable with HMC and another with a particle sampler. Sometimes, however, the conditional posterior of a variable is known analytically given the current values of the other variables. GibbsConditional provides a way to implement these analytically known conditional posteriors and use them as component samplers for Gibbs. See the docstring of GibbsConditional for details.
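As a sketch of the idea (the model and helper below are hypothetical, and the exact signature that GibbsConditional expects is given in its docstring), consider observations x[i] ~ Normal(m, σ) with σ known and a Normal prior on m, for which the conditional posterior of m is available in closed form:

using Distributions

# Conjugate update for m in: x[i] ~ Normal(m, σ), prior m ~ Normal(μ0, τ0).
# The conditional posterior of m given the data is again Normal.
function conditional_m(x; σ=1.0, μ0=0.0, τ0=1.0)
    n = length(x)
    prec = 1 / τ0^2 + n / σ^2               # posterior precision
    μn = (μ0 / τ0^2 + sum(x) / σ^2) / prec  # posterior mean
    return Normal(μn, sqrt(1 / prec))
end

A closed-form conditional like this can then be supplied to GibbsConditional and used as a component sampler inside Gibbs.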

Note that GibbsConditional existed in Turing.jl until v0.36, at which point it was removed when the whole Gibbs sampler was rewritten. This release reintroduces the same functionality, though with a slightly different interface.


Closed issues:

  • Reenable CI tests on Julia v1 (#2686)
  • Error on running using Turing on latest version (#2711)

v0.41.1

07 Nov 20:41
4153a83


Turing v0.41.1

Diff since v0.41.0

The ModeResult struct returned by maximum_a_posteriori and maximum_likelihood can now be wrapped in InitFromParams().
This makes it easier to use the parameters in downstream code, e.g. when specifying initial parameters for MCMC sampling.
For example:

@model function f()
    # ...
end
model = f()
opt_result = maximum_a_posteriori(model)
sample(model, NUTS(), 1000; initial_params=InitFromParams(opt_result))

If you need to access the dictionary of parameters, it is stored in opt_result.params, but note that this field may change in future breaking releases, as Turing's optimisation interface is slated for an overhaul in the near future.

Merged pull requests:

  • CompatHelper: add new compat entry for DynamicPPL at version 0.38 for package test, (keep existing compat) (#2701) (@github-actions[bot])
  • Move external sampler interface to AbstractMCMC (#2704) (@penelopeysm)
  • Skip Mooncake on 1.12 (#2705) (@penelopeysm)
  • Test on 1.12 (#2707) (@penelopeysm)
  • Include parameter dictionary in optimisation return value (#2710) (@penelopeysm)

Closed issues:

  • Better support for AbstractSampler (#2011)
  • "failed to find valid initial parameters" without the use of truncated (#2476)
  • Unify src/mcmc/Inference.jl methods (#2631)
  • Could not run even the sample code for Gaussian Mixture Models or Infinite Mixture Models (#2690)
  • type unstable code in Gibbs fails with Enzyme (#2706)

v0.41.0

22 Oct 16:20
0eb8576


Turing v0.41.0

Diff since v0.40.5

DynamicPPL 0.38

Turing.jl v0.41 brings with it all the underlying changes in DynamicPPL 0.38.
Please see the DynamicPPL changelog for full details; in this section we only describe the changes that directly affect end-users of Turing.jl.

Performance

A number of functions, such as returned and predict, have substantially better performance in this release.

ProductNamedTupleDistribution

Distributions.ProductNamedTupleDistribution can now be used on the right-hand side of ~ in Turing models.
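For example (a minimal sketch; the model and variable names are placeholders), a single ~ statement can now draw a NamedTuple of values:

using Turing
using Distributions

@model function demo(y)
    # product_distribution on a NamedTuple builds a ProductNamedTupleDistribution
    θ ~ product_distribution((μ = Normal(0, 1), σ = Exponential(1)))
    y ~ Normal(θ.μ, θ.σ)
end

sample(demo(1.2), NUTS(), 100)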

Initial parameters

Initial parameters for MCMC sampling must now be specified in a different form.
You still need to use the initial_params keyword argument to sample, but the allowed values are different.
For almost all samplers in Turing.jl (except Emcee) this should now be a DynamicPPL.AbstractInitStrategy.

There are three kinds of initialisation strategies provided out of the box with Turing.jl (they are exported, so you can use them directly after using Turing):

  • InitFromPrior(): Sample from the prior distribution. This is the default for most samplers in Turing.jl (if you don't specify initial_params).

  • InitFromUniform(a, b): Sample uniformly from [a, b] in linked space. This is the default for Hamiltonian samplers. If a and b are not specified it defaults to [-2, 2], which preserves the behaviour in previous versions (and mimics that of Stan).

  • InitFromParams(p): Explicitly provide a set of initial parameters. Note: p must be either a NamedTuple or an AbstractDict{<:VarName}; it can no longer be a Vector. Parameters must be provided in unlinked space, even if the sampler later performs linking.

    • For this release of Turing.jl, you can also provide a NamedTuple or AbstractDict{<:VarName} and this will be automatically wrapped in InitFromParams for you. This is an intermediate measure for backwards compatibility, and will eventually be removed.

This change is made because Vectors are semantically ambiguous.
It is not clear which element of the vector corresponds to which variable in the model, nor is it clear whether the parameters are in linked or unlinked space.
Previously, both of these would depend on the internal structure of the VarInfo, which is an implementation detail.
In contrast, the behaviour of AbstractDicts and NamedTuples is invariant to the ordering of variables and it is also easier for readers to understand which variable is being set to which value.
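As an illustration (a minimal sketch; the model is hypothetical), all three strategies are passed via the initial_params keyword:

using Turing

@model function coinflip(y)
    p ~ Beta(1, 1)
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)
    end
end
model = coinflip([1, 0, 1, 1])

# Default for most samplers: draw initial values from the prior
sample(model, NUTS(), 100; initial_params=InitFromPrior())

# Default for Hamiltonian samplers: uniform in linked (unconstrained) space
sample(model, NUTS(), 100; initial_params=InitFromUniform(-2, 2))

# Explicit initial values, given in unlinked space
sample(model, NUTS(), 100; initial_params=InitFromParams((p = 0.4,)))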

If you were previously using varinfo[:] to extract a vector of initial parameters, you can now use Dict(k => varinfo[k] for k in keys(varinfo)) to extract a Dict of initial parameters.

For more details about initialisation you can also refer to the main TuringLang docs, and/or the DynamicPPL API docs.

resume_from and loadstate

The resume_from keyword argument to sample is now removed.
Instead of sample(...; resume_from=chain) you can use sample(...; initial_state=loadstate(chain)) which is entirely equivalent.
loadstate is now exported from Turing rather than DynamicPPL.

Note that loadstate only works for MCMCChains.Chains.
For FlexiChains users please consult the FlexiChains docs directly where this functionality is described in detail.
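For example (assuming an existing model; save_state=true, which stores the sampler state in the returned chain, is needed so that loadstate can retrieve it):

# Save the sampler state so the chain can be resumed later
chain1 = sample(model, NUTS(), 500; save_state=true)

# Previously: sample(model, NUTS(), 500; resume_from=chain1)
chain2 = sample(model, NUTS(), 500; initial_state=loadstate(chain1))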

pointwise_logdensities

pointwise_logdensities(model, chn), pointwise_loglikelihoods(...), and pointwise_prior_logdensities(...) now return an MCMCChains.Chains object if chn is itself an MCMCChains.Chains object.
The old behaviour of returning an OrderedDict is still available: you just need to pass OrderedDict as the third argument, i.e., pointwise_logdensities(model, chn, OrderedDict).
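For example (assuming model and chn come from an earlier call to sample):

using OrderedCollections: OrderedDict

# New default: returns an MCMCChains.Chains object
lds = pointwise_logdensities(model, chn)

# Old behaviour: pass OrderedDict as the third argument
lds_dict = pointwise_logdensities(model, chn, OrderedDict)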

Initial step in MCMC sampling

HMC and NUTS samplers no longer take an extra single step before starting the chain.
This means that if you do not discard any samples at the start, the first sample will be the initial parameters (which may be user-provided).

Note that if the initial sample is included, the corresponding sampler statistics will be missing.
Due to a technical limitation of MCMCChains.jl, this causes all indexing into MCMCChains to return Union{Float64, Missing} or similar.
If you want the old behaviour, you can discard the first sample (e.g. using discard_initial=1).
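For example (assuming an existing model), to recover the previous behaviour:

# Drop the initial sample so the sampler statistics contain no missing values
chain = sample(model, NUTS(), 1000; discard_initial=1)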

Merged pull requests:

  • [breaking] v0.41 (#2667) (@penelopeysm)
  • Compatibility with DynamicPPL 0.38 + InitContext (#2676) (@penelopeysm)
  • Remove Sampler, remove InferenceAlgorithm, transfer initialstep, init_strategy, and other functions from DynamicPPL to Turing (#2689) (@penelopeysm)

Closed issues:

  • Do we need resume_from now that we have initial_state? (#2171)
  • Introduce a docs FAQ section (#2431)
  • These tests should be in DynamicPPL or removed (#2475)
  • Support for Distributions.ProductNamedTupleDistribution (#2659)
  • Overly aggressive concretisation in bundle_samples (#2666)
  • broken docs on website (#2698)

v0.40.5

17 Oct 06:42
cabe73f


Turing v0.40.5

Diff since v0.40.4

Bump Optimization.jl compatibility to include v5.


v0.40.4

06 Oct 12:11
d8fbe78


Turing v0.40.4

Diff since v0.40.3

Fixes a bug where initial_state was not respected for NUTS if resume_from was not also specified.


Closed issues:

  • Allow user to disable unnecessary model evaluations after #2202 (#2215)
  • Integrating Turing and MarginalLogDensities (#2398)
  • Simplify the tilde-pipeline in DynamicPPL (#2422)
  • Rethinking Threaded/Multichain Callbacks (#2568)
  • Gibbs / dynamic model / PG + ESS reproducibility (#2626)
  • Fixing initial_params when using NUTS does not fix the initial parameters (#2673)
  • initial_state is not used when resume_from is not specified (#2679)

v0.40.3

08 Sep 16:31
296f654


Turing v0.40.3

Diff since v0.40.2

This patch makes the resume_from keyword argument work correctly when sampling multiple chains.

In the process this also fixes a method ambiguity caused by a bugfix in DynamicPPL 0.37.2.

This patch means that if you are using RepeatSampler() to sample from a model, and you want to obtain MCMCChains.Chains from it, you need to specify sample(...; chain_type=MCMCChains.Chains).
This only applies if the sampler itself is a RepeatSampler; it doesn't apply if you are using RepeatSampler within another sampler like Gibbs.
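For example (a minimal sketch; wrapping MH() with 10 repeats is a placeholder choice, and an existing model is assumed):

using MCMCChains

spl = RepeatSampler(MH(), 10)

# chain_type must now be given explicitly to get an MCMCChains.Chains back
chain = sample(model, spl, 1000; chain_type=MCMCChains.Chains)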

Merged pull requests:

  • Fix typos in comments and variable names (#2665) (@Copilot)
  • Fix multiple-chain method ambiguity (#2670) (@penelopeysm)

Closed issues:

  • DynamicPPL documentation is incorrect (#2661)
  • Poisson with Dirichlet prior samples negative values (#2663)
  • RepeatSampler doesn't use MCMCChains.Chains (#2669)

v0.40.2

18 Aug 15:43
5a3f7aa


Turing v0.40.2

Diff since v0.40.1

sample(model, NUTS(), N; verbose=false) now suppresses the 'initial step size' message.


Closed issues:

  • Option to suppress "Warning" and "Info" statements (#1398)

v0.40.1

12 Aug 23:23
4862ad6


Turing v0.40.1

Diff since v0.40.0

This is an extra release to trigger the Documenter.jl build (GitHub was having an outage when 0.40.0 was released). There are no code changes.

Closed issues:

  • Broadcasting , addprob and PPL not functionning with PG and SMC sampler (#1996)
  • values of logp in chain do not include logabsdetjac term (#2617)
  • Skip extra re-evaluation with Prior (#2641)