bilby.core.sampler.dynesty_utils.LivePointSampler
- class bilby.core.sampler.dynesty_utils.LivePointSampler(loglikelihood, prior_transform, npdim, live_points, method, update_interval, first_update, rstate, queue_size, pool, use_pool, kwargs=None, ncdim=0, blob=False, logvol_init=0)[source]
Bases: UnitCubeSampler
Modified version of the dynesty UnitCubeSampler that adapts the MCMC length in addition to the proposal scale; this corresponds to bound=live.

In order to support live-point based proposals, e.g., differential evolution (diff), the live points are added to the kwargs passed to the evolve method.

Note that this does not perform ellipsoid clustering as with the bound=multi option. If ellipsoid-based proposals are used, e.g., volumetric, consider using the MultiEllipsoidLivePointSampler (sample=live-multi).
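As orientation, the following sketch shows how this sampler is typically reached through bilby's high-level interface rather than instantiated directly. The likelihood, prior, label, and outdir are invented for illustration, and the routing of sample="rwalk" through bound=live reflects this page's description of live-point based proposals:

import numpy as np
import bilby

class GaussianLikelihood(bilby.Likelihood):
    """Toy likelihood: Gaussian data with unknown mean and unit variance."""

    def __init__(self, data):
        super().__init__(parameters={"mu": None})
        self.data = data

    def log_likelihood(self):
        residual = self.data - self.parameters["mu"]
        return -0.5 * np.sum(residual**2) - 0.5 * len(self.data) * np.log(2 * np.pi)

data = np.random.default_rng(42).normal(1.0, 1.0, 100)
priors = dict(mu=bilby.core.prior.Uniform(-5, 5, "mu"))

result = bilby.run_sampler(
    likelihood=GaussianLikelihood(data),
    priors=priors,
    sampler="dynesty",
    sample="rwalk",  # a live-point based proposal, handled via bound=live
    nlive=500,
    label="live_point_demo",
    outdir="outdir",
)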
- __init__(loglikelihood, prior_transform, npdim, live_points, method, update_interval, first_update, rstate, queue_size, pool, use_pool, kwargs=None, ncdim=0, blob=False, logvol_init=0)[source]
Initializes and returns a sampler object for Static Nested Sampling.
- Parameters:
- loglikelihood: function
Function returning ln(likelihood) given parameters as a 1-d ~numpy array of length ndim.
- prior_transform: function
Function translating a unit cube to the parameter space according to the prior. The input is a 1-d ~numpy array with length ndim, where each value is in the range [0, 1). The return value should also be a 1-d ~numpy array with length ndim, where each value is a parameter. The return value is passed to the loglikelihood function. For example, for a 2 parameter model with flat priors in the range [0, 2), the function would be (see also the extended sketch after this parameter list):
def prior_transform(u): return 2.0 * u
- ndim: int
Number of parameters returned by prior_transform and accepted by loglikelihood.
- nlive: int, optional
Number of “live” points. Larger numbers result in a more finely sampled posterior (more accurate evidence), but also a larger number of iterations required to converge. Default is 500.
- bound: {‘none’, ‘single’, ‘multi’, ‘balls’, ‘cubes’}, optional
Method used to approximately bound the prior using the current set of live points. Conditions the sampling methods used to propose new live points. Choices are no bound (‘none’), a single bounding ellipsoid (‘single’), multiple bounding ellipsoids (‘multi’), balls centered on each live point (‘balls’), and cubes centered on each live point (‘cubes’). Default is ‘multi’.
- sample: {‘auto’, ‘unif’, ‘rwalk’, ‘slice’, ‘rslice’, ‘hslice’, callable}, optional
Method used to sample uniformly within the likelihood constraint, conditioned on the provided bounds. Unique methods available are: uniform sampling within the bounds (‘unif’), random walks with fixed proposals (‘rwalk’), multivariate slice sampling along preferred orientations (‘slice’), “random” slice sampling along all orientations (‘rslice’), “Hamiltonian” slices along random trajectories (‘hslice’), and any callable function which follows the pattern of the sample methods defined in dynesty.sampling. ‘auto’ selects the sampling method based on the dimensionality of the problem (from ndim). When ndim < 10, this defaults to ‘unif’. When 10 <= ndim <= 20, this defaults to ‘rwalk’. When ndim > 20, this defaults to ‘hslice’ if a gradient is provided and ‘rslice’ otherwise. ‘slice’ is provided as an alternative to ‘rslice’. Default is ‘auto’.
- periodic: iterable, optional
A list of indices for parameters with periodic boundary conditions. These parameters will not have their positions constrained to be within the unit cube, enabling smooth behavior for parameters that may wrap around the edge. Default is None (i.e. no periodic boundary conditions).
- reflective: iterable, optional
A list of indices for parameters with reflective boundary conditions. These parameters will not have their positions constrained to be within the unit cube, enabling smooth behavior for parameters that may reflect at the edge. Default is None (i.e. no reflective boundary conditions).
- update_interval: int or float, optional
If an integer is passed, only update the proposal distribution every update_interval-th likelihood call. If a float is passed, update the proposal after every round(update_interval * nlive)-th likelihood call. Larger update intervals can be more efficient when the likelihood function is quick to evaluate. Default behavior is to target a roughly constant change in prior volume, with 1.5 for ‘unif’, 0.15 * walks for ‘rwalk’, 0.9 * ndim * slices for ‘slice’, 2.0 * slices for ‘rslice’, and 25.0 * slices for ‘hslice’.
- first_update: dict, optional
A dictionary containing parameters governing when the sampler should first update the bounding distribution from the unit cube (‘none’) to the one specified by sample. Options are the minimum number of likelihood calls (‘min_ncall’) and the minimum allowed overall efficiency in percent (‘min_eff’). Defaults are 2 * nlive and 10., respectively.
- npdim: int, optional
Number of parameters accepted by prior_transform. This might differ from ndim in the case where a parameter of loglikelihood is dependent upon multiple independently distributed parameters, some of which may be nuisance parameters.
- rstate: ~numpy.random.Generator, optional
~numpy.random.Generator instance. If not given, the global random state of the ~numpy.random module will be used.
- queue_size: int, optional
Carry out likelihood evaluations in parallel by queueing up new live point proposals using (at most) queue_size many threads. Each thread independently proposes new live points until the proposal distribution is updated. If no value is passed, this defaults to pool.size (if a pool has been provided) and 1 otherwise (no parallelism).
- pool: user-provided pool, optional
Use this pool of workers to execute operations in parallel.
- use_pool: dict, optional
A dictionary containing flags indicating where a pool should be used to execute operations in parallel. These govern whether prior_transform is executed in parallel during initialization (‘prior_transform’), loglikelihood is executed in parallel during initialization (‘loglikelihood’), live points are proposed in parallel during a run (‘propose_point’), and bounding distributions are updated in parallel during a run (‘update_bound’). Default is True for all options.
- live_points: list of 3 ~numpy.ndarray each with shape (nlive, ndim)
A set of live points used to initialize the nested sampling run. Contains live_u, the coordinates on the unit cube, live_v, the transformed variables, and live_logl, the associated loglikelihoods. By default, if these are not provided the initial set of live points will be drawn uniformly from the unit npdim-cube. WARNING: It is crucial that the initial set of live points have been sampled from the prior. Failure to provide a set of valid live points will result in incorrect results.
- logl_args: iterable, optional
Additional arguments that can be passed to loglikelihood.
- logl_kwargs: dict, optional
Additional keyword arguments that can be passed to loglikelihood.
- ptform_args: iterable, optional
Additional arguments that can be passed to prior_transform.
- ptform_kwargs: dict, optional
Additional keyword arguments that can be passed to prior_transform.
- gradient: function, optional
A function which returns the gradient corresponding to the provided loglikelihood with respect to the unit cube. If provided, this will be used when computing reflections when sampling with ‘hslice’. If not provided, gradients are approximated numerically using 2-sided differencing.
- grad_args: iterable, optional
Additional arguments that can be passed to gradient.
- grad_kwargs: dict, optional
Additional keyword arguments that can be passed to gradient.
- compute_jac: bool, optional
Whether to compute and apply the Jacobian dv/du from the target space v to the unit cube u when evaluating the gradient. If False, the gradient provided is assumed to be already defined with respect to the unit cube. If True, the gradient provided is assumed to be defined with respect to the target space so the Jacobian needs to be numerically computed and applied. Default is False.
- enlarge: float, optional
Enlarge the volumes of the specified bounding object(s) by this fraction. The preferred method is to determine this organically using bootstrapping. If bootstrap > 0, this defaults to 1.0. If bootstrap = 0, this instead defaults to 1.25.
- bootstrap: int, optional
Compute this many bootstrapped realizations of the bounding objects. Use the maximum distance found to the set of points left out during each iteration to enlarge the resulting volumes. Can lead to unstable bounding ellipsoids. Default is None (no bootstrap unless the sampler is uniform). If bootstrap is set to zero, bootstrap is disabled.
- walks: int, optional
For the ‘rwalk’ sampling option, the minimum number of steps (minimum 2) before proposing a new live point. Default is 25.
- facc: float, optional
The target acceptance fraction for the ‘rwalk’ sampling option. Default is 0.5. Bounded to be between [1. / walks, 1.].
- slices: int, optional
For the ‘slice’, ‘rslice’, and ‘hslice’ sampling options, the number of times to execute a “slice update” before proposing a new live point. Default is 3 for ‘slice’ and 3 + ndim for ‘rslice’ and ‘hslice’. Note that ‘slice’ cycles through all dimensions when executing a “slice update”.
- fmove: float, optional
The target fraction of samples that are proposed along a trajectory (i.e. not reflecting) for the ‘hslice’ sampling option. Default is 0.9.
- max_move: int, optional
The maximum number of timesteps allowed for ‘hslice’ per proposal forwards and backwards in time. Default is 100.
- update_func: function, optional
Any callable function which takes in a blob and scale as input and returns a modification to the internal scale as output. Must follow the pattern of the update methods defined in dynesty.nestedsamplers. If provided, this will supersede the default functions used to update proposals. In the case where a custom callable function is passed to sample but no similar function is passed to update_func, this will default to no update.
- ncdim: int, optional
The number of clustering dimensions. The first ncdim dimensions will be sampled using the sampling method, the remaining dimensions will just sample uniformly from the prior distribution. If this is None (default), this will default to npdim.
- blob: bool, optional
The default value is False. If it is True, then the log-likelihood should return a tuple of logl and a numpy-array “blob” that will be stored as part of the chain. That blob can contain auxiliary information computed inside the likelihood function.
- Returns:
- sampler: sampler from nestedsamplers
An initialized instance of the chosen sampler specified via bound.
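As referenced in the prior_transform entry above, a slightly fuller prior_transform sketch for mixed priors; the two-parameter setup (one flat prior, one Gaussian prior) is invented for illustration:

import numpy as np
from scipy.stats import norm

def prior_transform(u):
    # u is a 1-d array with values in [0, 1); return one parameter per entry.
    x = np.empty_like(u)
    x[0] = 2.0 * u[0]      # flat prior on [0, 2)
    x[1] = norm.ppf(u[1])  # standard normal prior via the inverse CDF
    return x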
- __call__(*args, **kwargs)
Call self as a function.
Methods

- __init__(loglikelihood, prior_transform, ...): Initializes and returns a sampler object for Static Nested Sampling.
- add_final_live([print_progress, print_func]): A wrapper that executes the loop adding the final live points.
- add_live_points(): Add the remaining set of live points to the current set of dead points.
- evolve_point(*args)
- propose_live(*args): Ensure the live points are passed to the proposal function when live point-based proposals are used.
- propose_point(*args)
- propose_unif(*args): Propose a new live point by sampling uniformly within the unit cube.
- reset(): Re-initialize the sampler.
- restore(fname[, pool]): Restore the dynamic sampler from a file.
- run_nested([maxiter, maxcall, dlogz, ...]): A wrapper that executes the main nested sampling loop.
- sample([maxiter, maxcall, dlogz, logl_max, ...]): The main nested sampling loop.
- save(fname): Save the state of the dynamic sampler in a file.
- update([subset]): Update the unit cube bound.
- update_bound_if_needed(loglstar[, ncall, force]): Update the bound depending on the situation.
- update_hslice(blob[, update]): Update the Hamiltonian slice proposal scale based on the relative amount of time spent moving vs. reflecting.
- update_proposal(*args, **kwargs)
- update_rwalk(blob[, update]): Update the proposal parameters based on the number of accepted steps and MCMC chain length.
- update_slice(blob[, update]): Update the slice proposal scale based on the relative size of the slices compared to our initial guess.
- update_unif(blob[, update]): Filler function.
- update_user(blob[, update]): Update the proposal parameters based on the number of accepted steps and MCMC chain length.
Attributes

- citations: Return list of papers that should be cited given the specified configuration of the sampler.
- n_effective: Estimate the effective number of posterior samples using the Kish Effective Sample Size (ESS) where ESS = sum(wts)^2 / sum(wts^2).
- rebuild
- results: Saved results from the nested sampling run.
- add_final_live(print_progress=True, print_func=None)[source]
A wrapper that executes the loop adding the final live points. Adds the final set of live points to the pre-existing sequence of dead points from the current nested sampling run.
- Parameters:
- print_progress: bool, optional
Whether or not to output a simple summary of the current run that updates with each iteration. Default is True.
- print_func: function, optional
A function that prints out the current state of the sampler. If not provided, the default results.print_fn() is used.
- add_live_points()[source]
Add the remaining set of live points to the current set of dead points. Instantiates a generator that will be called by the user. Returns the same outputs as sample().
- property citations
Return list of papers that should be cited given the specified configuration of the sampler.
- property n_effective
Estimate the effective number of posterior samples using the Kish Effective Sample Size (ESS) where ESS = sum(wts)^2 / sum(wts^2). Note that this is len(wts) when wts are uniform and 1 if there is only one non-zero element in wts.
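The Kish estimate quoted above is straightforward to reproduce standalone; a minimal sketch (not part of this class):

import numpy as np

def kish_ess(wts):
    # ESS = sum(wts)^2 / sum(wts^2)
    wts = np.asarray(wts, dtype=float)
    return np.sum(wts) ** 2 / np.sum(wts**2)

assert np.isclose(kish_ess(np.ones(100)), 100.0)   # uniform weights -> len(wts)
assert np.isclose(kish_ess([0.0, 0.0, 3.0]), 1.0)  # one non-zero weight -> 1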
- propose_live(*args)[source]
We need to make sure the live points are passed to the proposal function if we are using live point-based proposals.
- static restore(fname, pool=None)[source]
Restore the dynamic sampler from a file. It is assumed that the file was created using the .save() method of DynamicNestedSampler or as a result of checkpointing during run_nested().
- Parameters:
- fname: string
Filename of the save file.
- pool: object, optional
The multiprocessing pool-like object that supports map() calls that will be used in the restored object.
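A minimal sketch of the checkpoint/restore round trip, assuming a previous run checkpointed to a file (the filename is hypothetical):

from bilby.core.sampler.dynesty_utils import LivePointSampler

# Restore the state written by save() or by checkpointing during run_nested()
sampler = LivePointSampler.restore("checkpoint.pickle")

# Continue the interrupted run from the saved state
sampler.run_nested(resume=True)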
- property results
Saved results from the nested sampling run. If bounding distributions were saved, those are also returned.
- run_nested(maxiter=None, maxcall=None, dlogz=None, logl_max=inf, n_effective=None, add_live=True, print_progress=True, print_func=None, save_bounds=True, checkpoint_file=None, checkpoint_every=60, resume=False)[source]
A wrapper that executes the main nested sampling loop. Iteratively replace the worst live point with a sample drawn uniformly from the prior until the provided stopping criteria are reached.
- Parameters:
- maxiter: int, optional
Maximum number of iterations. Iteration may stop earlier if the termination condition is reached. Default is sys.maxsize (no limit).
- maxcall: int, optional
Maximum number of likelihood evaluations. Iteration may stop earlier if the termination condition is reached. Default is sys.maxsize (no limit).
- dlogz: float, optional
Iteration will stop when the estimated contribution of the remaining prior volume to the total evidence falls below this threshold. Explicitly, the stopping criterion is ln(z + z_est) - ln(z) < dlogz, where z is the current evidence from all saved samples and z_est is the estimated contribution from the remaining volume. If add_live is True, the default is 1e-3 * (nlive - 1) + 0.01. Otherwise, the default is 0.01.
- logl_max: float, optional
Iteration will stop when the sampled ln(likelihood) exceeds the threshold set by logl_max. Default is no bound (np.inf).
- n_effective: int, optional
Minimum number of effective posterior samples. If the estimated “effective sample size” (ESS) exceeds this number, sampling will terminate. Default is no ESS (np.inf). This option is deprecated and will be removed in a future release.
- add_live: bool, optional
Whether or not to add the remaining set of live points to the list of samples at the end of each run. Default is True.
- print_progress: bool, optional
Whether or not to output a simple summary of the current run that updates with each iteration. Default is True.
- print_func: function, optional
A function that prints out the current state of the sampler. If not provided, the default results.print_fn() is used.
- save_bounds: bool, optional
Whether or not to save past bounding distributions used to bound the live points internally. Default is True.
- checkpoint_file: string, optional
If not None, the state of the sampler will be saved into this file every checkpoint_every seconds.
- checkpoint_every: float, optional
The number of seconds between checkpoints that will save the internal state of the sampler. The sampler will also be saved at the end of the run irrespective of checkpoint_every.
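Putting the stopping and checkpointing options together, a hedged sketch of a call; sampler is an already-initialized instance, and the filename and numbers are illustrative:

sampler.run_nested(
    dlogz=0.1,                            # stop once the remaining evidence contributes < 0.1 in ln-space
    maxcall=1_000_000,                    # hard cap on likelihood evaluations
    checkpoint_file="checkpoint.pickle",  # hypothetical checkpoint path
    checkpoint_every=600,                 # write a checkpoint every 10 minutes
)
results = sampler.results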
- sample(maxiter=None, maxcall=None, dlogz=0.01, logl_max=inf, n_effective=inf, add_live=True, save_bounds=True, save_samples=True, resume=False)[source]
The main nested sampling loop. Iteratively replace the worst live point with a sample drawn uniformly from the prior until the provided stopping criteria are reached. Instantiates a generator that will be called by the user.
- Parameters:
- maxiter: int, optional
Maximum number of iterations. Iteration may stop earlier if the termination condition is reached. Default is sys.maxsize (no limit).
- maxcall: int, optional
Maximum number of likelihood evaluations. Iteration may stop earlier if the termination condition is reached. Default is sys.maxsize (no limit).
- dlogz: float, optional
Iteration will stop when the estimated contribution of the remaining prior volume to the total evidence falls below this threshold. Explicitly, the stopping criterion is ln(z + z_est) - ln(z) < dlogz, where z is the current evidence from all saved samples and z_est is the estimated contribution from the remaining volume. Default is 0.01.
- logl_max: float, optional
Iteration will stop when the sampled ln(likelihood) exceeds the threshold set by logl_max. Default is no bound (np.inf).
- n_effective: int, optional
Minimum number of effective posterior samples. If the estimated “effective sample size” (ESS) exceeds this number, sampling will terminate. Default is no ESS (np.inf).
- add_live: bool, optional
Whether or not to add the remaining set of live points to the list of samples when calculating n_effective. Default is True.
- save_bounds: bool, optional
Whether or not to save past distributions used to bound the live points internally. Default is True.
- save_samples: bool, optional
Whether or not to save past samples from the nested sampling run (along with other ancillary quantities) internally. Default is True.
- Returns:
- worst: int
Index of the live point with the worst likelihood. This is our new dead point sample.
- ustar: ~numpy.ndarray with shape (npdim,)
Position of the sample.
- vstar: ~numpy.ndarray with shape (ndim,)
Transformed position of the sample.
- loglstar: float
Ln(likelihood) of the sample.
- logvol: float
Ln(prior volume) within the sample.
- logwt: float
Ln(weight) of the sample.
- logz: float
Cumulative ln(evidence) up to the sample (inclusive).
- logzvar: float
Estimated cumulative variance on logz (inclusive).
- h: float
Cumulative information up to the sample (inclusive).
- nc: int
Number of likelihood calls performed before the new live point was accepted.
- worst_it: int
Iteration when the live (now dead) point was originally proposed.
- boundidx: int
Index of the bound the dead point was originally drawn from.
- bounditer: int
Index of the bound being used at the current iteration.
- eff: float
The cumulative sampling efficiency (in percent).
- delta_logz: float
The estimated remaining evidence expressed as the ln(ratio) of the current evidence.
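Because sample() instantiates a generator, it can be driven manually. A sketch, assuming the yields follow the documented order above:

# Iterate the nested sampling loop by hand, unpacking the documented outputs.
for it, state in enumerate(sampler.sample(dlogz=0.01)):
    (worst, ustar, vstar, loglstar, logvol, logwt,
     logz, logzvar, h, nc, worst_it, boundidx,
     bounditer, eff, delta_logz) = state
    if it % 1000 == 0:
        print(f"iteration {it}: logz = {logz:.3f}, delta_logz = {delta_logz:.3f}")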
- save(fname)[source]
Save the state of the dynamic sampler in a file.
- Parameters:
- fname: string
Filename of the save file.
- update_bound_if_needed(loglstar, ncall=None, force=False)[source]
Update the bound depending on the situation. The arguments are loglstar and the number of calls; if force is True, we update the bound no matter what.
- update_hslice(blob, update=True)[source]
Update the Hamiltonian slice proposal scale based on the relative amount of time spent moving vs. reflecting. The keyword update determines if we are just accumulating the number of steps or actually adjusting the scale.
- update_rwalk(blob, update=True)[source]
Update the proposal parameters based on the number of accepted steps and MCMC chain length.
There are a number of logical checks performed:
- if the ACT tracking rwalk method is being used and any parallel process has an empty cache, set the rebuild flag to force the cache to rebuild at the next call. This improves the efficiency when using parallelisation.
- update the walks parameter to asymptotically approach the desired number of accepted steps for the FixedRWalk proposal.
- update the ellipsoid scale if the ellipsoid proposals are being used.
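The exact update rule is internal to bilby, but the idea of asymptotically approaching a desired number of accepted steps can be illustrated with a simple running average. This is an illustration only, not bilby's actual implementation; target_accept and smoothing are invented for the sketch:

def adapt_walks(walks, accepted, target_accept=60, smoothing=0.9):
    # Estimate the chain length that would have produced the target number
    # of accepted steps, then move part of the way towards it.
    estimated = walks * target_accept / max(accepted, 1)
    return smoothing * walks + (1 - smoothing) * estimated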
- update_slice(blob, update=True)[source]
Update the slice proposal scale based on the relative size of the slices compared to our initial guess. For slice sampling the scale is only ‘advisory’ in the sense that the right scale will just speed up sampling as we’ll have to expand or contract less. It won’t affect the quality of the samples much. The keyword update determines if we are just accumulating the number of steps or actually adjusting the scale.
- update_user(blob, update=True)[source]
Update the proposal parameters based on the number of accepted steps and MCMC chain length.
There are a number of logical checks performed:
- if the ACT tracking rwalk method is being used and any parallel process has an empty cache, set the rebuild flag to force the cache to rebuild at the next call. This improves the efficiency when using parallelisation.
- update the walks parameter to asymptotically approach the desired number of accepted steps for the FixedRWalk proposal.
- update the ellipsoid scale if the ellipsoid proposals are being used.