G’day everyone. This post is to update you on some work I’ve been doing (along with some collaborators, notably João Faria) on DNest4. So, you’ll likely only care if you’re a user of that package. There have been enough additions recently that I’ve decided to call it a new version, 0.2.0. Everything is backwards compatible, so don’t stress. But some of the new features might be useful to you.

What are they? Here’s a partial list:

### Probability Distribution Classes

There is now a selection of classes available to represent some standard-ish probability distributions. This can make it easier to implement the `from_prior` and `perturb` member functions in a model. For example, suppose you want a normal prior centered at 30 with a standard deviation of 10. In the past, you had to implement it like this:

```
#include <cmath>

void MyModel::from_prior(DNest4::RNG& rng)
{
    x = 30.0 + 10.0*rng.randn();
}

double MyModel::perturb(DNest4::RNG& rng)
{
    double logH = 0.0;
    logH -= -0.5*pow((x - 30.0)/10.0, 2);
    x += 10.0*rng.randh(); // Step scaled to the prior width
    logH += -0.5*pow((x - 30.0)/10.0, 2);
    return logH;
}
```

Now you can use the class `DNest4::Gaussian`, an object of which represents a gaussian (normal) distribution. An alternative to the above is then:

```
#include "DNest4/code/Distributions/Gaussian.h"

void MyModel::from_prior(DNest4::RNG& rng)
{
    DNest4::Gaussian gaussian(30.0, 10.0);
    x = gaussian.generate(rng);
}

double MyModel::perturb(DNest4::RNG& rng)
{
    double logH = 0.0;
    DNest4::Gaussian gaussian(30.0, 10.0);
    logH += gaussian.perturb(x, rng); // Perturbs x in-place
    return logH;
}
```

This means you don’t have to remember a bunch of details about normal distributions; those are taken care of by the Gaussian class. In principle, instead of constructing a new Gaussian object in each function, I should have just put one in the `MyModel` class and used it in both `from_prior` and `perturb`, but you get the idea.
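To illustrate that last point, here is a minimal, self-contained sketch of the "distribution object as a class member" pattern. `StandInGaussian` is a hypothetical stand-in that mimics the `generate`/`perturb` interface shown above so the example compiles without DNest4 installed; it is not the actual `DNest4::Gaussian` implementation.

```
#include <cmath>
#include <random>

// Hypothetical stand-in mimicking the generate/perturb interface of
// DNest4::Gaussian. Not the DNest4 implementation.
class StandInGaussian
{
    double mu, sigma;

public:
    StandInGaussian(double mu, double sigma) : mu(mu), sigma(sigma) {}

    // Log density up to an additive constant
    double log_pdf(double x) const
    { return -0.5*std::pow((x - mu)/sigma, 2); }

    template<class RNG>
    double generate(RNG& rng) const
    {
        std::normal_distribution<double> n(mu, sigma);
        return n(rng);
    }

    // Perturbs x in-place and returns the log prior ratio
    template<class RNG>
    double perturb(double& x, RNG& rng) const
    {
        double logH = -log_pdf(x);                 // Subtract old log density
        std::normal_distribution<double> step(0.0, sigma); // Symmetric proposal
        x += step(rng);
        logH += log_pdf(x);                        // Add new log density
        return logH;
    }
};

// The pattern: one distribution object, shared by both member functions.
class MyModel
{
    StandInGaussian prior{30.0, 10.0};
    double x;

public:
    template<class RNG>
    void from_prior(RNG& rng) { x = prior.generate(rng); }

    template<class RNG>
    double perturb(RNG& rng) { return prior.perturb(x, rng); }

    double get_x() const { return x; }
};
```

Any templated RNG with `operator()` (e.g. `std::mt19937`) works here, whereas the real classes take a `DNest4::RNG&`.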

The collection of available distributions is a bit eclectic (look in the Distributions subdirectory to see them). I’m happy to accept pull requests if anyone wants to implement boring old things like gammas. 🙂 I might get around to it at some point.

### Better automatic detection of the number of levels

One of the annoying things about Nested Sampling is deciding how much compression is needed, i.e., how many iterations (in classic NS) or how many levels (in Diffusive NS). For a while, DNest4 has tried to guess the appropriate number of levels on the fly if you set the maximum number of levels to 0 in your OPTIONS file. However, the method I used previously was failing on some problems: when the mixing wasn’t that good, it wouldn’t create quite enough levels.

This is always a risk, but I think the modified version of the recipe I used for this is better than the old one. I’ll spare you the details.

### More aggressive compilation options

In all the Makefiles, compilation is now done with `-O3 -march=native`, which can slightly improve performance. However, it means any binaries generated may be suboptimal (or even crash) if you try to run them on a different computer from where you compiled them. I don’t think anyone is doing this, but in case you are, you’ll just need to get rid of the `-march=native`.
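For reference, the change amounts to one compiler-flags line in the Makefile. A sketch only; the exact variable name and accompanying flags in the actual DNest4 Makefiles may differ:

```
# Portable build: -march=native removed, -O3 kept
CXXFLAGS = -std=c++11 -O3 -Wall
```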

### An efficiency trick: pre-rejection

This is something I should have been doing ages ago, but some of the examples now demonstrate it. Suppose your model has an expensive part in it, so its perturb function looks like this:

```
double MyModel::perturb(DNest4::RNG& rng)
{
    double logH = 0.0;
    logH += something.perturb(rng); // Perturb some parameters
    expensive();                    // Do an expensive computation
    return logH;
}
```

If the prior is implemented such that `logH` might come back very negative (which would probably cause a rejection based on the prior alone), then all the expensive computation is a waste of time. In this case, it’s more efficient to pre-reject the proposal based on the prior only. This has to be done in a way which satisfies overall detailed balance. The trick is to insert a few lines:

```
double MyModel::perturb(DNest4::RNG& rng)
{
    double logH = 0.0;
    logH += something.perturb(rng); // Perturb some parameters

    // Pre-rejection trick
    if(rng.rand() >= exp(logH)) // Reject?
        return -1E300;
    else
        logH = 0.0;

    expensive(); // Do an expensive computation
    return logH;
}
```

In some models (e.g., where `something` is an RJObject with a log-uniform prior for the number of components) I have achieved a factor of two speedup by doing this.
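To see the trick in action outside DNest4, here is a self-contained Metropolis sketch in plain C++. All names are illustrative: the prior is N(30, 10), and the "expensive" part is a stand-in threshold check `x > 25`, loosely analogous to a likelihood constraint at a DNest4 level. As in the snippet above, proposals are filtered by the cheap prior ratio first, and the expensive check only runs for survivors.

```
#include <cmath>
#include <random>
#include <vector>

// Illustrative pre-rejection demo; not DNest4 code.
struct PreRejectionDemo
{
    std::mt19937 rng{42};
    std::uniform_real_distribution<double> unif{0.0, 1.0};
    std::normal_distribution<double> step{0.0, 3.0};
    long expensive_calls = 0;

    // Log prior density up to a constant: N(30, 10)
    double log_prior(double x) const
    { return -0.5*std::pow((x - 30.0)/10.0, 2); }

    // Stand-in for expensive(); counts how often it actually runs
    bool expensive_check(double x)
    { ++expensive_calls; return x > 25.0; }

    std::vector<double> run(int iterations)
    {
        std::vector<double> chain;
        double x = 30.0;
        for(int i = 0; i < iterations; ++i)
        {
            double y = x + step(rng);
            double logH = log_prior(y) - log_prior(x);

            // Pre-rejection: cheap accept/reject on the prior ratio only
            if(unif(rng) >= std::exp(logH))
            {
                chain.push_back(x); // Rejected before the expensive part
                continue;
            }

            // Survived, so logH is now effectively zero; only the
            // expensive constraint remains.
            if(expensive_check(y))
                x = y;
            chain.push_back(x);
        }
        return chain;
    }
};
```

Because each of the two accept/reject stages satisfies detailed balance on its own (the acceptance probability factorizes), the chain still targets the prior restricted to the constraint, while the expensive check is skipped for every pre-rejected proposal.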