Retiring this blog

The other day I bought a domain name for my website, which was fun, and has motivated me to continue simplifying my internet life in a few ways. Over the last year or so I’ve successfully reduced my social media usage by about 80%, and am now using the internet like it’s 1999 again (well, except for the crypto stuff). It’s been a pleasant experience, despite sounding a bit hipster.

In addition, WordPress has been annoying me lately, sending too many promotional emails and generally being more complicated than I need. Therefore, I’ve decided that I’ll no longer be posting at plausibilitytheory.wordpress.com. I’ll leave it up so people can still read it if they want, but any new posts will go straight to the blog page on brendonbrewer.com.

Posted in Computing, Personal

Lyle Lovett – Flyin’ Shoes

Posted in Music

Jim – Faith No More cover

Just sharing a little recording of me playing guitar.

Posted in Music, Personal

More efficient than Nested Sampling!

This paper nicely illustrates a genre I’ve seen a few times before: pick a problem with some symmetry or feature you can exploit to speed up the calculation, do so, and then compare the result to Nested Sampling, which doesn’t assume you have problem-specific insight like that. Of course, the customised thing will beat the general thing in such a case.

A reductio ad absurdum of this kind of paper is to present a problem where you already know the answer. Then the proposed method of already knowing the answer will be infinitely more computationally efficient than Nested Sampling, or indeed anything involving a computer.

Posted in Computing, Inference

LBRY whitepaper released

https://lbry.io/news/lbrytech

Posted in Computing

Videos: Intro to Probability and Bayes for (Astro)physicists

In late 2017 I went to ESAC in Madrid to teach at a winter school, presenting basic statistics to astronomers. I’ve presented similar material twice before, but I think it was better this time, since I’ve learned more and there were also exercises to do. I’ve put the videos up on LBRY (I hope this is okay, Guillaume :-)). Here are some links to watch them on the web:

Part 1 Part 2 Part 3 Part 4

Posted in Inference

New favourite software

I’ve been into “alternative” computer software since I started using Linux as my main operating system in about 2005. The main reason has been convenience, as I got in the habit of developing on Linux and it would be hard to switch. I did have a brief time where I was interested in purist free software ideology, but I’m more pragmatic now, and don’t proselytise much. Except in this post, where I’ll talk about three of my favourite newish programs.

Keybase

Keybase sort of feels like an instant messenger client, but it’s automatically end-to-end encrypted and also has encrypted cloud storage, which I find extremely useful. The cloud storage is automatically mounted as a directory in your file system, making it very easy to use. You can also have encrypted private git repositories, create teams to work on them and/or share files with, etc etc.

The main downside for me is just that not many people are on it. My wife, a good friend, and a close work collaborator are, so it’s still very useful. Recently I wanted to email an acquaintance about a very sensitive matter and I’d have liked it to be encrypted. It would have been trivial had he been on Keybase. It’d be great to see more of you on there 🙂

LBRY

LBRY is a blockchain-based file distribution platform and marketplace. You can approximately view it as an uncensorable YouTube alternative (it’s mostly videos, though you can actually use it for any file), with private property rights for the name/URL of a file, and distributed storage for the files (sort of analogous to BitTorrent, but more decentralised, using voodoo that I don’t understand). So you can set a price if you want, buy content that is for sale, or tip your favourite creators.

Because of the decentralised nature of LBRY, some features you might expect (commenting, for example) are harder to implement and not there yet. But they’re working on it.

Brave

Brave is a web browser with several awesome privacy-respecting features. Ads are blocked by default, and eventually you’ll be able to (optionally) earn BAT (Basic Attention Token) for looking at ads which companies have purchased with BAT. There are also two levels of private browsing: normal private browsing and super-duper private browsing with Tor. I’d never used Tor on its own (it seemed too hard), so I love being able to use it with a simple click.

Posted in Computing

Much Ado About Nothing

I just met up with a new student of mine, and gave her some warmup questions to get familiar with some of the things we’ll be working on. This involved “differential entropy”, which is basically Shannon entropy but for continuous distributions. For a probability density f(x), it’s

H = -\int f(x) \log f(x) \, dx

An intuitive interpretation of this quantity is, loosely speaking, the generalisation of “log-volume” to non-uniform distributions. If you changed parameterisation from x to x’, you’d stretch the axes and end up with different volumes, so H is not invariant under changes of coordinates. When using this quantity, a notion of volume is implicitly brought in, based on a flat measure that would not remain flat under arbitrary coordinate changes. In principle, you should give the measure explicitly (or use relative entropy/KL divergence instead), but it’s no big deal if you don’t, as long as you know what you’re doing.
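To make this concrete, here’s a quick numerical check in R (just a sketch, using the built-in integrate and dunif functions). A Uniform(0, 10) distribution has H = log(10), rescaling the coordinate changes the answer, and squeezing the distribution into a volume less than 1 makes H negative:

H <- function(f, lower, upper)
    integrate(function(x) -f(x)*log(f(x)), lower, upper)$value

H(function(x) dunif(x, 0, 10), 0, 10)      # x ~ Uniform(0, 10): log(10) =  2.303
H(function(y) dunif(y, 0, 1), 0, 1)        # y = x/10:                      0
H(function(z) dunif(z, 0, 0.1), 0, 0.1)    # z = x/100:          log(0.1) = -2.303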

For some reason, in this particular situation only, people wring their hands over the lack of invariance and say this makes the differential entropy wrong or bad or something. For example, the Wikipedia article states

Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Shannon to extend the idea of (Shannon) entropy, a measure of average surprisal of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula, and rather just assumed it was the correct continuous analogue of discrete entropy, but it is not.[citation needed] The actual continuous version of discrete entropy is the limiting density of discrete points (LDDP).

(Aside: this LDDP thing comes from Jaynes, who I think was awesome, but that doesn’t mean he was always right or that people who’ve read him are always right.)

Later on, the Wikipedia article suggests something is strange about differential entropy because it can be negative, whereas discrete Shannon entropy is non-negative. Well, volumes can be less than 1, whereas counts of possibilities cannot be. Scandalous!

This is probably a side effect of the common (and unnecessary) shroud of mystery surrounding information theory. Nobody would be tempted to edit the Wikipedia page on circles to say things like “the area of a circle is not really \pi r^2, because if you do a nonlinear transformation the area will change.” On second thoughts, many maths Wikipedia pages do degenerate into fibre bundles rather quickly, so maybe I shouldn’t say nobody would be tempted.

Posted in Entropy, Information

O Holy Night – John Cowan

Merry Christmas

Posted in Music, Personal

R-based model specification for DNest4

Last night, I decided to bite the bullet and add yet another way of implementing models in DNest4, this time using R. Statisticians know R, so it’s probably a good idea to support their language in some form. This brings the list of ways of using DNest4 to the following:

    • Write C++ model classes (recommended, fastest, etc), as described in the paper
    • Write Python model classes (also in the paper)
    • Use the Python-based modelling language to specify your model, which is then translated into C++ model classes automatically (though they are not as optimised as they would be if you wrote them yourself)
    • Write an R file specifying your model.

Running an instance of R inside C++ is fairly easy to do thanks to RInside, but do not expect it to compete with pure C++ for speed, unless your R likelihood function is heavily optimised and dominates the computational cost so that overheads are irrelevant. That’s not the case in the example I implemented yesterday.

This post contains instructions to get everything up and running and to implement models in R. Since I’m not very good at R, some of this is probably more complicated than it needs to be. I’m open to suggestions.

Install DNest4

First, git clone and install DNest4 by following along with my quick start video. Get acquainted with how to run the sampler and what the output looks like.

Look at the R model code

Then, navigate to DNest4/code/Templates/RModel to see the example of a model implemented in R. There’s only one R file in that directory, so open it and take a look. There are three key parts to it. The variable num_params is the integer number of parameters in the model. The parameters are assumed to have Uniform(0, 1) priors, but the function from_uniform applies transformations to give them whatever priors you actually want. The example is a simple linear regression with two vague normal priors and one vague log-uniform prior, the same example as in the paper. Then there’s the log likelihood, which is probably the easiest part to understand: I’m using the traditional iid gaussian prior for the noise around the regression line, with unknown standard deviation.
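To give a flavour, here’s a rough sketch of what such a file looks like. The names num_params and from_uniform are as described above, but the dataset, the prior hyperparameters, and the name of the likelihood function here are placeholders of mine, not necessarily what’s in the template:

num_params <- 3

# Some made-up (x, y) data for the linear regression
data <- list(x=c(1, 2, 3, 4, 5), y=c(1.2, 1.8, 3.1, 4.2, 4.9))

# Map Uniform(0, 1) coordinates to draws from the priors:
# vague normals for the slope and intercept, and a vague
# log-uniform prior for the noise standard deviation.
from_uniform <- function(us)
{
    params <- rep(NA, num_params)
    params[1] <- qnorm(us[1], 0, 1000)             # slope
    params[2] <- qnorm(us[2], 0, 1000)             # intercept
    params[3] <- exp(log(1E-3) + log(1E6)*us[3])   # noise sd
    return(params)
}

# iid gaussian noise around the regression line
log_likelihood <- function(params)
{
    mu <- params[1]*data$x + params[2]
    return(sum(dnorm(data$y, mu, params[3], log=TRUE)))
}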

Fiddly library stuff

Make sure the R packages Rcpp and RInside are installed. In R, do this to install them:

> install.packages("Rcpp")
> install.packages("RInside")

Once this is done, find where the header files R.h, Rcpp.h, and RInside.h are on your system, and put those paths on the appropriate lines in DNest4/code/Templates/RModel/Makefile. Then find the library files libR.so and libRInside.so (.dylib rather than .so on a Mac) and put their paths in the Makefile too, as well as adding them to your LD_LIBRARY_PATH environment variable. Enjoy the 1990s computing nostalgia.
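If you’re not sure where those files live, R itself can tell you (these are standard functions; the exact paths depend on your installation):

> R.home("include")                           # directory containing R.h
> system.file("include", package="Rcpp")      # directory containing Rcpp.h
> system.file("include", package="RInside")   # directory containing RInside.h
> R.home("lib")                               # contains libR.so, if R was built as a shared library
> system.file("lib", package="RInside")       # contains libRInside.so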

Compile

Run make to compile the example, then execute ./main to run it. Everything should run just the same as in my quick start video, except slower. Don’t try to use multiple threads, and enjoy writing models in R!

Posted in Computing, Inference