*What is a statistical model?*

This question was posed recently by the excellent “Stats Fact” Twitter account, which linked to a paper that was too complicated for me to understand, involving category theory. Coincidentally, the same question also came up on a thread in the Astrostatistics group on Facebook, during a discussion of posterior predictive checks and other similar practices. There was a strong consensus that we should participate in such “model checking” to ensure the statistical model used is a good representation of the data.

My own views are different. Firstly, I don’t think statistical models are representations of the data at all (barring one exception, which I will discuss later). Instead, they are representations of the *prior information* that our analysis is assuming — and this is a simple thing to see, not something that requires advanced pure mathematics skills. Therefore, if we want to find out whether our choices are appropriate, we need to look at the prior information, not the data. Of course, in practice the prior information isn’t usually specified explicitly, so this is difficult to do. Secondly, I don’t think model checking is useful in the way most people imagine it is. These procedures don’t tell us whether or not we should trust the results of an inference, which is what we usually want to know.

To make things more concrete, let’s look at a simple example where we want to infer an unknown quantity θ from data d. We are therefore interested in the plausibility of propositions about the values of θ and d, and those plausibilities are related according to the rules of probability theory. To keep things simple, suppose there are only three possibilities for θ and three for d. Therefore there are nine (= 3×3) mutually exclusive possibilities overall, of the form (θ = θᵢ, d = dⱼ) where i, j ∈ {1, 2, 3}. Before we learn d, we do not know the value of θ, but *we also don’t know the value of d* (thanks Ariel). To model this we assign prior probabilities over the grid of nine possibilities. Here’s an example probability assignment:
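Since the original figure doesn’t survive here, a stand-in with made-up numbers may help. Any nine non-negative values summing to one would do; the particular numbers below are purely illustrative:

```python
import numpy as np

# Illustrative joint prior p(theta, d) over the 3x3 grid of possibilities.
# Columns correspond to the three theta values, rows to the three d values.
# The numbers are made up; all that matters is that they are non-negative
# and sum to one.
joint = np.array([[0.10, 0.05, 0.05],
                  [0.05, 0.20, 0.10],
                  [0.05, 0.15, 0.25]])

assert np.isclose(joint.sum(), 1.0)
```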

For argument’s sake, let’s assume θ is on the x-axis and d is on the y-axis (i.e. each column corresponds to one of the three possible θ values and each row corresponds to a possible d value). By the laws of probability theory, this joint distribution implies a bunch of other probabilities. For example, the marginal distribution of θ can be found by summing the columns, i.e. the marginal probabilities are p(θᵢ) = Σⱼ p(θᵢ, dⱼ). This marginal distribution is what we usually mean by the term *prior*. If we take the joint distribution and divide by the marginal distribution for θ, we get the conditional distributions p(dⱼ | θᵢ) (each column represents a probability distribution for d):
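In code, with a hypothetical joint grid (made-up numbers; columns indexed by θ, rows by d), the marginal prior and the conditionals are just column sums and ratios:

```python
import numpy as np

# Hypothetical joint prior p(theta, d): columns = theta values, rows = d values.
joint = np.array([[0.10, 0.05, 0.05],
                  [0.05, 0.20, 0.10],
                  [0.05, 0.15, 0.25]])

# Marginal prior p(theta): sum down each column (i.e. sum over d).
prior_theta = joint.sum(axis=0)
print(prior_theta)              # → [0.2 0.4 0.4]

# Conditionals p(d | theta): divide each column by its column sum.
# Each column of `conditional` is now a probability distribution over d.
conditional = joint / prior_theta
print(conditional.sum(axis=0))  # → [1. 1. 1.]
```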

If you look for people using the term “statistical model”, they are usually using it to refer to the choice of p(d | θ), the probability distribution for the data given the parameters, i.e. the conditional distributions shown above (aka the sampling distribution or sometimes [a bit misleadingly] the likelihood). However, these distributions are implied by the joint distribution p(θ, d), which is explicitly a model of **uncertainty prior to knowing the data**, not of the data itself.

In Bayesian inference, the data plays one and only one role. When we learn the value of d, our knowledge of θ switches from the marginal distribution p(θ) to the conditional distribution p(θ | d) corresponding to the actual value of d. Equivalently, we can continue with the 3×3 grid of possibilities, and just delete all the false possibilities. So if we learned that d took the value corresponding to row three, our state of knowledge would change, and the procedure is to set all the probabilities in rows 1 and 2 to zero, and renormalise the result:
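A sketch of that deletion-and-renormalisation step, again with a made-up joint grid (columns = θ values, rows = d values):

```python
import numpy as np

# Hypothetical joint prior p(theta, d): columns = theta values, rows = d values.
joint = np.array([[0.10, 0.05, 0.05],
                  [0.05, 0.20, 0.10],
                  [0.05, 0.15, 0.25]])

# Suppose we learn that d took the value corresponding to row three (index 2).
posterior = joint.copy()
posterior[:2, :] = 0.0           # delete the now-false possibilities
posterior /= posterior.sum()     # renormalise

# The posterior for theta alone is a marginal of this joint posterior,
# which is just the surviving row, renormalised.
posterior_theta = posterior.sum(axis=0)
print(posterior_theta)           # → [0.11111111 0.33333333 0.55555556]
```

This gives the same answer as applying Bayes’ rule directly, since p(θ | d) ∝ p(θ) p(d | θ).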

This is the posterior distribution, but it’s not the posterior for θ, it’s the posterior for θ and d jointly. The posterior for θ is a marginal distribution here, which is trivially equal to the values in row three on the right. Anyway, the point here is that the thing people usually call the “statistical model” is really part of the prior information, so we shouldn’t go looking for it in the data. Incidentally, this updating procedure is equivalent to a certain way of using MaxEnt, which means that Bayesian updating is a way of staying as close to the prior as possible: a fun fact that is the opposite of the way many people think about statistics.

*Posterior predictive checks*

Posterior predictive checks should not be interpreted as validating or invalidating the prior distribution by using the data. Such ideas make no logical sense except in the case where the prior distribution assigned probability zero to the observed data. However, this doesn’t render them totally useless. Posterior predictive checks (and similar practices) can play a role in bringing to light consequences of your prior that you may not have realised were there. For example, assuming a “gaussian noise” sampling distribution implies a very high prior probability that the noise vector in your one dataset “looks like white noise”. You may or may not want that property in your prior. If you can’t decide, the place to look is in your prior information, not at adhockeries based on your current dataset.
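To illustrate the kind of consequence-surfacing such a check can do, here’s a sketch with a made-up dataset and a plug-in Gaussian fit standing in for a full posterior: simulate replicated datasets under the model, and compare a test statistic that probes the “looks like white noise” property.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up dataset and a plug-in Gaussian fit (a stand-in for a full posterior).
data = rng.normal(loc=5.0, scale=2.0, size=100)
mu_hat, sigma_hat = data.mean(), data.std(ddof=1)

def lag1_autocorr(x):
    """Lag-1 autocorrelation: near zero if the residuals look like white noise."""
    x = x - x.mean()
    return np.sum(x[:-1] * x[1:]) / np.sum(x * x)

# Simulate replicated datasets and compare the statistic with the observed one.
reps = np.array([lag1_autocorr(rng.normal(mu_hat, sigma_hat, size=data.size))
                 for _ in range(1000)])
ppp = np.mean(reps >= lag1_autocorr(data))

# A predictive p-value near 0 or 1 flags a consequence of the prior that you
# may not have wanted; it does not tell you whether to trust the inference.
print(ppp)
```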

*Models “of the data”*

There’s one situation where I’m willing to concede that a statistical model represents data itself, rather than prior information. Suppose I have a large set of numbers and I want to summarise them, like we do in “descriptive statistics”. I could present the mean and standard deviation of the data, or other quantities. Another thing I could present is the result of a maximum likelihood fit of a distribution (maybe one without sufficient statistics, like a Cauchy distribution) to the data. What I’d be doing there is merely presenting a summary, and not doing an inference or trying to inductively reason from the data. It can also be viewed as a kind of reverse Monte Carlo approximation — instead of replacing a probability distribution with an empirical measure, you’re doing the opposite.
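A quick sketch of that descriptive use, via SciPy’s generic maximum-likelihood `fit` method (the dataset here is made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.standard_cauchy(size=500)   # made-up numbers to be summarised

# Maximum likelihood fit of a Cauchy distribution. The fitted location and
# scale act as a two-number summary of the dataset (a heavy-tail-friendly
# analogue of mean and standard deviation), describing the data rather than
# inferring anything from it.
loc, scale = stats.cauchy.fit(data)
print(loc, scale)
```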

Very useful! In retrospect I find that I have spent a lot of time unlearning the “truths” of my first stats course. Not necessarily because they were “wrong”, but because they were confusing and not relevant/necessary when actually doing stuff. Not long ago I was also worrying about whether my data was normally distributed…

Thanks Rasmus. When learning anything there’s usually a stage where we learn things that are simplified or not quite true, that we can later shed. However in statistics I think we drag that stage out for too long.
