## Markets and Auckland Housing

Like housing in many similar cities around the world, Auckland housing has become much more expensive very rapidly over the last few years. There are many reasons for this, and it is having various effects: positive for some (e.g., those whose houses have increased in value) and negative for others (e.g., those who have never bought a house and would like to, but cannot at current prices; and, further down the income scale and more worryingly, those who cannot afford rents). There have been heaps of articles on this from all sorts of different angles, such as this one this morning. I agree with some bits of the article, but this sentence got my goat:

“Only the most laissez faire of free-marketeers still believe that this housing market is working as it should, as increasing numbers of families are forced to sleep in their cars and young Kiwis give up any hope of owning their own home.”

As my loyal reader knows, I have been trying to learn basic economics this year, as a hobby project, like how I learned Spanish starting a couple of years ago. I have also become a lot more open to classical liberal ideas than I used to be. However, it is a myth that all economists are laissez faire non-interventionists. Economics is a properly heterodox field, and Milton Friedman lovers are only a subset of the profession (and are probably right on many things and wrong on others). Anyway, I thought the sentence was a lazy throwaway line to appeal to some popular falsehoods that “everyone knows”: i) only greedy rich people support free markets, and they don’t care about poor people; ii) the current system is a free market; iii) therefore it is inherently perfect (as opposed to probably better than many alternatives you might try to come up with).

So I started thinking about what market forces are involved in the housing crisis, and how prices have mitigated its effects even if they haven’t “solved” it. First, though, some familiar reasons why prices have gone up:

i) Immigration to Auckland is high, increasing demand (= more people are willing and able to pay more for the same thing than used to be the case).

ii) People who have money to invest see housing as an option for this, and substitutes have become less attractive (rates of return on bonds and cash are low, and shares have also become expensive over the same period as houses).

iii) People seeing the fast price increases are incentivised to buy houses in expectation of becoming wealthier in the future as a result (i.e. the ‘bubble’ explanation).

iv) Auckland is becoming a better city to live in and the NZ economy is doing quite well. If a city becomes a more desirable place to live, that increases demand for housing there (this is related to (i)).

Now, here are some market forces that have mitigated the crisis (made it less bad than it otherwise might have been), off the top of my head.

i) High prices signal that Auckland houses are scarce, and encourage people to economise. For example, my wife and I bought a 2 bedroom 80 square metre unit instead of a 3 bedroom 150 square metre house (which we would have bought if we’d had a lot more money). Some good friends of ours were renting a unit in Devonport but their landlord had to move back in. They looked for their next rental house in New Lynn due to high prices in Devonport. This frees up the scarce and highly-valued resource of Devonport housing.

Overcrowding is another example of economisation of scarce resources, but with more tragic human consequences.

ii) High prices act as an incentive for developers to build more houses, since houses that sell for a lot of money make building them profitable.

iii) High prices give people whose houses exceed their current needs an incentive to sell. For example, a retired couple might downsize and fund their retirement with the proceeds.

IMHO, one of the major problems is that effect (ii) has been hamstrung by how difficult and restrictive the rules are around building more houses. The council’s “Unitary Plan” should improve this situation somewhat. It will do this by making the rules around building a little bit more free-markety, not less. Basically, the fact there are problems with housing does not mean that markets aren’t doing useful things. They could be doing more useful things and they will once we let them.

## A JAGS-like interface to DNest4

As you probably know, I use Diffusive Nested Sampling (the latest implementation of which is DNest4) to do almost all of my data analysis. It works on pretty much all problems I throw at it, as long as the likelihood function is fairly fast to evaluate. One barrier to increasing its popularity has been that you needed to know C++ to implement models (priors and likelihoods, i.e., models of prior information). You also had to implement Metropolis proposals yourself, which can be a fair bit of work. Things have improved lately on the language front, as you can implement models in Python and Julia. You will lose out on computational efficiency, though. C++ is still the best way to implement a model, if you can do it.

In C++, proposals for hierarchical models can get a bit annoying, and the RJObject class helps a bit with that. But still, hardly any of my users are going to be good at C++ and want to use a template class to run a routine data analysis. A simpler solution is needed.

Since I learned about them, I’ve admired the model specification languages used by JAGS, OpenBUGS, and Stan, and have wanted the option to specify a model in something simple like these, which then gets translated into C++ code with sensible default proposals, so you can run DNest4.

Now it exists! It’s experimental, and not covered in the DNest4 paper, but is basically working. As an example, here’s a simple data analysis demo. Some JAGS code for a simple linear regression with naive uniform priors is:

    model
    {
        # Slope and intercept
        m ~ dunif(-100, 100)
        b ~ dunif(-100, 100)

        # Noise standard deviation
        log_sigma ~ dunif(-10, 10)
        sigma <- exp(log_sigma)

        # p(data | parameters)
        for(i in 1:N)
        {
            y[i] ~ dnorm(m*x[i] + b, 1/sigma^2)
        }
    }

And an R list (approximately the same as a Python dictionary) containing some data is

    data = list(x=c(1, 2, 3, 4, 5), y=c(1, 2, 3, 3.9, 5.1), N=5)
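To make the model concrete, here is a short numpy sketch of my own (not part of JAGS or DNest4) that evaluates the log likelihood of the model above at one parameter setting:

```python
import numpy as np

# The data from the R list above
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 2.0, 3.0, 3.9, 5.1])

def log_likelihood(m, b, log_sigma):
    """ln p(y | m, b, log_sigma) for the linear regression model."""
    sigma = np.exp(log_sigma)
    residuals = y - (m*x + b)
    return np.sum(-0.5*np.log(2*np.pi) - np.log(sigma)
                  - 0.5*(residuals/sigma)**2)

print(log_likelihood(1.0, 0.0, 0.0))  # about -4.605 at a good fit
```

This is the quantity JAGS (and, below, the generated DNest4 C++ code) works with internally; note that JAGS’s `dnorm` is parameterised by the precision `1/sigma^2`, not the standard deviation.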

Now, how would I implement the same thing in DNest4? I could write a C++ class, very much like the one given in the paper (and for a research problem, I would). But an alternative way is to use DNest4’s experimental ‘model builder’, which is the JAGS-like interface (but in Python) I’ve been wanting. Here is equivalent model code for the linear regression. Writing this feels kind of like a cross between using JAGS and using daft, the DAG-plotter by Foreman-Mackey and Hogg.

First, the data needs to be in a Python dictionary (containing floats, ints, and numpy arrays of floats & ints):

    import numpy as np

    data = {"x": np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
            "y": np.array([1.0, 2.0, 3.0, 3.9, 5.1]),
            "N": 5}

Then, construct a ‘Model’:

    # Create the model
    model = bd.Model()

    # Slope and intercept
    model.add_node(bd.Node("m", bd.Uniform(-100.0, 100.0)))
    model.add_node(bd.Node("b", bd.Uniform(-100.0, 100.0)))

    # Noise standard deviation
    model.add_node(bd.Node("log_sigma", bd.Uniform(-10.0, 10.0)))
    model.add_node(bd.Node("sigma", bd.Delta("exp(log_sigma)")))

    # p(data | parameters)
    for i in range(0, data["N"]):
        name = "y{index}".format(index=i)
        mean = "m*x{index} + b".format(index=i)
        model.add_node(bd.Node(name, bd.Normal(mean, "sigma"),
                               observed=True))

You use the add_node() function to add nodes (parameter or data quantities) to the model. For the Node constructor, the first argument is the name, and the second is the prior distribution.

The loop makes nodes for the data, called y0, y1, y2, and so on, by building the names with Python strings. To signal that a node is observed (so the generated C++ puts its term in the log likelihood rather than in the prior-related parts of the code), pass observed=True.

To generate and compile the C++ code, the Python script needs to be in the code/Templates/Builder directory, and then the following Python will generate the C++:

    # Create the C++ code
    bd.generate_h(model, data)
    bd.generate_cpp(model, data)

    # Compile the C++ code so it's ready to go
    import os
    os.system("make")

Then you can run the executable just as you could if you had implemented the model by writing a C++ class. Sweet!

The code/Templates/Builder directory includes this example (in for_blog.py) and a few more, including the old BUGS “Rats” example. Feel free to ask me for more details about this feature if you would like help using it!

In related news, I have signed up as a supporter of Heterodox Academy, a group advocating viewpoint diversity in academia as a bulwark against groupthink, both within specific research areas and in the greater academic community. If you know me, you probably know that I used to be quite left-wing and a supporter of all the popular causes that academics tend to be enthusiastic about. Over the last 18 months I have become disillusioned and frustrated with much of this. If someone says something popular but false (or at least debatable), it shouldn’t require bravery to openly question it. But it does, and that’s the opposite of a good strategy for pursuing truth.


## A Co-Blogger!

Luke Barnes reminds readers of his blog Letters to Nature that it’s not just his blog. 🙂

## Methinks it is like a weasel

This is my first proper blog post for a while. Apologies for the gap. I have been busy with visits from three of my favourite colleagues (Kevin Knuth, Daniela Huppenkothen, and Dan Foreman-Mackey), followed by teaching an undergraduate course for which I had to learn HTML+CSS, XML, and databases (aside: SQL is cool and I wish I had learned it earlier). Somewhere in there, Lianne and I managed to buy our first house as well. Hopefully that’s enough excuses!

Earlier this year, the physics department had a visit from prominent astrostatistician Daniel Mortlock, who gave a good introductory talk about “Bayesian model selection”. He gave the standard version of the story where the goal is to calculate posterior model probabilities (as opposed to a literal selection of a model, which is a decision theory problem). During the presentation, he claimed that you shouldn’t use this theory to calculate the posterior probability of a hypothesis you only thought of because of the data. I thought this was a weird claim, so I disputed it, which was fun, but didn’t resolve the issue on the spot.

Here’s why I think Mortlock’s advice is wrong. Probabilities measure how plausible a proposition is, in the context of another proposition being known. Equivalently, they measure the degree to which one proposition implies another. For example, a posterior probability $P(H|D, I)$ is the probability of statement $H$ given $D$ and $I$, or the degree to which $D$ implies $H$ in the context of $I$. To calculate it, you use Bayes’ rule. The posterior probability of $H$ equals the prior times the likelihood divided by the marginal likelihood. There’s no term in the equation for when or why you thought of $H$.
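In symbols, with prior $P(H|I)$, likelihood $P(D|H, I)$, and marginal likelihood $P(D|I)$, that’s

$$P(H \,|\, D, I) = \frac{P(H\,|\,I)\, P(D\,|\,H, I)}{P(D\,|\,I)},$$

and nothing on the right hand side refers to the history of how $H$ came to be considered.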

Still, I can see why Mortlock would have given his recommendation; it was a warning against the Bayesian equivalent of “p-hacking”. Every dataset will contain some meaningless anomalies, and it’s possible to construct an analysis that makes an anomaly appear meaningful when it isn’t.

A super-simple example will help here (I’ve used this example before, and it’s basically Ed Jaynes’ “sure thing hypothesis”). Consider a lottery with a million tickets. Consider the hypotheses $H_0$: The lottery was fair, and $H_1$: the lottery was rigged to make ticket number 227,354 win. And let $D$ be the proposition that 227,354 indeed won. The likelihoods are $P(D|H_0, I) = 10^{-6}$ and $P(D|H_1, I) = 1$. Wow! A strong likelihood ratio in favour of $H_1$. With prior probabilities of 0.5 each, the posterior probabilities of $H_0$ and $H_1$ are 1/1,000,001 and 1,000,000/1,000,001 respectively. Whoa. The lottery was almost certainly rigged!

Common sense says this conclusion is silly, and Mortlock’s warning would have prevented it. Okay, but is there a better way to prevent it? There is. We can assign more sensible prior probabilities. $P(H_0 | I) = P(H_1 | I) = 1/2$ is silly because it would have implied $P(D|I) \approx 1/2$, i.e. that we had some reason to suspect ticket number 227,354 (and assign a 50% probability to it winning) before we knew that was the outcome. If, for example, we had considered a set of “rigged lottery” hypotheses $\{H_1, ..., H_{1,000,000}\}$, one for each ticket, and divided half the prior probability among them, then we’d have gotten the “obvious” result, that $D$ is uninformative about whether the lottery was fair or not.
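Both prior assignments can be checked with a few lines of arithmetic. Here’s a quick sketch in plain Python:

```python
# Lottery example: posterior probability that the lottery was fair,
# under two different prior assignments.

N = 10**6                # number of tickets
p_D_given_H0 = 1.0 / N   # likelihood under "the lottery was fair"
p_D_given_H1 = 1.0       # likelihood under "rigged for the winning ticket"

# Naive priors: 0.5 on "fair", 0.5 on the single rigged hypothesis
evidence = 0.5*p_D_given_H0 + 0.5*p_D_given_H1
post_fair_naive = 0.5*p_D_given_H0 / evidence
print(post_fair_naive)     # ~1e-6, i.e. "almost certainly rigged!"

# Sensible priors: 0.5 on "fair", 0.5 spread over one rigged hypothesis
# per ticket (only the winning ticket's hypothesis has nonzero likelihood)
evidence = 0.5*p_D_given_H0 + (0.5/N)*p_D_given_H1
post_fair_sensible = 0.5*p_D_given_H0 / evidence
print(post_fair_sensible)  # 0.5: D is uninformative about fairness
```

The first calculation reproduces the silly 1/1,000,001 posterior; the second gives the sensible answer that the winning number, on its own, tells you nothing about whether the lottery was fair.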

The take home message here is that you can use Bayesian inference to calculate the plausibility of whatever hypotheses you want, no matter when you thought of them. The only risk is that you might be inclined to assign bad prior probabilities that sneakily include information from the data. The prior probabilities describe the extent to which hypotheses are implied by the prior information. If they do that, you’ll be fine.


## Second article on Quillette

For anyone who missed it, I had another article published in the online science & politics magazine Quillette. In this one, I describe Ed Jaynes’ view of the second law of thermodynamics, and how it’s really little more than the sum rule of probability theory.


## The probability of a Mormon second coming

In a recent episode of his podcast, author Sam Harris reiterated an observation about probability theory. The broader context was to criticize the popular notion that all religions are the same. They aren’t — some specific propositions associated with religions are more plausible than others, and their consequences if believed and acted upon also vary. The probabilistic point was that the second coming of Jesus envisioned by Mormons is ‘objectively less plausible’ than a generic Christian version. Commentator Cenk Uygur then responded, saying that this is nonsense because the probability of both is zero if atheism is true (he also replaced the generic Christian version with a specific Christian version so he was talking about different propositions from Harris). The purpose of this post won’t surprise my readers: I’m going to pick nits about what probability theory actually says.

If we consider the proposition, associated with more traditional versions of Christianity, that Jesus will return to Earth to judge the living and the dead, and label this proposition A, then given information I, this has probability $P(A | I)$ (the probability of A given I).

Now consider the proposition, associated with Mormonism, that Jesus will return to Earth to judge the living and the dead and this will occur in the US state of Missouri. The first part of this proposition is A, but a second proposition B (about it happening in Missouri) has been attached via the and operator. Given information I, the probability is $P(A, B | I)$.

Probability theory says that $P(A, B | I) \leq P(A | I)$ for any propositions A, B, and I. Applying it to our case, the probability of the Mormon proposition must be less than or equal to the probability of the Christian one. I put “or equal to” in bold because it’s the nit I want to pick in Harris’s original statement. Probability theory itself says $\leq$. I think any reasonable person would assign probabilities such that the strict inequality $<$ applies, but that’s not a property of every possible probability assignment.
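The inequality is just the product rule plus the fact that probabilities cannot exceed one:

$$P(A, B \,|\, I) = P(A \,|\, I)\, P(B \,|\, A, I) \leq P(A \,|\, I),$$

since $P(B | A, I) \leq 1$, with equality exactly when $P(B | A, I) = 1$.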

That adding extra stipulations with and can only decrease the plausibility (or keep it the same) isn’t just a consequence of probability theory; it’s a core part of the arguments for why probability applies to rational degrees of plausibility in the first place.

What happens if we do as Uygur did, and consider another proposition C, which is like B but specifies the location as Jerusalem instead of Missouri? Then probability theory in itself doesn’t constrain the values of $P(A, B | I)$ and $P(A, C | I)$. However, I’d assign a greater, but still small, probability to the latter.

Now what happens if we do another thing Uygur did, which is assert that anyone associated with the word atheist (even though they do not like the term and would prefer it went away) should assign precisely zero probability to all of these propositions? Nothing much changes about the above discussion. Define $I_2$ as the proposition God doesn’t exist and Jesus was a regular person and will never return. Then, as probability theory requires, $P(A, B | I_2) \leq P(A | I_2)$. It just so happens that both are zero (again, given $I_2$), and it’s the equality part of $\leq$ that applies.
