## Introducing ‘Brews’, and two cool things from the internet

A few brief items…

• I’ve made a little web page for sharing results of analyses I do (mostly these will be posterior samples and marginal likelihood values). I’ll aim to put things up when they’re sufficiently mature and ‘finished’, in the hope that someone might use them for actual science.
• Check out this fascinating post about an experiment on reddit, where anyone could contribute to a shared image by painting one pixel at a time, but had to wait a few minutes between pixel edits. It’s amazing what emerged (via Diana Fleischman on Twitter).
• A professor at Carnegie Mellon has put a twist on multiple choice exams, by asking students to assign a probability distribution over the possible answers, and then grading them using the logarithmic score. This is sufficiently awesome that I might try it out one day. One way of improving this (and scaring students even more) would be to allow students to assert a joint probability distribution that doesn’t factor into an independent distribution for each question (via Daniela Huppenkothen).
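To make the grading scheme concrete, here is a toy sketch of the logarithmic score for a single question (this is an illustration of the general idea, not the professor’s actual grading code; the option labels and probabilities are made up):

```python
import math

def log_score(probs, correct):
    """Logarithmic score: the log of the probability assigned to the right answer.

    probs   -- dict mapping option label to assigned probability (must sum to 1)
    correct -- label of the correct option
    Scores are <= 0; higher (closer to 0) is better, and asserting
    probability zero on the true answer earns negative infinity.
    """
    assert abs(sum(probs.values()) - 1.0) < 1e-9, "probabilities must sum to 1"
    p = probs[correct]
    return math.log(p) if p > 0 else float("-inf")

# A confident but hedged answer:
print(log_score({"a": 0.7, "b": 0.1, "c": 0.1, "d": 0.1}, "a"))   # ~ -0.357
# A uniform "no idea" answer:
print(log_score({"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}, "a"))  # ~ -1.386
```

The catastrophic penalty for a confident wrong answer is the whole point of the scheme: it rewards honest probability assignments rather than bravado.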

I am a senior lecturer in the Department of Statistics at The University of Auckland. Any opinions expressed here are mine and are not endorsed by my employer.
This entry was posted in Inference, Personal.

### 2 Responses to Introducing ‘Brews’, and two cool things from the internet

1. Whirlsler says:

While the MCQ thing sounds interesting (particularly the difference between questions that are hard and questions that are merely thought to be difficult), there are two issues that I think should be noted.

One, it doesn’t allow for complete certainty. I know, I know, bowels of Christ, but the reality is that with some MCQs one really is completely certain. Sometimes this certainty is reached negatively (e.g. I don’t know that A is right, but I know for sure that none of the other options are true), and sometimes positively (e.g. one knows that Brendon is the lecturer in the course and option A is his face).

Two, I’m not sure how much it changes things mechanically. For instance, I had an exam the other day, and during reading time I tried to estimate how many marks I expected to drop: from not quite getting full marks on right answers, from not knowing, from educated guesses, and from simply being wrong. I don’t know how widespread this practice is, but I use it to inform my general approach, which I’m pretty sure is broadly used, insofar as it tells me when I need to start panicking and just bite the metaphorical bullet.

The thing with traditional MCQs is that they can only be so difficult. Take a question like,

`data.lm[,2]` does what:

a) reports all the values of column two for all rows of the data frame
b) reports all the variable states for observation 2 of the data frame
c) deletes column 2 of the data frame
d) deletes observation 2 from the data frame

I’m pretty confident that the typical student who doesn’t recognise (a) as the right answer immediately will use MCQ tactics to answer the question. For instance, you might remember that this code does report stuff when you run it so you eliminate (c) and (d) or you might remember that R is rows, columns so eliminate (b) and (d) instead. Then if you’re still stuck, you read the rest of the exam and hope one of the other parts of the exam gives the game away and, failing that, either try and guess based on what looks most random (in terms of the overall pattern of responses) or just go with option (b).

The described system penalises guessing, but it doesn’t deal with other aspects of MCQ strategy… actually, it only penalises bad guesses, and I have to say I am a firm believer that (i) some people are better at guessing than others, and (ii) these people know this. For instance, we might eliminate (b) and (d) so that they each get .01. Then, if we think we’re on the verge of a coin flip between the remaining two options, the system lets us have our cake and eat it too: whereas a traditional MCQ forces us to commit to a single answer, this one allows us to go .49 on each. Add in a couple more questions like this (i.e. where we confidently narrow things down to two possibilities) and we’re pretty much exactly where we always were (coin tossing single answers).
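To put rough numbers on the have-our-cake-and-eat-it point, here is a hypothetical calculation under log-score grading (the .49/.01 split is from the scenario above; the rest, including the function name, is made up for illustration):

```python
import math

def expected_log_score(probs, truth_probs):
    """Expected log score when the student is unsure which option is correct.

    probs       -- the distribution the student asserts over the options
    truth_probs -- the (hypothetical) chance each option is actually correct
    """
    total = 0.0
    for option, chance in truth_probs.items():
        if chance == 0:
            continue
        p = probs[option]
        total += chance * (math.log(p) if p > 0 else float("-inf"))
    return total

# The student has eliminated (b) and (d); (a) and (c) feel like a coin flip.
truth = {"a": 0.5, "b": 0.0, "c": 0.5, "d": 0.0}

hedge = {"a": 0.49, "b": 0.01, "c": 0.49, "d": 0.01}   # have cake and eat it
commit = {"a": 0.98, "b": 0.01, "c": 0.0, "d": 0.01}   # forced single guess

print(expected_log_score(hedge, truth))   # ~ -0.713
print(expected_log_score(commit, truth))  # -inf: half the time you score log(0)
```

So under the log score, hedging at .49 each is not merely allowed but strictly better in expectation than committing: the forced guess risks asserting zero probability on the truth.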

Much the same holds for the case where we eliminate just one answer as definitely wrong: not in the sense that it ends up equivalent to guessing from three options in a standard test, but in the sense that we’re still better off slashing the credibility of the disfavoured possibility. In one way this is good, because we reward being able to rule out a possibility; on the other hand, we’ve still got the negative-knowledge criticism of MCQs (i.e. that they don’t assess what students actually know). Possibly we reduce the equivalence between guessing on MCQs and partial marks/follow-through in conventional testing, because we expect less than a third of the available marks from all guessed three-option questions, but there is also .24 that we could stick on the “most attractive guess” (perhaps option b).

tl;dr: for the student who already tries to predict their marks, the system just formalises what they already do; for the student who uses knowledge and tactics to answer MCQs, the differences between a conventional system and the one in the example PDF are slight (although I suppose I’m assuming students notice this, which may not be true); and for the student who knows they’re right, you force them not to be completely certain, as is appropriate in a real-world situation, despite the plausibility of such levels of conviction in the contrived world of the MCQ.

• I agree completely with your first point. Probability one should be allowed – students just need to know the potential consequences!

On the second point, I think the main advantage is to get students to actually think about their probability assignments. I’ve had some pretty good research students who feel lost when assigning a prior, and sometimes they end up assigning something with fairly silly properties.