Steve Landsburg recently blogged about a maths professor who weeds out 'unlucky' applicants by randomly rejecting half of the resumes he gets sent. Now, this is unusual, in that it is a random sampling method which significantly *reduces* the average quality of the applicant who gets hired.

There are a *lot* of situations in which random sampling would reduce workload whilst having no effect whatsoever on effectiveness. I'll start with one of the simplest and least controversial (and one that I have the most personal experience with). Students regularly submit 10 or more pieces of coursework for each course in a university semester. Every question is then marked, and the papers returned to the students. Assuming (which is probably not entirely accurate) that the courseworks are solely intended as a normative assessment of student performance, surely it would be massively more efficient to sample questions at random and mark those, rather than marking the entire paper. The expected mark for any given student is the same - only the variance goes up.
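As a quick sanity check (with numbers entirely made up: a 10-question coursework, a student who can answer 6 of the questions, and a marker who samples 3 of them at random), a few lines of Python confirm the claim — the expected mark is unchanged, only the spread increases:

```python
import random
import statistics

random.seed(0)

# Made-up setup: a 10-question coursework; the student answers 6
# questions correctly (a "true" mark of 0.6); instead of marking
# all 10, we mark a random sample of 3.
answers = [1] * 6 + [0] * 4
sample_size = 3
trials = 100_000

marks = [sum(random.sample(answers, sample_size)) / sample_size
         for _ in range(trials)]

# Marking everything would give exactly 0.6 every time (variance 0).
print(round(statistics.mean(marks), 2))   # ~0.6: same expected mark
print(round(statistics.stdev(marks), 2))  # ~0.25: but non-zero variance
```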

There are a few situations in which students suffer as a result of this. Say there's a pass mark of 40 and you have to pass every coursework: someone who answers exactly 40% of the questions right in each coursework now expects to fail at least one of them (even though their expected average mark is still 40). Similarly, there are situations in which students benefit (pass mark of 40, answer exactly 39% of the questions correctly, and you now have a non-zero chance of passing). On the whole, I would expect these effects to cancel out, and no one student knows their mark accurately enough to know whether they would benefit or lose out from this policy being enacted.
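To put rough numbers on the first case (all invented for illustration: 10-question courseworks, a student who answers exactly 4 questions right on each one, a marker who samples 5 questions, and a requirement to score at least 40% on every one of 10 courseworks), the per-coursework pass probability follows a hypergeometric distribution:

```python
import math

# Hypothetical setup: N = 10 questions per coursework, the student has
# K = 4 correct answers, and n = 5 questions are sampled for marking.
# Passing one coursework means at least 40% of the sample is correct,
# i.e. at least 2 of the 5 sampled questions.
def hypergeom_pmf(k, N, K, n):
    """P(exactly k of the n sampled questions are among the K correct ones)."""
    return math.comb(K, k) * math.comb(N - K, n - k) / math.comb(N, n)

p_pass_one = sum(hypergeom_pmf(k, 10, 4, 5) for k in range(2, 5))
p_pass_all_ten = p_pass_one ** 10

# ~0.74 chance of passing any single coursework, but only ~5% chance
# of passing all ten: a "40% student" expects to fail under sampling.
print(round(p_pass_one, 3), round(p_pass_all_ten, 3))
```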

So why isn't this done more? I've heard from a few lecturers who've tried it, and it went down horribly with the students, who perceive it as 'unfair'. Apparently there were several comments along the lines of 'what if you only mark the questions I did badly?'. I guess this is some sort of loss aversion - it is quite obviously equally likely that we only mark the questions you did well!

Yvain has an article about a similar example from education, in which students are, for some inexplicable reason, reluctant to guess answers to true/false questions even though a wrong answer only costs them half a point. Again, embracing the randomness is a massive net win.
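The arithmetic behind the guessing example takes one line (assuming, as seems standard, that a correct answer is worth a full point and a blank scores zero):

```python
# True/false question: a blind guess is right half the time.
# Correct = +1 point, wrong = -0.5 points, blank = 0 points.
p_correct = 0.5
ev_guess = p_correct * 1.0 + (1 - p_correct) * (-0.5)
print(ev_guess)  # 0.25: a guess is worth a quarter of a point on average
```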

Another example is public transport. No-one ever pays to get on the 25 bus. This is because it is extremely rare for anyone to check whether you've paid, and the penalties just aren't high enough to make paying worthwhile given how rare the checks are. There are two obvious solutions to this problem: you could do twice as many checks (thus requiring you to hire twice as many people to do the checking, and inconveniencing twice as many passengers) or you could double the fine. I've no idea why they don't take the second option.

How about voting? Instead of counting all of the votes in a general election, why not shake the votes up in a big bowl and count, say, the first 10,000 for any given seat? I can't be bothered to crunch the numbers, but I'm pretty sure the probability of error would be down below 1% - and errors would only occur in seats which were closely contested - where errors are not so important anyway, as the people obviously don't have a clear preference between the candidates.
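For what it's worth, crunching the numbers isn't hard. For a two-candidate seat where the winner's true vote share is p, a normal approximation (my own back-of-envelope model; it ignores the finite-population correction, which would only shrink the error further) gives the chance that a random sample of n ballots names the wrong winner:

```python
import math

def p_wrong_winner(p, n):
    """Normal approximation to P(sample of n ballots shows the wrong
    winner), given the true vote share p > 0.5 of the actual winner."""
    sd = math.sqrt(p * (1 - p) / n)        # s.d. of the sample share
    z = (0.5 - p) / sd                     # how many s.d.s below 50%
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # Phi(z)

# A 52/48 seat is miscalled well under 0.01% of the time with a
# 10,000-ballot sample; only very tight seats see non-trivial error.
for share in (0.52, 0.51, 0.505):
    print(share, p_wrong_winner(share, 10_000))
```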

Most of the examples I can think of exploit the same principle as the public transport idea above - when committing some transgression, your expected utility is the utility of cheating minus the disutility of the punishment times the chance of getting caught. Since it's expensive to increase the chance of getting caught, there are a lot of situations in which I think it would be a net win to decrease that chance and increase the size of the punishment. Why not check half as many tax returns and double the fine for misfiling? Have half as many speed cameras and double the fine for speeding (speed cameras aren't expensive, so this might not be a net win)?
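The trade-off in one sum (fare-dodging fine and inspection rate invented purely for illustration; exact fractions used so the equality is exact):

```python
from fractions import Fraction

# Deterrence (for a risk-neutral cheat) depends only on the expected
# penalty: P(caught) * fine. Illustrative numbers:
p_caught, fine = Fraction(1, 20), 80     # status quo: 1-in-20 checks, £80 fine
p_caught2, fine2 = Fraction(1, 40), 160  # half the checks, double the fine

# Same expected penalty per journey either way - but the second scheme
# needs half as many inspectors and bothers half as many passengers.
print(p_caught * fine, p_caught2 * fine2)  # 4 4
```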

There are dozens of examples - and I don't think that the people in charge have sat down and done the relevant calculation in all cases. Are people just afraid of randomness? Afraid of seeming 'arbitrary'? Afraid of letting people 'get away with' committing crimes? Assuming the only legitimate purpose of the criminal justice system is deterrence, this shouldn't be an issue. Maybe there are legitimate concerns that a 'random sampling' approach to some of these problems would be more subject to corruption - but then we can just check a few of the samplers at random, and impose massive fines on anyone found doing it corruptly!

The law of large numbers is a powerful and important mathematical theorem. Why don't we exploit it better?

## Saturday, 15 May 2010


## 5 comments:

I marked a course once where I was the only marker, and the only instruction I was given was to mark about 3 questions (of ~12).

The students didn't really like it. I don't understand why, because I was also the tutor, and had they paid attention they could have picked up a pretty good idea of which questions I liked/disliked and saved themselves effort.

I guess the problem is that the student might be right. Over 10 courseworks, you might actually always mark his worst, right? However, over a large number of courseworks, I'd expect you to mark as many of his "best" as his "worst".

Well, yes, but then we might pick their best questions as well. Random sampling increases the variance slightly, but there is already some variance in coursework assessment anyway (we don't ask coursework questions about every possible aspect of every possible course) - is there any reason to believe that the amount of variance we have now is optimal?

I'm willing to have general elections resolved using random sampling as long as the government is willing to collect tax the same way.

Haha, interesting. Although doesn't that mean we should only respond to some 999 calls (sampled at random) too?
