My parents are back from their vacation (and it turns out they don't have a trip planned for the immediate future, a real sign of economic trouble, although since they buy trips ahead of time this is a lagging indicator), which meant my mother was back to teaching her class Wednesday. It also meant I was able to get back to my Wednesday yoga class, where my absence had been noticed. (I'm one of the rare males in the class, and the progression over the past year from really doughy guy to person who's actually slim and flexible enough to do many moves has stood out. Also people seem to remember me better than I think they need to.) And it meant I got to hear what the students actually thought about my filling in.

The comment my mother found most prominent: I gave them *math.* Well, yeah; the subjects to be covered were the z-test and the t-test. These are statistical methods used to determine whether one sample's mean score (on pretty much anything) differs from the population's mean score by enough that the difference is unlikely to be explained by chance. It's inherently mathematical, although the only work they have to do is subtract the population mean from the sample mean, divide that difference by the appropriate standard deviation, and then check whether that score is larger than the critical value for the significance level they've chosen. The significance level is how big a risk you want to take of getting this call wrong, of declaring a difference meaningful when it's actually just a random fluctuation to one side or the other of whatever you're measuring.
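The arithmetic above can be sketched in a few lines of Python; the test scores and population figures here are invented purely for illustration, and this is the form of the z-test that divides by the standard error of the mean:

```python
import math

def z_score(sample_mean, pop_mean, pop_sd, n):
    """Standardize the gap between a sample mean and the population
    mean, dividing by the standard error of the mean."""
    standard_error = pop_sd / math.sqrt(n)
    return (sample_mean - pop_mean) / standard_error

# Hypothetical class: population mean 100, population SD 15,
# and a sample of 25 students averaging 106.
z = z_score(106, 100, 15, 25)   # (106 - 100) / (15 / 5) = 2.0
print(z)
# 2.0 exceeds 1.96, the two-tailed critical value at the 0.05
# significance level, so this difference would be called significant.
```

The only judgment call left for the student is picking the significance level before looking at the number.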

I did, though, show how to calculate the standard deviation, which is a frightening-looking formula until you understand why it has to be that way. Since they aren't a mathematics class, I emphasized before and after that they didn't need to know this --- they have calculators and Excel to do it for them --- and that I was showing it only because a student asked how you find the standard deviation. And the students wanted to reopen a question I'd put forth, about whether a coin which, tossed, comes up heads 70 percent of the time is probably unfair. I'd shown that for a small number of flips --- around ten --- that's just not exceptional enough; but at around a hundred flips, 70 heads probably means a rigged coin; and 700 heads out of 1000 is so improbable as to be a huge warning.
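Spelled out step by step, the standard-deviation formula is less frightening than it looks; here's a minimal sketch with made-up scores, using the population form of the formula (dividing by n rather than n - 1):

```python
import math

scores = [4, 8, 6, 5, 3, 7]

mean = sum(scores) / len(scores)                    # 5.5
# Square each score's deviation from the mean (so deviations above and
# below the mean can't cancel out), average the squares, then take the
# square root to undo the squaring.
squared_deviations = [(x - mean) ** 2 for x in scores]
variance = sum(squared_deviations) / len(scores)    # population variance
sd = math.sqrt(variance)
print(sd)
```

Excel's STDEV.S, for comparison, divides by n - 1 instead, the sample form; for a class-sized data set the two answers are close but not identical.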

So my mother ended up getting the dozen students there to take out coins and flip them fifty times each, recording the total number of heads. Several more students wandered in after the example started, but they refused to participate in coin-flipping for some reason. They found a range from 20 to 33 heads, and found that at the 0.05 significance level only one coin came up heads a suspiciously large (or small) number of times. Of course, at the 0.05 significance level --- one chance in 20 of declaring as significant a meaningless fluctuation --- having one person in twelve come up with too many heads is actually a fairly likely outcome. This put into practice a lot of the inference-testing methods they'd been learning, a little crankily, for weeks now. I'm glad I didn't go with the other standard statistics problem of tossed dice, since what are the odds twelve students would have some kind of die each?
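The classroom exercise can be checked with the same machinery; this sketch works out which head counts look suspicious for 50 flips of a fair coin, and how likely it is that at least one of 12 honest coins gets flagged anyway (using the normal approximation, which is plenty good at 50 flips):

```python
import math

n, p = 50, 0.5
mean = n * p                        # 25 heads expected
sd = math.sqrt(n * p * (1 - p))     # about 3.54

# Two-tailed test at the 0.05 significance level: flag any count more
# than 1.96 standard deviations from the mean.
low = mean - 1.96 * sd              # about 18.07
high = mean + 1.96 * sd             # about 31.93
# So 18 or fewer heads, or 32 or more, counts as suspicious --- which
# is why the student who got 33 heads was the one flagged.

# Chance that at least one of 12 fair coins is flagged anyway:
p_flag = 1 - 0.95 ** 12             # about 0.46
print(low, high, p_flag)
```

With nearly even odds of a false alarm somewhere in the room, one flagged coin out of twelve is exactly the unremarkable outcome the significance level predicts.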

*Trivia:* The Roman Emperor Commodus tried to rename the eleventh month of the year 'Romanus', after one of his many names. Source: Mapping Time: The Calendar and its History, EG Richards.

Currently Reading: The Crisis Of The Old Order, Arthur M Schlesinger, Jr.