THURSDAY REPORT ONLINE

February 28, 2002

Economists find bias in figure-skating judging


by Barbara Black

When the brouhaha over judging in pairs figure-skating broke out at the Salt Lake Olympics, Associate Professor Bryan Campbell got on the phone to his research partner, John W. Galbraith, of McGill.

The two economists jokingly agreed that it was too bad they'd done their study on the statistical probability of nationalist bias in figure-skating judging back in 1996. If they had only just completed it, they might have been noticed by the media.

To their surprise, they got a call from the Globe and Mail. Reporter Stephen Strauss came upon their paper in The Statistician, one of the journals of the Royal Statistical Society. He called Professor Galbraith, and referred to their study in an article published the following Saturday, Feb. 16.

Intellectual fun


Determining whether the nationality of the judges and the skaters affects the marks awarded in competition posed a delicious challenge to Campbell and Galbraith, who set aside their usual research preoccupations (economic forecast evaluation and the term structure of interest rates) to address the question.

It was done for fun, but for intellectual fun — that is, in a serious and scientific way, Professor Campbell said in an interview. “Everyone has an opinion about these things, but there was no literature of empirical studies of this subject, and sports provides a wealth of reliable statistics.”

They wrote to the Olympic Committee and were duly sent data covering the judging results for four editions of the Olympic Games, including solo and pairs competitions, with breakdowns by judging category and by the nationalities of the judges and the skaters.

They applied the binomial principle to assess the possibility of bias. Campbell explained this with the example of a coin toss: if you flip a fair coin many times, you would expect it to land heads and tails roughly equally often. If you assigned a value of plus-one for heads and minus-one for tails, you would expect the sum to be close to zero.
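To make the coin-toss analogy concrete, here is a minimal simulation in Python; the code and its parameters are purely illustrative and are not part of the study.

    import random

    # Score each flip of a fair coin as +1 for heads and -1 for tails.
    # With no bias, the sum stays small relative to the number of flips.
    flips = [random.choice([1, -1]) for _ in range(10_000)]
    print("Sum over", len(flips), "fair flips:", sum(flips))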

In this study, each panel involves nine judges, so in each case they used the median mark, that of the fifth, or middle, judge, as the reference point. A score above the median received a plus-one and a score below it a minus-one.

They also assigned a plus-one to a judge who was from the same country as the skater or skaters, and a minus-one to the judges who were not from the skater's country. Then each judge's score was multiplied by the value associated with nationality, and all the products were added up. If there were no nationalist bias, the total should still have come out close to zero. It didn't.
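A rough Python sketch of that calculation for a single panel might look as follows; the marks, the countries, and the decision to skip marks equal to the median are invented for illustration, not taken from the authors' data or code.

    from statistics import median

    # Invented marks from one nine-judge panel, with each judge's country.
    # The skater is from "CAN" in this made-up example.
    marks = [5.9, 5.7, 5.8, 5.6, 5.8, 5.7, 5.9, 5.8, 5.6]
    judge_countries = ["CAN", "RUS", "FRA", "USA", "CHN", "GER", "JPN", "UKR", "ITA"]
    skater_country = "CAN"

    panel_median = median(marks)

    total = 0
    for mark, country in zip(marks, judge_countries):
        if mark == panel_median:
            continue  # marks equal to the median are skipped here (an assumption)
        above_or_below = 1 if mark > panel_median else -1
        same_nation = 1 if country == skater_country else -1
        total += above_or_below * same_nation

    # Summed over many performances, the coin-toss reasoning says this total
    # should stay close to zero if judges' nationality plays no role.
    print("Bias statistic for this panel:", total)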

What they found was that there seemed to be a strong link between the nationality of the judges and the skaters they favoured — but only among the top-seeded performers.

“We looked at the bottom-third ranking of skaters, and there seemed to be no link at all in terms of nationality,” Campbell said. “It was only the top skaters, the stars, who were affected.”

One finding of their study that was overlooked in the Globe and Mail article was that the results didn't point the finger only at the Russians or the French; they held just as true for the Canadian judges as for the others. It turns out that we, too, appear to be unduly impressed by our own skaters.

As for using the data to study the likelihood of collusion among judges of different nationalities, Campbell says that would be “a much more difficult business, and the results obtained would admit of simpler interpretations than collusion.

“Generally, you would have to have a period in the sample where you were certain there was no collusion; then a period where you compare the results to the collusion-free period. Then, even if you found some difference, other interpretations could be advanced — remember, there are nine judges.”
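As a purely illustrative sketch of the comparison Campbell describes, one could contrast the average of the per-performance bias statistic across the two periods; the numbers below are invented placeholders, not real results.

    # Invented per-performance totals of the bias statistic for two periods:
    # one assumed free of collusion, one under suspicion.
    clean_period = [0, 1, -1, 2, 0, -2, 1]
    suspect_period = [3, 2, 4, 1, 3, 2, 4]

    def mean(values):
        return sum(values) / len(values)

    difference = mean(suspect_period) - mean(clean_period)
    print("Difference in average bias statistic:", difference)
    # As Campbell notes, even a sizable difference would admit
    # interpretations other than collusion.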