Sunday, December 17, 2006

The Numbers Don't Add Up. Number 1 of a continuing series of pet peeves.

You'd think that people with M.D.s and Ph.D.s and faculty positions at major universities would be able to do simple arithmetic. You'd be wrong.

So I'm writing an article based on a poster presentation of a retrospective study. The topic isn't important. I have the full text of the poster on a piece of paper in front of me. The methods section says that the study involved 29 children hospitalized for serious burns and 73 children hospitalized for other serious injuries. One of the study's dependent variables was whether the children had been breast fed as infants or not. Of those children, 47 had been breast fed and 56 had not.

Observant readers will have noticed that 29 + 73 = 102 but 47 + 56 = 103.

The total is 102 for all the other dependent variables, so I'm reasonably certain that there were 102 and not 103 children in the study. Either the number of children who had been breast fed is actually 46 or the number who had not is actually 55.
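If I'd had a spare minute, a few lines of Python would have flagged the mismatch automatically. Here's a rough sketch using the counts from that poster (the check itself is trivial; the point is that every breakdown of the same sample has to sum to the same total):

    # Every breakdown of the study sample should sum to the same total.
    groups = {"burns": 29, "other injuries": 73}
    breastfed = {"breast fed": 47, "not breast fed": 56}

    total_by_group = sum(groups.values())        # 102
    total_by_feeding = sum(breastfed.values())   # 103

    if total_by_group != total_by_feeding:
        print(f"Mismatch: {total_by_group} children by injury group, "
              f"{total_by_feeding} by breast-feeding status")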

I pore over all the other numbers on that poster, hoping to find a way to back-calculate the source of the error. No such luck. By this time it's about 10 minutes before my deadline and after business hours on the Friday before a holiday weekend. There's no realistic possibility of reaching one of the researchers on the phone to resolve the discrepancy.

It clearly wouldn't be right for me to guess which number was correct. That would give me at least a 50% chance of being wrong, and a much greater chance if Murphy's Law is taken into account.

I ended up fudging, writing that "just under half" the children had been breast fed.

I'm telling this story not because it's unusual, but because it's not. It's amazing how often I find numerical errors in studies presented at medical conferences. I'd guess it's at least 10%-20% of the time (or about 3 times out of 5, as Dave Barry might say). Occasionally I even find simple arithmetic errors in published papers, errors that apparently went unnoticed during peer review.

Calculated percentages are especially subject to error, for some reason. I've learned to recheck every percentage I plan to quote in my stories.
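The recheck doesn't take fancy tools. Here's the kind of quick Python check I mean, with made-up numbers standing in for a paper's reported figures:

    # Recompute a reported percentage from its numerator and denominator
    # and flag it if it doesn't match what the authors quote.
    def check_percentage(numerator, denominator, reported_pct, tolerance=0.5):
        actual = 100.0 * numerator / denominator
        if abs(actual - reported_pct) > tolerance:
            print(f"Reported {reported_pct}% but {numerator}/{denominator} = {actual:.1f}%")

    # Hypothetical example: 47 of 102 children reported as "48%"
    check_percentage(47, 102, 48.0)   # 47/102 is actually 46.1%, so this gets flagged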

But I've also found major statistical errors. Once, I was all set to write about a study reporting a statistically significant difference between two groups until I took a close look at the data. There was a bar chart, and one of the groups did appear slightly larger on the relevant variable than the other. But when I worked out where the error bars would have been (had the authors put error bars on the chart), the difference between the two groups was well within the margin of error, and there was no way in hell it was statistically significant at the p=0.05 level.
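You don't need the raw data to do this kind of back-of-the-envelope check. If you can estimate each group's mean and standard error from the chart, a couple of lines will tell you whether the difference could plausibly be significant. The numbers below are invented; the logic is the standard rule of thumb that the difference needs to exceed roughly two standard errors of the difference:

    import math

    # Rough check: is the difference between two group means bigger than
    # about two standard errors of that difference? (Approximate p < 0.05.)
    def roughly_significant(mean1, se1, mean2, se2):
        diff = abs(mean1 - mean2)
        se_diff = math.sqrt(se1**2 + se2**2)
        return diff > 1.96 * se_diff

    print(roughly_significant(5.2, 0.8, 4.7, 0.9))   # False: well within the noise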

I guess the moral of this story is that science and medical writers need to take close and critical looks at the actual numbers in the studies they write about, and not assume that scientists with advanced degrees are capable of calculating a percentage.
