Wednesday, December 27, 2006

Conflict Over Conflicts of Interest

According to an article in the Boston Globe, the American Society of Hypertension has canceled a panel of experts on conflicts of interest at its upcoming meeting (May 19-22, 2007 in Chicago).

"Angell [former New England Journal of Medicine editor Dr. Marcia Angell], Avorn [Dr. Jerry Avorn, a Brigham and Women’s Hospital physician and Harvard Medical School professor], and Kassirer [Dr. Jerome Kassirer, also a former editor of the New England Journal of Medicine] were invited to take part in the panel about conflicts of interest by Jean E. Sealey, a researcher and former president-elect of the American Society of Hypertension. Sealey has said the drug industry wields too much influence over the society’s activities through its financial contributions to the group and by paying for honoraria, speakers fees, grants, and research contracts with individual doctors . . .

"The group said in a statement that it sent Sealey’s panel proposal to its continuing medical education review committee, which determined Sealey’s plan to limit the panel to three prominent drug industry critics lacked balance. It suggested adding a Food and Drug Administration official to the roster, but Sealey refused."

The Capsules blog, published by Medical Meetings magazine, called this article to my attention. In addition, Capsules has blogged about COI problems at the American Society of Hypertension two other times, here and here.

Given ASH's continuing COI problems, perhaps it's the entire meeting--and not just that panel--that lacks balance. In my view, the panel of drug industry critics was providing needed balance.

Update: Roy M. Poses, MD, has some additional things to say about this on his Health Care Renewal blog.

Sunday, December 17, 2006

The Numbers Don't Add Up. Number 1 of a continuing series of pet peeves.

You'd think that people with M.D.s and Ph.D.s and faculty positions at major universities would be able to do simple arithmetic. You'd be wrong.

So I'm writing an article based on a poster presentation of a retrospective study. The topic isn't important. I have the full text of the poster on a piece of paper in front of me. The methods section says that the study involved 29 children hospitalized for serious burns and 73 children hospitalized for other serious injuries. One of the study's dependent variables was whether the children had been breast fed as infants or not. Of those children 47 had been breast fed and 56 had not.

Observant readers will have noticed that 29 + 73 = 102 but 47 + 56 = 103.

The total is 102 for all the other dependent variables, so I'm reasonably certain that there were 102 and not 103 children in the study. Either the number of children who had been breast fed is actually 46 or the number who had not is actually 55.
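This kind of consistency check takes only a couple of lines of code. Here's a minimal sketch in Python, using the numbers from the poster:

```python
# Totals reported in the poster's methods section
burns, other_injuries = 29, 73        # children in each study group
breast_fed, not_breast_fed = 47, 56   # reported breast-feeding status

# Two ways of counting the same 102 children -- they should agree
total_by_group = burns + other_injuries          # 102
total_by_feeding = breast_fed + not_breast_fed   # 103

print(total_by_group, total_by_feeding, total_by_group == total_by_feeding)
# -> 102 103 False: one of the breast-feeding counts is off by one
```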

I pore over all the other numbers on that poster, hoping there's a way for me to back-calculate the source of the error. No such luck. By this time it's about 10 minutes before my deadline and after business hours on the Friday before a holiday weekend. There's no realistic possibility of reaching one of the researchers on the phone to resolve the discrepancy.

It clearly wouldn't be right for me to guess which number was correct. That would give me at least a 50% chance of being wrong, and a much greater chance if Murphy's Law is taken into account.

I ended up fudging, writing that "just under half" the children had been breast fed.

I'm telling this story not because it's unusual, but because it's not. It's amazing how often I find numerical errors in studies presented at medical conferences. I'd guess it's at least 10%-20% of the time (or about 3 times out of 5, as Dave Barry might say). Occasionally I even find simple arithmetic errors in published papers, errors that apparently went unnoticed during peer review.

Calculated percentages are especially subject to error, for some reason. I've learned to recheck every percentage I plan to quote in my stories.

But I've also found major statistical errors. Once, I was all set to write about a study reporting a statistically significant difference between two groups until I took a close look at the data. There was a bar chart, and one of the groups did appear slightly larger on the relevant variable than the other. But when I looked at where the error bars would have been (had the authors put error bars on the bar chart), it was clear that the difference between the two groups was well within the margin of error, and there was no way in hell that it was statistically significant at the p=0.05 level.
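The back-of-the-envelope version of that check is simple: a difference between two group means that's much smaller than about two standard errors can't be significant at p=0.05. A sketch with hypothetical summary statistics (the post doesn't give the study's actual numbers):

```python
import math

# Hypothetical summary statistics of the kind read off a conference bar chart
mean_a, sd_a, n_a = 5.2, 2.0, 30
mean_b, sd_b, n_b = 4.8, 2.1, 28

# Standard error of the difference between two independent means
se_diff = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)

# Rough z-style test: |z| needs to reach about 1.96 for p < 0.05
z = (mean_a - mean_b) / se_diff

print(round(z, 2), abs(z) >= 1.96)
```

With these numbers z comes out well under 1.96, so the visible gap between the bars is comfortably inside the margin of error.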

I guess the moral of this story is that science and medical writers need to take close and critical looks at the actual numbers in the studies they write about, and not assume that scientists with advanced degrees are capable of calculating a percentage.

Friday, December 15, 2006

Conflicts of Interest--Part 1 of a continuing series

Interesting post on the Health Care Renewal blog about conflict of interest (COI) in continuing medical education (CME) meetings. Quoting a Wall Street Journal article, Roy M. Poses, MD, president of the Foundation for Integrity and Responsibility in Medicine, discusses a case where GlaxoSmithKline, the huge pharmaceutical company, paid a doc $1,000 to $2,500 per talk to appear at CME meetings. At those meetings the doc would advocate certain off-label uses for one of GSK's drugs, while neglecting to disclose that GSK was paying for him to speak.

I'll be discussing COI frequently in this blog. Today I'll make a few points about disclosure, which is often touted as a COI antidote.

  • More docs seem to be following the disclosure rules these days than in years past, but many still neglect to mention their potential conflicts. Often meeting programs will include a page of speakers' disclosures, which is very helpful. But on that page, after a short list of speakers who have disclosed their various advisory boards, equity interests, and sources of research funding, is a much longer list of "Speakers Disclosing No Conflicts of Interest." This heading is quite misleading, of course. Some folks in this list may have affirmed that they have no conflicts of interest, but others may simply have failed to return the COI form the meeting organizers sent them, and they may have COIs up the wazoo.
  • Sometimes a speaker's disclosures appear on his first or second PowerPoint slide, which he leaves up for about a microsecond.
  • Sometimes a speaker will simply say that he sits on every advisory board or speakers' bureau for every pharmaceutical company with drugs in his specialty, implying that he has no motive for touting one company's drug over another.
  • Sometimes a speaker will state regretfully that he has no disclosures to report, but then jokes that he'd be delighted to talk to anyone in the audience who is willing to help him change that sad state of affairs.
  • Always the implication of these disclosures is that the speaker is far too principled a scientist and clinician to let any of these financial conflicts cloud his Solomon-like wisdom.

As far as I can tell, the majority of audience members give virtually no weight to whatever disclosures are made, unless the conflict is so extreme that it can't be ignored. For example, they may give a talk a few mental demerits if the speaker discloses that he holds a patent on the drug or device he's talking about. Otherwise, I believe, they have faith that the speaker would never, no never, let these conflicts affect his judgment. I believe that that faith is often misplaced.

Sunday, December 10, 2006

Why I Love Poster Sessions

When I was a graduate student, back in the Pleistocene, there was always a lot of excitement around our specialty's annual convention. The graduate students, postdocs, and faculty in the department all prepared abstracts months in advance of the meeting. (Those abstracts were often works of fiction, reporting data that we hadn't finished analyzing. The coming months were often a desperate scramble to finish the analysis before the meeting. But that's another story.)

We sent these carefully chiseled stone tablets to the meeting's program committee, and then we eagerly awaited their responses. Just about every abstract submitted was awarded a place in the meeting, but it was a special honor to be selected to deliver the paper as an oral presentation, and we were always disappointed when we learned that we'd been relegated to the poster session instead.

I carried that prejudice against poster sessions well into my science writing career. When covering a conference I attended oral sessions exclusively, never even glancing at the list of poster presentations. That all changed when I started a new job and had to return with a dozen or more articles from each meeting, not just the one or two top stories.

In desperation I began trolling the poster sessions, and I quickly discovered that they were well stocked with tasty fish. Here (in no particular order) are some of the reasons I love poster sessions in medical meetings.

  1. Walking through poster sessions is a far more efficient use of my time than sitting in oral presentations. Since scientists present information in the exact opposite order from journalists (conclusions at the end instead of in the lede), I usually have to sit through an entire presentation to figure out whether it's as newsworthy as I guessed, based on the title. Most oral presentations in research meetings are 10-15 minutes long, and in that time I can walk past and scan the conclusions of at least a dozen posters.
  2. Posters are often put up first thing in the morning and taken down either at lunchtime or at the end of the day. That gives me hours to take a look at them, and I can shoehorn them into any part of my daily schedule. But if I miss a 10-minute talk that I need to hear, I'm screwed.
  3. I like to walk through the poster room after the posters have been put up, but before the official start of the poster session. The room is quiet then and not too crowded. I can look at a poster and decide whether it's newsworthy without the author eagerly asking if I'd like an explanation of the experiment. If I need an explanation, further information, or a quote, I can always come back later, when the author is standing by the poster.
  4. With posters I can easily pull out my digital camera, and in one or two or three photos, I can capture every word and every number. In oral presentations, I need voice recordings plus photos of practically every PowerPoint slide to get the same coverage. And then I have to listen to the recording and look at 30 individual slides.
  5. Most program committees from most medical societies don't scrutinize the abstracts as closely as I believed they did when I was a graduate student. It's simply not true, in most meetings, that the most important studies are presented at oral sessions, and the second-rate studies are consigned to the poster sessions. In fact, I think it's often just the opposite. Many oral presentations are general overviews of a topic (in some meetings this is true of most oral presentations), which is relatively useless when you're looking for actual news. Poster presentations, on the other hand, are more likely to be highly focused reports on a single study.

Monday, December 04, 2006

The Best Unheralded Medical Meeting

As I’ve said before, journalists are pack animals. A small group of medical meetings each attract hundreds of reporters, because they’re reliable sources of news year after year.

But there’s one meeting I think is better than the American Society for Clinical Oncology (ASCO), the American Heart Association (AHA), and the Radiological Society of North America (RSNA) put together. For some reason, however, hardly any reporters attend.

It’s the annual meeting of the Pediatric Academic Societies (PAS), a conference jointly sponsored by the American Pediatric Society, the Society for Pediatric Research, the Ambulatory Pediatric Association, and the American Academy of Pediatrics.

PAS is a big meeting, attracting thousands of researchers and clinicians. It even has a press room, of a sort, but only a handful of reporters register. Those reporters are both happy and busy; happy because there’s not much competition for terrific stories, and busy because there are so many worthwhile stories that it’s hard to choose.

It’s a four-day meeting, but the year I covered it I attended only three days’ worth. In that time I picked up no fewer than 40 solid news stories. If I had been three people, I could easily have picked up 120. This is no exaggeration.

It’s clearly the major pediatric research meeting of the year, but it’s my impression that more reporters attend the much less interesting annual meeting of the American Academy of Pediatrics. When I covered the AAP meeting I learned, to my surprise, that it’s mostly a CME meeting, with a relatively small proportion of original research presentations.

The PAS meeting, in contrast, is all original research. Just about every talk and just about every poster presents the results of a clinical study, and is thus a potential news story.

So why doesn’t the PAS meeting get the respect and the attention from journalists that it deserves? In part, it’s the name, I think. The biggest offender is the word “academic.” If I didn’t know better, I’d think that the meeting involved mostly basic research, with little of interest to physicians or to the general public. In fact, the overwhelming majority of the talks have direct clinical relevance.

The word “pediatric” is the next offender. If I didn’t know better, I’d think that the meeting was solely of interest to pediatricians. In fact, there are papers at this meeting of interest to virtually every medical specialty, from neonatal medicine to neurology, infectious diseases, psychiatry, and child development, to name a few.

The third problem is with the word “societies.” Although I have little evidence to back this up, I’m guessing that the AAP, by far the biggest of the four sponsoring societies, invests more resources in promoting its own annual meeting to the press than in promoting the PAS meeting. The other three societies don't have the resources (or perhaps the know-how) to promote the meeting on their own.

By the way, if you’re a medical journalist, please forget everything you’ve just read. You just keep running with the pack, and I’ll keep all those sweet PAS stories for myself.

The next PAS meeting will be held May 5-8, 2007 in Toronto.
